\section{Introduction}
In recent years, we have seen an increasing number of low-resource systems that run deep neural networks under extreme CPU, memory, time, and energy constraints~\cite{zygarde,yao2017deepsense,yao2018deep,yao2017deepiot,lee2019intermittent}. Nowadays, it is common to see multiple neural networks co-residing on the same portable, wearable, or mobile edge device in order to offer a wide variety of intelligent applications and services to the user~\cite{kawsar2018esense,islam2019device}. Many IoT devices have built-in voice assistants that authenticate the speaker, understand what they say, and recognize gestures, facial expressions, and emotions. Mobile vision technology~\cite{redmon2015real,xie2017towards,liao2016understand,sarikaya2017detection,giusti2015machine,smarton,spoton} is built into many mobile and social robots that perform on-device object recognition, obstacle detection, scene classification, localization, and navigation. To increase the accuracy and robustness of these classifiers, numerous \emph{multitask learning} techniques have been proposed in the mainstream machine learning literature~\cite{ruder2017overview,sun2020adashare}. Some of these techniques have been adopted by the embedded systems community to scale up the number of classifiers that co-exist on an embedded system~\cite{nwv, yono}.
Unfortunately, multitask learning on low-resource embedded systems still remains a challenge. Slow CPU, scarce memory (RAM), and high overhead of external storage (flash) make the response time and the energy cost of multitask inference on these systems extremely high. To deal with these challenges, recent works~\cite{nwv, yono, nws} have proposed bold measures such as squeezing all~\cite{nwv, yono} or most~\cite{nws} of the neural networks into the main memory (RAM) --- in order to avoid the high overhead of storage and to rely on fast, in-memory computation for the most part. However, speedup gained in such extreme ways inevitably comes at the cost of lower accuracy and/or hidden time and energy cost that overshadows the benefit of in-memory execution. In general, state-of-the-art multitask inference techniques for low-resource systems lack two major aspects that could significantly reduce the inference time and energy consumption:
Firstly, inference tasks that run on the same system generally show affinity. For instance, a speaker identification task and a speech recognition task running on a voice assistant device share common latent subtasks such as noise compensation and phoneme identification. These overlapping subtasks should be factored out and executed only once to avoid the time and energy wasted in repeating them for both tasks. Existing works~\cite{nwv, yono, nws} pack multiple tasks in the main memory by sharing task constructs at the granularity of weights. They do not exploit the higher-level affinity between tasks, and thus fail to recognize that even though in-memory operations are faster, repeatedly executing shared subtasks adds significant and completely unnecessary overhead, especially when multiple convolutional layers are involved.
Secondly, inference tasks that run on the same system generally manifest inter-task and intra-task dependencies -- requiring the system to execute tasks and subtasks in a certain order. This also creates opportunities to skip a dependent task or a subtask. For instance, voice classification tasks are routinely preceded by a lightweight voice activity detector to reduce computational overhead. Likewise, subtasks such as noise compensation and phoneme detection often precede the rest of the audio processing pipeline in typical speech classification tasks. Existing works~\cite{nwv, yono, nws} that merge or load tasks based on the byte-values of the weights are not capable of exploiting higher-level inter-task and intra-task dependencies, and thus they waste time and energy in executing tasks and subtasks that are unnecessary.
In this paper, we introduce Antler\xspace, which exploits the affinity between tasks in a multitask inference system to construct a compact graph representation of the task set. Unlike existing task grouping techniques that are primarily concerned with inference accuracy only, we construct task graphs considering both the accuracy and the time and/or energy wasted in repeated execution of subtasks. Furthermore, we observe that different pairs of tasks exhibit different degrees of affinity, and that the cost of switching from one task to another varies across task pairs. We formally prove that ordering tasks in a multitask learning scenario is NP-complete and provide an integer linear programming formulation to solve it. We extend the formulation to include dependency constraints between tasks and subtasks. We describe a genetic algorithm to solve the optimization problem for both constrained and unconstrained cases.
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{figure/algo/overview.pdf}
\caption{Overview of Antler\xspace: (a) A set of tasks defined over a domain is taken as the input. A common network architecture is individually trained to produce network instances (one for each task). (b) A task graph is formed considering both accuracy and task execution cost. The task graph is retrained. (c) An optimal task execution order minimizes the task execution cost.}
\label{fig:sys_overview}
\end{figure*}
In order to evaluate Antler\xspace, we develop two systems: 1) a 16-bit TI MSP430FR5994-based custom-designed ultra-low-power system, and 2) a 32-bit ARM Cortex M4/M7-based off-the-shelf STM32H747 board. We conduct dataset-driven experiments as well as real-world deployments with these systems. In the dataset-driven experiments, we compare the performance of Antler\xspace against four baseline solutions, including three state-of-the-art multitask inference systems for low-resource systems: YONO~\cite{yono}, NWV~\cite{nwv}, and NWS~\cite{nws}, over nine datasets used in the literature. We observe that Antler\xspace's execution time and energy consumption are the lowest among all baseline systems. By leveraging the similarity of tasks and by reusing the intermediate results from previous tasks, Antler\xspace reduces the inference time by 2.3X -- 4.6X and saves 56\% -- 78\% energy, when compared to the baselines. In the real-world deployments, we implement two multitask inference systems having five audio and four image inference tasks, respectively. Results show that Antler\xspace reduces the time and energy cost by 2.7X -- 3.1X while its inference accuracy matches that of individually-trained classifiers within an average deviation of $\pm 1\%$.
\section{Discussion}
\parlabel{Fine-Grained Task Decomposition.}
Although Antler\xspace uses only three branch points in our experiments, it can easily be extended to support more fine-grained decomposition and thus form more compact task graphs. Such fine-grained decomposition of tasks might be necessary when the number of tasks exceeds 20.
\parlabel{Optimization Alternatives.} Antler\xspace's task graph formation and task ordering are formulated as two independent optimization problems and solved separately. We choose this design so that task ordering remains flexible and dynamic constraints can be handled at runtime. A joint optimization of task graph formation and execution order is possible, but its downside is that whenever task dependencies change, the entire joint problem has to be solved again.
\parlabel{Generalization to Other Systems.} Although Antler\xspace is motivated by the constraints of low-resource systems, some of its techniques, such as affinity-aware task graphs that execute under constraints and optimal ordering of tasks, should apply to larger, server-grade multitask inference systems as well. For cloud-based inference systems that execute a larger number of more complex inference tasks, exploiting task affinity could help reduce a server's response time.
\parlabel{Generalization to Other Workloads.} Although Antler\xspace's scope is limited to neural networks, the concepts of tasks, task affinity, and task graphs are generalizable to any workload where a task can be factored into subtasks, compared, and merged. Antler\xspace is readily applicable to many non-neural classifiers, such as decision trees and random forests, that process data in stages.
\parlabel{Improvement Over Antler\xspace.} Antler\xspace's advantage over the state-of-the-art in-memory multitask learning systems~\cite{nwv, yono} is its ability to reduce time and energy cost while offering the same level of accuracy. Antler\xspace, however, consumes more memory than these systems. We envision that Antler\xspace will inspire new techniques that will not only reduce time and energy overhead of multitask inference on embedded systems but also improve inference accuracy and perform complete in-memory execution of tasks.
\section{Real-World Deployment}
We deploy Antler\xspace in two real-world multitask learning scenarios that involve audio and image classification tasks.
\begin{figure}[!htb]
\centering
\subfloat[Multitask Audio Inference System]
{
\includegraphics[width = 0.5\linewidth]{figure/pcb/audio1.pdf}
\includegraphics[width = 0.5\linewidth]{figure/pcb/audio2.pdf}
}
\subfloat[Multitask Image Inference System]
{
\includegraphics[width = 0.5\linewidth]{figure/pcb/image1.pdf}
\includegraphics[width = 0.5\linewidth]{figure/pcb/image2.pdf}
}
\caption{Deployment Setup.}
\vspace{-15pt}
\label{fig:setup_wild}
\end{figure}
\subsection{Multitask Audio Inference System}
\parlabel{Inference Tasks.}
We implement five audio-based tasks: 1) a speaker presence detection task ($\tau_0$) which detects if there is human voice in the audio, 2) a command detection task ($\tau_1$) which detects eleven commands \{yes, no, up, down, go, stop, left, right, on, off, Alexa\}, 3) a speaker identification task ($\tau_2$) which identifies who is speaking (five speakers), 4) an emotion classification task ($\tau_3$) which classifies the audio into three emotions \{positive, negative, neutral\}, and 5) a distance classification task ($\tau_4$) which tells whether the speaker is close to or far from the device.
\parlabel{System Setup.} We use the 16-bit custom MSP430FR5994~\cite{msp430fr5994} to conduct the audio-based experiments as shown in Figure~\ref{fig:setup_wild}(a).
The audio signal is sampled at 2KHz and, after performing the FFT, converted to a feature map using a window length of 128ms and a stride of 64ms.
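To make the front end concrete, the following is a minimal Python sketch of this preprocessing, assuming NumPy; the Hann window and the function name are our illustrative choices, not part of the deployed firmware.
\begin{verbatim}
# Sketch of the audio front end: 2 kHz sampling, 128 ms windows
# (256 samples), 64 ms stride (128 samples), FFT magnitudes.
import numpy as np

FS, WIN, HOP = 2000, 256, 128

def feature_map(audio):
    frames = [audio[s:s + WIN]
              for s in range(0, len(audio) - WIN + 1, HOP)]
    # rfft keeps the non-redundant half of a real signal's spectrum;
    # the Hann window here is an assumption for illustration
    return np.abs(np.fft.rfft(np.stack(frames) * np.hanning(WIN),
                              axis=1))
\end{verbatim}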
\begin{figure}[!htb]
\centering
\subfloat[Audio Inference Graph]
{
\includegraphics[width = 0.45\linewidth]{figure/algo/deployment_a.pdf}
}%
\subfloat[Image Inference Graph]
{
\includegraphics[width = 0.45\linewidth]{figure/algo/deployment_b.pdf}
}
\caption{Multitask inference graphs.}
\vspace{-15pt}
\label{fig:wildgraphs}
\end{figure}
\parlabel{Data Collection and Network Training.} Five volunteers (four male and one female) participate in this experiment. We followed an Institutional Review Board (IRB) approved protocol to conduct this study. We collect 15 samples for each task from each volunteer. We use 80\% of the data for training and 20\% for testing. We design a 5-layer CNN having 2 convolutional and 3 dense layers and pre-train it on~\cite{warden2018speech} prior to training on our own dataset. We use 3 branch points and the tradeoff budget for the task graph.
\subsection{Multitask Image Inference System}
\parlabel{Inference Tasks.}
We implement four image classification tasks: a human presence detection task ($\tau_0$) which detects human faces in an image, a mask detection task ($\tau_1$) which detects if the person is wearing a mask, a person identification task ($\tau_2$) which recognizes a person's face (5 volunteers), and an emotion recognizer ($\tau_3$) which classifies three emotions as in the audio inference system.
\parlabel{System Setup.}
We use the off-the-shelf 32-bit STM32H747-based Portenta H7 as shown in Figure~\ref{fig:setup_wild}(b).
Images are taken with an HM01B0 camera module and have dimensions of 64$\times$64 pixels.
\parlabel{Data Collection and Network Training.} The data collection and network training processes are identical to those of the audio inference system, except that the neural network is a 7-layer CNN having 3 convolutional and 4 dense layers and is pre-trained on~\cite{LFW}.
\subsection{Evaluation Results}
\parlabel{Task Decomposition and Grouping.} Figure~\ref{fig:wildgraphs} shows the task graphs for both applications. There are 4 blocks in each task graph (for 3 branch points). One of the blocks (the second) contains multiple layers. This is unlike the task graphs observed earlier in Section~\ref{sec:eval}, where deeper layers are lumped into the same block. Overall, having more layers lumped in earlier blocks decreases execution cost but may decrease accuracy as well. Antler\xspace finds a trade-off point between the two to optimize both accuracy and cost.
\parlabel{Task Dependency and Ordering.} We include a precedence constraint in the image inference system that the presence detection task ($\tau_0$) must be executed before any other task. Additionally, in the audio inference system, we make presence detection a conditional constraint such that the remaining tasks are executed with 80\% probability. The orderings of tasks in the audio and image inference systems are $\mathrm{\tau_0 \rightarrow \tau_3 \rightarrow \tau_4 \rightarrow \tau_2 \rightarrow \tau_1}$ and $\mathrm{\tau_0 \rightarrow \tau_3 \rightarrow \tau_1 \rightarrow \tau_2}$, respectively; each is one of several orderings that yield the best performance for these tasks.
\parlabel{Inference Time and Energy Consumption.}
We evaluate Antler\xspace's execution time and energy consumption for three cases: Antler\xspace having no constraints, Antler\xspace-PC having precedence constraints, and Antler\xspace-CC having conditional constraints, and compare their performance with the Vanilla system. Figure~\ref{fig:eval_overhead_wild} shows that Antler\xspace yields 2.7X -- 3.1X reduction in time and energy costs and the results are consistent across both systems. Antler\xspace-PC has the same overhead as Antler\xspace because there are only four tasks and the execution order with precedence constraint is already in the optimal ordering. For Antler\xspace-CC, overhead decreases as tasks are skipped occasionally based on the conditional probability.
\begin{figure}[!htb]
\centering
\vspace{-5pt}
\subfloat[Execution Time]
{
\includegraphics[width = 0.24\textwidth]{figure/eval/overhead_time_wild.pdf}
}
\subfloat[Energy Consumption]
{
\includegraphics[width = 0.24\textwidth]{figure/eval/overhead_energy_wild.pdf}
}
\caption{Execution time and energy consumption.}
\vspace{-5pt}
\label{fig:eval_overhead_wild}
\end{figure}
\parlabel{Inference Accuracy and Memory Usage.}
Figure~\ref{fig:eval_wild_accuracy} shows the average accuracy of all tasks for both systems. We observe that in the audio inference system, all tasks have over 90\% accuracy except for the command detection task, which, with eleven class labels, is the hardest of these tasks. The accuracies of Antler\xspace and Vanilla are very similar, within an average deviation of $\pm 1\%$.
\begin{figure}[!htb]
\centering
\vspace{-15pt}
\subfloat[Audio Inference System]
{
\includegraphics[width = 0.24\textwidth]{figure/eval/accuracy_wild_audio.pdf}
}
\subfloat[Image Inference System]
{
\includegraphics[width = 0.24\textwidth]{figure/eval/accuracy_wild_image.pdf}
}
\caption{Inference accuracy of audio and image classifiers.}
\vspace{-5pt}
\label{fig:eval_wild_accuracy}
\end{figure}
The memory usage of both systems is shown in Table~\ref{table:memoryusage}. We observe that the memory usage of Antler\xspace is approximately half of Vanilla's, which is consistent with the earlier results from the dataset-driven experiments in Section~\ref{sec:eval}.
\begin{table}[!thb]
\begin{tabular}{|l|lll|}
\hline
\textbf{System} & & Vanilla & Antler \\ \hline
\multirow{2}{*}{\textbf{Memory (KB)}} & Audio & 397 & 202 \\ \cline{2-4}
& Image & 445 & 222 \\ \hline
\end{tabular}
\caption{Memory usage.}
\vspace{-30pt}
\label{table:memoryusage}
\end{table}
\section{Background and Motivation}
\label{sec:background}
\subsection{Multitask Learning}
Multitask learning refers to a genre of machine learning algorithms that jointly train two or more learners to increase their accuracy as well as their ability to generalize well by sharing the common knowledge in them~\cite{caruana1997multitask, ruder2017overview,zhang2017overview,zhang2017survey}. A wide variety of multitask learning approaches have been proposed in the literature, including common feature learning~\cite{liu2017adversarial,zhang2016deep,misra2016cross,liu2015multi,mrkvsic2015multi,li2014heterogeneous,zhang2014facial}, task clustering~\cite{zhou2015flexible,barzilai2015convex,han2015learning,kumar2012learning,kang2011learning,thrun1996discovering}, task relation learning~\cite{long2017learning,lee2016asymmetric,zhang2013learning,zhang2013multilabel}, and low-rank optimization~\cite{yang2016trace,han2016multi,mcdonald2014spectral,agarwal2010learning,chen2009convex,zhang2008flexible}.
The key technique behind all multitask learning methods is to share a subset of parameters among the learners, e.g., the weights of the first few layers of the neural networks, to enable sharing of learned representations across related tasks. This process improves the accuracy and generalization ability of each learner because most learning tasks can be broken down into multiple sequential stages, and the early stages of many tasks are often similar and sometimes identical. An analogy from biological learning would be how a human child first learns to recognize faces and then uses that knowledge to learn new tasks such as object recognition.
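As an illustration of this kind of hard parameter sharing, the following is a minimal Keras sketch (the layer sizes and head dimensions are ours, chosen only for illustration): two task heads branch off a shared trunk, so the trunk's weights are trained by both tasks.
\begin{verbatim}
import tensorflow as tf

inp = tf.keras.Input(shape=(64, 64, 1))
# Shared trunk: early layers learn representations common to both tasks
h = tf.keras.layers.Conv2D(16, 3, activation="relu")(inp)
h = tf.keras.layers.MaxPooling2D()(h)
h = tf.keras.layers.Flatten()(h)
# Task-specific heads branch off the shared representation
out_a = tf.keras.layers.Dense(5, activation="softmax", name="task_a")(h)
out_b = tf.keras.layers.Dense(3, activation="softmax", name="task_b")(h)
model = tf.keras.Model(inp, [out_a, out_b])
model.compile(optimizer="adam",
              loss={"task_a": "sparse_categorical_crossentropy",
                    "task_b": "sparse_categorical_crossentropy"})
\end{verbatim}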
\subsection{System Considerations}
The primary objective of multitask learning is to increase the accuracy and robustness of classifiers via knowledge sharing. This is implemented by introducing constraints on parameters across multiple learners during the joint training phase. Like any neural network training algorithm, however, this too has randomness built into it, e.g., parameter initialization and dropout, and as a consequence, the resultant networks are not unique, and the parameters that are shared among networks are not always the same ones. In other words, the same level of inference accuracy is achievable by a set of learners while sharing their network parameters in many different ways. Although this is not an issue in the mainstream multitask learning literature which is primarily concerned with inference accuracy, from a systems perspective, the time and energy cost of executing these \emph{similarly-accurate but architecturally-different} multitask learners are not the same.
The time and energy cost of performing neural network inference on an embedded system depends on both the amount of CPU-bound operations, which are largely a series of matrix multiplication and accumulation operations, as well as on the frequency of I/O operations to load the neural network parameters such as the weights into the main memory (i.e., RAM) from the non-volatile storage, e.g., EEPROM, flash, or FRAM. In certain learning scenarios, e.g., online~\cite{hoi2021online}, reinforcement~\cite{kaelbling1996reinforcement}, and federated learning~\cite{lim2020federated}, the network weights are often updated at runtime and need to be written back to non-volatile storage, which incurs additional overhead.
In high-end machines that have powerful CPUs and large RAMs, such I/O overhead is negligible. However, for low-resource, battery-powered IoT devices, the I/O overhead of frequently swapping classifiers in and out between the main memory and the non-volatile storage is significant for two main reasons. First, the size of the internal RAM on these systems ranges from a few KB to a few hundred KB, while the size of a neural network is generally larger than this. Therefore, the CPU has to read in the network weights from flash at a frequency proportional to the ratio of the size of the neural network to the size of the internal RAM. Second, multitask learning amplifies this overhead even more, as it requires the system to execute more than one neural network. Although there have been multiple recent proposals~\cite{nwv, yono} to squeeze multiple neural networks into a size as small as 256KB so that they fit in the main memory of an MCU, these methods do not guarantee inference accuracy as the number of neural networks increases.
\begin{figure}[!htb]
\centering
\subfloat[]
{
\includegraphics[width = 0.223\textwidth]{figure/io_time_overhead.pdf}
}
\subfloat[]
{
\includegraphics[width = 0.22\textwidth]{figure/io_energy_overhead.pdf}
}
\caption{Time and energy cost of multitask inference.}
\vspace{-15pt}
\label{fig:iopercent}
\end{figure}
To demonstrate the time and energy cost of executing multiple neural networks on a low-resource system, we conduct an experiment where we execute three pre-trained neural networks of sizes 35KB, 100KB, and 132KB on a 16-bit MSP430 MCU as well as on a 32-bit Raspberry Pi Pico (RP2040). The networks are stored in the FRAM and the flash of these two systems, respectively, and they are executed sequentially. We measure the inference time and energy to process one input sample through all networks and break it down into in-memory inference and weight-reloading overhead. In Figure~\ref{fig:iopercent}, we observe that the 16-bit MSP430 takes up to 6X more time per inference than the 32-bit RP2040, but consumes $20\%-60\%$ less energy. Overall, we observe a $6\%-15\%$ time overhead and $8\%-24\%$ energy overhead --- which is significant in a low-resource system.
From the experiment above, it is evident that a faster CPU cannot solve the overhead problem entirely; it reduces the inference time at the cost of a higher energy overhead and thus a significantly shorter battery life. One could argue that choosing a low-power system with a larger RAM could mitigate the problem. But in practice, chip manufacturers do not offer large RAMs in 16-bit MCUs due to cost, size, and overheating concerns. Later in this paper, we propose a new platform that uses multiple external FRAM chips as an energy-efficient and relatively faster alternative to flash memory, but that too has its time and energy overheads. In short, the time and energy overhead of multitask learning on low-resource systems cannot be solved by hardware provisioning alone. We need intelligent algorithms in combination with better hardware design to address this issue.
\section{Task Graph Generation}
\label{sec:taskgraphgeneration}
This section describes how task graphs are generated from the network instances obtained after the preprocessing step.
\subsection{Quality of a Task Graph}
\label{sec:affinitytaskgraph}
\parlabel{Task Affinity.} Task affinity refers to the degree to which two tasks are similar in their data representation~\cite{vandenhende2019branched,dwivedi2019}. For a pair of tasks in Antler\xspace, we should ideally compare the outputs of every layer over the entire dataset. This is, however, costly and is also not necessary in practice. Hence, to lower the computational overhead, we choose $\mathrm{D}$ layers, which are referred to as \emph{branch points}, and measure the similarity of the outputs of the two networks at these branch points over a subset of $\mathrm{K}$ random samples from the dataset. Computing task affinity is a two-step process:
\textbf{Step 1 --} Each task is profiled using $\mathrm{K}$ data samples. At each branch point, for all pairs of samples, the dissimilarity of their representations is computed using inverse Pearson correlation coefficient~\cite{taylor1990interpretation} to obtain a $\mathrm{D \times K \times K}$ dimensional tensor. This tensor is flattened to have a vector that encodes the data representation profile of a task. This process is repeated for each task.
\textbf{Step 2 --} The affinity score for each pair of tasks is computed. At each branch point, for all pairs of tasks, the similarity of their data representation profiles is computed using Spearman's correlation coefficient~\cite{taylor1990interpretation} to obtain a $\mathrm{D \times n \times n}$ dimensional tensor, where $\mathrm{n}$ is the number of tasks. This tensor encodes the similarity between each pair of tasks at each branch point. This information is used later when tasks are grouped to form affinity-aware task graphs.
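The following Python sketch summarizes both steps, assuming SciPy; \texttt{activations(task, bp, x)}, which returns the flattened layer output of a task's network at branch point \texttt{bp} for sample \texttt{x}, is a hypothetical helper.
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr, spearmanr

def profile(task, samples, branch_points):
    """Step 1: D x K x K dissimilarity tensor for one task."""
    K, D = len(samples), len(branch_points)
    P = np.zeros((D, K, K))
    for d, bp in enumerate(branch_points):
        acts = [activations(task, bp, x) for x in samples]  # placeholder
        for a in range(K):
            for b in range(K):
                r, _ = pearsonr(acts[a], acts[b])
                P[d, a, b] = 1.0 - r   # inverse Pearson = dissimilarity
    return P

def affinity(profiles):
    """Step 2: D x n x n affinity between all task pairs."""
    n, D = len(profiles), profiles[0].shape[0]
    S = np.zeros((D, n, n))
    for d in range(D):
        for i in range(n):
            for j in range(n):
                rho, _ = spearmanr(profiles[i][d].ravel(),
                                   profiles[j][d].ravel())
                S[d, i, j] = rho
    return S
\end{verbatim}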
\parlabel{``Variety'' Score of Task Graphs.} We extend the definition of task affinity to task graphs. The subtree rooted at each branch point of a task graph contains a subset of tasks that share one or more blocks. In other words, all the blocks from the root of the graph to the root of the subtree are shared by all tasks that are on the leaf of the subtree. At the root of the subtree, tasks diverge and follow different paths. We quantify this using \emph{variety} score.
We define the \emph{variety} score for the tasks under each branch point as the average, over its child branches, of the maximum dissimilarity between any pair of tasks within a child branch. The variety score quantifies the dissimilarity, or misfit, among the tasks under each branch point. The overall variety score of a task graph is the sum of the variety scores at all branch points. Computing the variety score of a task graph is a two-step process:
\textbf{Step 1 --} Variety score at each branch point is computed using Equation~\ref{eq:sl}, where $\mathrm{S_{\rho, i,j}}$ denotes the affinity between tasks $\mathrm{\tau_i}$ and $\mathrm{\tau_j}$ at branch point $\mathrm{\rho}$, $\mathrm{c_k}$ denotes the $\mathrm{k}$-th child branch, and $\mathrm{m}$ denotes the total number of child branches.
\begin{align}
\mathrm{v_{\rho} = \frac{1}{m} \left[\sum_{k=1}^m \max_{ i,j \in c_k}\big(1-S_{\rho,i,j}\big)\right]} \label{eq:sl}
\end{align}
\textbf{Step 2 --} The variety scores, $\mathrm{v_{\rho}}$, from all branch points are summed to obtain the variety score of the task graph using Equation~\ref{eq:ft}.
\begin{align}
\mathrm{V = \sum_{\rho} v_{\rho}} \label{eq:ft}
\end{align}
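A direct Python transcription of Equations~\ref{eq:sl} and~\ref{eq:ft} follows; \texttt{children[rho]}, listing for each branch point \texttt{rho} the sets of task indices under each of its child branches, is our assumed data structure.
\begin{verbatim}
def variety_at(S, rho, children):
    """Variety at one branch point: mean over child branches of the
    most dissimilar task pair within each branch."""
    branches = children[rho]
    total = sum(max(1.0 - S[rho, i, j] for i in tasks for j in tasks)
                for tasks in branches)
    return total / len(branches)

def variety(S, children):
    """Variety of the whole task graph: sum over all branch points."""
    return sum(variety_at(S, rho, children) for rho in children)
\end{verbatim}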
Note that although we use \emph{affinity}, a similarity score, to quantify the similarity between two tasks, we use the \emph{variety} score, a measure of dissimilarity, to quantify a task graph's quality. This may seem counterintuitive, but it is analogous to the intra-cluster distance in clustering algorithms, which measures the impurity within a cluster.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.9]{figure/algo/variety.pdf}
\caption{Examples of task graphs having very high (left) and very low (right) variety scores.}
\label{fig:taskaffinitydifferent}
\end{figure}
\subsection{Tradeoff Analysis}
\label{sec:tradeoff}
Task graphs with low variety scores are generally desired, as the variety score tends to inversely correlate with inference accuracy. However, the lower the variety score of a task graph, the higher its time, energy, and storage overheads.
For example, the task graph in Figure~\ref{fig:taskaffinitydifferent} {(left)} has the highest variety score --- all tasks are essentially in one group. This is the most compact representation for any task set and has several benefits, such as the least storage requirement and the least time and energy overhead when switching tasks. However, since all tasks share nearly all layers, the likelihood of each individual task performing accurate inferences is low.
On the other hand, the task graph in Figure~\ref{fig:taskaffinitydifferent} {(right)} has the lowest variety score --- each task forms its own group and has the maximum time, energy, and storage overhead (as no blocks are shared); but since each task retains its own weights, the inference accuracy is likely to be relatively higher.
\begin{figure}[!htb]
\includegraphics[width=0.3\textwidth]{figure/algo/tradeoff.pdf}
\caption{Tradeoff between variety score and execution cost.}
\label{fig:algo1_tradeoff}
\end{figure}
\parlabel{Empirical Tradeoff Curve.} Figure~\ref{fig:algo1_tradeoff} shows this tradeoff using empirical data obtained from one of our experiments. We define five image classification tasks on the dataset~\cite{lecun1998gradient} and use a five-layer CNN having 2 convolutional and 3 fully-connected layers as the common network architecture. We generate all possible task graphs, compute their variety scores, estimate their execution costs, and record their model sizes.
To draw the tradeoff curve, we vary the maximum model size budget, and for each budget, we pick the task graph having the lowest variety score and whose size is within the budget. The variety score and the execution cost of that task graph are normalized and plotted on the Y-axis. Thus, we get trend lines for variety score and execution cost.
We observe that although an increased model size budget allows us to have a task graph with lower variety score, it comes at the cost of increased execution overhead. To balance these two opposing goals, Antler\xspace chooses the task graph that lies at the intersection of the two trend lines.
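A minimal sketch of this selection procedure follows; the tuple-based representation of candidate graphs is ours.
\begin{verbatim}
import numpy as np

def pick_tradeoff(graphs, budgets):
    """graphs: list of (variety, cost, size, graph) tuples."""
    variety, cost, chosen = [], [], []
    for b in budgets:
        within = [g for g in graphs if g[2] <= b]
        best = min(within, key=lambda g: g[0])  # lowest variety in budget
        variety.append(best[0]); cost.append(best[1]); chosen.append(best)
    v, c = np.array(variety), np.array(cost)
    v = (v - v.min()) / (np.ptp(v) or 1.0)      # normalized trend lines
    c = (c - c.min()) / (np.ptp(c) or 1.0)
    k = int(np.argmin(np.abs(v - c)))           # closest to intersection
    return chosen[k][3]
\end{verbatim}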
\subsection{Task Graph Generation Algorithm}
Given $\mathrm{n}$ individually trained neural networks having the same architecture, generating the task graph is a five-step process:
\textbf{Step 1 --} For each pair of tasks, their affinity score is computed at $\mathrm{D}$ branch points to obtain a $\mathrm{D \times n \times n}$ matrix.
\textbf{Step 2 --} The set of all task graphs containing $\mathrm{n}$ tasks, $\mathrm{G_T(n)}$, is generated through a recursive process. For every task graph $\mathrm{g \in G_T(n-1)}$, where $\mathrm{g}$ contains $\mathrm{n-1}$ tasks, we generate $\mathrm{\Lambda(g)}$ new task graphs, each containing $\mathrm{n}$ tasks, where $\mathrm{\Lambda(g)}$ denotes the number of non-leaf internal nodes of $\mathrm{g}$. This is because the $\mathrm{n}$-th task can only branch out from one of the non-leaf internal nodes of $\mathrm{g}$ (a simplified sketch of this recursion appears after Step 5 below).
\textbf{Step 3 --} For each task graph, $\mathrm{g \in G_T(n)}$, its variety score, model size, and execution cost are estimated. The variety score is obtained using Equation~\ref{eq:ft}. The execution cost is estimated from empirical measurements of the cost of executing each block of the common network architecture. Execution cost estimation also requires the optimal execution order of the tasks, which is obtained using the algorithm described in the next section.
\textbf{Step 4 --} The variety score vs. execution cost tradeoff curve is computed. The task graph, $\mathrm{\hat{g}}$ corresponding to the point where the two trend lines intersect is returned.
\textbf{Step 5 --} The selected task graph, $\mathrm{\hat{g}}$, is retrained using the multitask training method of~\cite{vandenhende2019branched}.
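The recursion in Step 2 can be sketched as follows. We represent a task graph as a nested Python list where each inner list is an internal node (a branch point) and each integer is a task leaf; for brevity, this sketch collapses the new task's private blocks into a single leaf.
\begin{verbatim}
def internal_count(g):
    """Lambda(g): the number of non-leaf internal nodes of g."""
    return 1 + sum(internal_count(c) for c in g if isinstance(c, list))

def attach(g, task, which, counter=None):
    """Copy of g with `task` branching out of internal node #which."""
    counter = counter if counter is not None else [0]
    idx, counter[0] = counter[0], counter[0] + 1
    children = [attach(c, task, which, counter) if isinstance(c, list)
                else c for c in g]
    if idx == which:
        children.append(task)
    return children

def all_graphs(n, depth=3):
    g = 0
    for _ in range(depth):        # single-task chain of `depth` blocks
        g = [g]
    graphs = [g]
    for t in range(1, n):         # Lambda(g) extensions per graph
        graphs = [attach(g, t, w)
                  for g in graphs for w in range(internal_count(g))]
    return graphs
\end{verbatim}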
\section{System Implementation}
\label{sec:systemdesign}
This section describes the hardware, the embedded system software, and the application development tool.
\subsection{Hardware}
\parlabel{Boards.} In order to implement and evaluate Antler\xspace, we use two hardware platforms: 1) a 16-bit TI MSP430FR5994-based custom-designed ultra-low-power system, and 2) a 32-bit ARM Cortex-M4/M7-based off-the-shelf Portenta H7. These two systems and their specifications are shown in Figure~\ref{fig:pcb} and Table~\ref{tab:hardwaretable}, respectively.
\begin{figure}[!htb]
\centering
\vspace{-10pt}
\subfloat[16-bit Custom]
{
\includegraphics[height = 1.6in]{figure/pcb/pcb_msp.png}
}
\subfloat[32-bit Portenta]
{
\includegraphics[height=1.6in]{figure/pcb/pcb_portenta.png}
}
\vspace{-5pt}
\caption{Hardware platforms.}
\vspace{-10pt}
\label{fig:pcb}
\end{figure}
\begin{table}[!htb]
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{Platform} & \textbf{Custom} & \textbf{STM32H747}\\
\hline
CPU & MSP430FR5994 & ARM Cortex-M4/M7 \\
& 16-bit, $\le$16MHz & 32-bit, $\le$480MHz \\
Memory & 8KB SRAM & 1MB SRAM\\
& 512KB+2MB FRAM & 2MB eFlash\\
Power & 1.8V - 3.6V & 3.3V\\
& 118 uA/MHz (active) & 100 mA \\
\hline
\end{tabular}
\caption{Hardware specification.}
\vspace{-20pt}
\label{tab:hardwaretable}
\end{table}
The rationale behind using two different platforms is to demonstrate Antler\xspace's performance as we vary the CPU (16-bit vs. 32-bit), external memory type (FRAM vs. flash), and application (audio vs. image). Since off-the-shelf 16-bit MCUs have very limited on-board FRAM, we design a custom PCB that contains a 16-bit MSP430FR5994 MCU and expands its non-volatile memory from 512KB to 2MB+ by provisioning extra slots to connect up to four 512KB external FRAMs.
For a fair comparison with YONO~\cite{yono}, we choose the same platform (STM32H747) in 32-bit experiments, enforce similar restrictions on the memory usage, and use the same core (M7).
\parlabel{Sensors.} Both systems have interfaces to connect a camera, a microphone, and an inertial measurement unit (IMU). We use an ultra-low-power camera (HM01B0~\cite{hm01b0}) that consumes <2mW at QVGA 30FPS for image sensing and a MAX4466~\cite{max4466} microphone for audio sensing.
\subsection{Embedded System Software}
All neural network inference-related software that runs on the target device is referred to as the embedded system software. One challenge here has been keeping Antler\xspace portable across different platforms such as the MSP430FR5994 and the STM32H747. To streamline this, we follow a two-step process:
First, we develop a Python-based tool to convert a TensorFlow model to a C header file containing the weights and the architecture of the neural network. A task-specific C header file is then generated for each task. All such task-specific C header files are combined (as dictated by the task graph) to generate a single compact header file containing the network weights, the common network architecture, and the task graph information.
Second, the header file generated above is combined with platform-specific C implementation of neural network modules. A complete C program is auto-generated, which executes the neural networks in the desired order. The code is cross-compiled to produce executable binary files for the target device. The tool supports sequential TensorFlow models and has a C library that implements dense layers, convolution, maxpooling, flattening, dropout, and leaky ReLU operations.
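The first step, converting a trained model into a C header, can be sketched as follows; the function name and the array-naming scheme are ours, and the real tool also emits architecture and task graph metadata, which we omit here.
\begin{verbatim}
import numpy as np

def export_header(model, name, path):
    """Dump a Keras model's weights as C float arrays."""
    lines = [f"// auto-generated weights for task '{name}'"]
    for li, layer in enumerate(model.layers):
        for wi, w in enumerate(layer.get_weights()):
            flat = np.asarray(w, dtype=np.float32).ravel()
            body = ", ".join(f"{v:.6f}f" for v in flat)
            lines.append(f"static const float {name}_l{li}_w{wi}"
                         f"[{flat.size}] = {{{body}}};")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
\end{verbatim}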
\subsection{Application Development Tool}
We provide a Python-based application development tool that takes a dataset and the tasks as the input, and produces a task graph and task execution order as the output. There are three steps to the process:
First, neural network instances are created and trained on the given dataset to obtain task-specific neural networks using TensorFlow. These TensorFlow models are auto-converted to C files for the target platform. The execution time of each layer of these networks is profiled by running them on the target platform. These measurements are used later in the task graph generation and selection process.
Second, the variety score vs. overhead tradeoff analysis is performed over the full range of model size budgets. The branch point parameter is set to 3. All task graphs within the budget range are generated, and their variety scores and execution overheads are recorded in a file. The task graph that balances variety score and execution overhead is selected by default. However, the developer may instead choose any task graph from the file that meets their application and system requirements.
Third, all tasks are retrained on the input dataset using a multitask learning algorithm~\cite{vandenhende2019branched}.
\section{Optimal Task Execution Order}
\label{sec:taskorder}
This section describes how Antler\xspace achieves the optimal task ordering for a given task graph that may or may not have precedence or conditional constraints. This problem is NP-complete and a proof is in Appendix~\ref{sec:appendix1}.
\subsection{Task Execution Order}
\parlabel{Significance.} Task graphs provide a compact representation of tasks that exposes the overlaps among them. These graphs do not explicitly impose any order for executing the tasks. We observe that for a given task graph with $\mathrm{n}$ tasks, not all $\mathrm{n!}$ permutations of their execution order cost the same. This is because different pairs of tasks in a task graph generally share different numbers of blocks.
\begin{figure}[!htb]
\centering
\includegraphics[scale=1]{figure/algo/ordering.pdf}
\caption{Switching cost is different for different task pairs.}
\label{fig:taskorderingexample}
\end{figure}
Consider the task graph with five tasks in Figure~\ref{fig:taskorderingexample} as an example. The cost of switching from one task to another is shown on the weighted complete graph on its right. For simplicity of illustration, we assume the cost of loading and executing each block is 1 unit. We observe that executing the tasks in the order: $\mathrm{\tau_2 \rightarrow \tau_1 \rightarrow \tau_3 \rightarrow \tau_5 \rightarrow \tau_4}$ incurs significantly higher overhead when compared to the optimal order: $\mathrm{\tau_1 \rightarrow \tau_5 \rightarrow \tau_2 \rightarrow \tau_3 \rightarrow \tau_4}$.
\parlabel{Cost Matrix.} The cost matrix $\mathrm{C_{n,n}}$ is an $\mathrm{n \times n}$ matrix, in which, each entry $\mathrm{c_{i,j}}$ denotes the additional cost of loading and executing task $\mathrm{\tau_j}$, given that the last executed task was $\mathrm{\tau_i}$. Note that the cost can be measured in terms of time or energy.
\begin{equation}
\mathrm{C_{n,n}} =
\begin{pmatrix}
0 & \mathrm{c_{1,2}} & \mathrm{c_{1,3}} & \cdots & \mathrm{c_{1,n}} \\
- & 0 & \mathrm{c_{2,3}} & \cdots & \mathrm{c_{2,n}} \\
- & - & 0 & \cdots & \mathrm{c_{3,n}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
- & - & - & \cdots & 0
\end{pmatrix}
\label{eq:costmatrix}
\end{equation}
The cost matrix is symmetric. Hence, only the upper or the lower triangular entries of the matrix are measured. These values are obtained empirically by measuring the time or energy overhead of switching between all pairs of tasks.
The cost matrix explains why task execution order matters. If all entries of the cost matrix were the same, the execution order of the tasks would not matter. This may only happen in extreme cases when tasks are too similar (i.e., they share all intermediate layers) or too different (i.e., they share nothing). The cost matrix is used to find the optimal execution order of the tasks.
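The following sketch shows how such a cost matrix can be derived from a task graph; \texttt{paths[t]}, the sequence of block ids from the root to task \texttt{t}'s head, and \texttt{block\_cost}, the measured per-block load-and-execute cost, are our assumed inputs.
\begin{verbatim}
def cost_matrix(paths, block_cost):
    n = len(paths)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            shared = 0
            for a, b in zip(paths[i], paths[j]):  # common prefix length
                if a != b:
                    break
                shared += 1
            # switching to tau_j pays only for its unshared blocks
            C[i][j] = sum(block_cost[b] for b in paths[j][shared:])
    return C
\end{verbatim}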
\subsection{Optimal Task Execution Order}
\label{sec:problemform}
Given a set of $\mathrm{n}$ tasks, $\mathrm{\tau=\{\tau_1,...,\tau_n \}}$ and cost matrix, $\mathrm{C_{n,n}}$, our goal is to find an optimal ordering of the tasks so that the total cost of executing the task set is minimized.
\parlabel{Mathematical Formulation.} We define a binary variable $\mathrm{x_{ij}}$ to denote whether a task switching happens from $\mathrm{\tau_i}$ to $\mathrm{\tau_j}$:
\begin{equation}
\mathrm{x_{i,j}} =
\begin{cases}
1, & \text{if}\ \text{ a task switch happens from } \mathrm{\tau_i} \text{ to } \mathrm{\tau_j}\\
0, & \text{otherwise}
\end{cases}
\end{equation}
The task ordering problem is formulated as the following integer linear programming problem:
\begin{equation*}
\begin{array}{lr@{}ll}
\text{minimize} & \mathrm{\displaystyle\sum_{i=1}^n\sum_{j=1,j\neq i}^n} & \mathrm{c_{i,j}x_{i,j}} & \\
\text{subject to} & \mathrm{\displaystyle\sum_{i=1,i\neq j}^n} & \mathrm{x_{i,j}} = 1 & \mathrm{j = 1 ,\cdots, n}\\
& \mathrm{\displaystyle\sum_{j=1,j\neq i}^n} & \mathrm{x_{i,j} = 1} & \mathrm{i = 1 ,\cdots, n}\\
& \mathrm{\displaystyle\sum_{i\in Z}\sum_{j\neq i,j\in Z}} & \mathrm{x_{i,j} \leq |Z| - 1} & \mathrm{\forall Z\subsetneq \{1 ,\dots, n\}, |Z|\geq 2}
\end{array}
\end{equation*}
Here, the objective function minimizes the overall task switching overhead for all tasks. The first two constraints ensure that each task is executed exactly once. The last constraint ensures that no subset of tasks can form a sub-tour, so the final solution is a single execution order and not a union of smaller sub-orders~\cite{papadimitriou1998combinatorial}.
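As an illustration, the formulation can be solved with an off-the-shelf ILP solver. The sketch below uses the PuLP library, replaces the exponential family of sub-tour constraints with the equivalent Miller-Tucker-Zemlin constraints, and appends a zero-cost dummy task so that the optimal tour induces a linear order; these are our modeling choices, not part of Antler\xspace itself.
\begin{verbatim}
import pulp

def order_tasks(C):
    n = len(C) + 1                      # last index is the dummy task
    c = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        for j in range(n - 1):
            c[i][j] = C[i][j]
    prob = pulp.LpProblem("ordering", pulp.LpMinimize)
    x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
         for i in range(n) for j in range(n) if i != j}
    u = {i: pulp.LpVariable(f"u_{i}", 1, n - 1) for i in range(n - 1)}
    prob += pulp.lpSum(c[i][j] * x[i, j] for (i, j) in x)
    for j in range(n):                  # enter every task exactly once
        prob += pulp.lpSum(x[i, j] for i in range(n) if i != j) == 1
    for i in range(n):                  # leave every task exactly once
        prob += pulp.lpSum(x[i, j] for j in range(n) if j != i) == 1
    for i in range(n - 1):              # MTZ sub-tour elimination
        for j in range(n - 1):
            if i != j:
                prob += u[i] - u[j] + (n - 1) * x[i, j] <= n - 2
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    nxt = {i: j for (i, j) in x if x[i, j].value() > 0.5}
    order, t = [], n - 1                # walk the tour from the dummy
    for _ in range(n - 1):
        t = nxt[t]
        order.append(t)
    return order
\end{verbatim}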
\subsection{Inter-Task Dependencies}
\label{sec:tspvariants}
We have thus far discussed multitask inference scenarios where every task in the task set is executed in some order. In many real-world systems, however, there are additional constraints on these tasks that affect their execution decisions. We categorize these into two broad classes: \emph{precedence} constraints and \emph{conditional} constraints.
\parlabel{Precedence Constraints.} These constraints dictate that certain tasks (prerequisites) must be executed prior to some other tasks (dependents). These constraints are static. They are determined at the design time of the classifiers. We express these constraints using directed edges on a graph as shown in Figure~\ref{fig:algo_top_sort}(a) where each node, $\mathrm{\tau_i}$ denotes a task and each edge, $\mathrm{(\tau_i, \tau_j)}$ denotes a precedence constraint such that $\mathrm{\tau_i}$ must finish before $\mathrm{\tau_j}$ starts.
\begin{figure}[!htb]
\centering
\subfloat[Precedence Constraint]
{
\includegraphics[width=1.64in]{figure/algo/constraint_a.pdf}
}%
\subfloat[Conditional Constraint]
{
\includegraphics[width=1.64in]{figure/algo/constraint_b.pdf}
}
\caption{Precedence and conditional constraints are expressed by directed edges. Unlike precedence constraints, conditional constraints affect the task switching cost.}
\label{fig:algo_top_sort}
\end{figure}
To account for the precedence constraints, we augment the optimization problem described in Section~\ref{sec:problemform} with additional constraints. We assume a given set of precedence constraints, $\mathrm{P}$ of tuples of tasks, $\mathrm{(\tau_i, \tau_j) \in P \subseteq |\tau| \times |\tau|}$, for which, the task, $\mathrm{\tau_j}$ must start after $\mathrm{\tau_i}$ finishes. For each task-pair, $\mathrm{(\tau_i, \tau_j)}$, we define the remaining execution time to finish $\mathrm{\tau_j}$, given that $\mathrm{\tau_i}$ has already finished, as $\mathrm{d_{i,j}}$. To formally incorporate precedence constraints, we define a new binary variable:
\begin{equation}
\mathrm{s_{i,t}} =
\begin{cases}
1, & \text{if}\ \mathrm{\tau_i} \text{ starts by time } \mathrm{t}\\
0, & \text{otherwise}
\end{cases}
\end{equation}
The following constraint ensures the inclusion of all precedence constraints:
\begin{equation}
\begin{array}{lll}
\mathrm{\displaystyle\sum_{t'\le t} s_{i,t'}} & \mathrm{\geq} & \mathrm{\displaystyle\sum_{t'\le t + d_{i,j}} s_{j,t'}}
\end{array}
\label{eq:pconstraint}
\end{equation}
\parlabel{Conditional Constraints.} These are a special type of precedence constraints where the decision to execute a dependent task depends on the outcome of a prerequisite task. These constraints manifest dynamically at runtime when a prerequisite task finishes and its inference result is available.
Conditional constraints are also represented by directed edges. However, since these constraints are dynamic, we model their effect on the task switching cost (which in turn affects the task ordering) using their probability of execution. We estimate this probability offline over a dataset by counting the fraction of the time a dependent task is executed after its prerequisite task finishes. We assume a given set of conditional constraints, $\mathrm{R}$ of triplets $\mathrm{(\tau_i, \tau_j, p_{i,j}) \in R \subseteq |\tau| \times |\tau| \times [0, 1]}$, where $\mathrm{(\tau_i, \tau_j) \in P}$ and $\mathrm{p_{i,j}}$ is the probability of executing $\mathrm{\tau_j}$ after $\mathrm{\tau_i}$ finishes. This probability is used to determine the expected cost of switching to a dependent task, as shown in Figure~\ref{fig:algo_top_sort}.
Since conditional constraints are a special type of precedence constraints, we include the same linear constraints as in Equation~\ref{eq:pconstraint} to account for them in the optimization problem.
\subsection{Solving the Optimization Problem}
\parlabel{Brute-force Solver.} In extremely resource-constrained systems, we expect a small number of inference tasks. In such cases, a brute-force solver suffices: it generates all possible permutations of the tasks, discards the permutations that violate precedence constraints, and selects the ordering that minimizes a fitness score. We define the \emph{fitness score} of each permutation that does not violate the precedence constraints as the sum of task switching overheads:
\begin{equation}
\begin{array}{ll}
\mathrm{f(\pi_1, \cdots , \pi_n)} &= \mathrm{\displaystyle\sum_{1 \le i < n} c_{\pi_i,\pi_{i+1}}}
\end{array}
\label{eq:fit1}
\end{equation}
where, $\mathrm{\pi_i}$ refers to the task that executes at position $\mathrm{i}$.
Specifically for conditional constraints, we adjust the fitness score to account for the probabilistic execution of dependent tasks by multiplying each switching cost by the corresponding execution probability:
\begin{equation}
\begin{array}{ll}
\mathrm{f(\pi_1, \cdots , \pi_n)} &= \mathrm{\displaystyle\sum_{1 \le i < n} p_{\pi_i,\pi_{i+1}}~c_{\pi_i,\pi_{i+1}}}
\end{array}
\label{eq:fit2}
\end{equation}
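The following sketch implements this brute-force search, covering both fitness variants; \texttt{P}, a set of $(i, j)$ precedence pairs, and \texttt{p}, a map from $(i, j)$ pairs to execution probabilities (defaulting to 1), are our assumed encodings.
\begin{verbatim}
from itertools import permutations

def respects(order, P):
    """True iff `order` satisfies every (i, j) precedence pair."""
    pos = {t: k for k, t in enumerate(order)}
    return all(pos[i] < pos[j] for i, j in P)

def fitness(order, C, p=None):
    """Expected total switching cost of an order."""
    p = p or {}
    return sum(p.get((a, b), 1.0) * C[a][b]
               for a, b in zip(order, order[1:]))

def best_order(n, C, P=(), p=None):
    feasible = (o for o in permutations(range(n)) if respects(o, P))
    return min(feasible, key=lambda o: fitness(o, C, p))
\end{verbatim}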
\parlabel{More Efficient Solver.} Although a brute-force solver is reasonably fast for a small number of tasks, it is repeatedly invoked during the task graph generation step -- once for each task graph as they are enumerated -- which as a whole takes significant time. Furthermore, the number of inference tasks running on low-resource systems could increase significantly as technology advances. Hence, we propose an efficient, scalable, genetic algorithm-based solver for the task ordering problem. The details of the algorithm are in Appendix~\ref{sec:appendix1}.
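Since the exact operators are deferred to the appendix, the sketch below shows only one plausible instantiation of such a solver (the population size, generation count, crossover, and mutation choices are ours); it reuses \texttt{respects} and \texttt{fitness} from the previous sketch.
\begin{verbatim}
import random

def ga_order(n, C, P=(), p=None, pop=50, gens=200, seed=1):
    rng = random.Random(seed)

    def score(o):   # penalize orders that violate precedence
        return fitness(o, C, p) + (0 if respects(o, P) else 1e9)

    population = []
    for _ in range(pop):
        o = list(range(n))
        rng.shuffle(o)
        population.append(tuple(o))
    for _ in range(gens):
        population.sort(key=score)
        survivors = population[:pop // 2]   # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)       # one-point ordered crossover
            child = list(a[:cut]) + [t for t in b if t not in a[:cut]]
            i, j = rng.sample(range(n), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(tuple(child))
        population = survivors + children
    return min(population, key=score)
\end{verbatim}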
\section{Evaluation}
\label{sec:eval}
This section describes two sets of experiments: evaluation of the proposed algorithms and end-to-end system performance.
\subsection{Experimental Setup}
\parlabel{Dataset and Network Architecture.} We use the datasets and network architectures used in the recent multitask inference literature for low-resource systems~\cite{nwv,yono,nws}. Table~\ref{table:network} provides a summary. The network architecture shown in the table (rightmost column) is used as the common network architecture in Antler\xspace, and each task on a dataset corresponds to recognizing one class. All datasets have 10 tasks, except for HHAR, which has 6. We use 80\% of the data for training and 20\% for testing.
\begin{table}[!htb]
\begin{tabular}{|l|l|l|}
\hline
\textbf{Modality} & \textbf{Dataset} & \textbf{Architecture} \\ \hline
\multirow[t]{6}{*}{Image} & MNIST~\cite{lecun1998gradient} & LeNet-5~\cite{lecun1998gradient} \\
& F-MNIST~\cite{xiao2017fashion} & LeNet-5~\cite{lecun1998gradient} \\
& CIFAR-10~\cite{krizhevsky2009learning} & DeepIoT~\cite{yao2017deepiot} \\
& SVHN~\cite{netzer2011reading} & Neuro.Zero~\cite{lee2019neuro} \\
& GTSRB~\cite{stallkamp2011german} & LeNet-4~\cite{lecun1995comparison} \\ \hline
\multirow[t]{3}{*}{Audio} & GSC-v2~\cite{warden2018speech} & KWS~\cite{chen2014small} \\
& ESC~\cite{piczak2015esc} & Mixup-CNN~\cite{zhang2018deep} \\
& US8K~\cite{salamon2014dataset} & TSCNN-DS~\cite{su2019environment}\\ \hline
IMU & HHAR~\cite{stisen2015smart} & DeepSense~\cite{yao2017deepsense} \\
\hline
\end{tabular}
\caption{Datasets and Network Architectures.}
\label{table:network}
\vspace{-20pt}
\end{table}
\parlabel{Baselines for Comparison.} We use four baselines for comparison: YONO~\cite{yono}, NWV~\cite{nwv}, NWS~\cite{nws}, and Vanilla. The first three~\cite{yono, nwv, nws} represent the state of the art. Vanilla refers to independently trained classifiers running sequentially on the system. We use NWV and NWS in both the 16-bit and 32-bit experiments. Since their source code is not available, we use our own implementations and cross-check against their reported results to ensure consistency. We use YONO only in the 32-bit experiments. Since our 32-bit hardware platform is identical to YONO's, we use the results reported in their paper~\cite{yono}.
\parlabel{Evaluation Platforms.} We use the two platforms described earlier (Section~\ref{sec:systemdesign}), i.e., a 16-bit MSP430FR5994 and a 32-bit STM32H747. Data samples are pre-loaded into the non-volatile memory, from where they are read and processed.
We measure the time and energy consumption by connecting a 100$\Omega$ resistor in series with the board and using Analog Discovery~\cite{discovery} to measure the voltage across the resistor.
\parlabel{Code Release.} All software and hardware designs will be made open source if this work is accepted.
\subsection{Algorithm and Parameter Analysis}
This section describes how parameters such as branch points, variety-overhead trade-off points, and dependency constraints affect the algorithms proposed in Sections~\ref{sec:taskgraphgeneration} and~\ref{sec:taskorder}.
\parlabel{Effect of Branch Point Choice.} We evaluate the effect of branch points by performing a sensitivity analysis of variety and execution cost, using execution time as the cost. We vary the number of branch points, $\mathrm{BP = \{3, 5, 7\}}$. The results are shown in Figure~\ref{fig:eval_bpnumber}. We observe that more branch points improve the variety score (lower is better) but worsen the overhead. This is because more branch points decompose and group tasks at a finer granularity, which causes more tasks to branch out at deeper layers and thus decreases task switching efficiency.
\begin{figure}[!thb]
\centering
\subfloat[Variety Score]
{
\includegraphics[width = 0.49\textwidth]{figure/eval/BPnumber_variety.pdf}
}
\subfloat[Execution Cost]
{
\includegraphics[width = 0.49\textwidth]{figure/eval/BPnumber_cost.pdf}
}
\caption{Effect of the number of branch points.}
\label{fig:eval_bpnumber}
\vspace{-15pt}
\end{figure}
\begin{figure}[!thbt]
\centering
\subfloat[Variety Score]
{
\includegraphics[width = 0.49\textwidth]{figure/eval/BPlocation_variety.pdf}
}
\subfloat[Execution Cost]
{
\includegraphics[width = 0.49\textwidth]{figure/eval/BPlocation_cost.pdf}
}
\vspace{-10pt}
\caption{Variety score vs. execution cost tradeoff.}
\vspace{-5pt}
\label{fig:eval_bplocation}
\end{figure}
\parlabel{Variety Score vs. Execution Cost Tradeoff.} Figure~\ref{fig:eval_bplocation} shows the tradeoff between variety score and execution cost for each dataset. We compare three network size budgets: the two extreme cases of minimum and maximum budget, and a tradeoff budget where the variety and cost trend lines intersect. We observe that a low budget favors execution cost, a high budget favors variety, and the tradeoff budget balances the two.
\begin{table}[!htb]
\resizebox{\linewidth}{!}{
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{Variant} & \textbf{Dataset} & \textbf{Nodes/Prec/Cond} & \textbf{Optimal} & \textbf{Antler\xspace} \\\hline
\multirow[t]{3}{*}{Regular} & FIVE & 5/0/0 & 19 & 19 \\
& P01 & 15/0/0 & 291 & 291 \\
& GR17 & 17/0/0 & 2085 & 2085 \\ \hline
\multirow[t]{5}{*}{Precedence} & ESC07 & 9/6/0 & 2125 & 2125 \\
&ESC11 & 13/3/0 & 2075 & 2075 \\
&br17.12 & 17/12/0 & 55 & 55 \\ \hline
\multirow[t]{3}{*}{Conditional} & ESC07 & 9/6/3 & 982 & 982 \\
&ESC11 & 13/3/3 & 1901 & 2000 \\
&ESC12 & 14/7/3 & 1398 & 1423 \\ \hline
\end{tabular}
}
\caption{Evaluation of genetic algorithm for task ordering.}
\vspace{-20pt}
\label{table:genetic_algorithm}
\end{table}
\parlabel{Performance of Genetic Algorithm.}
We evaluate the performance of the genetic algorithm for task ordering. For this, we use a popular public dataset for the Traveling Salesperson Problem (TSP), TSPLIB~\cite{tsplib}, and repurpose it for the task ordering problem. This dataset already contains test cases with precedence constraints. To include conditional constraints, we add weights to the edges of the graph. Table~\ref{table:genetic_algorithm} compares our results with the ground truth for all three variants of the task ordering problem. Our results are identical to the ground truth in all cases except a few conditional-constraint cases, where they deviate by up to 5\%.
\subsection{Comparison with Baseline Solutions}
\parlabel{Execution Time and Energy Cost.} We compare the execution time and energy consumption of Antler\xspace against the baselines in Figure~\ref{fig:eval_timeoverhead} and Figure~\ref{fig:eval_energyoverhead}, respectively. The Y-axis shows the total execution time (or energy) to execute all tasks for an input. We report results for both 16-bit and 32-bit systems.
We observe that while the general trend remains the same on both systems, execution on the STM32H747 is 100X faster. On both systems, Antler\xspace's execution time is the lowest. This is because Antler\xspace leverages the similarity of tasks and reuses intermediate results, which the baseline solutions do not, reducing the execution time by 2.3X -- 4.6X. Even though NWV and YONO perform complete in-memory inference and have zero switching overhead, they fall short of Antler\xspace because the cost of repeatedly executing shared subtasks is higher, especially when convolutional layers are involved. We observe a similar pattern in energy consumption. Overall, Antler\xspace saves 56\% -- 78\% energy compared to the baselines.
\begin{figure}[!htb]
\centering
\subfloat[16-bit MSP430FR5994]
{
\includegraphics[width = 0.49\textwidth]{figure/eval/overhead_time_msp.pdf}
}
\subfloat[32-bit STM32H747]
{
\includegraphics[width = 0.49\textwidth]{figure/eval/overhead_time_portenta.pdf}
}
\vspace{-5pt}
\caption{Comparison of execution time.}
\vspace{-15pt}
\label{fig:eval_timeoverhead}
\end{figure}
\begin{figure}[!htb]
\centering
\subfloat[16-bit MSP430FR5994]
{
\includegraphics[width = 0.49\textwidth]{figure/eval/overhead_energy_msp.pdf}
}
\subfloat[32-bit STM32H747]
{
\includegraphics[width = 0.49\textwidth]{figure/eval/overhead_energy_portenta.pdf}
}
\vspace{-5pt}
\caption{Comparison of energy consumption.}
\vspace{-15pt}
\label{fig:eval_energyoverhead}
\end{figure}
\parlabel{Breakdown of Time and Energy Overhead.}
We break down the total time and energy cost into two parts: the inference-only cost, which corresponds to the in-memory execution of the networks, and the switching overhead, which corresponds to loading weights from external memory. We compare Antler\xspace with Vanilla and NWS, since the other two (NWV and YONO) do not use external memory and thus have no switching cost. The results are shown in Figure~\ref{fig:eval_breakdown}.
The Y-values in Figures~\ref{fig:eval_breakdown}(a) and ~\ref{fig:eval_breakdown}(b) are averaged over all datasets. We observe that 32-bit STM32H747 has very little weight reloading overhead (the striped area on top of each bar is almost invisible) for both time and energy breakdown.
The time and energy cost related to weight reloading in NWS is also negligible, as it stores only around 7\% of the total weights in external memory.
Antler\xspace's time and energy cost related to weight-reloading is 54\%-56\% less than Vanilla.
\begin{figure}[!htb]
\centering
\subfloat[time breakdown]
{
\includegraphics[width = 0.22\textwidth]{figure/eval/breakdown_time.pdf}
}
\subfloat[energy breakdown]
{
\includegraphics[width = 0.22\textwidth]{figure/eval/breakdown_energy.pdf}
}
\caption{Time and energy cost breakdown.}
\vspace{-15pt}
\label{fig:eval_breakdown}
\end{figure}
\parlabel{Inference Accuracy.} We compare the inference accuracy of all systems in Figure~\ref{fig:eval_accuracy}. The accuracy is averaged over all tasks. Antler\xspace's inference accuracy is similar to that of YONO, NWS, and Vanilla within a margin of $\pm 3\%$ deviation. Recall that Antler\xspace's goal is to reduce the time and memory cost of inference while achieving high accuracy. In this case, all classifiers show reasonably high accuracy of over 90\%, except for NWV, whose accuracy does not scale with the number of tasks. YONO does not use the latter five datasets, and thus its accuracy could not be included in Figure~\ref{fig:eval_accuracy} for those datasets.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figure/eval/accuracy.pdf}
\vspace{-12pt}
\caption{Comparison of inference accuracy. }
\vspace{-7pt}
\label{fig:eval_accuracy}
\end{figure}
\parlabel{Memory Efficiency.}
We measure the total memory consumption of all tasks for each baseline and summarize the results in Table~\ref{table:memory}. We observe that Antler\xspace consumes more memory than NWS, NWV, and YONO. This is because NWV and YONO perform complete in-memory execution and are thus limited by the size of the RAM. Unlike them, Antler\xspace and NWS are able to utilize external memory and put no hard restriction on the total size of the tasks. Antler\xspace consumes significantly less memory than Vanilla, since it reduces memory consumption by exploiting task affinity.
\begin{table}[!htb]
\resizebox{\linewidth}{!}{
\begin{tabular}{|l|lllll|}
\hline
\textbf{System} & Vanilla & Antler & NWS & NWV & YONO \\ \hline
\textbf{Memory (KB)} & 1328 & 587 & 213 & 140 & 114 \\ \hline
\end{tabular}
}
\caption{Comparison of memory consumption.}
\vspace{-15pt}
\label{table:memory}
\end{table}
\section{Related Work}
\parlabel{Single Network Compression.} This class of algorithmic techniques refers to approaches that take one DNN at a time and compress it down to a desired size by employing a wide variety of methods such as knowledge distillation~\cite{chen2017learning}, low-rank factorization~\cite{sainath2013low}, pruning~\cite{guo2016dynamic}, quantization~\cite{han2015deep}, compression with structured matrices~\cite{cheng2015exploration}, network binarization~\cite{rastegari2016xnor}, and hashing~\cite{chen2015compressing}. The disadvantages of this class are: first, there is no cross-DNN knowledge sharing or joint compression that trains multiple DNNs together; second, since each DNN is compressed individually, possibly with a different compression method, the approach does not scale and forgoes the advantages of multitask learning; third, a heavily compressed DNN does not run proportionally faster, since most pruned parameters lie in the dense layers while the convolutional layers consume most of the computation time~\cite{han2015deep}.
Some DNN compression methods do not yield much benefit in terms of memory usage since the assignments of weights to connections have to be stored additionally. These include soft weight sharing approaches~\cite{ullrich2017soft} such as the Dirichlet process~\cite{roth2018bayesian}, k-means clustering~\cite{han2015deep}, and quantization~\cite{koksal2001weight}.
\parlabel{Multiple Networks Compression.} This class of algorithmic techniques compresses multiple DNNs together; e.g., PackNet~\cite{mallya2018packnet} compresses multiple DNNs into a single DNN by iteratively pruning redundant parameters to free up weights that can be used by other tasks. The number of DNNs that can participate, however, is limited: since a single network is maintained, the free weights fall short as the number of DNNs increases. Chou et al.~\cite{chou2018merging} propose a technique that merges DNNs by integrating convolutional layers. However, their technique works for two networks only, and it requires layer alignment for merging. Learn-them-all~\cite{kaiser2017one} trains a single network to deal with many tasks simultaneously. However, choosing a suitable architecture for learning all the tasks is generally hard a priori. Besides, it requires large amounts of training data of different types and from different sources, which is tedious to collect.
There exist multiple studies on sharing weights among a set of DNNs, e.g., MultiTask Zipping~\cite{he2018multi} combines DNNs for cross-model compression with layer-wise neuron sharing; Sub-Network Routing~\cite{ma2019snr} modularizes the shared layers into multiple layers of sub-networks; Cross-stitch Networks~\cite{misra2016cross} apply weight sharing~\cite{duong2015low} after the pooling and fully-connected layers of two DNNs. The scope and methods of weight sharing in these works are limited by the choice of network architecture and task type.
\parlabel{Most Relevant Works.} The three state-of-the-art systems most relevant to Antler\xspace are NWV~\cite{nwv}, NWS~\cite{nws}, and YONO~\cite{yono}. NWV and YONO propose complete in-memory execution of DNNs on memory-constrained systems. NWS extends NWV by allowing some of the high-significance weights into the flash memory to increase the accuracy of NWV; it essentially points out that completely in-memory packing and execution of DNNs on MCUs sacrifices accuracy. All three approaches fail to leverage the affinity among tasks and their dependencies, and thus repeatedly execute overlapping common subtasks, which significantly increases the time and energy cost of multitask inference; Antler\xspace avoids this waste.
\section{Conclusion}
We envision a future where a wide variety of ultra-low-power sensing and inference systems will sense and classify every aspect of our personal and physical world. To realize this vision, we need to significantly lower the time and energy cost of running multiple neural networks on low-resource systems while ensuring that their application-level performance does not degrade. To achieve this goal, we propose Antler\xspace, the first system that exploits the similarity among a set of machine learning tasks to identify overlapping substructures in them, which are combined and executed in an optimal order. Compared to the state-of-the-art multitask learners for low-resource systems, Antler\xspace reduces the execution time by 2.3X--4.6X and the energy overhead by 56\%--78\%.
\section*{Appendix}
\label{sec:appendix1}
\subsection{NP Completeness}
We prove that the task ordering problem on an arbitrary task graph is NP-complete. First, we show that the task ordering problem belongs to NP, following the corresponding proof for the traveling salesperson problem (TSP)~\cite{korte2011combinatorial}. Since any task can switch to any other task, a certificate is a tour that contains each task exactly once. Given such a tour, we can verify in polynomial time that it visits each task exactly once and that its total cost, i.e., the sum of the switching costs $\mathrm{c_{i,j}}$ over the edges $\mathrm{(\tau_i, \tau_j)}$ of the tour, is at most a given bound. Therefore, the task ordering problem is in NP.
Second, we show that the task ordering problem is NP-hard by reducing an instance of the Hamiltonian cycle problem~\cite{korte2011combinatorial} to it. We take an instance of the Hamiltonian cycle problem, $\mathrm{G(V, E)}$. From this, we construct an instance of the task ordering problem: a complete graph $\mathrm{G'(V, E')}$, where $\mathrm{E' = \{(u, v) \mid u, v \in V, u \ne v\}}$. Note that $\mathrm{G'}$ is not a task graph but rather a complete graph whose nodes are the tasks from the given task graph and whose edge weights are the costs of switching between tasks. We define a cost function as:
\begin{equation}
\mathrm{\gamma(u, v)} =
\begin{cases}
0, & \text{if}\ \mathrm{(u, v) \in E} \\
1, & \text{otherwise}
\end{cases}
\end{equation}
Using the cost function above, we can argue that if a Hamiltonian cycle exists in $\mathrm{G}$, that cycle has cost 0 in $\mathrm{G'}$ by construction. In other words, if $\mathrm{G}$ has a Hamiltonian cycle, then there is an ordering of tasks with 0 overhead.
Conversely, assume that $\mathrm{G'}$ has a tour (i.e., an ordering of tasks) of cost at most 0. Since edge costs in $\mathrm{E'}$ are 0 or 1, each edge on the tour (i.e., each task switching overhead in the chosen ordering) must have cost 0. Therefore, the tour contains only edges in $\mathrm{E}$.
This proves that $\mathrm{G}$ has a Hamiltonian cycle if and only if $\mathrm{G'}$ admits an ordering of tasks with at most 0 overhead.
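The reduction is constructive and, for small instances, can be checked mechanically. Below is a minimal sketch (in Python; the function name and the brute-force search are our illustrative additions, not part of the proof) that builds the cost function $\mathrm{\gamma}$ on the complete graph $\mathrm{G'}$ and decides Hamiltonicity of $\mathrm{G}$ by searching for a zero-cost tour:
\begin{verbatim}
from itertools import permutations

def hamiltonian_cycle_exists(n, edges):
    # Nodes are 0..n-1; `edges` is a collection of undirected pairs of G.
    E = {frozenset(e) for e in edges}
    # Cost function gamma on the complete graph G':
    # 0 on edges of G, 1 otherwise.
    gamma = lambda u, v: 0 if frozenset((u, v)) in E else 1
    # Brute force over tours (cyclic orderings); fix task 0 as the start.
    for tail in permutations(range(1, n)):
        tour = (0,) + tail
        cost = sum(gamma(tour[i], tour[(i + 1) % n]) for i in range(n))
        if cost == 0:  # a zero-cost tour <=> a Hamiltonian cycle in G
            return True
    return False

# The 4-cycle has a Hamiltonian cycle; the star K_{1,3} does not.
print(hamiltonian_cycle_exists(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(hamiltonian_cycle_exists(4, [(0, 1), (0, 2), (0, 3)]))          # False
\end{verbatim}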
\subsection{Genetic Algorithm Solver}
For the general case, especially when the number of tasks is too large for the brute-force solver, we propose a genetic algorithm to solve the optimal task ordering problem. A genetic algorithm is an evolutionary search method that provides optimal or near-optimal results for many combinatorial optimization problems~\cite{GA_ahmed2001,GA_moon2001,GA_yun2011,GA_sung2014,GA_rashid2018}. The advantage of adopting a genetic algorithm is that the same solution framework can be customized to solve all cases of the optimal task ordering problem, i.e., with and without precedence and conditional constraints.
The algorithm begins with a set of individuals, or candidate solutions, called a population. We define the $\mathrm{j}$-th individual as $\mathrm{\pi^j = (\pi_1^j, \pi_2^j, \dots, \pi_n^j)}$, where $\mathrm{\pi_i^j}$ is the task that executes at the $\mathrm{i}$-th position. We also define the fitness of each individual using Equation~\ref{eq:fit1} (or Equation~\ref{eq:fit2} for conditional constraints). At each round of the algorithm, we select the best $\mathrm{K}$ pairs of individuals based on their fitness scores; for each pair, we randomly choose a crossover point, $\mathrm{k \in \{1, 2, \cdots, n\}}$, and swap the first $\mathrm{k}$ elements of the pair to generate their offspring; for each offspring, we perform mutation by swapping the values at two randomly chosen indices, $\{m_1, m_2\}, m_1, m_2 \in \{1, 2, \dots, n\}$; and finally, we discard all individuals that are not a valid ordering. This whole process is repeated until the fitness score of the best solution no longer improves.
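A minimal sketch of this solver is given below (Python). The population size, the number $\mathrm{K}$ of selected pairs, and the round count are illustrative placeholders, a fixed round budget stands in for the stopping rule above, and the fitness of an ordering is taken to be its total switching cost (lower is better), which matches the unconstrained case:
\begin{verbatim}
import random

def ga_task_order(n, switch_cost, pop=40, K=10, rounds=200, seed=0):
    # switch_cost[i][j] is the cost c_{i,j} of switching from task i to j.
    rng = random.Random(seed)
    def cost(order):  # total switching cost of one ordering (the fitness)
        return sum(switch_cost[order[i]][order[i + 1]] for i in range(n - 1))
    population = [rng.sample(range(n), n) for _ in range(pop)]
    best = min(population, key=cost)
    for _ in range(rounds):
        parents = sorted(population, key=cost)[:2 * K]
        offspring = []
        for a, b in zip(parents[0::2], parents[1::2]):
            k = rng.randrange(1, n)  # crossover point
            for child in (b[:k] + a[k:], a[:k] + b[k:]):  # swap prefixes
                i, j = rng.randrange(n), rng.randrange(n)  # mutation
                child[i], child[j] = child[j], child[i]
                if len(set(child)) == n:  # discard invalid orderings
                    offspring.append(child)
        population = parents + offspring
        best = min(population + [best], key=cost)
    return best, cost(best)
\end{verbatim}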
\section{Overview of Antler\xspace}
Antler\xspace is a tool for developing efficient multitask deep learning models for low-resource systems that have extreme CPU, memory, and energy constraints. This section provides an overview of Antler\xspace, deferring its technical details to later sections.
\subsection{Input and Preprocessing}
\parlabel{Tasks.} Antler\xspace takes a set of inference tasks defined over a domain as the input. Formally, for a given set of tasks, $\mathrm{\tau = \{\tau_1(X, y_1), \tau_2(X, y_2),}$ $\mathrm{\cdots, \tau_n(X, y_n)\}}$ defined over the domain $\mathrm{X}$, each task $\mathrm{\tau_i}$ maps a sample, $\mathrm{x_j \in X}$ to a class label, $\mathrm{y_{(j,i)}}$, where $\mathrm{y_i = [y_{(1, i)}, \cdots y_{(n, i)}]}$.
For example, over an audio dataset, $\mathrm{X}$, we can define three tasks: a speaker recognition task ($\mathrm{\tau_1}$), a speech recognition task ($\mathrm{\tau_2}$), and an emotion classification task ($\mathrm{\tau_3}$). For each audio clip, $\mathrm{x \in X}$ these three classifiers output the class labels for the speaker, the speech, and the emotion, respectively.
\parlabel{Preprocessing.} For each task, Antler\xspace instantiates a neural network. Antler\xspace uses a common network architecture for all tasks, which is trained on the dataset $\mathrm{(X, y_i)}$ to instantiate the neural network corresponding to each task $\mathrm{\tau_i}$. A common network architecture is necessary for Antler\xspace since its ultimate goal is to form a multitask neural network in which two or more networks share one or more of their layers. The common network architecture is obtained by running a network architecture search~\cite{elsken2019neural} that empirically optimizes the accuracy of all networks. To speed up the search, we start from a library of popular neural network architectures from the literature~\cite{nws, nwv, yono} and run a hyper-parameter search to obtain the common network architecture that maximizes the minimum accuracy over all tasks. Figure~\ref{fig:sys_overview}(a) shows the network instances (one for each task), which have identical architecture but different weights.
\subsection{Task Graph Generation}
\parlabel{Task Graph.} Tasks in a multitask learning scenario share parts of their networks to influence each other during their joint training. Typically, tasks share their first few layers, since layers closer to the input tend to encode simpler basic patterns that are the building blocks for similar inference tasks. For example, the early layers of an audio classifier for human voice encode phonemes and morphemes that are building blocks for downstream tasks such as keyword spotting, speech recognition, and sentiment analysis.
Different pairs of tasks may share different numbers of layers depending on how similar the tasks are. In Antler\xspace, this is represented by a tree-like structure, which we call a \emph{task graph}, as shown in Figure~\ref{fig:sys_overview}(b). Each rectangular box in the figure represents a \emph{block}, which consists of one or more layers. A path from the root (i.e., the leftmost block) to a leaf (i.e., one of the rightmost blocks) corresponds to one neural network inference task. Notice that a block may be shared by two or more neural networks.
\parlabel{Task Graph Generation.} Antler\xspace analyzes the \emph{affinity} between the network instances to form a task graph that balances the trade-off between having a task graph that is too compact vs. having a task graph that has little to no overlap between tasks.
Compact task graphs are generally desirable since they require less storage, save time and energy by avoiding repeated computations, and enjoy the benefits of multitask learning, such as reduced overfitting and knowledge transfer, which are facilitated by the shared network structures. Compact task graphs, however, generally have less network capacity due to their fewer parameters, which limits their ability to accurately classify large and complex data. Section~\ref{sec:taskgraphgeneration} describes how Antler\xspace finds an optimal task graph that balances these two opposing forces.
Once a task graph architecture is decided, all tasks are re-trained using a standard multitask learning algorithm~\cite{vandenhende2019branched}.
\subsection{Task Execution Order}
\parlabel{Task Graph Execution Process.} In memory-constrained systems, neural networks are executed progressively in multiple stages. Depending on the size of the main memory, network weights and parameters corresponding to one or more layers are brought into the main memory from the non-volatile storage (e.g., flash) prior to the execution of those layers. Hence, the total cost of executing the tasks in a task graph depends not only on the number of blocks the task graph contains but also on how often each block is brought into the main memory for execution.
In Antler\xspace, all tasks have the same network architecture. When the system starts up, a region of RAM of the size of the common network architecture is statically allocated. At runtime, prior to the execution of each task $\tau_i$, blocks containing its weights and parameters are loaded into the main memory to initialize the common network architecture, which is then executed to obtain the inference result. Because tasks in Antler\xspace can share blocks, Antler\xspace skips loading the blocks that are already in the main memory in order to reduce the read/write overhead. Additionally, intermediate results after executing each block are cached in memory buffers (one buffer after each block) to avoid repeated computation of blocks.
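The following sketch (Python; \texttt{load\_block} and \texttt{run\_block} are hypothetical primitives standing in for the flash read and the layer execution, and the unbounded \texttt{resident} set ignores eviction) illustrates this skip-if-resident policy:
\begin{verbatim}
def run_task(task_blocks, resident, cache, load_block, run_block, x):
    # task_blocks: root-to-leaf list of block ids for one task.
    # resident: set of block ids currently in RAM (shared across tasks).
    # cache: block id -> buffered output; reset it for every new input x.
    out = x
    for b in task_blocks:
        if b in cache:            # block already executed for a prior task
            out = cache[b]
            continue
        if b not in resident:     # load weights only if not already in RAM
            load_block(b)
            resident.add(b)
        out = run_block(b, out)
        cache[b] = out            # buffer the intermediate result
    return out
\end{verbatim}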
\parlabel{Optimal Task Execution Order.} Since in-memory blocks are not reloaded or re-executed if the next task needs them, and since different pairs of tasks generally share different numbers of blocks, the order in which tasks are executed affects the total cost of executing them. In Figure~\ref{fig:sys_overview}(c), the overhead of switching from one task to another is represented by a weighted complete graph whose nodes represent tasks and whose edge weights $c_{i,j}$ represent the cost of switching between tasks. Finding the least-cost ordering of the tasks is therefore equivalent to finding a least-cost Hamiltonian cycle (shown with arrows) on this graph~\cite{papadimitriou1998combinatorial}.
Furthermore, tasks may have precedence constraints and conditional dependencies between them. These add additional constraints on their execution order. Section~\ref{sec:taskorder} describes how Antler\xspace finds an optimal ordering of tasks for a given task graph where tasks may have ordering constraints.
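For a small number of tasks, such a least-cost ordering can be found by exhaustive search. The sketch below (Python; the linear, non-cyclic cost and the precedence check are our illustrative assumptions) makes the problem concrete; the genetic algorithm described in the appendix replaces this brute force when the number of tasks grows:
\begin{verbatim}
from itertools import permutations

def best_order(n, c, precede=()):
    # c[i][j] is the switching cost c_{i,j};
    # precede holds pairs (i, j) meaning task i must run before task j.
    def valid(order):
        pos = {t: k for k, t in enumerate(order)}
        return all(pos[i] < pos[j] for i, j in precede)
    feasible = (o for o in permutations(range(n)) if valid(o))
    return min(feasible,
               key=lambda o: sum(c[o[k]][o[k + 1]] for k in range(n - 1)))
\end{verbatim}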
|
{
"arxiv_id": "2302.13193",
"language": "en",
"timestamp": "2023-02-28T02:12:18",
"url": "https://arxiv.org/abs/2302.13193",
"yymm": "2302"
} | \section{Introduction}
We first introduce the problem of exceptional set estimate in the Euclidean space.
Let $G(k,\mathbb{R}^n)$ be the set of $k$-dimensional subspaces in $\mathbb{R}^n$ and $A(k,\mathbb{R}^n)$ be the set of $k$-dimensional affine subspaces in $\mathbb{R}^n$. For $W\in G(k,\mathbb{R}^n)$, define $\Pi_W:\mathbb R^n\rightarrow W$ to be the orthogonal projection onto $W$. Marstrand's projection theorem then states that the projection maps $\Pi_W$ preserve Hausdorff dimension of Borel sets for almost every $W\in G(k,\mathbb{R}^n)$. More precisely, given $A\subset \mathbb{R}^n$, then
\[ \dim(\Pi_W(A))=\min\{k,\dim(A)\} \]
for almost every $W\in G(k,\mathbb{R}^n)$.
The problem of exceptional set estimates is to find the set of $W$ such that the equation above fails. We fix a parameter $0<s<\min\{k,\dim(A)\}$, and define the exceptional set
\begin{equation}\label{exset}
E_{s}(A):=\{W\in G(k,\mathbb{R}^n): \dim(\Pi_W(A))<s\}.
\end{equation}
\begin{remark}
\rm{We remark that $E_s(A)$ also depends on $k$, but we omit it as $k$ is clear from the context.}
\end{remark}
There are three known types of exceptional set estimates for the orthogonal projections $\{\Pi_W: W\in G(k,\mathbb{R}^n)\}$. For simplicity, we denote $\dim(A)=a$.
The first one is the Kaufman-type estimate (\cite{kaufman1968hausdorff}):
\begin{equation}\label{1}
\dim (E_s(A))\le k(n-k)+s-k.
\end{equation}
The second one is the Falconer-type estimate (\cite{falconer1982hausdorff}):
\begin{equation}\label{2}
\dim (E_s(A))\le \max\{k(n-k)+s-a,0\}.
\end{equation}
The third one is due to He (\cite{he2020orthogonal}):
\begin{equation}\label{3}
\dim(E_{\frac{k}{n}a}(A))\le k(n-k)-1.
\end{equation}
The goal of this paper is to study exceptional set estimates over finite fields, $\mathbb F_p^n$. Since there is no notion of ``orthogonality" in finite fields, we may define the orthogonal projections in another way. We first make an observation. Recall that for $W\in G(k,\mathbb{R}^n)$, $\Pi_W: \mathbb{R}^n\rightarrow W$ is the orthogonal projection onto $W$. Let $V=W^\perp\in G(n-k,\mathbb{R}^n)$. We define
\begin{equation}
\pi_V: \mathbb{R}^n\rightarrow A(n-k,\mathbb{R}^n),
\end{equation}
so that $\pi_V(x)$ is the unique $(n-k)$-dimensional space parallel to $V$ and passing through $x$. We see that
$\Pi_W(A)$ and $\pi_V(A)$ are identical. This motivates the definition of projection in finite fields.
\begin{definition}\label{defpiv}
Let $\mathbb F_p$ be a prime field. Denote the $k$-dimensional subspaces and $k$-dimensional affine subspaces of $\mathbb{F}_p^n$ by $G(k,\mathbb F^n_p)$ and $A(k,\mathbb{F}_p^n)$, respectively. For $V\in G(k,\mathbb{F}_p^n)$, define
\begin{equation}\label{defpiveq}
\pi_V: \mathbb{F}_p^n\rightarrow A(k,\mathbb{F}_p^n),
\end{equation}
so that $\pi_V(x)$ is the unique element in $A(k,\mathbb{F}_p^n)$ that is parallel to $V$ and passes through $x$.
\end{definition}
\begin{definition}[Exceptional set in finite field]
For $A\subset \mathbb{F}_p^n$ and a number $s>0$, we define the $s$-exceptional set for projection to $k$-planes of $A$ to be
\begin{equation}\label{defex}
E_s(A):=\{ V\in G(n-k,\mathbb{F}_p^n): \#\pi_V(A)<p^s \}.
\end{equation}
\end{definition}
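For small $p$ and $n$, these definitions can be checked by direct computation. The sketch below (Python; our illustrative code, with a subspace described by a spanning set of vectors) counts $\#\pi_V(A)$ as the number of cosets $x+V$ that meet $A$:
\begin{verbatim}
from itertools import product

def span(basis, p):
    # All points of the subspace of F_p^n spanned by `basis`.
    n = len(basis[0])
    pts = set()
    for coeffs in product(range(p), repeat=len(basis)):
        pts.add(tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % p
                      for i in range(n)))
    return pts

def num_projections(A, V_basis, p):
    # #pi_V(A): the number of cosets x + V that meet A.
    V = span(V_basis, p)
    cosets = {frozenset(tuple((x[i] + v[i]) % p for i in range(len(x)))
                        for v in V)
              for x in A}
    return len(cosets)

# Example in F_5^2: project the parabola {(t, t^2)} along V = span{(1,0)}.
p = 5
A = [(t, t * t % p) for t in range(p)]
print(num_projections(A, [(1, 0)], p))  # 3, the number of squares mod 5
\end{verbatim}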
We state our main theorem.
\begin{theorem}\label{mainthm}
Let $A\subset \mathbb{F}_p^n$ be a set with $\#A=p^a$ $(0<a<n)$. For $s\in (0,\min\{k,a\})$, define
\[E_s(A):=\{ V\in G(n-k,\mathbb{F}_p^n): \#\pi_V(A)<p^s \}. \]
Then for $s<\frac{a+2k-n}{2}$, we have
\begin{equation}\label{mainthmest}
\# E_s(A)\le C_{n,k,a,s}\cdot\log p\cdot p^{t(a,s)},
\end{equation}
where $t(a,s)=\max\{k(n-k)+2(s-a), (k-1)(n-k)\}$. Here, $C_{n,k,a,s}$ is a constant that may depend on $n,k,a,s$, but not depend on $p$.
\end{theorem}
\begin{remark}
\rm
We use the same notation $E_s(A)$ for the exceptional set in both the Euclidean and the finite field setting; this should not cause any confusion.
The numerology of our estimate is new for a certain range of $a$ and $s$, compared with \eqref{1}, \eqref{2} and \eqref{3}. Though we prove the estimate in finite fields, it is reasonable to ask whether the analogue holds in $\mathbb{R}^n$: For $\dim(A)=a$ and $s<\frac{a+2k-n}{2}$, do we have
\[ \dim(E_s(A))\le \max\{k(n-k)+2(s-a), (k-1)(n-k)\}? \]
However, it is hard to generalize our proof to the Euclidean space. The main obstacle we encounter is as follows. Let $A\subset \mathbb{R}^n$ with $\dim(A)=a$. Let $\{V_t\}_{t\in \mathbb{R}}$ be the one-parameter family of $(n-1)$-planes, where each $V_t$ is orthogonal to $(0,\dots,0,1)$ and intersects the $x_n$-axis at $(0,\dots,0,t)$. Set $A_t=A\cap V_t$. If everything were finite, then $\#A=\sum_t \#A_t$,
which implies there exists $1\le M\le \#A$ such that
\[ \#A \lesssim \log(\#A)\cdot M\cdot\#\{t: \#A_t\sim M\}.\]
In the continuous setting, we hope there exists $\beta\in(0,a]$ such that
\[ \dim(A)\le \beta+\dim(\{ t: \dim(A_t)\ge\beta \}). \]
This roughly says that if $A$ is big, then we can find many big slices $\{A_t\}$ of $A$. If we replace ``$\le$" by ``$\ge$" in the inequality above, then it is always true. However, it may fail in the stated direction: there exists a set $A$ with $\dim(A)=n$ but $\dim(A_t)=0$ for all $t$. The failure of this Fubini-type argument is the main obstacle to generalizing our theorem to $\mathbb{R}^n$.
\end{remark}
We now describe the structure of the paper. In Section \ref{section2}, we study some examples in $\mathbb{R}^3$. In Section \ref{section3}, we briefly review the Fourier transform in finite fields. In Section \ref{section4}, we prove Theorem \ref{mainthm}.
\bigskip
\begin{sloppypar}
\noindent {\bf Acknowledgement.}
We would like to thank Prof. Larry Guth for numerous helpful discussions over the course of this project.
\end{sloppypar}
\section{Some examples of the exceptional sets}\label{section2}
We discuss some examples of the exceptional sets. We begin with the conjecture of the exceptional set estimate in $\mathbb{R}^2$.
\begin{conjecture}\label{conj1}
Let $A\subset \mathbb{R}^2$ with $\dim(A)=a$. For $\theta\in G(1,\mathbb{R}^2)$, let $\Pi_\theta: \mathbb{R}^2\rightarrow \theta$ be the orthogonal projection onto the line $\theta$. For $0<s<\min\{1,a\}$, define $E_s(A):=\{\theta: \dim(\Pi_\theta(A))<s\}$. Then
\[\dim(E_s(A))\le \max\{0,2s-a\}.\]
\end{conjecture}
Of course, we can also ask the question in $\mathbb{F}_p^2$:
\begin{conjecture}\label{conj1.2}
Let $A\subset \mathbb{F}_p^2$ with $\#A=p^a$. For $0<s<\min\{1,a\}$, define $ E_s(A)$ as in \eqref{defex} with $n=2,k=1$. Then
\[\#E_s(A) \le C_{\epsilon,a,s} p^{\epsilon+\max\{0,2s-a\}}.\]
\end{conjecture}
Actually, there exists $\mathbf{A}_{s,a}\subset \mathbb{R}^2$ (with $\dim(\mathbf{A}_{s,a})=a$) such that $\mathbf{E}_{s,a}:=E_s(\mathbf{A}_{s,a})$ has Hausdorff dimension $\ge 2s-a$.
This will serve as a sharp example if Conjecture \ref{conj1} is true. For this example, see Section 5.4 in \cite{mattila2015fourier}. We would like to explain a construction of this example in the finite setting. It is exactly the sharp example of the Szemer\'edi-Trotter theorem.
\begin{example}\label{ex1}
{\rm
Let $N$ be a large number. We assume $2s\ge a$. Consider a set of lines
$\mathcal{L}=\{l_{k,m}: |k|\le N^{2s-a}, |m|\le 10 N^s\}$ in $\mathbb{R}^2$. Here, $k$ and $m$ are integers and $l_{k,m}$ is given by
\[ l_{k,m}: y=kx+m. \]
We see that $\mathcal{L}$ consists of lines from $\sim N^{2s-a}$ many directions, and in each of these directions there are $\sim N^s$ many lines. We denote these directions by $E$, and for $\theta\in E$, let $\mathcal{L}_\theta$ be the lines in $\mathcal{L}$ that are in direction $\theta$.
Consider the set $A:=\{(x,y)\in \mathbb Z^2: |x|\le N^{a-s}, |y|\le N^s\}$. For any $|k|\le N^{2s-a}$, we see that any $(x,y)\in A$ satisfies $|y-kx|\le 10N^s$. This means that for any direction $\theta\in E$, $A$ is covered by $\mathcal{L}_\theta$. Therefore we have for each $\theta\in E$,
\[ \#\pi_\theta(A)\le \#\mathcal{L}_\theta\lesssim N^s. \] We obtain the following estimate:
\[ \#\{\theta: \#\pi_\theta(A)\lesssim N^s\}\ge \#E\sim N^{2s-a}. \]
Such $A$ and $E$ are like the discrete analogue of $\mathbf{A}_{s,a}$ and $\mathbf{E}_{s,a}$.
}
\end{example}
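Example \ref{ex1} is easy to verify numerically for concrete parameters. In the sketch below (Python; the values of $N$, $a$, $s$ are illustrative and satisfy $2s\ge a$), projection along the direction of slope $k$ is encoded by the intercept map $(x,y)\mapsto y-kx$, so $\#\pi_\theta(A)$ is the number of distinct intercepts:
\begin{verbatim}
N, a, s = 10, 1.5, 1.0  # illustrative parameters with 2s >= a
K = int(N ** (2 * s - a))
A = [(x, y) for x in range(-int(N ** (a - s)), int(N ** (a - s)) + 1)
            for y in range(-int(N ** s), int(N ** s) + 1)]
for k in range(-K, K + 1):  # the ~N^(2s-a) directions in E
    intercepts = {y - k * x for (x, y) in A}  # image of the projection
    assert len(intercepts) <= 10 * int(N ** s)  # ~N^s lines cover A
print("max #pi_theta(A) over E:",
      max(len({y - k * x for (x, y) in A}) for k in range(-K, K + 1)))
\end{verbatim}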
\medskip
For simplicity, we denote
\[ T(a,s):=\sup_{\dim(A)=a} \dim(E_s(A)), \]
i.e., $t=T(a,s)$ is the optimal number such that
$\dim(E_s(A))\le t$ holds for every $A$ with $\dim(A)=a$. We will construct examples to obtain lower bounds for $T(a,s)$.
Next, we will focus on the projection to lines and planes in $\mathbb{R}^3$.
\begin{proposition}[Projections to lines in $\mathbb{R}^3$]\label{prop1}
Consider the projection to lines in $\mathbb{R}^3$, so $E_s(A)$ is given by \eqref{exset} with $k=1, n=3$. We have
\begin{enumerate}
\item When $a\le 1:$ $T(a,s)\ge 1$;
\item When $1<a\le 2:$
\begin{equation*}
T(a,s)\ge\begin{cases}
0 & s\le \frac{a-1}{2}\\
1+2s-a & \frac{a-1}{2}<s\le a-1\\
1 & a-1<s<1
\end{cases}
\end{equation*}
\item When $2<a\le 3:$
\begin{equation*}
T(a,s)\ge\begin{cases}
0 & s\le \frac{a-1}{2}\\
1+2s-a & \frac{a-1}{2}<s<1
\end{cases}
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proposition}[Projections to planes in $\mathbb{R}^3$]\label{prop2}
Consider the projection to planes in $\mathbb{R}^3$, so $E_s(A)$ is given by \eqref{exset} with $k=2, n=3$. We have
\begin{enumerate}
\item When $a\le 1:$ $T(a,s)\ge \max\{0,2s-a\}$;
\item When $1<a\le 2:$
\begin{equation*}
T(a,s)\ge\begin{cases}
0 & s\le \frac{a}{2}\\
2s-a & \frac{a}{2}<s\le 1\\
1 & 1<s\le\frac{a+1}{2}\\
2s-a & \frac{a+1}{2}<s<a
\end{cases}
\end{equation*}
\item When $2<a\le 3:$
\begin{equation*}
T(a,s)\ge\begin{cases}
0 & s\le a-1\\
1 & a-1<s\le \frac{a+1}{2}\\
2s-a & \frac{a+1}{2}<s<2
\end{cases}
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop1}]\hfill
\begin{enumerate}
\item When $a\le 1$, let $A$ be contained in the $x_3$-axis. We see that for any line $\ell$ that is parallel to the $(x_1,x_2)$-plane, the projection of $A$ onto $\ell$ has dimension $0(<s)$. Therefore $\dim(E_s(A))\ge 1$.
\item Consider the case $1<a\le 2$.
$\bullet$ When $\frac{a-1}{2}<s< a-1$, we choose $A=\mathbf{A}_{s,a-1}\times \mathbb{R}$. This looks like the left picture in Figure \ref{projectiontoplane}. We only look at those directions $\theta$ that are parallel to the $(x_1,x_2)$-plane. This is just the problem in $\mathbb{R}^2$. We see that $E_s(A)\supset \mathbf E_{s,a-1}$, so
\[ \dim(E_s(A))\ge \max\{0,2s-(a-1)\}=\max\{0,1+2s-a\}. \]
$\bullet$ When $a-1<s<1$, we choose $A=A'\times \mathbb{R}$, where $A'\subset \mathbb{R}^2$ is any set with $\dim(A')=a-1$.
We see that if $\theta$ is parallel to the $(x_1,x_2)$-plane, then $\Pi_\theta(A)=\Pi_\theta(A')$ has dimension less than $\dim(A')=a-1<s$. Therefore,
\[\dim(E_s(A))\ge 1.\]
\item Consider the case $2<a\le 3$. When $\frac{a-1}{2}<s<1$, we choose $A=\mathbf{A}_{s,a-1}\times \mathbb{R}$ (then $\dim(A)=\dim(\mathbf{A}_{s,a-1})+\dim(\mathbb{R})=a$). We only look at those directions $\theta$ that are parallel to the $(x_1,x_2)$-plane. This is just the problem in $\mathbb{R}^2$. We see that $E_s(A)\supset \mathbf{E}_{s,a-1}$, so
\[\dim(E_s(A))\ge \max\{0,2s-(a-1)\}=\max\{0,1+2s-a\}.\]
\end{enumerate}
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop2}]\hfill
\begin{enumerate}
\item When $a\le 1$, let $A=\mathbf{A}_{s,a}\times \{0\}$. For $\theta\in S^2$, we use $V_\theta$ to denote the plane orthogonal to $\theta$.
We only look at the projection of $A$ to those $V_\theta$ where $\theta$ is parallel to the $(x_1,x_2)$-plane. This exactly becomes a projection-to-line problem in $\mathbb{R}^2$, so
we have that $\dim(E_s(A))\ge \dim(\mathbf{E}_{s,a})\ge \max\{0,2s-a\}$.
\item Consider the case $1<a\le 2$.
$\bullet$ When $s\le 1$, we use the same example as in the last paragraph.
$\bullet$ When $1<s\le \frac{a+1}{2}$, we choose $A$ to be a subset of $\mathbb{R}^2\times \{0\}$ with $\dim(A)=a$. Then for those $\theta$ parallel to the $(x_1,x_2)$-plane, the projection of $A$ onto $V_\theta$ is contained in the line $(\mathbb{R}^2\times \{0\})\cap V_\theta$, which has dimension $\le 1<s$. This means that these $V_\theta$ are in the exceptional set. So, $\dim(E_s(A))\ge 1$.
$\bullet$ When $\frac{a+1}{2}<s<a$, we choose $A=\mathbf{A}_{s-1,a-1}\times \mathbb{R}$ (see Figure \ref{projectiontoplane}). For a direction $\theta$ that is parallel to the $(x_1,x_2)$-plane, we use $\ell_\theta$ to denote the line contained in $\mathbb{R}^2\times \{0\}$ that is orthogonal to $\theta$, i.e., $\ell_\theta=V_\theta\cap (\mathbb{R}^2\times \{0\})$. Suppose $\ell_\theta$ is in $\mathbf{E}_{s-1,a-1}$, i.e., $\dim(\Pi_{\ell_\theta}(\mathbf{A}_{s-1,a-1}))<s-1$. Then noting that $\Pi_{V_\theta}(\mathbf{A}_{s-1,a-1}\times \mathbb{R})=\Pi_{\ell_\theta}(\mathbf{A}_{s-1,a-1})\times \mathbb{R}$, we have $\dim(\Pi_{V_\theta}(A))=\dim(\Pi_{\ell_\theta}(\mathbf{A}_{s-1,a-1}))+1<s$. On the right hand side of Figure \ref{projectiontoplane}, we draw a bunch of planes that contain $\ell_\theta$ but do not contain the $x_3$-axis. We can view these planes as $\mathbb{RP}^1\setminus \{N\}$, where the north pole $N$ corresponds to the plane given by span$\{\ell_\theta, (0,0,1)\}$. The geometric observation is that the projections of $A$ onto these planes are essentially the same, and hence all have dimension $<s$. We see that
\begin{align*}
\dim(E_s(A))&\ge \dim(\mathbf{E}_{s-1,a-1}\times (\mathbb{RP}^1\setminus\{N\}))\\
&=\dim(\mathbf{E}_{s-1,a-1})+\dim(\mathbb{RP}^1\setminus\{N\})\\
&\ge 2(s-1)-(a-1)+1=2s-a.
\end{align*}
We used that $\dim(\mathbf{E}_{s-1,a-1})\ge\max\{0,2(s-1)-(a-1)\}=2(s-1)-(a-1)$ since $\frac{a+1}{2}<s$.
\item Consider the case $2<a\le 3$.
$\bullet$ When $a-1<s<\frac{a+1}{2}$, we choose $A$ to be contained in $\mathbb{R}^2\times I$, where $I\subset \mathbb{R}$ has dimension $a-2$. Then for those $\theta$ parallel to the $(x_1,x_2)$-plane, the projection of $A$ onto $V_\theta$ is contained in $\ell_\theta\times I$, which has dimension $\le 1+a-2=a-1<s$. This means that these $V_\theta$ are in the exceptional set. So, $\dim(E_s(A))\ge 1$.
$\bullet$ When $\frac{a+1}{2}<s<2$, we just use the same example $A=\mathbf{A}_{s-1,a-1}\times \mathbb{R}$ as above to get the lower bound $\dim(E_s(A))\ge 2s-a$.
\end{enumerate}
\end{proof}
\begin{figure}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (163,71.5) -- (310,71.5) -- (247,170) -- (100,170) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (130,70) -- (131,228) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (156,59) -- (157,217) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (177,33) -- (178,191) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (194,34) -- (195,192) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (186,74) -- (187,232) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (174,137) -- (239.02,128.27) ;
\draw [shift={(241,128)}, rotate = 172.35] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (130.5,149) ;
\draw [shift={(130.5,149)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw [shift={(130.5,149)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (156.5,138) ;
\draw [shift={(156.5,138)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw [shift={(156.5,138)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (177.5,112) ;
\draw [shift={(177.5,112)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw [shift={(177.5,112)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (194.5,113) ;
\draw [shift={(194.5,113)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw [shift={(194.5,113)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (186.5,153) ;
\draw [shift={(186.5,153)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw [shift={(186.5,153)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=0.75] (0, 0) circle [x radius= 3.35, y radius= 3.35] ;
\draw (392.93,103.95) -- (393.72,168.71) -- (467.07,140.05) -- (466.28,75.29) -- cycle ;
\draw (458.8,140.5) -- (358,140.5) -- (401.2,103.5) -- (502,103.5) -- cycle ;
\draw (431.61,164.1) -- (355.74,80.43) -- (428.39,79.9) -- (504.26,163.57) -- cycle ;
\draw [dash pattern={on 3.75pt off 3pt on 7.5pt off 1.5pt}] (375.28,164.79) -- (418.12,122.66) -- (483,60.5) ;
\draw (212,181.4) node [anchor=north west][inner sep=0.75pt] {$ \begin{array}{l}
\mathbf{A}_{s-1,a-1}\subseteq \ \mathbb{R}^{2}\\
A\ =\ \mathbf{A}_{s-1,a-1}\times \mathbb{R}
\end{array}$};
\draw (243,118.4) node [anchor=north west][inner sep=0.75pt] {$\theta $};
\draw (490,52.4) node [anchor=north west][inner sep=0.75pt] {$\ell_\theta$};
\end{tikzpicture}
\caption{Projection to planes}
\label{projectiontoplane}
\end{figure}
\begin{remark}
{\rm
To construct analogous examples in higher dimensions, we can play the same trick. For example, choose $A=\mathbf{A}_{s',a'}\times \mathbb{R}^l$ (or $\mathbb{R}^{l'}$). Then the lower bounds of $T(a,s)$ we find are of the form $m+2s-a$ (or $m'$) for some integers $m,m'$. We would like to call the first type of bound the \textit{ST~bound} (Szemer\'edi-Trotter bound), as it contains a factor $2s-a$ which comes from the sharp example of the Szemer\'edi-Trotter theorem (see Example \ref{ex1}); we call the second type of bound the \textit{integer~bound}. We are unable to prove the ST bound; otherwise we could resolve Conjecture \ref{conj1}. However, it may be possible to prove the integer bound.
It would be reasonable to make the conjecture:
\begin{conjecture}
The lower bounds of $T(a,s)$ in Proposition \ref{prop1} and Proposition \ref{prop2} are actually the upper bounds of $T(a,s)$.
\end{conjecture}
The construction of examples in Proposition \ref{prop1} and Proposition \ref{prop2} also works for prime fields $\mathbb{F}_p$. Therefore, we may make the conjecture about projection to lines (or planes) in $\mathbb{F}_p^3$, in which the language is adapted to the finite field setting. More precisely, the conjecture will be stated in the form of Theorem \ref{mainthm}, with $t(a,s)$ in \eqref{mainthmest} replaced by the bounds of $T(a,s)$ in Proposition \ref{prop1} (or Proposition \ref{prop2}).
\begin{conjecture}\label{conj4}
Let $A\subset \mathbb{F}_p^3$ with $\#A=p^a$. Recalling the definition in \eqref{defex}, if $E_s(A)$ is the exceptional set for projection to lines (resp. projection to planes), then
\[ \# E_s(A)\le C_{\epsilon,a,s} p^{\epsilon+t(a,s)}, \]
where $t(a,s)$ is the lower bound of $T(a,s)$ given by Proposition \ref{prop1} (resp. Proposition \ref{prop2}).
\end{conjecture}
By Theorem \ref{mainthm}, we see that we are able to obtain the sharp exceptional set estimate for a certain range of $(a,s)$. For example, letting $n=3,k=1$ in Theorem \ref{mainthm}, we obtain: if $a>1$ and $s<\frac{a-1}{2}$, then $\#E_s(A)\lesssim p^{\epsilon+ \max\{2+2(s-a),0\} }\le p^\epsilon$. Letting $n=3,k=2$ in Theorem \ref{mainthm}, we obtain: if $a>2$ and $s<\frac{a+1}{2}$, then $\#E_s(A)\lesssim p^{\epsilon+\max\{2+2(s-a),1\}}\le p^{\epsilon+1}$.
}
\end{remark}
\section{Fourier transform in finite field}\label{section3}
\subsection{Definition of Fourier transform}
We briefly introduce the Fourier transform in $\mathbb{F}_p^n$. We first set up our notation. $\mathbb{F}_p^n$ is our physical space, and we use $x,y$ to denote points in $\mathbb{F}_p^n$. We denote the frequency space by $(\mathbb{F}_p^n)^*$, and we use $\xi,\eta$ to denote points in it. We remark that $(\mathbb{F}_p^n)^*$ is the same set as $\mathbb{F}_p^n$; we use different notation only to distinguish the two roles. For $x\in\mathbb{F}_p^n$ or $\xi\in(\mathbb{F}_p^n)^*$, we also write $x=(x_1,\dots,x_n)$ or $\xi=(\xi_1,\dots,\xi_n)$ in coordinates, where each $x_i$ or $\xi_i$ belongs to $\mathbb{F}_p$.
For a function $f(x)$ on $\mathbb{F}_p^n$, the Fourier transform of $f$ is a function on $(\mathbb{F}_p^n)^*$ given by
\[
\widehat{f} (\xi) = \sum_{x\in \mathbb{F}_p^n} f(x) e_p (-x\cdot \xi),
\]
where $e_p(x) = e^{\frac{2\pi i x}{p}}$.
For a function $g(\xi)$ on $(\mathbb{F}_p^n)^*$, the inverse Fourier transform of $g$ is a function on $\mathbb{F}_p^n$ given by
\[
g^{\vee}(x) = \frac{1}{p^n} \sum_{\xi \in (\mathbb{F}_p^n)^\ast} g(\xi) e_p(x\cdot \xi).
\]
It is not hard to check that $(\widehat f)^\vee=f$ and $(g^\vee)^\wedge=g$. We also have Plancherel's identity
\[ \sum_{x\in\mathbb{F}_p^n}|f(x)|^2=\frac{1}{p^n}\sum_{\xi\in(\mathbb{F}_p^n)^*}|\widehat f(\xi)|^2. \]
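These formulas are straightforward to implement for small $p$ and $n$. The following sketch (Python/NumPy; our illustrative code, not part of the paper's argument) checks the inversion formula and Plancherel's identity numerically:
\begin{verbatim}
import numpy as np
from itertools import product

p, n = 5, 2
points = list(product(range(p), repeat=n))  # F_p^n (and its dual) as tuples

def ft(f):
    # f_hat(xi) = sum_x f(x) e_p(-x.xi), with e_p(t) = exp(2*pi*i*t/p).
    e = lambda t: np.exp(-2j * np.pi * t / p)
    return {xi: sum(f[x] * e(sum(a * b for a, b in zip(x, xi)))
                    for x in points) for xi in points}

def ift(g):
    # g_vee(x) = p^{-n} sum_xi g(xi) e_p(x.xi).
    e = lambda t: np.exp(2j * np.pi * t / p)
    return {x: sum(g[xi] * e(sum(a * b for a, b in zip(x, xi)))
                   for xi in points) / p**n for x in points}

f = {x: np.random.randn() for x in points}
fh = ft(f)
assert all(abs(ift(fh)[x] - f[x]) < 1e-9 for x in points)      # inversion
assert abs(sum(abs(f[x])**2 for x in points)
           - sum(abs(fh[xi])**2 for xi in points) / p**n) < 1e-9  # Plancherel
\end{verbatim}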
\subsection{Dual space}
Given a $k$-dimensional subspace $V$ in $\mathbb{F}_p^n$, we want to define the dual space of $V$, denoted by $V^*$ which is a $(n-k)$-dimensional subspace in $(\mathbb{F}_p^n)^*$. The intuition in $\mathbb{R}^n$ is that: if $V\in G(k,\mathbb{R}^n)$, then naturally the dual space of $V$ is $V^\perp\in G(n-k,\mathbb{R}^n)$. However, there is no natural way of defining ``orthogonality" in $\mathbb{F}_p^n$, so we need to use Fourier transform to define the dual space.
\begin{definition}[Dual space]
For $V\in G(k,\mathbb{F}_p^n)$, we define $V^*=\mathrm{supp}\,\widehat{\boldsymbol 1}_V$.
\end{definition}
Next we will verify several properties of $V^*$.
\begin{lemma}\label{lem1}
If $V\in G(k,\mathbb{F}_p^n)$, then $V^*\in G(n-k,(\mathbb{F}_p^n)^*)$. Moreover, $\widehat{\boldsymbol 1}_V=p^k \boldsymbol 1_{V^*}$.
\end{lemma}
\begin{proof}
Suppose the $k$-dimensional space $V$ is spanned by the following $k$ vectors:
\begin{equation*}
{\mathbf v}_1=(v_{11},\cdots,v_{1n}),\ 
{\mathbf v}_2=(v_{21},\cdots,v_{2n}),\ \cdots,\ 
{\mathbf v}_k=(v_{k1},\cdots,v_{kn}).
\end{equation*}
We use $\mathcal{V}$ to denote the $k\times n$ matrix
\[ \mathcal{V}=\begin{pmatrix}
{\mathbf v}_1\\
{\mathbf v}_2\\
\vdots\\
{\mathbf v}_k
\end{pmatrix}. \]
Therefore $V$ can be written as
\begin{equation*}
V=\{ (y_1,\dots,y_k)\mathcal{V} : y_1,\dots,y_k\in\mathbb{F}_p \}.
\end{equation*}
We will calculate $\widehat{\boldsymbol 1}_V$. By definition,
\begin{equation}\label{tocalculate}
\widehat{\boldsymbol 1}_V(\xi)= \sum_{x\in V} e_p(-x\cdot\xi) = \sum_{y_1,\dots,y_k\in\mathbb{F}_p} e_p(-(y_1,\dots,y_k)\mathcal{V} \begin{pmatrix}
\xi_1\\
\vdots\\
\xi_n
\end{pmatrix}).
\end{equation}
To calculate the right hand side, we first choose ${\mathbf v}_{k+1},\dots,{\mathbf v}_{n}$, so that $\{{\mathbf v}_1,\dots,{\mathbf v}_n\}$ form a basis of $\mathbb{F}_p^n$. Define
\[ \mathcal{W}=\begin{pmatrix}
\mathcal{V}\\
{\mathbf v}_{k+1}\\
\vdots\\
{\mathbf v}_n
\end{pmatrix}=\begin{pmatrix}
{\mathbf v}_1\\
\vdots\\
{\mathbf v}_n
\end{pmatrix},\]
which is invertible. We can write the right hand side of \eqref{tocalculate} as
\begin{equation}
\sum_{y_1,\dots,y_k\in\mathbb{F}_p} e_p(-(y_1,\dots,y_k,0,\dots,0)\mathcal{W} \begin{pmatrix}
\xi_1\\
\vdots\\
\xi_n
\end{pmatrix}).
\end{equation}
It is not hard to see that this sum $=p^k$, if $\mathcal{W} \begin{pmatrix}
\xi_1\\
\vdots\\
\xi_n
\end{pmatrix}\in \{0\}^k\times \mathbb{F}_p^{n-k}$; and $=0$ otherwise. We see that
\[ V^*= \mathcal{W}^{-1} (\{0\}^k\times \mathbb{F}_p^{n-k}) \]
is an $(n-k)$-dimensional subspace, and
\[ \widehat{\boldsymbol 1}_V=p^k \cdot \boldsymbol 1_{V^*}. \]
\end{proof}
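Lemma \ref{lem1} can be confirmed numerically in small cases. Reusing the helper \texttt{ft} and the point list \texttt{points} from the sketch in the previous subsection (with $p=5$, $n=2$; again an illustration only), one checks for the line $V=\mathrm{span}\{(1,1)\}$ that $\widehat{\boldsymbol 1}_V$ equals $p^k=5$ exactly on the line $\mathrm{span}\{(1,4)\}$:
\begin{verbatim}
# Assumes ft() and points from the previous sketch (p = 5, n = 2).
V = {(t % 5, t % 5) for t in range(5)}              # V = span{(1,1)}
hat = ft({x: 1.0 if x in V else 0.0 for x in points})
V_star = {xi for xi in points if abs(hat[xi]) > 1e-9}  # supp of the transform
assert all(abs(hat[xi] - 5) < 1e-9 for xi in V_star)   # value p^k = 5 on V*
assert V_star == {(t % 5, (4 * t) % 5) for t in range(5)}  # V* = span{(1,4)}
\end{verbatim}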
It is also not hard to see the following results, for which we omit the proof.
\begin{lemma}
If $V\in G(k,\mathbb{F}_p^n)$, then $(V^*)^*=V$. Therefore, $(\cdot)^*:G(k,\mathbb{F}_p^n)\rightarrow G(n-k,\mathbb{F}_p^n)$ is a bijection. If $M\in GL_n(\mathbb{F}_p)$, then $(\boldsymbol 1_{M(V)})^\wedge=p^k \boldsymbol 1_{(M^T)^{-1}(V^*)}$, and hence $(M(V))^*=(M^T)^{-1}(V^*)$.
\end{lemma}
\begin{lemma}\label{lem3}
For two subspaces $V,W$ in $\mathbb{F}_p^n$, we have $V\subset W\Leftrightarrow W^*\subset V^*$.
\end{lemma}
We also need a key lemma about the Falconer-type exceptional estimate.
\begin{lemma}\label{lemfalconer}
Let $A\subset \mathbb{F}_p^n$ be a set with $\#A=p^a$ $(0<a<n)$. For $s\in (0,a)$, define
\[ E_s(A):=\{ V\in G(n-k,\mathbb{F}_p^n):\#\pi_V(A)<p^s \}. \]
Let $M$ be the overlapping number of $\{V^*\setminus\{0\}: V\in E_s(A)\}$, i.e.,
\[M:=\sup_{\xi\in \mathbb{F}_p^n\setminus \{0\}}\sum_{V\in E_s(A)}\boldsymbol 1_{V^*}(\xi).\]
Then
\begin{equation}\label{falconereq}
\#E_s(A)\lesssim M p^{n-k+s-a}.
\end{equation}
\end{lemma}
\begin{remark}
{\rm
Noting that for $\xi\in\mathbb{F}_p^n\setminus \{0\}$, we have
\[ \sum_{V\in E_s(A)}\boldsymbol 1_{V^*}(\xi)\le \sum_{W\in G(k,\mathbb{F}_p^n)}\boldsymbol 1_{W}(\xi)=\# \{ W\in G(k,\mathbb{F}_p^n): 0,\xi\in W \}. \]
Denote the line passing through $0,\xi$ by $\ell_\xi$. Noting that $\ell_\xi\subset W\Leftrightarrow W^*\subset \ell_\xi^*$ (by Lemma \ref{lem3}), we see that the right hand side of the inequality above is equal to
\[\#\{V\in G(n-k,\mathbb{F}_p^n):V\subset \ell_\xi^*\}=\#G(n-k,\ell_\xi^*)\sim p^{(k-1)(n-k)}. \]
We obtain that $M\lesssim p^{(k-1)(n-k)}$. Plugging into \eqref{falconereq}, we obtain that
\begin{equation}\label{eqremark}
\# E_s(A)\lesssim p^{k(n-k)+s-a},
\end{equation}
which is the Falconer-type estimate.
}
\end{remark}
\begin{proof}[Proof of Lemma \ref{lemfalconer}]
By definition, for each $V \in E_s(A)$, there exists a set of $(n-k)$-planes $\mathcal L_V (=\pi_V(A))$ parallel to $V$ such that $A\subset\bigcup_{W \in \mathcal L_V} W$. Furthermore, we have $\# \mathcal L_V < p^s$.
Let
\[
f = \sum_{V \in E_s(A)} \sum_{W \in \mathcal L_V} \boldsymbol 1_W.
\]
We will apply the high-low method to $f$ using the Fourier transform on $\mathbb{F}_p^n$. Denote $\# E_s(A) = p^t$ for simplicity. Then, notice that for every point $x\in A$ and for every $V \in E_s(A)$, there exists a $W\in\mathcal{L}_V$ containing $x$. Therefore, we have that
\begin{equation}\label{lefthandside}
p^a p^{2t} = \#A(\# E_s(A))^2 \leq \int_A f^2 = \int_A \left(\sum_{V \in E_s(A)} \sum_{W \in \mathcal L_V} \boldsymbol 1_W \right)^2.
\end{equation}
We now seek an upper bound for the right hand side, for which we use the high-low method. The idea of the high-low method originates from \cite{vinh2011szemeredi,guth2019incidence}, and has recently been applied to solve many problems. We briefly explain the idea of the high-low method in the finite field setting. For a function $f$ on $\mathbb{F}_p^n$, we want to decompose it into a high part and a low part:
\[f=f_h+f_l.\]
The ``high part" $f_h$ satisfies $0\notin\mathrm{supp} \widehat f_h$; the ``low part" $f_l$ satisfies $\mathrm{supp}\widehat f_l\subset \{0\}$. By the requirement of the high part and low part, we can see that \[f_l(x)=\big(\frac{1}{p^n}\int_{\mathbb{F}_p^n}f(x)dx\big) \boldsymbol 1_{\mathbb{F}_p^n}(x),\ \ \ f_h(x)=f(x)-\big(\frac{1}{p^n}\int_{\mathbb{F}_p^n}f(x)dx\big) \boldsymbol 1_{\mathbb{F}_p^n}(x). \]
The Fourier support condition on $f_h$ will give us more orthogonality, and hence more gains when we use $L^2$ estimate.
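In code, the split simply subtracts the global mean. Continuing with the helpers from the earlier Fourier sketch (an illustration, not part of the proof):
\begin{verbatim}
# Assumes ft(), points, p, n from the earlier Fourier sketch.
import numpy as np
f = {x: np.random.randn() for x in points}
mean = sum(f.values()) / p**n
f_low = {x: mean for x in points}            # f_l: the constant part
f_high = {x: f[x] - mean for x in points}    # f_h = f - f_l
assert abs(ft(f_high)[(0,) * n]) < 1e-9      # 0 is not in supp(f_h hat)
assert all(abs(v) < 1e-9
           for xi, v in ft(f_low).items() if xi != (0,) * n)
\end{verbatim}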
We come back to the proof.
Notice that
\[
\int_A \left(\sum_{V \in E_s(A)} \sum_{W \in \mathcal L_V} \boldsymbol 1_W \right)^2 \lesssim \int_A \left(\sum_{V \in E_s(A)} \sum_{W \in \mathcal L_V} \boldsymbol 1_W - \frac{1}{p^k} \right)^2 + \int_A \left(\sum_{V \in E_s(A)} \sum_{W \in \mathcal L_V} \frac{1}{p^k} \right)^2.
\]
We now show that the first term on the right hand side dominates. To see this, notice that
\[
\int_A \left(\sum_{V \in E_s(A)} \sum_{W \in \mathcal L_V} \frac{1}{p^{k}} \right)^2 \le \int_A \left(\#E_s(A)\cdot p^s \cdot \frac{1}{p^{k}}\right)^2 \leq p^a\cdot p^{2(s+t-k)}.
\]
Notice that this is much less than the left hand side of \eqref{lefthandside} since $s<k$ and we may assume $p$ is large enough (since for small $p$, \eqref{mainthmest} naturally holds by choosing large enough constant $C_{n,k,a,s}$). Therefore, we have that
\begin{align*}
p^{a+2t} &\lesssim \int_A \left(\sum_{V \in E_s(A)} \sum_{W \in \mathcal L_V} \boldsymbol 1_W - \frac{1}{p^{k}} \right)^2 \\
&\lesssim \int_{\mathbb{F}_p^n} \left(\sum_{V \in E_s(A)} \sum_{W \in \mathcal L_V} \boldsymbol 1_W - \frac{1}{p^{k}} \right)^2.
\end{align*}
We now apply the Fourier transform to the last integrand.
Since any $W\in\mathcal{L}_V$ is a translation of $V$, we can write $W=x_W+V$ for some $x_W\in\mathbb{F}_p^n$.
By Lemma \ref{lem1}, we have that $\widehat{\boldsymbol 1}_W(\xi)=p^{n-k} e_p(-x_W\cdot\xi)\boldsymbol 1_{V^*}(\xi)$. We also note that $\widehat{\boldsymbol 1}_{\mathbb{F}_p^n}=p^n\boldsymbol 1_{\{0\}}$. We have
\[
\left(\boldsymbol 1_W - \frac{1}{p^{k}}\right)^\wedge\,\,(\xi) = p^{n-k}e_p(-x_W\cdot\xi)\boldsymbol 1_{V^*}(\xi) - p^{n-k}\cdot \boldsymbol 1_{\{0\}}(\xi).
\]
Therefore, we see that $\mathrm{supp} \left(\sum_{W\in\mathcal{L}_V}(\boldsymbol 1_W - \frac{1}{p^{k}})\right)^\wedge\subset V^*\setminus\{0\}$. Applying Plancherel and noting the definition of $M$, we have
\begin{align*}
p^{a+2t}&\lesssim \frac{1}{p^n}\int_{(\mathbb{F}_p^n)^*}\left| \sum_{V\in E_{s}(A)}\sum_{W\in\mathcal{L}_V}(\boldsymbol 1_W-\frac{1}{p^k})^\wedge\right|^2\\
&\lesssim \frac{1}{p^n}M\sum_{V\in E_{s}(A)}\int_{(\mathbb{F}_p^n)^*}\left| \sum_{W\in\mathcal{L}_V}(\boldsymbol 1_W-\frac{1}{p^k})^\wedge\right|^2\\
&=M\sum_{V\in E_{s}(A)}\int_{\mathbb{F}_p^n}\left| \sum_{W\in\mathcal{L}_V}\boldsymbol 1_W-\frac{1}{p^k}\right|^2\\
&\lesssim M\sum_{V\in E_{s}(A)}\Big(\int_{\mathbb{F}_p^n} \big(\sum_{W\in\mathcal{L}_V}\boldsymbol 1_W\big)^2+\int_{\mathbb{F}_p^n}(\#\mathcal{L}_V)^2\frac{1}{p^{2k}}\Big).
\end{align*}
Noting that $(\sum_{W\in\mathcal{L}_V}\boldsymbol 1_W)^2=\sum_{W\in\mathcal{L}_V}\boldsymbol 1_W$ (as the planes in $\mathcal{L}_V$ are disjoint), $\#\mathcal{L}_V<p^s$, $\#E_s(A)=p^t$, and $s<k$, we see that the quantity above is
\begin{align*}
&\lesssim M p^t( p^{s+n-k}+p^{2s+n-2k} )\lesssim Mp^{t+s+n-k}.
\end{align*}
Combining with the lower bound $p^{a+2t}$, we obtain
\[ p^t\lesssim Mp^{n-k+s-a}. \]
\end{proof}
\section{Proof of Theorem \ref{mainthm}}\label{section4}
The goal of this section is to prove Theorem \ref{mainthm} which we restate here:
\begin{theorem}\label{thm}
Let $A\subset \mathbb{F}_p^n$ be a set with $\#A=p^a$ $(0<a<n)$. For $s\in (0,a)$, define
\[E_s(A):=\{ V\in G(n-k,\mathbb{F}_p^n): \#\pi_V(A)<p^s \}. \]
Then for $s<\frac{a+2k-n}{2}$, we have
\begin{equation}\label{mainest}
\# E_s(A)\le C_{n,k,a,s}\cdot\log p\cdot p^{t},
\end{equation}
where $t=\max\{k(n-k)+2(s-a), (k-1)(n-k)\}$. Here, $C_{n,k,a,s}$ is a constant that may depend on $n,k,a,s$, but not depend on $p$.
\end{theorem}
\subsection{Proof of Theorem \ref{thm}}
Let $A\subset \mathbb{F}_p^n$ with $\#A= p^a$. We consider two cases.
\medskip
\textbf{Case 1}: There exists a hyperplane, $H\in A(n-1,\mathbb{F}_p^n)$, such that
\[
\#(A\cap H) \ge p^{s+n-k-1}.
\]
Let $H_0\in G(n-1,\mathbb{F}_p^n)$ be such that $H$ is parallel to $H_0$.
Then, we claim that every $V \in E_s(A)$ must be contained in $H_0$.
To see this, notice that if $V\in G(n-k,\mathbb{F}_p^n)$ is not contained in $H_0$, then $H\cap V$ is an $(n-k-1)$-dimensional affine subspace, which means that $\#(H\cap V)\le p^{n-k-1}$. Recalling the definition of $\pi_V$ in \eqref{defpiveq}, we have
\[
\#\pi_V(A) \geq \#\pi_V(A\cap H) \ge \frac{\#(H\cap A)}{\#(H\cap V)}\geq p^s.
\]
So, $E_s(A)\subset G(n-k,H_0)$.
It follows that
\[\# E_s(A) \le \# G(n-k,H_0)\sim p^{(k-1)(n-k)}.\]
\bigskip
\textbf{Case 2}: Suppose for every hyperplane $H\in A(n-1,\mathbb{F}_p^n)$, we have that
\[
\#(A\cap H) \leq p^{s+n-k-1}.
\]
First, we define
\[ M:=\sup_{\xi\in\mathbb{F}_p^n\setminus\{0\}}\sum_{V\in E_s(A)}\boldsymbol 1_{V^*}(\xi). \]
We denote $\#E_s(A)=p^t$.
By Lemma \ref{lemfalconer}, we have
\[
p^{t} \leq M p^{n-k+s-a}.
\]
To complete the proof, it remains to prove the following lemma.
\begin{lemma} \label{lemma:hyper1}
\[
M \lesssim \log p\cdot p^{(n-k)(k-1)+s-a}.
\]
\end{lemma}
\begin{proof}
Let $\xi_0$ be a point in $\mathbb{F}_p^n \setminus \{0\}$ such that
\begin{equation}\label{M}
M = \#\{V\in E_s(A):\xi_0\in V^*\}.
\end{equation}
We know that such a $\xi_0$ exists since there are only finitely many $\xi$.
Let $\ell$ be the line passing through $0$ and $\xi_0$, and let $H=\ell^*$ which is a hyperplane. We should view $\ell$ as a line in $(\mathbb{F}_p^n)^*$, and view $H$ as a hyperplane in $\mathbb{F}_p^n$.
Define the set
\[
\Theta := \{V \in E_s(A):\ell\subset V^*\}=\{V\in E_s(A): V\subset H\}.
\]
By \eqref{M}, $\#\Theta=M$.
Now, we decompose $\mathbb{F}_p^n$ into $(n-1)$-dimensional planes that are parallel to $H$:
\[\mathbb{F}_p^n = \bigsqcup_{i=1}^p H_i.\]
Let $A_i = H_i \cap A$, which is the intersection of $A$ with each slice.
\begin{lemma}\label{lemma:hyper2}
For each $i$, we have that
\[
\sum_{V \in \Theta} \#\pi_V(A_i) \gtrsim \# \Theta \min\{p^{k-1}, \#A_i p^{-(n-k)(k-1)}\#\Theta\}.
\]
\end{lemma}
\begin{proof}
If $\#A_i p^{-(n-k)(k-1)}\#\Theta\le C$ for some large constant $C$, then the estimate trivially holds since $\#\pi_V(A_i)\ge 1$. Therefore, we assume $\#A_i p^{-(n-k)(k-1)}\#\Theta>C$.
The proof is by applying \eqref{eqremark} to the set $A_i$ in the $(n-1)$-dimensional space $H_i$($\cong \mathbb{F}_p^{n-1}$). This is actually an exceptional set estimate for projection to $(k-1)$-planes in $\mathbb{F}_p^{n-1}$.
Let $E:=\{ V\in G(n-k,H): \#\pi_V(A_i)<p^{s'} \}$, where $s'$ is to be determined. \eqref{eqremark} yields that
\[ \#E\lesssim p^{(k-1)(n-k)+s'}(\#A_i)^{-1}. \]
We choose $s'$ to be such that
\[ p^{s'}=C^{-1}\min\{ p^{k-1}, \#A_i p^{-(n-k)(k-1)}\#\Theta \}, \]
where $C$ is a large constant. Plugging into the upper bound of $\#E$, we see that
\[ \#E\le \frac{1}{2}\#\Theta. \]
Therefore,
\[ \sum_{V\in\Theta}\#\pi_V(A_i)\ge\sum_{V\in\Theta\setminus E}\#\pi_V(A_i)\ge \frac{1}{2}\#\Theta p^{s'}\gtrsim\# \Theta \min\{p^{k-1}, \#A_i p^{-(n-k)(k-1)}\#\Theta\}. \]
\end{proof}
We continue the proof. By a Fubini-type argument, we have
\begin{align*}
\#\Theta \cdot p^s
&\geq \sum_{V\in\Theta} \#\pi_V(A) \\
&= \sum_{V\in\Theta} \sum_{i=1}^p \#\pi_V(A_i) \\
&= \sum_{i=1}^p \sum_{V\in\Theta} \#\pi_V(A_i).
\intertext{Applying Lemma \ref{lemma:hyper2}, we have that}
\#\Theta \cdot p^s &\gtrsim \sum_{i=1}^p \#\Theta \min\{p^{k-1}, \# A_i p^{-(n-k)(k-1)}\#\Theta\}.
\intertext{By dyadic pigeonholing, we choose $I$ which is a subset of these $i$, such that there exists $\beta>0$ with $\#A_i \sim p^\beta$ for $i\in I$, and $\#I \cdot p^\beta\gtrsim (\log p)^{-1}\#A$. Thus,}
\#\Theta \cdot p^s &\gtrsim \sum_{i\in I} \#\Theta \min\{p^{k-1}, p^{\beta -(n-k)(k-1)}\#\Theta \}.
\end{align*}
Also, recall the assumption at the beginning of \textbf{Case 2}: \begin{equation}\label{case2assumption}
p^\beta\le p^{s+n-k-1}.
\end{equation}
We now have two cases depending on where the minimum is achieved. Firstly, if $p^{k-1} \geq p^{\beta - (n-k)(k-1)}\#\Theta$, we have that
\[
\#\Theta\cdot p^s \gtrsim \#I \cdot p^\beta \cdot p^{-(n-k)(k-1)}(\#\Theta)^2 \gtrsim (\log p)^{-1} p^{a-(n-k)(k-1)}(\#\Theta)^2.
\]
Therefore, $M=\#\Theta\lesssim \log p \cdot p^{(n-k)(k-1)+s-a}$, which finishes the proof of Lemma \ref{lemma:hyper1} in this case.
The second scenario is $p^{k-1} \leq p^{\beta - (n-k)(k-1)}\#\Theta$. We will show that this will not happen. If it happens, we have
\[
\#\Theta\cdot p^s \gtrsim \#I\cdot \#\Theta\cdot p^{k-1}.
\]
Multiplying $p^\beta$ on both sides gives
\[p^\beta \#I \cdot p^{k-1} \lesssim p^\beta p^s.\]
This together with \eqref{case2assumption} implies that
\[
p^{a+(k-1)}(\log p)^{-1} \lesssim p^{s+\beta} \le p^{2s+n-k-1}.
\]
When $p$ is big enough,
this is a contradiction, as we assumed that $s<\frac{a+2k-n}{2}$. Thus, Lemma \ref{lemma:hyper1} is proven.
\end{proof}
\bibliographystyle{abbrv}
|
{
"arxiv_id": "2302.13272",
"language": "en",
"timestamp": "2023-02-28T02:14:46",
"url": "https://arxiv.org/abs/2302.13272",
"yymm": "2302"
} | \section{Introduction}\label{sec:intro}
In recent years, laser induced breakdown spectroscopy (LIBS) has found numerous applications encompassing various fields\cite{Berlo2022}. Simultaneously, various LIBS techniques have emerged for better exploitation and interpretation of the obtained data \cite{Fu_2020_Front,Daniel_2021_ASR}. New techniques like handheld and standoff LIBS have been developed and used\cite{SENESI2021106013}. Optimization of the time window has been pointed out in Ref.~\cite{C8JA00415C}. Exploiting delayed emission for LIBS-based identification has also been suggested \cite{Garima_Jaas_2022}. \par
From time to time, review articles covering various aspects of LIBS have appeared in the literature\cite{Hahn_2012_Appl_Spec, Hahn_2010_Appl_Spec, pasquini2007laser_JBC, singh2020laser, singh2007laser, miziolek2006laser,PabloAF,noll2012laser,ASR_Review_LIBS_1,ASR_LIBS_Application1, ASR_LIBS_Review_2,ASR_Tutorial_review_2020}.
However, they are focused on particular aspects or techniques, e.g., the study of uranium-containing compounds \cite{KAUTZ2021106283},
plasma-facing components \cite{MAURYA2020152417,li2016review}, LIBS imaging \cite{JOLIVET201941}, elemental analysis of industrial materials\cite{app11199274}, data analysis\cite{Dianxin_ApplSpecReview}, nanoparticles in LIBS \cite{D1JA00149C,DELLAGLIO2018105}, industrial applications \cite{C9AY02728A},
underwater applications \cite{Ayumu_underwater}, food analysis\cite{molecules_libs_food}, optical diagnostics of laser-produced plasmas\cite{Kautz:21,Hari_uranium}, cancer diagnostics and classification\cite{LIBS_Cancer},
rapid COVID detection \cite{Berlo2022}, aerosol analysis\cite{Huview_ASR_2021} and geological samples \cite{Shujun_ASR_2015}.
Handheld and portable LIBS techniques are reviewed in Ref.~\cite{SENESI2021106013}. Techniques used in LIBS quantification have also progressed \cite{FP_2022_Zhang}. Ultra-sensitive and multianalyte analysis of plasma plumes using laser-induced fluorescence (LIF) has been reviewed in a recent article \cite{CHEUNG2022106473}. The combination of LIBS with other techniques, e.g.,
FTIR, Raman, and hyperspectral imaging (HI), has been explored to gather detailed spatial information
\cite{Ribeiro_2020_AO,HOLUB2022106487,SUN2022106456,D2JA00147K}.
A hybrid LIBS-Raman-LIF approach has been discussed in a recent review \cite{Dhanada_2021_ASR}.
\par
Despite these reports, efforts are continuously emerging to extend the LIBS detection range and its applications on various platforms, e.g., the study of deposition on tokamak first-wall components\cite{LIBS_Nucl_Fusion2021}, the analysis of hydrogen isotopes\cite{Kautz:21}, hardness estimation, etc. In this short review, we attempt to briefly sketch the salient features associated with LIBS, encompassing phenomenological aspects, emerging applications, and recent technical developments. Some newer topics, e.g., colliding plasmas, the application of self-reversal in estimating isotopic abundance, filament-induced LIBS, and grating-induced LIBS, are also briefly covered. The overall goal of this article is to provide first-hand information to the LIBS community. The material is organized in the following sections.\par
\section{LIBS Methodology}\label{sec:Methodoloy}
When a substance is irradiated with a high-power laser, the material is heated and melts, finally forming a plasma plume that consists of atoms, ions, and electrons. The mechanisms and timescales of the ablation process and subsequent plasma formation for short (fs) and long (ns) pulses are illustrated in Fig~\ref{fig:Libs_Time_scale_ns_fs}. In ns-LIBS, at early times in plasma formation the primary mechanisms are thermal vaporization and non-thermal evaporation, whereas in fs-LIBS, Coulomb explosion, electron-ion energy transfer, and heating of the lattice by electrons occur first and are followed by thermal vaporization. At later stages of ns-LIBS, absorption and reflection of the laser by the plasma can occur, whereas the later stages of both ns- and fs-LIBS comprise plasma-ambient interaction, shock wave propagation, and confinement. Finally, both are characterized by the LIBS regime followed by plume condensation and particle ejection.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Libs_Time_scale_ns_fs.png}
\caption{\label{fig:Libs_Time_scale_ns_fs} Approximate time scales of nanosecond and femtosecond energy absorption and laser ablation along with various processes happening during and after the laser pulse (adapted with permission from reference \cite{LIBS_Time_scale}).}
\end{figure}
Plume hydrodynamics has been found to play an important role in molecular and nano-cluster formation. In the early stages, the shock wave at the plume edge has been found to hinder molecular formation, which takes place only after the shock wave collapses \cite{Hari_Anals_Chem_2016}.
Regarding expansion behavior, a systematic study has been carried out for fs, ps and ns ablation \cite{jinto_PhsD}. Differences in the propagation of the plume are observed for these three cases. For forward ablation of a nickel thin film with fs pulses, linear expansion is noticed at low background pressures, eventually turning into shock-wave-like expansion at higher background pressures. For ps ablation, a blast wave model describes the expansion at low pressures, but a drag model appears appropriate at higher pressures. For fs ablation, the effect of laser fluence on the emission characteristics of ultrafast laser produced copper plasma was reported by Anoop et al. \cite{Anoop_JAP_2016}. At low to moderate fluences neutral emission dominates, but at higher fluences ionic emission is predominant. Fast and slow components are also noticed for the Zn I 481 nm emission in ultrafast laser produced zinc plasma, which are ascribed to neutral and recombination contributions to the emission \cite{Smijesh_JAP_2013}.
\subsection{Ablation types (Front ablation and Back ablation)}
In LIBS, the material can basically be ablated in three configurations, viz. front, back and non-orthogonal ablation. Front and rear ablation geometries for a thin-film target are shown in Fig.~\ref{fig:Fron_rear} and Fig.~\ref{fig:Scheme_Tomo}. When the laser is incident from the front side, it is termed front ablation. On the other hand, when the laser is incident from the back side, it is termed back or rear ablation. Notable differences in plume expansion geometry, plume composition and plume velocity are noticed. A higher velocity is obtained for front ablation as compared to rear ablation \cite{Rear_Front_Alam}. Moreover, a spherical shock wave front is observed in both cases; however, front-side ablation has been found to produce more excited-state species than rear ablation \cite{ESCOBAR2002}. Further, neutral species dominate in the rear ablation geometry \cite{C9JA00158A}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Fron_rear.png}
\caption{\label{fig:Fron_rear} Schematic diagram of laser produced plasma plume of thin film deposited on a transparent substrate in front ablation (FA) and back ablation (BA) geometries (adapted with permission from reference \cite{C9JA00158A}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Scheme_Tomo.png}
\caption{\label{fig:Scheme_Tomo} Schematic diagram of the tomographic system for demonstrating the effect of non-orthogonal ablation (adapted with permission from reference \cite{D1AN01292D}).
}
\end{figure}
Besides these two geometries, laser ablation at different incident angles of the laser beam has also been studied. Non-orthogonal ablation has been found to increase inhomogeneity. The plume has been found to be composed of two parts: one following the ablation pulse and the other expanding along the sample normal. Moreover, the temporal evolution of the plasma, the ionic and neutral emission, and the electron density and temperature have been found to exhibit similar trends.
\subsection{Single pulse (SP)}
In most LIBS experiments a single pulsed laser is used for ablation. The pulse duration can vary from femtoseconds to microseconds \cite{D0JA00521E}. Enhancement in intensity with long ns pulses has been reported for submerged solids \cite{D0JA00521E}. Moreover, long ns pulses have been found to produce plasma with stronger emission and longer lifetime as compared to LIBS using short pulses. Figure~\ref{fig:LIBS_Cu} shows LIBS studies using short (35 ns) and long (180 ns) laser pulses on copper \cite{D1JA00151E}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{LIBS_Cu.png}
\caption{\label{fig:LIBS_Cu} Typical LIBS spectra of the atomic lines of Cu at varied delay times obtained at two pulse durations of 35 ns (a and b) and 180 ns (c and d) (adapted with permission from reference \cite{D1JA00151E}).
}
\end{figure}
\subsection{Double pulse (DP)}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{DP_Geometries.png}
\caption{\label{fig:DP_Geometries} Various DP configurations (a) Collinear DP, (b) Orthogonal reheating DP (c) orthogonal pre ablation DP and (d) dual pulsed cross beam (adapted with permission from reference \cite{DIWAKAR201365}).
}
\end{figure}
Double pulse (DP) LIBS has been found to enhance the intensity of atomic/ionic lines depending on the inter-pulse delay \cite{Wang_POP_2020}. A simple schematic of various DP LIBS configurations is shown in Fig.~\ref{fig:DP_Geometries}. The DP configuration can be collinear (a), orthogonal reheating (b), orthogonal pre-ablation (c) or dual laser cross beam (d).
Further, DP LIBS can also have different configurations depending on the lasers used: nanosecond+nanosecond (ns+ns), femtosecond+femtosecond (fs+fs), femtosecond+nanosecond (fs+ns) or nanosecond+femtosecond (ns+fs).
It has been found that the spectral intensity of copper plasma is higher in the fs+ns configuration \cite{Wang_POP_2020}. The plasma temperature has been found to be lower whereas the electron density is higher. It has been suggested that the second pulse re-excites the plasma, resulting in enhanced spectral intensity.
Intensity enhancement has also been reported for orthogonal fs+fs DP LIBS \cite{Nikolaos_Applied_Spectro}, which is projected for LIBS imaging with better spatial resolution and spectro-chemical sensitivity. The effect of inverse Bremsstrahlung has been reported for DP LIBS \cite{Siva_POP_2014}. In a recent work, the effect of DP ablation on the emission characteristics of the plasma has been treated theoretically using a hydrodynamic model \cite{D2JA00105E}. The enhancement in intensity is attributed to two mechanisms, viz. the plasma-plasma coupling effect and the pressure effect.
\par
\section{Colliding Plasma}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Colliding_plasma_illustration.png}
\caption{\label{fig:Colliding_plasma_illustration} Illustration of the orientation of the spectrometer slit with respect to the stagnation layer. This arrangement can provide one dimensional spatial resolution normal to the target along the stagnation layer (adapted with permission from reference \cite{Hough_JAP_2010}).
}
\end{figure}
When two laser produced plasmas (known as seed plasmas) are made to interact, an interaction zone is formed. This interaction zone is characterized by a stagnation layer. Colliding plasma schemes are useful in understanding plasma screening effects in fusion devices \cite{Shboul_POP_2014}. A simple setup for colliding plasma is shown in Fig.~\ref{fig:Colliding_plasma_illustration}. The spectrometer slit is aligned along the propagation of the stagnation layer.
However, the properties of the stagnation region depend on the relative orientation of the targets from which the seed plasmas are formed. Two target configurations, plane and wedge-shaped, are widely used for plasma collision studies. \par
Further, a collisionality parameter ($\zeta = D/\lambda_{ii}$) is defined to represent various scenarios, where $D$ is the separation between the two seed plasmas and $\lambda_{ii}$ is the ion-ion mean free path defined by
\begin{eqnarray}
\label{i-i_mean_freepath}
\lambda_{ii}(1\rightarrow 2)=\frac{4\pi\epsilon_0^2 m_i^2 v_{12}^4}{e^4 Z^4 n_i \ln\Delta_{12}}
\end{eqnarray}
A collisionality parameter $\zeta > 1$ indicates soft stagnation, i.e., interpenetration of the plume species of the seed plasmas, whereas $\zeta < 1$ indicates hard stagnation (in this case collisions among the seed plasmas dominate and result in a heated plasma).
As the mean free path is highly dependent on $v_{12}$ and to a lesser extent on $m_i$, the collisionality condition can be engineered through the orientation of the seed plasmas with respect to each other and through the target material.
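As a rough numerical illustration, the following Python sketch evaluates Equation~\eqref{i-i_mean_freepath} and the corresponding collisionality parameter; the ion species, relative velocity, density, separation and Coulomb logarithm below are assumed placeholder values, not measured data.
\begin{verbatim}
import math

# Physical constants (SI)
EPS0 = 8.854e-12    # vacuum permittivity, F/m
E_CH = 1.602e-19    # elementary charge, C
AMU  = 1.661e-27    # atomic mass unit, kg

def ion_ion_mfp(m_i, v12, Z, n_i, ln_coulomb):
    """Ion-ion mean free path lambda_ii (Eq. above), SI units."""
    num = 4.0 * math.pi * EPS0**2 * m_i**2 * v12**4
    den = E_CH**4 * Z**4 * n_i * ln_coulomb
    return num / den

# Assumed, representative values for two aluminium seed plumes
m_i = 27.0 * AMU    # Al ion mass, kg
v12 = 5.0e4         # relative plume velocity, m/s (assumed)
n_i = 1.0e24        # ion density, m^-3 (assumed)
D   = 2.0e-3        # seed-plasma separation, m (assumed)

lam_ii = ion_ion_mfp(m_i, v12, Z=1, n_i=n_i, ln_coulomb=10.0)
zeta = D / lam_ii
print(f"lambda_ii = {lam_ii:.2e} m, zeta = {zeta:.2f}")
# zeta > 1: soft stagnation (interpenetration); zeta < 1: hard stagnation
\end{verbatim}
For these assumed numbers $\zeta$ comes out of order unity, i.e., near the boundary between soft and hard stagnation.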
Further, the colliding plasma can be homogeneous or heterogeneous: if the seed plasmas are from the same element, it is called homogeneous, whereas seed plasmas from different elements are termed heterogeneous.
Signatures of enhanced neutral emission and molecular formation have been reported in the interaction zone \cite{Saxena_19}. Moreover, the properties of the interaction zone are also modulated in the presence of a magnetic field \cite{Alam_POP_2020}. By observing lines corresponding to trace elements, Tiwari et al. \cite{TIWARI2022106411} showed that sensitivity can be optimized using colliding plasmas in the presence of a magnetic field. In another recent study, Delaney et al. \cite{DELANEY2022106430} studied the properties of the stagnation layer formed by laterally colliding plasmas and annular plasmas and found that the limit of detection (LOD) can be improved with colliding plasmas.
\section{Grating induced breakdown spectroscopy (GIBS)}
In the conventional ns-LIBS technique, plasma shielding affects reproducibility, repeatability and the signal-to-noise ratio. Interestingly, in filament-induced breakdown spectroscopy (FIBS), the remarkable ability of filaments to travel long distances independently of the diffraction limit makes the technique suitable for long-range operation \cite{ROHWETTER20051025}. In FIBS the problem of shielding is overcome, but the achievable power density is limited. These problems can be countered in plasma grating induced breakdown spectroscopy (GIBS) \cite{MOTTOROS2020329}. A simple illustration of GIBS is shown in Fig.~\ref{fig:GRAT_LIBS}. Fig.~\ref{fig:FIBS_GIBS} shows the improvement in intensity of the Si I 288.2 nm line with GIBS.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{GRAT_LIBS.png}
\caption{\label{fig:GRAT_LIBS} Experimental schematic diagram of GIBS (adapted with permission from reference \cite{GIBS_2020}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{FIBS_GIBS.png}
\caption{\label{fig:FIBS_GIBS} (a) Intensity of the Si 288.2 nm line as a function of the laser pulse energy detected with the FIBS and GIBS systems. (b) Intensity of the Si 288.2 nm line obtained by interaction of two beams with different polarizations. Intensity enhancement is evident in case of GIBS which also depends on polarization (adapted with permission from reference \cite{GIBS_2020} ).
}
\end{figure}
\section{LIBS imaging/Confocal LIBS}
Spatially resolved LIBS imaging has attracted considerable interest because of its importance in revealing the elemental distribution in a sample. For improving the lateral resolution in LIBS imaging, techniques such as micro-LIBS, fs-LIBS and near-field enhanced atomic emission spectroscopy have been proposed \cite{MOTTOROS2020329}. Figure~\ref{fig:Imaing_FIBS} demonstrates the general protocol for LIBS imaging, whereas Fig.~\ref{fig:Cofocal_LIBS} shows a confocal LIBS imaging setup.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Imaing_LIBS.png}
\caption{\label{fig:Imaing_FIBS} General protocol for LIBS imaging. (A) Schematic view of the LIBS imaging setup with its main components: the microscope objective used to focus the laser pulse, the motorized platform for moving the sample, and the detection system connected to a spectrometer via an optical fiber. (B) Examples of single-shot emission spectra in the spectral range between 250 and 280 nm. (C) Sample relative-abundance images of Au (yellow), Fe (red), and P (blue) represented using false color scales (adapted with permission from reference \cite{MOTTOROS2020329}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Cofocal_LIBS.png}
\caption{\label{fig:Cofocal_LIBS} Schematic of Confocal LIBS microscopy. The beam emitted from continuous-wave (CW) laser passes through a beam expander and beam splitters BS1 and BS2, and then is focused onto the sample by an objective. The reflected light transmits along the original light path and is reflected by the BS2, and then is finally detected by a photomultiplier tube (PMT)
(adapted with permission from reference \cite{C9JA00387H}).
}
\end{figure}
\section{Self-Absorption, Optical Thickness}
LIBS-based spectro-chemical analysis relies on the relation between the observed intensity of an emission line and the analyte concentration, and from an analytical point of view a linear relationship is desired. However, if the optical thickness is large, part of the emitted radiation will be reabsorbed by the same species, which is termed self-absorption \cite{REZAEI2020105878}. The self-absorption coefficient (SA) is defined as the ratio of the actual intensity of an emission line to its theoretical intensity \cite{YANG2020163702}. Self-absorption in homogeneous as well as inhomogeneous plasmas is well described by Rezaei et al. \cite{REZAEI2020105878}. The simplest check that the plasma is optically thin is to examine the intensity ratio of two lines which originate from the same upper energy level (or from upper levels separated by a small energy difference); if the following relation holds, the plasma can be assumed to be optically thin.
\begin{eqnarray}
\label{Optical_thickness}
\frac{I_1}{I_2}=\frac{g_1A_1\lambda_2}{g_2A_2\lambda_1}
\end{eqnarray}
where $I$, $g$, $A$ and $\lambda$ with subscripts are the intensity, statistical weight, transition probability and wavelength, respectively, of lines 1 and 2 under consideration.
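As a minimal sketch, the following Python snippet applies this check with hypothetical spectroscopic data (the $g$, $A$ and $\lambda$ values below are placeholders, not taken from a database):
\begin{verbatim}
# Optically-thin check, Eq. (Optical_thickness): for two lines sharing
# the same upper level, the measured ratio should match theory.
def theoretical_ratio(g1, A1, lam1, g2, A2, lam2):
    return (g1 * A1 * lam2) / (g2 * A2 * lam1)

# Hypothetical spectroscopic data (placeholders, not database values)
g1, A1, lam1 = 4, 6.0e7, 500.1    # wavelengths in nm
g2, A2, lam2 = 2, 3.5e7, 510.4

r_theory   = theoretical_ratio(g1, A1, lam1, g2, A2, lam2)
r_measured = 3.2                  # measured intensity ratio (assumed)
print(f"theory {r_theory:.2f} vs measured {r_measured:.2f}")
# A strong deviation of the measured ratio from the theoretical one
# suggests the stronger line is self-absorbed (not optically thin).
\end{verbatim}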
Another way is to estimate the spectral absorption at the center of a line originating between levels $i$ and $j$, given by
\begin{eqnarray}
\label{Spectral_absorption}
k_{ji}(\lambda_0)=8.85\times10^{-13}f_{ji}\lambda_0^2n_iP_{ji}(\lambda_0)
\end{eqnarray}
where $k_{ji}$ (cm$^{-1}$) is the absorption coefficient, $f_{ji}$ is the absorption oscillator strength, $\lambda_0$ is the wavelength (in cm), $n_i$ is the population density of the lower level (cm$^{-3}$) and $P_{ji}(\lambda_0)$ is the normalized line profile at the line center. For a Lorentzian profile, $P_{ji}(\lambda_0) = 1/(\pi\Delta\lambda_{1/2})$.
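A short Python example of evaluating this absorption coefficient for an assumed Lorentzian line is given below; all input values are illustrative placeholders, with wavelengths in cm and $n_i$ in cm$^{-3}$ as required by the numerical constant.
\begin{verbatim}
import math

# Eq. (Spectral_absorption); for a Lorentzian line,
# P(lambda0) = 1 / (pi * FWHM). All values are assumed.
f_ji = 0.1         # absorption oscillator strength (assumed)
lam0 = 500e-7      # line-center wavelength, cm (500 nm)
n_i  = 1.0e14      # lower-level population density, cm^-3 (assumed)
fwhm = 0.5e-8      # Lorentzian FWHM, cm (0.5 Angstrom, assumed)

P0 = 1.0 / (math.pi * fwhm)
k  = 8.85e-13 * f_ji * lam0**2 * n_i * P0
print(f"k(lambda0) = {k:.2f} cm^-1")
# For a plasma of length L, k*L << 1 indicates an optically thin line.
\end{verbatim}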
\section{Self-reversal of lines}
Self-reversal results when the emission from the hot plasma center is absorbed by the same species present at the cooler periphery, or due to plasma inhomogeneity \cite{TOUCHET2020105868}, as shown in Fig.~\ref{fig:Self_Reversal}. The emitted line shows a dip at its peak because the radiation emitted from the hotter plasma center is absorbed at the cooler periphery. A number of studies have reported the presence of self-reversed lines under different plasma environments \cite{TOUCHET2020105868,Bhupesh_POP_2013,KUMAR2022127968,URBINA2022106489,Kautz_JAP_2021}. Strong self-reversal of the Li I 670.8 nm line is noticed when LIBS is performed in a confined geometry, as shown in Fig.~\ref{fig:Self_Reversal2}. Moreover, it depends on the distance of the plate used to confine the plasma and on the time delay \cite{Bhupesh_POP_2013}. Strong self-reversal was also reported for a Ba ionic line in laser produced plasma \cite{KUMAR2022127968}. Shock waves during breakdown have also been considered to play an important role in generating plasma inhomogeneities \cite{URBINA2022106489}.
Laser induced fluorescence (LIF) of laser ablated filaments has been found to reduce the self-reversal features in the spectral profiles \cite{Kautz_JAP_2021}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Self_Reversal.png}
\caption{\label{fig:Self_Reversal} Figure (a, b) depicts how self-reversal can take place; (c,d) shows self-reversal in 670.8 nm resonance line of Lithium (Li I) (adapted with permission from reference
\cite{TOUCHET2020105868}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Self_Reversal2.png}
\caption{\label{fig:Self_Reversal2} Self-reversal in Lithium 670.8 nm and 610.3 nm lines in a confined geometry. More pronounced self-reversal is clearly evident for 670.8 nm resonance line (adapted with permission from reference
\cite{Bhupesh_POP_2013}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Self_Reversal3.png}
\caption{\label{fig:Self_Reversal3} Self reversal demonstrating Li isotopic dependence (adapted with permission from reference\cite{TOUCHET2020105868}).
}
\end{figure}
Self-reversal of lines in LIBS, long considered unwanted, appears to be a boon in disguise. Estimation of isotopic abundance for lithium isotopes has been projected by exploiting the narrow (less affected by line broadening processes) self-reversed profile \cite{TOUCHET2020105868}. With further optimization and analysis of self-reversed lines, better quantitative estimates of isotopic abundance are expected. Figure~\ref{fig:Self_Reversal3} shows the isotopic dependence of the self-reversal profiles for the lithium 670.8 nm resonance line.
\section{Parameter estimation} The estimation of two key parameters, viz. the electron density and the electron temperature, is essential for understanding the plasma plume behavior as well as for estimating the elemental composition.
\subsection{Electron density} Electron density is mainly estimated from the Stark broadening of lines, a method well established in the literature which will not be detailed in the present article. However, in some cases the Stark parameters of certain lines are not available and hence need to be estimated. One adopted method is to use an alloy with another metal with known Stark parameters and, from the density obtained, extract the unknown Stark parameters \cite{Mayo_2008}.
In another study, Aragon et al. used fused glass samples to extract the Stark parameters for Fe II, Ni II and Ti II with improved line-to-background ratios \cite{ARAGON201439,mnras_Manrique}.
In some cases, these parameters are estimated from the H$_\alpha$ line arising from trace amounts of water.
For nickel, Stark parameters were estimated from the density obtained from the H$_\alpha$ line \cite{Jinto_POP,ElSherbini2006}.
Parameters for some tungsten lines were estimated by using the C II line (426.7 nm) in tungsten carbide plasma \cite{Nishijima_2015}.
Cross calibration in a multi-element plasma was also used to extract these parameters \cite{Liu2016}. Stark widths for U I and U II were estimated from the O I 799.19 nm line present as a surface impurity in a uranium metal target \cite{Hari_uranium}. Stark broadening coefficients for tantalum lines were extracted using the extended C-sigma method \cite{POGGIALINI2020105829} starting from the Stark broadening coefficient of a known line. However, these methods may suffer from interference due to the presence of several elements in the plasma, or the lines from the tracer impurities may be too weak to provide sufficient intensity.
Regarding the use of hydrogen: at low pressures the trace amount of water present can give the H$_\alpha$ line, but it may have low intensity, whereas at higher air pressures (where higher water content is expected) water is likely to contaminate the plasma as well as affect its properties. In principle, from the density obtained via H$_\alpha$, the Stark parameters of lines of any element present in the same plasma can be estimated.\par
The spectral emission from an LPP is broadened by various mechanisms, such as Stark, Doppler and van der Waals broadening \cite{Griem1974}. The line shape of the emission from an LPP varies depending on the broadening mechanism. A Lorentzian profile is expected for collisional broadening processes such as Stark and van der Waals broadening, whereas Doppler broadening results in a Gaussian profile. Sometimes more than one mechanism contributes to the broadening, and the line shape will then be a convolution of different profiles, as discussed in detail by Griem \cite{Griem1974}.
In an LPP, three primary mechanisms can contribute to the spectral line shape: Doppler, Stark and instrumental. The FWHM due to Doppler broadening can be estimated by
\begin{eqnarray}
\label{dopler_broadening}
\Delta \lambda = 7.2\times10^{-7}\lambda_0\sqrt{\frac{T_e}{M}}
\end{eqnarray}
where $T_e$ is the temperature in K and $M$ is the atomic mass in amu. In LIBS, the main broadening mechanism is considered to be Stark broadening since, owing to the low temperature, the Doppler contribution is very small.
The Stark width (in \AA) is given by
\begin{eqnarray}
\label{stark_broadening}
\Delta \lambda_{1/2} = 2W\left(\frac{n_e}{10^{16}}\right)
\end{eqnarray}
where $W$ is the Stark width parameter (in \AA) and $n_e$ is the electron density in cm$^{-3}$. Though the Stark broadening contribution is dominant in an LPP, Doppler broadening may become significant for emission from a plume expanding into a low background pressure and at later times, when the plasma density is not high.
Assuming the instrumental profile is Lorentzian, the actual Stark width can be deduced by subtracting the instrumental width from the fitted Lorentzian width.
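The following Python sketch illustrates both estimates with assumed numbers: the Doppler FWHM from Equation~\eqref{dopler_broadening} and the electron density inverted from Equation~\eqref{stark_broadening} after removing an assumed instrumental width (the line data and widths are hypothetical):
\begin{verbatim}
import math

# Doppler FWHM from Eq. (dopler_broadening), with T in K, M in amu
# and wavelengths in Angstrom (assumed conventions and values).
lam0 = 5000.0     # line wavelength, Angstrom (500 nm)
T    = 11600.0    # ~1 eV, in K
M    = 27.0       # atomic mass, amu (Al)
dl_doppler = 7.2e-7 * lam0 * math.sqrt(T / M)

# Electron density from the Stark width, Eq. (stark_broadening), after
# subtracting an assumed Lorentzian instrumental width from the fit.
dl_lorentz_fit  = 0.35    # fitted Lorentzian FWHM, Angstrom (assumed)
dl_instrumental = 0.05    # instrumental width, Angstrom (assumed)
W               = 0.015   # Stark width parameter, Angstrom (assumed)

dl_stark = dl_lorentz_fit - dl_instrumental
n_e = dl_stark / (2.0 * W) * 1.0e16   # electron density, cm^-3
print(f"Doppler FWHM = {dl_doppler:.3f} A, n_e = {n_e:.2e} cm^-3")
# Here the Doppler width (~0.07 A) is much smaller than the Stark
# width (~0.3 A), consistent with the discussion above.
\end{verbatim}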
Besides Stark broadening, laser Thomson scattering and interferometry can also be used to estimate the electron density in laser produced plasmas \cite{Hari_RevModPhys.94.035002,Choudhary_interferometry}; these methods have been described in an earlier review article \cite{Muraoka_2011}.
\subsection{Electron temperature}
The estimation of plasma parameters using OES is based on the Boltzmann and Saha equations. The Boltzmann equation relates the ratio of the population density $N_j^z$ of an excited energy level to the number density $N^z$ with the temperature $T$ as
\begin{eqnarray}
\label{Bolt_eq}
\frac{N_j^z}{N^z}=\frac{g_j^z \, \exp\left(\frac{-E_j^z}{kT}\right)}{U^z(T)}
\end{eqnarray}
where $z$ represents the ionization stage, $E_j^z$ and $g_j^z$ are the energy and degeneracy of the specified level, respectively, and $U^z(T)$ is the partition function.
The Saha equation relates the electron density $N_{e}$ and temperature with the population densities of successive ionization states $z$ and $z-1$ as follows
\begin{eqnarray}
\label{Saha_eq}
\frac{N_{e}N^z}{N^{z-1}}=\frac{2U^{z}(T)}{U^{z-1}(T)} \left(\frac{2\pi mkT}{h^2}\right)^{3/2} \exp\left[\frac{-(E_\infty^{z-1}-\Delta E_\infty^{z-1})}{kT}\right]
\end{eqnarray}
where $E_\infty^{z-1}$ is the ionization energy of the species in charge state $z-1$, $\Delta E_\infty^{z-1}$ is the correction to the ionization energy due to plasma interactions, $h$ is Planck's constant $(6.626 \times 10^{-34}\,\mathrm{J\,s})$ and $m$ is the electron mass $(9.109\times10^{-31}\,\mathrm{kg})$. All these parameters are well known for most species and are available in databases such as NIST. The applicability of these equations largely depends on the validity of the LTE conditions of the plasma described in Section~\ref{sec:LTE}.
The electron temperature can be estimated using the ratio of emission intensities of spectral lines of the same species and charge state via the Boltzmann equation (Equation~\eqref{Bolt_eq}). The estimation is normally done
either by taking the ratio of the emission intensities of two separate lines or by using the Boltzmann plot method \cite{ARAGON2008893}.
In the case of intensity ratios of line emissions, assuming local thermodynamic
equilibrium (LTE) in the system, the temperature is estimated using the following equation, which is derived from the Boltzmann equation,
\begin{eqnarray}
\label{Temp_int_R}
\frac{I_1}{I_2}=\frac{g_1A_1\lambda_2}{g_2A_2\lambda_1}\exp\left(\frac{-(E_1-E_2)}{k_BT_e}\right)
\end{eqnarray}
where $\lambda_i$, $A_i$, $g_i$, $I_i$ and $E_i$ $(i = 1,2)$ are the wavelength, transition probability, statistical weight, line intensity and energy of the excited state, respectively.
When only two lines are considered, the selection of these lines is very critical. For better accuracy, the upper-state energy levels of the two lines under consideration have to be well separated. Moreover, the energy difference should be significantly larger than the plasma electron temperature to get an accurate estimate from the line intensity ratio. Also, care has to be taken to correct for the opacity of the plasma if it is present. Neglecting these conditions can lead to a wrong estimate of the temperature. A better estimate of the electron temperature can be achieved by using spectral lines of two successive ionization states, as described by Harilal et al. \cite{Harilal1998}.
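As a worked example, the following Python sketch inverts Equation~\eqref{Temp_int_R} for $T_e$ using hypothetical line data and an assumed measured intensity ratio:
\begin{verbatim}
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K

# Hypothetical data for two lines of the same species/charge state:
# statistical weight, transition probability (s^-1), wavelength (nm),
# upper-level energy (eV).
g1, A1, lam1, E1 = 4, 6.0e7, 500.1, 3.5
g2, A2, lam2, E2 = 6, 2.0e7, 430.5, 5.5
R = 9.0           # measured intensity ratio I1/I2 (assumed)

C = (g1 * A1 * lam2) / (g2 * A2 * lam1)
T_e = (E2 - E1) / (K_B * math.log(R / C))
print(f"T_e = {T_e:.0f} K ({K_B * T_e:.2f} eV)")
\end{verbatim}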
A more general and accurate way of estimating the temperature is the Boltzmann plot method, in which a number of lines are used. The equation for estimating the temperature (Equation~\eqref{Temp_int_R}) can be rewritten as
\begin{eqnarray}
\ln{\bigg[\frac{I_{ij}\lambda_{ij}}{g_iA_{ij}}\bigg]}=\frac{-E_i}{k_BT_e} +C
\label{BoltzmanPlot}
\end{eqnarray}
where $I_{ij}$, $\lambda_{ij}$, $A_{ij}$, $g_i$ and $E_i$ are the spectral intensity, wavelength, transition probability, statistical weight of the upper state and upper-state energy, respectively.
If a plot is made with $E_i$ on the x axis and the left-hand side (LHS) of Equation~\eqref{BoltzmanPlot} on the y axis, the slope of the graph will be equal to $\frac{-1}{k_B T_e}$, from which one can easily estimate the plasma temperature. In this method, all lines meeting the required conditions can be used, which improves the accuracy of the temperature estimation.
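A minimal Python sketch of the Boltzmann plot method is shown below; the line list is synthetic (the intensities are generated from an assumed temperature), so the fit simply recovers the input value and serves only to illustrate the procedure:
\begin{verbatim}
import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K

# Hypothetical line list: upper-level energies (eV), wavelengths (nm),
# statistical weights and transition probabilities (s^-1).
E   = np.array([3.2, 3.8, 4.4, 5.1, 5.6])
lam = np.array([510.0, 480.0, 460.0, 440.0, 420.0])
g   = np.array([5, 7, 5, 3, 5])
A   = np.array([2.0e7, 3.0e7, 2.5e7, 2.2e7, 2.8e7])

# Synthesize intensities for an assumed temperature, then recover it.
T_true = 11600.0                               # K (~1 eV)
I = g * A / lam * np.exp(-E / (K_B * T_true))  # arbitrary scale

y = np.log(I * lam / (g * A))    # LHS of the Boltzmann plot
slope, intercept = np.polyfit(E, y, 1)
T_e = -1.0 / (K_B * slope)
print(f"recovered T_e = {T_e:.0f} K")   # ~11600 K by construction
\end{verbatim}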
Another spectroscopic method for the estimation of the plasma temperature is the line-to-continuum ratio method \cite{MOON2012221}. The equation for this method is derived from the Saha equation and from the expression for the integrated spectral emissivities of the respective lines; it can be expressed as
\begin{eqnarray}
\label{Line_to_cont}
\frac{I_{1}}{\varepsilon_{c}}(\lambda)= \dfrac{2.0052 \times 10^{-5}A_{21} g_2\exp \left(\frac{E_{i}-\Delta E_{1}}{kT_{e}} \right) \exp\left(\frac{-E_{2}}{kT_{exc}}\right)}
{U_i \lambda_i T_e \left[\xi \left( 1 - \exp \left( \frac{-hc}{\lambda kT_{e}} \right) \right) + G \exp \left( \frac{-hc}{\lambda kT_{e}} \right) \right] }
\end{eqnarray}
where $I_1$ is the integrated intensity of the emission line, $\varepsilon_{c}$ is the continuum emission coefficient, $A_{21}$ is the transition probability, $g_2$ is the upper-state statistical weight, $E_i$ is the ionization potential, $E_2$ is the upper-state energy level, $\Delta E_{1}$ is the correction to the ionization potential due to the plasma (which can safely be neglected), $U_i$ is the partition function, $\xi$ is the free-bound continuum correction factor and $G$ is the free-free Gaunt factor.
Experimentally, $\varepsilon_{c}$ is measured at a wavelength close to the chosen line. From the above equation the electron temperature $T_{e}$ can be calculated.
As discussed for the density, the temperature can also be estimated using laser Thomson scattering \cite{Muraoka_2011}.
\section{Local Thermodynamical Equilibrium}\label{sec:LTE}
For establishing that the levels are populated according to the Boltzmann distribution, the plasma is assumed to be under local thermodynamic equilibrium (LTE). The McWhirter criterion defined below is taken as a necessary (but not sufficient) condition for LTE:
\begin{eqnarray}
\label{Macwhirter}
n_e\geq 1.6\times 10^{12}T_e^{0.5}\Delta E_{mn}^3
\end{eqnarray}
where $n_e$ is electron density in $cm^{-3}$, $T_e$ is electron temperature in Kelvin and $\Delta E_{mn}$ (eV) is the largest energy gap between the adjacent energy levels.
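As a quick worked check, the following Python snippet evaluates the McWhirter threshold for assumed values of $T_e$ and $\Delta E_{mn}$:
\begin{verbatim}
# McWhirter criterion, Eq. (Macwhirter): necessary condition for LTE.
def mcwhirter_ne_min(T_e, dE):
    """Minimum electron density (cm^-3); T_e in K, dE in eV."""
    return 1.6e12 * T_e**0.5 * dE**3

T_e = 11600.0    # electron temperature, K (~1 eV, assumed)
dE  = 3.0        # largest adjacent-level energy gap, eV (assumed)
n_e = 1.0e17     # measured electron density, cm^-3 (assumed)

n_min = mcwhirter_ne_min(T_e, dE)
print(f"n_e,min = {n_min:.2e} cm^-3; criterion met: {n_e >= n_min}")
\end{verbatim}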
However, for transient and inhomogeneous plasmas, the Cristoforetti criteria \cite{CRISTOFORETTI} discussed below also have to be checked.
Transient plasmas, such as laser produced plasmas, require verification of the Cristoforetti criteria in order to ensure LTE. These criteria state that the diffusion length $D_\lambda$ of atoms/ions, during a time period of the order of the relaxation time to equilibrium, should be shorter than the variation length of the temperature and electron number density in the plasma. The diffusion length is approximated as
\begin{eqnarray}
D_\lambda \approx 1.4\times 10^{12} \times \left( \frac{(k_B T_e)^{3/4}}{N_e}\right) \times\left( \frac{\Delta E}{M_A f_{12} g} \right)^{1/2}
\times e^{\Delta E/ 2k_B T_e}
\end{eqnarray}
where $k_B$ is the Boltzmann constant, $N_e$ is the electron number density, $T_e$ is the plasma temperature, $M_A$ is the atomic mass of the element,
$\Delta E$ is the energy difference between the upper and lower levels, $f_{12}$ is the oscillator strength and $g$ is the Gaunt factor.
$f_{12}$ is a dimensionless quantity characterizing the probability of absorption or emission of electromagnetic radiation in a particular transition, and $g$ is a correction factor applied as an approximation to the classical emission calculation.
Similar to the variations in diffusion length, it is equally important that the relaxation time of the plasma for establishing the thermodynamic equilibrium has to be shorter than the time of variation of plasma temperature and density\cite{CRISTOFORETTI}.
A typical laser produced plasma at its initial stage has a density $\gg 10^{18}\,$cm$^{-3}$ and a temperature of a few eV, which meets the LTE requirement. However, after sizeable plasma expansion the density falls rapidly, which leads to non-LTE plasma conditions.
\section{Calibration free LIBS (CFLIBS)}
Conventional LIBS analysis suffers from matrix effects and also requires reference samples for the calibration curve. Hence the calibration-free LIBS (CF-LIBS) approach was adopted. Details of this method are given in a review by Tognoni et al. \cite{TOGNONI20101} and in a recent reference by Zhang et al. \cite{FP_2022_Zhang}. CF-LIBS assumes that (i) the plasma plume represents the actual sample composition, (ii) the plasma is in LTE within the spatial and temporal observation window, (iii) the plasma is homogeneous and (iv) the spectral lines under consideration are optically thin.
\subsection{C Sigma graphs}
Generalized curves of growth, known as C$\sigma$ graphs, which include several lines of various elements in the same ionization stage, were suggested for LIBS by Aragon et al. \cite{ARAGON201490}. The method is based on the Saha, Boltzmann and radiative transfer equations under the assumption of local thermodynamic equilibrium (LTE). Further, C$\sigma$ graphs are based on the calculation of line cross sections, which allows estimation of self-absorption.
\subsection{ Internal reference for self-absorption correction (IRSAC)}
To overcome self-absorption in CF-LIBS, correction by an internal reference has also been suggested \cite{Sun2009CorrectionOS}. Basically, the line chosen as the internal reference has a lower level of high energy or a low transition probability, so that it is only slightly affected by self-absorption. Based on this reference, the intensities of the other lines can be corrected. Moreover, the columnar density and standard reference line (CDSRL) method has been found to give better accuracy than standard LIBS \cite{HU2021339008,Sun2009CorrectionOS}.
\section{Enhancement in signal/detection sensitivity}
Signal enhancement and the subsequent increase in detection sensitivity have been widely investigated for better exploitation of LIBS in various applications. A recent review briefly describes some of the approaches attempted \cite{Fu_2020_Front}. The methods adopted for signal enhancement include DP LIBS, the atmosphere control method, applying spatial constraints, application of magnetic and electric fields, microwave-assisted LIBS (MALIBS), the LIBS/laser-induced fluorescence combination (LIBS+LIF) and nanoparticle-enhanced LIBS (NELIBS). Raising the sample temperature has also been found to enhance the emission intensity \cite{C9JA00261H,C8JA00348C,Guo_AIP_Adva_2019, TAVASSOLI2009481}; the increase in signal emission is attributed to an increased ablation rate.
\subsection{Effect of magnetic field}
Introduction of a magnetic field has been found to result in a significant enhancement in line intensities \cite{Rai_2003}. Various mechanisms, e.g., confinement by the field, increased electron impact excitation due to Joule heating and the field acting as a constraint, have been suggested for this enhancement \cite{JOSHI2010415,Fu_2020_Front}.
\subsection{Effect of electric field}
Electric field assisted LIBS has been studied for the detection of chlorine and copper \cite{Ahmed2020}. In another work, a significant enhancement ($\approx8$-fold) was observed for copper lines \cite{AHMED_2021}. Fluctuations (contraction/expansion) in the laser produced plasma are suggested as the origin of the observed intensity enhancement. The electric field effect on laser induced titanium (Ti) plasma has also been investigated \cite{Asamoah2021}; the intensity enhancement of Ti lines has been found to depend on the bias voltage.
\subsection{Glow discharge LIBS (GDLIBS)} GDLIBS has been found to result in significant signal enhancement when compared with simple GD or LIBS \cite{Tereszchuk_2008}. GD-LIBS takes advantage of collisional excitation by exciting the material generated by LIBS \cite{TERESZCHUK2009378}.
\subsection{Flame-enhanced LIBS (FELIBS)} Enhancement in LIBS sensitivity was noticed by producing the laser plasma in the outer envelope of a neutral oxy-acetylene flame \cite{Liu_2014}. A high-temperature, low-density plasma was observed before 4~$\mu$s, which has been projected to be beneficial for enhancing LIBS sensitivity.
\subsection{Microwave Assisted LIBS (MALIBS)}
The interaction between microwave radiation and laser produced plasma has been studied in earlier works, and a significant increase in line intensity was noticed \cite{Liu_2010,Oba_2020}. Intensity enhancement and plasma sustainment in the presence of microwaves in air have been attributed to re-excitation of the plasma rather than direct absorption of the microwave \cite{Ikeda_22}.
\section{Surface Enhanced LIBS (SENLIBS)}
In this method a metallic target is used to enhance and stabilize the plasma for direct analysis of flows \cite{C8JA00347E,Yang2020}. It has been demonstrated that the method has the potential of improving measurement sensitivity. Figure~\ref{fig:SENLIBS} shows schematic diagrams of SENLIBS and conventional LIBS. Figure~\ref{fig:Senlibs_Spectra} shows substantial enhancement in the emission intensity of Si I lines on copper and aluminium rods. It is worth mentioning that in both experiments there is no significant variation in the background lines.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{SENLIBS.png}
\caption{\label{fig:SENLIBS} Schematic diagram of (a) SENLIBS and (b) conventional LIBS (adapted with permission from reference\cite{C8JA00347E}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Senlibs_Spectra.png}
\caption{\label{fig:Senlibs_Spectra} Spectra showing Si I lines from silica in (a) copper rod and (b) in aluminium rod (adapted with permission from reference \cite{C8JA00347E}).
}
\end{figure}
\section{LIBS+LIF}
The combination of laser induced fluorescence with LIBS (LIBS+LIF) can enhance line intensities and subsequently provide better detection sensitivity. A typical LIBS+LIF setup is shown in Fig.~\ref{fig:LIBS_LIF}, and the enhanced Co I intensity is demonstrated in Fig.~\ref{fig:Spec_LIF_LIBS}. A significant improvement in the single-shot limit of detection is observed when combining LIBS and LIF \cite{D2JA00120A}. Moreover, the spectral interference effect, which is problematic in conventional LIBS, can be resolved with LIBS+LIF \cite{D2JA00120A}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{LIBS_LIF.png}
\caption{\label{fig:LIBS_LIF} Typical setup for LIBS+LIF combination (adapted with permission from reference
\cite{Zhou21_LIBS_LIF}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Spec_LIF_LIBS.png}
\caption{\label{fig:Spec_LIF_LIBS} Enhancement in Co (I) 340.51 nm line in LIBS+LIF (adapted with permission from reference
\cite{Zhou21_LIBS_LIF}).
}
\end{figure}
\section{Nanoparticle enhanced LIBS (NELIBS)} The presence of nanoparticles has been found to enhance the LIBS intensity. In Figs.~\ref{fig:NELIBS_schematic} and \ref{fig:NELIBS_1}, the processes associated with the presence of nanoparticles on a metallic target are shown. NELIBS has been found to have a larger plasma volume and longer persistence in spite of similar plasma parameters. This enhancement has been attributed to more efficient production of seed electrons in comparison to conventional LIBS.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{NELIBS_schematic.png}
\caption{\label{fig:NELIBS_schematic} Schematic of the ablation process in NELIBS: (a) Laser irradiation and field emission,(b) explosion of NPs and breakdown, (c) electron heating and plasma formation, and (d) plasma evolution and target re-solidification (adapted with permission from reference
\cite{DEGIACOMO201419}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{NELIBS_1.png}
\caption{\label{fig:NELIBS_1} NP enhanced photo-ablation in metals and dielectrics (adapted with permission from reference
\cite{DELLAGLIO2018105}).
}
\end{figure}
Nanoparticle enhanced molecular LIBS (NEMLIBS) was recently reported by Tang et al. \cite{D0JA00528B}. A geometric constraint was proposed to improve NEMLIBS. Moreover, a larger spot size, higher laser energy and pre-ablation of the sample are beneficial in NEMLIBS. A typical NEMLIBS setup is shown in Fig.~\ref{fig:NEMLIBS_scheme}, and the enhancement in molecular emission in the absence as well as in the presence of the constraint is shown in Fig.~\ref{fig:MLIBS_NEMLIBS_Comparison}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{NEMLIBS_scheme.png}
\caption{\label{fig:NEMLIBS_scheme} NEMLIBS set up (adapted with permission from reference
\cite{D0JA00528B}).
}
\end{figure}
\section{Spatial Constraint Method}
In this method a spatial constraint device is introduced at the periphery of the plasma \cite{Fu_2020_Front}. The generated shock wave is reflected at the constraint, subsequently compressing the plasma. This results in higher temperature and density, which in turn gives an enhanced signal. A schematic of the spatial constraint method is depicted in Fig.~\ref{fig:Spatail_Constraint_Scheme}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{MLIBS_NEMLIBS_Comparison.png}
\caption{\label{fig:MLIBS_NEMLIBS_Comparison} Demonstration in enhancement in NEMLIBS for AlO and C2
with and without geometric constraint. (adapted with permission from reference \cite{D0JA00528B}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Spatail_Constraint_Scheme.png}
\caption{\label{fig:Spatail_Constraint_Scheme} Plasma plume evolution (A) in the presence of spatial constraint and (B) without spatial constraint (adapted with permission from reference
\cite{Fu_2020_Front}).
}
\end{figure}
\section{Optically trapped LIBS (OTLIBS)}
In this technique, also known as optical catapulting-optical trapping LIBS, spectral identification of micro- and nano-sized particles is done by sequential optical catapulting, optical trapping and LIBS \cite{FORTES201478,acs.analchem.0c04827}. The details of the optical trapping technique are reviewed by Galbács et al. \cite{D1JA00149C}.
A typical arrangement for LIBS analysis of an optically trapped single particle is illustrated in Fig.~\ref{fig:Scheme_OTLIBS}. The method has been demonstrated to have attogram-level detection sensitivity \cite{FORTES201478,PUROHIT201775}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Scheme_OTLIBS.png}
\caption{\label{fig:Scheme_OTLIBS} Scheme of an experimental arrangement for LIBS analysis of optical trapped single nano particles (adapted with permission from reference
\cite{D1JA00149C} ).
}
\end{figure}
\section{Polarization Resolved LIBS (PRLIBS)}
Emission anisotropy in the expanding plasma plume has been studied because of its importance in deciphering the electron distribution and the self-generated electric and magnetic fields \cite{POP_anisotropy, JLA_Aghaba,Wubetu_2020,SHARMA20073113}.
The degree of polarization at a particular wavelength is defined by
\begin{eqnarray}
\label{polarization}
P_{\lambda}=\dfrac{I_H-I_V}{I_H+I_V}
\end{eqnarray}
where $I_H$ and $I_V$ are the intensities of the horizontally and vertically polarized components, respectively. Exploiting polarization resolved LIBS may be interesting for enhancing LIBS sensitivity \cite{Zhao_2014}, as can be seen from Fig.~\ref{fig:Anisotropy_Al}.
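As a trivial numerical illustration of Equation~\eqref{polarization} (with assumed intensities):
\begin{verbatim}
# Degree of polarization, Eq. (polarization), for assumed intensities
# of the horizontally (I_H) and vertically (I_V) polarized components.
I_H, I_V = 1500.0, 900.0
P = (I_H - I_V) / (I_H + I_V)
print(f"P = {P:.2f}")   # P = 0 would indicate unpolarized emission
\end{verbatim}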
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Anisotropy_Al.png}
\caption{\label{fig:Anisotropy_Al} Polarization resolved spectra of $Al^{2+}$ for different stages of plasma evolution at a background pressure of $1\times10^{2}$ mbar. The laser fluence was 550 J/cm$^2$ with an ICCD camera gate width of 10 ns (adapted with permission from reference \cite{Anisotropy_2017}).
}
\end{figure}
\section{Resonance enhanced LIBS (RELIBS) and resonant surface enhanced LIBS (R-SENILIBS)}
In RELIBS, the excitation laser is tuned to a strong absorption line of one of the major species \cite{C3JA30308J}. The absorbed energy is then distributed over all the elements in the plasma through collisions. A schematic of R-SENILIBS is given in Fig.~\ref{fig:R-SENLIBS_scheme}. The main advantage of RELIBS over LIBS-LIF is its ability to determine multiple species simultaneously. The concept of R-SENILIBS was reported and used to detect lead in water \cite{D1JA00250C}. In this method, surface enhanced
LIBS is combined with resonance enhancement to improve the detection sensitivity. Figure~\ref{fig:Energy_level} shows the energy level diagram for resonant excitation of Pb atoms. A large enhancement in intensity is observed under the on-resonance condition.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{R-SENLIBS_scheme.png}
\caption{\label{fig:R-SENLIBS_scheme} Schematic of R-SENILIBS set up (adapted with permission from reference
\cite{D1JA00250C}).
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{Energy_level.png}
\caption{\label{fig:Energy_level} (a) Partial energy level diagram for resonant excitation of Pb atoms and (b) off resonance and on resonance line spectra (adapted with permission from reference
\cite{D1JA00250C}).
}
\end{figure}
\section{Back Reflection Enhanced LIBS (BRELIBS)}
In BRELIBS, metallic reflectors placed behind transparent targets are used to enhance LIBS sensitivity \cite{ABDELHARITH2021339024}, as can be seen in Fig.~\ref{fig:BRELIBS}. The reflected laser beam reheats the plasma, resulting in enhanced intensity. Further, the obtained LIBS spectrum shows a pronounced increase in the signal-to-noise ratio (SNR).
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0cm 0cm 0cm 0cm, clip=true,angle=0]{BRELIBS.png}
\caption{\label{fig:BRELIBS} Schematic of back reflection enhanced LIBS (adapted with permission from reference
\cite{ABDELHARITH2021339024}).
}
\end{figure}
\section{Simultaneous LIBS Combination With Other Analytical Measurements}
In this section we briefly introduce other analytical methods which are simultaneously used with LIBS.
\subsection{LIBS-Raman}
The combination of Raman and LIBS enables one to study the chemical composition in a broader context encompassing elemental and morphological information \cite{HOLUB2022106487,SUN2022106456}.
In earlier studies, simultaneous Raman and LIBS measurements with a single ns laser using the 1064 nm fundamental and the 532 nm second harmonic were suggested by Sharma and coworkers \cite{SHARMA20071036,Samuel_appl_spec}.
The combination of LIBS and Raman in a single platform has since gained a lot of attention \cite{Qingyu_ASR,ASR_hybrid_LIBS_RAMAN}.
Spatial and temporal combining techniques have been suggested for this purpose \cite{Lednev2018}.
A potentially portable, table-top remote Raman-LIBS system has been reported recently \cite{MUHAMMEDSHAMEEM2022108264}.
\subsection{LIBS-Laser Ablation Inductively Coupled Plasma (LA-ICP)}
Meissner et al. \cite{MEISSNER2004316} compared LIBS and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) for the detection of trace elements in a solid matrix. LIBS is affected by the matrix as well as self-absorption, but does not depend on sample preparation. On the other hand, LA-ICP-MS detection limits are in principle lower, but the preparation method strongly influences the measurements.
Simultaneous LIBS and LA-ICP optical emission spectroscopy (LA-ICP-OES) has been reported for simultaneous analysis of the elements in asphaltenes \cite{Asphaltenes_LIBS}.
Elemental analysis was done by LA-ICP-MS as well, while the aromatic/paraffinic nature was also determined by LIBS from the H/C ratio.
Simultaneous LIBS and LA-ICP mass spectrometry (LA-ICP-MS) has been reported for spatially resolved mapping of major and trace elements in bastnasite rare earth ore \cite{LA-ICP-MS_JAAS}.
The combination of the two techniques provides complementary measurements that cannot be achieved with separate measurements due to low sensitivity or strong interferences.
Combined LIBS/LA-ICP-MS has been reported for studying polymer alteration under corrosive conditions \cite{LA_ICP_MS_LIBS_SR}.
It has been demonstrated that LIBS/LA-ICP-MS is a powerful method for polymer characterization as well as for the study of polymer degradation.
\subsection{ LIBS- X-Ray Fluorescence (XRF)}
Double pulse LIBS and micro-X-ray fluorescence (micro-XRF) were reported for characterizing materials \cite{Spie_LIBS_XRF}. It was found that LIBS is highly sensitive to the chemical and physical properties of the sample. The performance of portable LIBS and portable XRF devices has been reported by Rao et al. \cite{D1JA00404B}. While portable LIBS can give instantaneous measurements, its accuracy is hampered by self-absorption. On the other hand, XRF measurements have a better limit of detection, but the measurements are time consuming.
\section{Concluding remarks}
In this brief review we have sketched LIBS techniques from parameter estimation to emerging trends and projected applications. We have also introduced application aspects of some hitherto little-exploited phenomena, e.g., colliding plasmas, self-reversal of lines and GIBS. Further, we believe that the glimpses outlined in the present review will provide a sound platform for beginners as well, as it encompasses brief phenomenological aspects together with recent developments in the field.
\section{References}
\section{Introduction}
Increasing interest in natural language processing (NLP) has produced automated solutions to different human problems, such as text classification, text summarization and text generation, with quality comparable to human solutions \cite{khurana2022natural}. Still, there remains the problem of context addition. To solve this problem, we aim to predict the entire context or basic entities in order to recover the coherent and cohesive meaning of a text \cite{santhanam2020context}. Usually only a part of the full text, such as keywords or some description, is available. On the one hand, this is a more complicated task than the language modeling problem, because there are limits to extrapolating the context from a small part or a description \cite{pillutla2021mauve}. Furthermore, the semantic gap between the original text and what is recovered is not as unambiguous as in the summarization problem \cite{nallapati2016abstractive}.
This may also be applied to searching for insights by titles or to recovering text contents when the author is not available. Another application is feature extraction for better text generation or context reconstruction for dialogues \cite{yang2019end}. On the other hand, there are topics in the analysis of human code generation quality assessment \cite{kovalchuk-etal-2022-human} and community question answering (CQA) in which the Bayesian approach could prove a great tool. The CQA domain needs to extract information from questions or shorter titles to generate answers more accurately. Such tasks mostly obtain good results with complex and sophisticated neural network architectures, such as LSTMs \cite{santhanam2020context} or transformers \cite{10.1007/978-3-030-64580-9_32}. However, there are issues with this application of neural networks \cite{liu-etal-2020-understanding}. For example, the well-known GPT-like models used for text generation need huge amounts of textual data and time, and they are too complex for fine-tuning \cite{Brown2020LanguageMA}.
In this paper, we present a Bayesian approach for context prediction. Bayesian networks (BNs) allow us to recover the meaning of a full text by knowing the conditional probability distributions (CPDs) of named entities. A named entity in our case is the class of one of the semantically meaningful words in the programming domain obtained as a result of named entity recognition (NER). These entities present informative units that carry information about the context.
Additionally, the directed acyclic graph (DAG) provided by a BN shows links between entity classes. In most cases, entities from the title part directly affect the appearance of entities in the question. Besides, it detects links between significant elements of the programming domain, such as code blocks with error names, or class and function entity classes. In practice, because of probable errors of the NER model used to annotate the text content, there may be incorrect relationships, but in the ideal case BNs specify more precise relationships and give information about semantics and causal relationships.
\section{Methodology}
In this section, we describe the different components of the proposed BN approach for context prediction. Figure~\ref{fig1} shows that the overall process consists of several parts:
1) Semantic entity recognition by the NER model;
2) Learning the Bayesian network as a causal model;
3) Predicting and evaluating entities in question by title.
\begin{figure}
\includegraphics[width=\textwidth]{fig1.png}
\caption{The overall process of the proposed BN approach} \label{fig1}
\end{figure}
\subsection{Problem Statement}
As shown in Figure~\ref{fig1}, we need to predict the semantically meaningful classes of questions with a BN as a multilabel classification problem. For this problem we have textual data, presented as vectors.
More formally, assume we are given two sets: Questions $Q = \langle Q_1, Q_2, \dots, Q_N \rangle$ and Titles $T = \langle T_1, T_2, ..., T_N \rangle$, where $N$ is the number of samples in our dataset. For each title $T_i\in{T}$ we have a $k=25$ dimensional vector, $T_i= \langle t_1^i, t_2^i, ..., t_k^i \rangle$, where $t_k^i$ represents the $k$-th entity class of the $i$-th title and $t_k^i\in{\{0, 1\}}$: $t_k^i = 0$ corresponds to the absence of the $k$-th class entity in the title, and $t_k^i = 1$ corresponds to its presence. The questions are treated in the same way: for each question $Q_i\in{Q}$ there is a 25-dimensional vector $Q_i= \langle q_1^i, q_2^i, ..., q_k^i \rangle$, where $q_k^i$ represents the $k$-th entity class of the $i$-th question and $q_k^i\in\{{0, 1\}}$, with $q_k^i = 0$ corresponding to the absence of the $k$-th class entity in the question and $q_k^i = 1$ to its presence. We solve the multilabel classification problem by predicting, for each $i$-th question, its entity classes from the entity classes of the $i$-th title.
\subsection{Dataset}
The dataset we use is based on the StackSample corpus, comprising 10\% of Stack Overflow\footnote{\url{https://stackoverflow.com}} questions and answers\footnote{\url{https://www.kaggle.com/datasets/stackoverflow/stacksample}}. For the set of questions we apply the following filtering operations: select questions with the tag "android", with a length of less than 200 words, and related to the API Usage category proposed by Beyer et al. \cite{beyer2020kind}. Moreover, we selected questions without links and images, because information from those types of content is unavailable to Bayesian networks. Thus, we obtained $N = 707$ pairs of title and question $(T_i, Q_i)$.
\subsection{Semantic Entities Recognition}
For extracting domain-specific entities from the text content we used an open-source CodeBERT \cite{feng2020codebert} implementation trained for the NER problem \cite{lample2016neural} on Stack Overflow data, since this is the most popular resource for programmers seeking answers to questions. The model was tuned to detect 25 entity classes defined by Tabassum et al. \cite{tabassum-etal-2020-code}, namely: ALGORITHM, APPLICATION, CLASS, CODE BLOCK, DATA STRUCTURE, DATA TYPE, DEVICE, ERROR NAME, FILE NAME, FILE TYPE, FUNCTION, HTML XML TAG, KEYBOARD IP, LANGUAGE, LIBRARY, LICENSE, OPERATING SYSTEM, ORGANIZATION, OUTPUT BLOCK, USER INTERFACE ELEMENT, USER NAME, VALUE, VARIABLE, VERSION, WEBSITE. Each class is domain specific and defines context semantics \cite{dash2008context}.
The declared precision of the open-source model is 0.60\footnote{\url{https://huggingface.co/mrm8488/codebert-base-finetuned-stackoverflow-ner}}, hence the markup may not be ideal because of model mistakes. Figure~\ref{fig2} shows a Hugging Face inference example of the model: the annotation model sometimes breaks a word into several parts and assigns each part its own class. To smooth out these inaccuracies, we decided to combine parts of words into one entity according to the class of the first detected part. While entities detected by the model might be ambiguous, testing on the key words of sentences mostly results in correct detection. All pairs are vectorized with one-hot encoding; thus each title and question is represented by a $k$-dimensional vector, since there are $k=25$ defined classes.
\begin{figure}
\includegraphics[width=\textwidth]{fig2.jpg}
\caption{Inference of the NER model, with the word CANCEL incorrectly split.} \label{fig2}
\end{figure}
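A minimal sketch of the one-hot vectorization described above is shown below; the helper is our own illustration, not the exact pipeline code, and the underscore-joined class names are merely convenient identifiers:
\begin{verbatim}
# One-hot vectorization of the NER output (illustrative helper).
ENTITY_CLASSES = [
    "ALGORITHM", "APPLICATION", "CLASS", "CODE_BLOCK",
    "DATA_STRUCTURE", "DATA_TYPE", "DEVICE", "ERROR_NAME",
    "FILE_NAME", "FILE_TYPE", "FUNCTION", "HTML_XML_TAG",
    "KEYBOARD_IP", "LANGUAGE", "LIBRARY", "LICENSE",
    "OPERATING_SYSTEM", "ORGANIZATION", "OUTPUT_BLOCK",
    "USER_INTERFACE_ELEMENT", "USER_NAME", "VALUE", "VARIABLE",
    "VERSION", "WEBSITE",
]
INDEX = {c: i for i, c in enumerate(ENTITY_CLASSES)}

def one_hot(detected_classes):
    """Map a set of detected entity classes to a 25-dim 0/1 vector."""
    vec = [0] * len(ENTITY_CLASSES)
    for c in detected_classes:
        vec[INDEX[c]] = 1
    return vec

# e.g., a title in which the NER model found CLASS and FUNCTION entities
t_i = one_hot({"CLASS", "FUNCTION"})
\end{verbatim}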
\subsection{Bayesian Networks}
A Bayesian network is a probabilistic model that encodes a joint probability distribution over a set of variables $V = \{X_1, \dots, X_n\}$, which, in our case, represent entity classes. We consider only discrete variables. Formally, a Bayesian network $B$ is a pair $\{G, \Theta\}$, where $G$ is a directed acyclic graph called the "structure". Each node corresponds to one of the variables from $V$. $\Theta$ is a set of probabilities defined on $G$; it specifies the conditional probability distributions $P(X_i | PA_i)$, where $PA_i$ are the parents of the variable $X_i$. The lack of an edge between two variables encodes conditional independence. With a BN, it is possible to obtain the joint probability distribution of all variables, given as:
\begin{equation}\label{formula_1}
P(V) = \prod_{i=1}^{n}{P(X_i|PA_i)}
\end{equation}
\subsubsection{Structure Learning}
BNs are a suitable tool for this problem, providing excellent means to structure complex domains and draw inferences. To determine semantic relationships and dependencies, we chose a score-based approach to structure learning, since a constraint-based approach needs expert knowledge. In a score-based approach, a scoring function is used to measure how well a given structure fits the data. Formally, the learning problem is to find $B^*$:
\begin{equation}
B^* = \arg\max_{B}{Score(B|D)}
\end{equation}
where $D$ is the given dataset. The score-based approach is essentially a search problem, hence there are two parts: the search algorithm and a score metric.
\subsubsection{Searching Algorithm}
Chickering showed that learning an optimal BN from $D$ is an NP-hard problem \cite{chickering1996learning}. Solving the learning problem exactly becomes impractical, which is why we decided to use a local search algorithm. In our case, the number of variables is equal to 50, because the question and title entity classes contribute 25 each. As the search algorithm we selected the greedy hill-climbing approach \cite{heckerman1998tutorial}. There are also other algorithms able to learn optimal structures for datasets with dozens of variables \cite{koivisto2004exact, jaakkola2010learning, yuan2012improved}, based on dynamic programming, branch and bound, linear and integer programming (LP), and heuristic search.
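A minimal sketch of this structure-learning step using the open-source pgmpy library is shown below. The file name and data layout are hypothetical, and the estimator class names (HillClimbSearch, BicScore, K2Score, BDeuScore) follow older pgmpy releases and may be renamed in newer versions:
\begin{verbatim}
import pandas as pd
from pgmpy.estimators import (HillClimbSearch, BicScore, K2Score,
                              BDeuScore)

# One row per (title, question) pair; one 0/1 column per entity
# variable (25 title classes + 25 question classes = 50 columns).
data = pd.read_csv("entity_vectors.csv")   # hypothetical file name

hc = HillClimbSearch(data)
dag_bic  = hc.estimate(scoring_method=BicScore(data))
dag_k2   = hc.estimate(scoring_method=K2Score(data))
dag_bdeu = hc.estimate(
    scoring_method=BDeuScore(data, equivalent_sample_size=5))
print(len(dag_bic.edges()), "edges with BIC")
\end{verbatim}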
\subsubsection{Scoring Metrics}
We used the Bayesian information criterion (BIC) \cite{schwarz1978estimating}, the Bayesian Dirichlet equivalent uniform prior (BDeu), and K2 \cite{heckerman1995learning} as metrics. The BIC is based on the Schwarz information criterion and consists of a log-likelihood term and a penalty term, defined as $f(X_i, B, D) = \log(N)/2$, while the score is defined as follows:
\begin{equation}
Score(B | D) = LL(B | D) - f(X_i, B, D)||V||
\end{equation}
In this way, model complexity is penalised as $||V||$ grows, and we obtain a regularized DAG: the plain log-likelihood score usually overfits and tends to favor complete graphs.
BDeu and K2 are scores from the family of Bayesian Dirichlet score functions. Under some assumptions, such as parameter independence, parameter modularity, exchangeable data, and Dirichlet prior probabilities, the penalty term for BDeu can be written as
\begin{equation}
f(X_i,B,D)=\sum^{q_i}_{j}{\sum^{r_i}_{k}{\log{\frac{P(D_{ijk}|D_{ij})}{P(D_{ijk}|D_{ij},\alpha_{ij})}}}},
\end{equation}
where $q_i$ is the number of possible values of $PA_i$, $r_i$ is the number of possible values of $X_i$, $D_{ijk}$ is the number of times $X_i = k$ and $PA_i = j$ in $D$, and $\alpha_{ij}$ is a parameter derived from the user-specified $\alpha$. Under the likelihood-equivalence assumption, $\alpha$ is a heuristic constant that induces the same distribution, described in general terms by Heckerman, Geiger and Chickering (1995). It is called the equivalent sample size, and low $\alpha$ values typically result in sparse networks. We used the default value $\alpha = 5$.
After learning the structure and finding a local optimum, the BNs were pruned using the chi-square test of independence \cite{Argyrous1997} to single out more specific semantic relationships.
Additionally, we used the Chow-Liu algorithm \cite{chow1968approximating}. It finds the maximum-likelihood tree-structured graph (i.e., each node has exactly one parent, except for the parentless root node). The score is simply the log-likelihood; there is no penalty term for graph complexity, as the tree structure itself acts as regularization.
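Again purely as an illustration, pgmpy provides a tree-search estimator for this step (the root-node choice here is our assumption):
\begin{verbatim}
# Illustrative sketch using pgmpy's Chow-Liu implementation.
from pgmpy.estimators import TreeSearch

ts = TreeSearch(df, root_node=df.columns[0])  # root choice is arbitrary here
tree_dag = ts.estimate(estimator_type="chow-liu")
\end{verbatim}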
\subsubsection{Predicting \& Evaluating networks}
For BNs learned with the BIC, BDeu and K2 scores, we predicted question entities using Maximum Likelihood Estimation (MLE). A natural estimate for the CPDs is simply the relative frequency with which each variable state occurs, following Formula~\ref{formula_1}.
For BNs having tree structures we tried different probabilistic inference approaches. Algorithms such as Variable Elimination (VE), Gibbs Sampling (GS), Likelihood Weighting (LW) and Rejection Sampling (RS) are detailed in the respective articles \cite{koller2009probabilistic, hrycej1990gibbs}. Each question label is predicted with a one-vs-rest strategy, conditioned on all entities of the title from the same pair.
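A hedged sketch of the fit-and-predict step, continuing the pgmpy illustration (column names such as \texttt{Q\_APPLICATION} are hypothetical):
\begin{verbatim}
# Illustrative sketch: fit CPDs by MLE, then query a question entity
# given the title entities (one-vs-rest).
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

model = BayesianNetwork(dag_k2.edges())
model.fit(df, estimator=MaximumLikelihoodEstimator)  # relative frequencies

infer = VariableElimination(model)
# evidence: observed title entity classes; variables: one question class
pred = infer.map_query(variables=["Q_APPLICATION"],
                       evidence={"T_APPLICATION": 1, "T_VERSION": 1})
\end{verbatim}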
For evaluation we selected common multilabel classification metrics. We preferred macro and weighted averaging because the existing classes are imbalanced, and it is important to evaluate each class while accounting for its number of instances. The formulas for these metrics are
\begin{equation}
Precision_{M}=\frac{\sum_{i=1}^{k}{\frac{TP_i}{TP_i+FP_i}}}{k};
\end{equation}
\begin{equation}
Precision_{W}=\sum_{i=1}^{k}{Precision_{M_{i}}\cdot W_i},
\end{equation}
where $W_i = \frac{N_i}{N}$, $N_i$ is the number of samples of class $i$, $TP_i$ is the number of predictions that correctly report a positive result for class $i$, and $FP_i$ is the number of predictions that incorrectly report a positive result.
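These averages coincide with what scikit-learn computes on binary indicator matrices; a small illustrative check (not the paper's actual evaluation code):
\begin{verbatim}
# Illustrative macro/weighted precision over binary indicator matrices.
import numpy as np
from sklearn.metrics import precision_score

y_true = np.array([[1, 0, 1], [0, 1, 0]])  # toy data with k = 3 labels
y_pred = np.array([[1, 0, 0], [0, 1, 1]])
p_macro = precision_score(y_true, y_pred, average="macro", zero_division=0)
p_weighted = precision_score(y_true, y_pred, average="weighted",
                             zero_division=0)
\end{verbatim}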
\section{Results}
In this section we analyze classification metrics of BNs based on BIC, BDeu and K2 scores as well as Chow-Liu trees. Each score defines a different structure of DAG, which means different semantic dependencies. We compared DAGs and analyzed the penalty terms of each score and its relationships reflected in graphs, as well as the detected relations.
\subsection{Comparison of Evaluation Metrics}
We used a common train-test split for evaluation. From the dataset described above, we composed the test set as a random 30\% sample of the whole set. The random seed was chosen so that every class present in the test set also occurs in the train set.
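For concreteness, such a split could look as follows (an illustrative sketch; the seed value is our assumption, chosen so that the class-coverage condition below holds):
\begin{verbatim}
# Illustrative 70/30 split of the entity-class DataFrame df.
from sklearn.model_selection import train_test_split

train_df, test_df = train_test_split(df, test_size=0.3, random_state=42)
# every class occurring in the test set must also occur in the train set
assert all(train_df[c].any() for c in df.columns if test_df[c].any())
\end{verbatim}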
\begin{table}
\caption{Comparison of evaluation metrics.}\label{tab1}
\begin{tabularx}{\textwidth}{|c|*{6}{X|}} \hline
{} & \multicolumn{2}{c|}{Precision} & \multicolumn{2}{c|}{Recall} & \multicolumn{2}{c|}{F1-score} \\ \hline
Model & Macro & Weighted & Macro & Weighted & Macro & Weighted \\ \hline
CatBoost & 0.41 & 0.58 & 0.19 & \textbf{0.35} & 0.24 & 0.41 \\ \hline
BIC based & \textbf{0.56} & \textbf{0.66} & 0.20 & 0.33 & 0.28 & 0.42 \\ \hline
BDeu based & 0.48 & 0.63 & 0.20 & \textbf{0.35} & 0.26 & \textbf{0.43} \\ \hline
K2 based & 0.51 & \textbf{0.66} & \textbf{0.24} & 0.34 & \textbf{0.29} & \textbf{0.43} \\ \hline
CL trees VE & 0.47 & 0.63 & 0.21 & 0.33 & 0.25 & 0.41 \\ \hline
CL trees LW & 0.48 & 0.63 & 0.17 & 0.29 & 0.22 & 0.37 \\ \hline
CL trees GS & 0.41 & 0.57 & 0.13 & 0.25 & 0.18 & 0.33 \\ \hline
CL trees RS & 0.23 & 0.44 & 0.07 & 0.15 & 0.10 & 0.22 \\ \hline
\end{tabularx}
\end{table}
Table \ref{tab1} shows the main evaluation results according to the selected classification metrics. We emphasise precision, because the precision of individual classes is most important for information extraction and context prediction: wrong class predictions lead to misunderstanding the context.
Our approach shows better precision than the CatBoost baseline \cite{NEURIPS2018_14491b75}: comparing the BIC score-based network with the baseline, 0.56 vs.\ 0.41 macro precision and 0.66 vs.\ 0.58 weighted precision.
We observe the highest precision in the BIC score-based model, while the K2-based model shows better recall and comparable precision; hence the best network from the F1-score perspective is the K2-based one. As expected, the BIC regularizes the log-likelihood more strongly than the BDeu- and K2-specific penalty terms. As a result, the BDeu- and K2-based DAGs detect more relationships, which allows more instances of each class to be classified correctly, hence the growth in recall.
We see that the Chow-Liu tree-based networks are comparable to the other models when Variable Elimination is used as the inference algorithm. This holds despite the limitation that each node has exactly one parent (except the parentless root node), which suggests that the tree restriction does not prevent the DAG from fitting the data. The other algorithms, which only approximate the solution of the inference problem, show worse results.
\subsection{Visual DAG representation}
We visualized the DAGs of each Bayesian network to inspect the relationships a BN can detect. A BN has a causal structure, and we use this property to analyze connections between the different semantic entities describing the context. Figure~\ref{fig:dags} shows the structures learned by the described methods. The graphs provide information about the relations between significant parts of the context.
As expected, the graphs of the K2-based (\ref{fig:k2}) and BDeu-based (\ref{fig:bdue}) networks detect more relationships and are denser, in contrast to the BIC-based graph (\ref{fig:bic}). In each DAG there are semantic links between the same title and question entity classes. The structure of the Chow-Liu trees (\ref{fig:cl}) shows this very well.
Additionally, the analysis reveals clusters of semantic entities. For example, DATA STRUCTURE and ALGORITHM are separated in each of the four graphs. Furthermore, there are links between FILE NAME and FILE TYPE, and between CODE BLOCK and OUTPUT BLOCK. This supports the logic and validity of the BN DAG structures.
It is noteworthy that the tree-structured DAG defines causation from Question ALGORITHM to Title USER NAME and from Title ALGORITHM to Question CLASS without establishing a causal relationship between entities of the same name. Most likely these are outliers, as the NER model is not perfect.
\begin{figure}
\centering
\begin{subfigure}{0.85\textwidth}
\includegraphics[width=\textwidth]{BIC_graph.png}
\caption{BIC based graph}
\label{fig:bic}
\end{subfigure}
\hfill
\begin{subfigure}{0.85\textwidth}
\includegraphics[width=\textwidth]{K2_graph.png}
\caption{K2-metric based graph}
\label{fig:k2}
\end{subfigure}
\end{figure}
\begin{figure}\ContinuedFloat
\centering
\begin{subfigure}{0.85\textwidth}
\includegraphics[width=\textwidth]{BDeu_graph.png}
\caption{BDeu based graph}
\label{fig:bdue}
\end{subfigure}
\hfill
\begin{subfigure}{0.85\textwidth}
\includegraphics[width=\textwidth]{CL_graph.png}
\caption{Chow-Liu trees based graph}
\label{fig:cl}
\end{subfigure}
\hfill
\caption{DAG structures of learned BNs}
\label{fig:dags}
\end{figure}
\subsection{Predictions analysis}
Finally, we compared the semantic entities detected by the NER model with those predicted by the BN based on the K2 metric. Table~\ref{tab2} shows several examples of predictions. Matched against the DAG described above, we observe that the predicted entities coincide with the target ones in the first example. In some cases, the BN could not detect semantic instances, as in the second and third rows of Table~\ref{tab2}. In graph (\ref{fig:k2}), VERSION and USER NAME are consequences of APPLICATION for both question and title. Similarly, OPERATING SYSTEM is connected to APPLICATION and LANGUAGE in the graph. It is likely that the conditional probabilities were not high enough to include these entities in the question context.
\begin{table}
\caption{Comparison of existing and predicted entities.}\label{tab2}
\begin{tabular}{|p{2cm}|p{4.8cm}|p{2.5cm}|p{2.5cm}|} \hline
{Title} & {Question} & {Question entities} & {Predicted entities} \\ \hline
{How to send email with attachment using GmailSender in android} & {I want to know about how to send email with attachment using GmailSender in android.} & {APPLICATION, OPERATING SYSTEM} & {APPLICATION, OPERATING SYSTEM} \\ \hline
{Intel XDK build for previous versions of Android} & {I have just started developing apps in Intel XDK and was just wondering how to build an app for a specific version of Android OS. The emulator I select "Samsung Galaxy S" is using the version 4.2 of android. My application works fine for Galaxy s3 but not on galaxy Ace 3.2 . I could not find a way to add more devices to the emulator list. How can I achieve this. Regards, Shankar.} & {APPLICATION, OPERATING SYSTEM, VERSION, USER NAME} & {APPLICATION, OPERATING SYSTEM} \\ \hline
{Automatic update database of android application} & {I'm making an quiz application in android.But If there are changes in database then how can user get updated with this changes.I read about GCM and php.But can anyone tell me how to do that?Any helpful tutorial? Thanks.} & {OPERATING SYSTEM, APPLICATION, LANGUAGE} & {OPERATING SYSTEM} \\ \hline
\end{tabular}
\end{table}
\section{Discussion}
As mentioned above, Bayesian networks are one of the methods to predict and analyze context. This method may be especially useful in the CQA domain for information extraction and for analyzing, via semantic causation, which parts of a question are important and how clear the question is for answering \cite{9892454}. The results show that BNs are able to capture the main trends in the use of meaningful entities, in particular in the programming domain. The recovery task might be an efficient way to derive heuristics that improve the BN approach to context prediction and meaning, such as the Global Uniform parameter priors shown by Mehmet et al. \cite{10.5555/2073876.2073906}, since we have no knowledge about the prior distribution. Conversely, an additional penalty term could fare better in structure learning and detect more relevant relations. Finally, using the optimal search algorithms mentioned above should further improve the metrics.
Another direction is to work with the data. On the one hand, expanding the data and not restricting it to the android tag could allow the BN to determine more general dependencies through Bayesian inference. It is also possible to change the data representation and focus on specific meaningful words or verbal constructions as opposed to general classes of entities. This would grow the cardinality of the variable set $V$, and its feasibility depends on the capacity of the penalty term to handle the resulting context uncertainty.
Furthermore, there are techniques of query expansion based on relevant-document feedback, especially in information retrieval systems \cite{kandasamy}. Neural systems lack interpretability, whereas Bayesian networks provide clear causal inference and could potentially be a good tool for query expansion and reformulation by providing a context representation learned from given query-reformulation pairs \cite{adolphs}.
\section{Conclusion and Future Directions}
In this paper we proposed a new application of Bayesian networks to CQA. Bayesian networks can be used as a tool for context prediction and context information extraction. Applying BNs to the CQA and programming domains in this way, we recognized causal semantic relationships on the set of SO questions and related titles. More precisely, we obtained DAGs based on different approaches, making it possible to analyze the interrelations. Moreover, we found that the BNs identify entities acceptably, mostly correctly, but with issues in detecting semantic classes that are separated in the DAG structure.
In future work we plan to build an end-to-end artificial neural network based on the existing NER model. In addition, it seems interesting to compare the NER model and the Bayesian network approach on a small dataset, as was done in this article. With this in mind, we would use an LSTM with attention to predict semantic entities. Additionally, we plan to compare the BN and LDA (Latent Dirichlet Allocation) approaches on the problems of topic modeling and information extraction in CQA.
\bibliographystyle{splncs03_unsrt}
|
{
"arxiv_id": "2302.13270",
"language": "en",
"timestamp": "2023-02-28T02:14:44",
"url": "https://arxiv.org/abs/2302.13270",
"yymm": "2302"
} | \section{Introduction}\label{sec:S3-1}
Classifying and cataloguing integrable systems is an important unsolved problem. Only for particular classes of systems does a classification exist.
It was shown by Atiyah-Guillemin-Sternberg \cite{atiyah,Guillemin1982} that the image of the momentum map of a toric system is a convex polytope, called the momentum polytope. This polytope completely classifies the toric system up to an equivariant symplectomorphism. Delzant \cite{BSMF_1988__116_3_315_0} gave an explicit construction of toric manifolds based on their momentum polytopes. Topological classification of Liouville foliations has been extensively studied by Bolsinov and Fomenko \cite{book} and extended towards the orbital classification of integrable systems.
V\~u Ng\d{o}c and Pelayo have extended the toric classification to semi-toric systems \cite{VuNgoc07,VuNgoc09}. While a complete theory of classification is still far out of reach, we would like to investigate the possibility of extending the current theory to broader classes of integrable systems.
A very well studied class of integrable systems are superintegrable systems, see, e.g., \cite{Fasso05}. A classification of superintegrable systems in two and three degrees of freedom has been achieved by Kalnins, Kress and Miller in a series of works \cite{Kalnins2005-1,Kalnins2005-2,Kalnins2005-3,Kress06,Kalnins2006-5}. In their book \cite{KKM18} they highlight the link between superintegrability and separation of variables. Superintegrable systems that are separable in multiple coordinate systems provide a rich source of integrable systems, because each distinct separable coordinate system gives rise to a St\"{a}ckel integrable system \cite{Stackel}. Separable coordinates on conformally flat spaces have been extensively studied by Kalnins and Miller in \cite{Kalnins1986,kalnins76,miller81}. More recently, Sch\"{o}bel studied the space of separable coordinates on the $n$-sphere as an algebraic variety \cite{Schoebel2014,Schoebel2015,Schoebel2016}. While it is known \cite{KKM18} that all separable coordinates on the sphere can be obtained as appropriate limits of the general Jacobi ellipsoidal coordinates, Sch\"{o}bel's work formalises this by giving a topology to this space in the form of the Stasheff polytope. The inspiration for this paper was to establish a similar topology on the space of integrable systems that arise from separating the geodesic flow on $S^3$ in this family of coordinates. It is likely that similar constructions can be done for any superintegrable and multi-separable system. In fact, the idea to use multi-separability to define interesting fibrations has been used in \cite{Dullin2012,Dullin2016} for the most fundamental systems of classical mechanics, the harmonic oscillator and the Kepler problem. To then use the periodic flow of the superintegrable system for reduction was first done in \cite{Dawson2022}, and this paper is the natural continuation of that work: instead of a superintegrable system on $\mathbb{R}^3$, a superintegrable system on $S^3$ is the starting point.
Another motivation for our work are the recent studies \cite{Alonso,Alonso2019,Palmer2018,Hohloch2017} of various integrable systems on the compact symplectic manifold $S^2\times S^2$. We will show that the symplectic reduction of the geodesic flow on $S^3$ results in a reduced system on a compact symplectic leaf of $\mathfrak{so}^*(4)$ that is diffeomorphic to $S^2\times S^2$. The $3$-degrees of freedom integrable systems on $T^*S^3$ obtained from separation of variables descend to $2$-degrees of freedom systems on $S^2\times S^2$ through this quotient. We will employ more recent techniques in the theory of compatible Poisson structures and bi-Hamiltonian systems \cite{Bolsinov-Borisov,Bolsinov-Oshemkov} to study these systems in detail. This will allow us to realise these systems as special restricted cases of the Manakov top \cite{Komarov_1991, 339ab604500943c5b69d0da421f62e74}.
A somehow related question is the study of separable systems depending on parameters, foremost the geodesic flow on an ellipsoid \cite{Moser:2261095}. In \cite{DAVISON20072437,Davison2007} the geodesic flow on 3-dimensional ellipsoids with various sets of equal semi-major axes has been studied. It is astonishing how similar these systems -- degenerate or not -- are to the ones studied in this paper. However, the fundamental difference is that there a constant energy slice of a 3-degree of freedom system is studied, while here we reduce and study the resulting 2-degree of freedom system. As a result, here we obtain a system on a compact symplectic manifold, which is better suited as a playground for symplectic classification.
Similar degenerations have also been studied for the Neumann system \cite{Dullin2012} and again there are many similarities.
The paper is structured as follows. Section \ref{sec:S3-2} introduces the basic theory of separation of variables.
We focus on the separation of variables in the ellipsoidal coordinate system in Section \ref{sec:S3-3}. Section \ref{sec:S3-4} discusses the symplectic reduction of the geodesic flow on $S^3$ with emphasis on establishing the reduced ellipsoidal integrable system on $S^2\times S^2$. The theory of compatible Poisson structures is applied to study the reduced ellipsoidal integrable system in Section \ref{sec:S3-5}. In Section \ref{sec:degen}, we combine the techniques and results of Sections \ref{sec:S3-3}, \ref{sec:S3-4}, and \ref{sec:S3-5} to study the integrable systems obtained from separation of variables in the degenerate coordinate systems, namely prolate, oblate, Lam\'{e}, spherical and cylindrical coordinates.
\section{Orthogonally Separable Coordinate Systems on $S^{3}$}\label{sec:S3-2}
In this section, we introduce some basic concepts from the theory
of separation of variables. For more details, see \cite{KKM18,Stackel}.
Let $(s_{i},p_{i})$ be local canonical coordinates on $T^{*}M$ where
$M$ is an $n-$dimensional differentiable manifold with metric $g$.
Define $\mathcal{S}^{k}(M)$ to be the space of smooth contravariant
symmetric tensors of order $k$ on $M$, in particular $k=1$ is the
space of vector fields on the manifold. Each $\bm{K}\in\mathcal{S}^{k}(M)$
can be associated with a $C^{\infty}$ real function $E_{\bm{K}}$
on $T^{*}M$ locally expressed in the momenta as
\begin{equation}
E_{\bm{K}}=\frac{1}{k!}\sum_{i_{1},\dots,i_{k}}\bm{K}^{i_{1}\dots i_{k}}p_{i_{1}}\dots p_{i_{k}}.\label{eq:Killing tensor def}
\end{equation}
The Lie bracket between two tensors $\bm{K}\in\mathcal{S}^{k}(M)$
and $\bm{R}\in\mathcal{S}^{r}(M)$, denoted by $[\bm{K},\bm{R}]\in\mathcal{S}^{k+r-1}(M)$,
is defined by
\begin{equation}
E_{[\bm{K},\bm{R}]}=\{E_{\bm{K}},E_{\bm{R}}\}_{(s_{i},p_{i})}\label{eq:commute}
\end{equation}
where $\{\cdot,\cdot\}_{(s_{i},p_{i})}$ denotes the canonical Poisson
bracket in the curvilinear coordinates.
\begin{defn}
On a Riemannian manifold $(M,g)$, a symmetric tensor $\bm{K}$ is
a Killing tensor if it commutes with the metric $[\bm{K},g]=0$.
\end{defn}
There is a natural identification of Killing tensors with first integrals
of the geodesic flow:
A function $E_{\bm{K}}$ is a first integral of the geodesic flow
if and only if $\bm{K}$ is a Killing tensor, see, e.g \cite{Stackel}.
Similarly to how a set of $n$ integrals in involution on a manifold
form a Liouville-integrable system, a collection of $n$ Killing tensors
defines a St\"{a}ckel system.
\begin{defn}
A St\"{a}ckel system on an $n-$dimensional Riemannian manifold $(M,g)$
is a set of $n$ Killing tensors of order $2$ that commute under
the commutator in (\ref{eq:commute}).
\end{defn}
Eisenhart proved in \cite{10.2307/1968433} that there is a bijective
correspondence between equivalence classes of St\"{a}ckel systems
and equivalence classes of orthogonal separable coordinate systems.
Given an orthogonal coordinate system $s_{i}$ on an $n$-dimensional
manifold $M$, we can define a St\"{a}ckel system by constructing
a St\"{a}ckel matrix.
\begin{defn}
A St\"{a}ckel matrix $\Phi$ for a given metric $g$ is any $n\times n$ matrix where each
row depends on only one of the curvilinear coordinates $s_{i}$, and
\begin{equation}
\frac{1}{g_{ii}}=\frac{\det(\Omega_{i})}{\det(\Phi)},\label{eq:stackel relationdship-1}
\end{equation}
where $\Omega_{i}$ is the minor formed by deleting the $i^{th}$
row and first column of $\Phi$ and $g_{ii}$ is the $(i,i)$ element
of the metric tensor.
\end{defn}
The functional value of the Hamiltonian $H$ will be denoted by $h$.
Let $W\coloneqq W(\bm{s},\bm{\eta})$ where $\bm{s}=(s_{1},\dots,s_{n})$
are the separable curvilinear coordinates and $\bm{\eta}=(\eta_{0},\dots,\eta_{n-1})$
are parameters. The Hamilton Jacobi equation is given by
\begin{equation}
\sum_{i}\frac{1}{2}(g^{-1})_{ii}\left(\frac{\partial W}{\partial s_{i}}\right)^2=h. \label{HJ}
\end{equation}
This can be separated by computing
\begin{equation}
\Phi^{-1}\mathfrak{p}=\bm{\eta},\label{eq:sep constants-1}
\end{equation}
where $\mathfrak{p}=\left(p_{1}^{2},\dots,p_{n}^{2}\right)^{t}$ is
the vector of squared canonical curvilinear momenta and the parameters
$\bm{\eta}$ are the separation constants.
Comparing (\ref{eq:sep constants-1}) and (\ref{eq:Killing tensor def}),
we see that rows of $\Phi^{-1}$ encode the diagonal entries of the
Killing tensors for separable orthogonal coordinates $s_{i}$. The first row of \eqref{eq:sep constants-1} gives \eqref{HJ} and so we have $\eta_0=h$.
The work of Kalnins and Miller \cite{Kalnins1986} gave a graphical
algorithm for constructing all orthogonally separable coordinates on constant curvature
manifolds. Recent results by Sch\"{o}bel and Veselov extended
this by giving an algebraic geometric classification of separable coordinate
systems on $S^{n}$ \cite{Schoebel2015,Schoebel2014}. In particular,
they showed that the variety of St\"{a}ckel systems on $S^{n}$ is
given by the Stasheff polytope $K_{n+1}$ which is a convex polytope
of dimension $n-1$. In this paper we work only with $S^{3}$; the
relevant Stasheff polytope $K_{4}$ is shown in Figure~\ref{fig:Stasheff S3}.
The codimension $0$ face of a Stasheff polytope represents the family
of ellipsoidal coordinates $s_{i}$. These are defined as the roots
of $T(s)=\sum_{i=1}^{n+1}\frac{x_{i}^{2}}{s-e_{i}}=0$ where $x_{i}$
are Cartesian coordinates on $\mathbb{R}^{n+1},$ $e_{j}\le s_{j}\le e_{j+1}$
for all $j=1,\dots,n$, the $e_{i}\ge0$ are all distinct and are called the semi-major
axes. Note that when using a similar coordinate system to separate the geodesic flow on an ellipsoid these parameters actually are semi-major axes (hence the name), while here they describe a separating coordinate system but \emph{not} the underlying manifold. Solving $T(s_{j})=0$ together with $\sum_{i=1}^{n+1}x_{i}^{2}=1$
gives
\begin{equation}
x_{i}^{2}=\frac{\prod_{j=1}^{n}(s_{j}-e_{i})}{\prod_{k\ne i}(e_{k}-e_{i})}.\label{generalcoord}
\end{equation}
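As the simplest instance ($n=1$, the elliptic coordinate on $S^{1}$), \eqref{generalcoord} reads
\[
x_{1}^{2}=\frac{s_{1}-e_{1}}{e_{2}-e_{1}},\qquad x_{2}^{2}=\frac{s_{1}-e_{2}}{e_{1}-e_{2}},
\]
and indeed $x_{1}^{2}+x_{2}^{2}=1$ with both squares non-negative for $e_{1}\le s_{1}\le e_{2}$.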
Note that for $S^3$, despite the ellipsoidal coordinates being parametrised by the four parameters $e_1<e_2<e_3<e_4$, the Stasheff polytope is only $2$-dimensional. An affine transformation of the parameters of the form $e_i\mapsto \alpha e_i + \beta$ for $\alpha\neq0$ gives an equivalent coordinate system up to scaling. Thus each ellipsoidal coordinate system on $S^3$ with parameters $e_1<e_2<e_3<e_4$ can be transformed to an equivalent system with parameters $0<1<a<b$, see \cite{KKM18} for details.
Higher codimension faces represent families of degenerate coordinate
systems on the sphere. Degenerate coordinate systems on $S^{n}$ are
constructed by gluing ellipsoidal coordinates on lower dimensional
spheres together. Let $\bm{w}_{\alpha}$ and $\bm{z}$ be Cartesian
coordinates expressed in terms of local ellipsoidal coordinates on
$S^{k_{\alpha}-1}$ and $S^{m}$ respectively where $\alpha=1,\dots,m+1$
and $n=k_{1}+\dots+k_{m+1}-1$. Then any degenerate coordinate system
on $S^{n}$ is found by recursively applying composition \cite{Schoebel2015}
\[
\begin{aligned}\circ:S^{m}\times S^{k_{1}-1}\times\dots\times S^{k_{m+1}-1} & \to S^{k_{1}+\dots+k_{m+1}-1}\\
(\bm{z},\bm{w}_{1},\dots,\bm{w}_{m+1}) & \to(z_{1}\bm{w}_{1},\dots,z_{m+1}\bm{w}_{m+1})
\end{aligned}
\]
where $z_{\beta}\bm{w}_{\beta}$ are Cartesian coordinates on $\mathbb{R}^{n+1}$
expressed in the new degenerate coordinate system.
In this paper we have chosen to adopt the notation of Sch\"{o}bel
\cite{Schoebel2014}. The general ellipsoidal coordinates on $S^{n}$
are represented as $(1\ 2\ \dots\ n+1)$. If one attaches an $S^{j}$
to the $m^{\text{th}}$ Cartesian coordinate, then we enclose brackets
around all numbers $m$ to $m+j$. For instance, attaching $S^{1}$
to the $x_{2}$ coordinate on $S^{2}$ is written as $(1\ (2\ 3)\ 4)$.
Symmetric bracketing results in systems with similar behaviour, i.e.
$(1\ 2\ (3\ 4))$ and $((1\ 2)\ 3\ 4)$ describe equivalent coordinate systems, as we will discuss in more detail below; see also \cite{KKM18,Schoebel2016,Schoebel2015,Schoebel2014}. For the various degenerate coordinate systems we are going to use shorthand names as indicated in Figure~\ref{fig:Stasheff S3}.
\begin{figure}
\begin{centering}
\includegraphics[width=8cm]{stasheff.png}
\par\end{centering}
\caption{Stasheff polytope $K_{4}$ of separating coordinate systems on $S^{3}$ with corresponding names and bracket notation. \label{fig:Stasheff S3}}
\end{figure}
\section{Separation of variables of the Geodesic Flow on $S^{3}$ and Ellipsoidal Coordinates\label{sec:Geodesic-Flow-on-S3}}\label{sec:S3-3}
Consider the geodesic flow on the $3-$sphere as a constrained system
on $T^{*}\mathbb{R}^{4}$ with Cartesian coordinates $(\bm{x},\bm{y})$
where $\bm{x}\cdot\bm{x}=1$ and $\bm{x}\cdot\bm{y}=0$. We define
the Poisson bracket on $T^{*}S^{3}$ to be the Dirac bracket $\{\cdot,\cdot\}$
enforcing these two constraints. Let $\{\cdot,\cdot\}_{T^{*}\mathbb{R}^{4}}$
be the canonical Poisson bracket on $T^{*}\mathbb{R}^{4}$. Set $c_{1}=\bm{x}\cdot\bm{x},c_{2}=\bm{x}\cdot\bm{y}$
and define the matrix
\[
C_{ij}=\{c_{i},c_{j}\}_{T^{*}\mathbb{R}^{4}}.
\]
The Dirac bracket on $T^{*}S^{3}$ is given by
\[
\{f,g\}=\{f,g\}_{T^{*}\mathbb{R}^{4}}+\{f,c_{i}\}_{T^{*}\mathbb{R}^{4}}(C^{-1})_{ij}\{c_{j},g\}_{T^{*}\mathbb{R}^{4}}
\]
with structure matrix
\begin{equation}
B_{(\bm{x},\bm{y})}=\begin{pmatrix}\bm{0} & id-\bm{x}\bm{x}^{t}\left|\bm{x}\right|^{-2}\\
-id+\bm{x}\bm{x}^{t}\left|\bm{x}\right|^{-2} & -(\bm{x}\bm{y}^{t}-\bm{y}\bm{x}^{t})\left|\bm{x}\right|^{-2}
\end{pmatrix}.\label{eq:BPQ-1-1}
\end{equation}
From (\ref{eq:BPQ-1-1}), the Poisson bracket between two functions
$f$ and $g$ on $T^{*}S^{3}$ is
\begin{equation}
\{f,g\}\coloneqq\left(\nabla f\right)^{T}B_{(\bm{x},\bm{y})}\nabla g\label{eq:PB in QP}
\end{equation}
where $\nabla$ denotes the gradient with respect to the Cartesian
coordinates $(\bm{x},\bm{y})$.
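As a sanity check, the Dirac bracket \eqref{eq:PB in QP} can be verified symbolically; the following sympy sketch (our illustration, not from the paper) reproduces, for example, $\{\ell_{12},\ell_{13}\}=\ell_{23}$:
\begin{verbatim}
# Illustrative sympy check of the Dirac bracket on T*S^3: build it from
# the canonical bracket on T*R^4 and the constraints c1 = x.x, c2 = x.y.
import sympy as sp

x = sp.Matrix(sp.symbols('x1:5'))
y = sp.Matrix(sp.symbols('y1:5'))

def canonical(f, g):  # canonical Poisson bracket on T*R^4
    return sum(sp.diff(f, x[i])*sp.diff(g, y[i])
               - sp.diff(f, y[i])*sp.diff(g, x[i]) for i in range(4))

c = [x.dot(x), x.dot(y)]
C = sp.Matrix(2, 2, lambda i, j: canonical(c[i], c[j]))

def dirac(f, g):  # Dirac bracket with correction term C^{-1}
    Cinv = C.inv()
    corr = sum(canonical(f, c[i])*Cinv[i, j]*canonical(c[j], g)
               for i in range(2) for j in range(2))
    return sp.simplify(canonical(f, g) + corr)

l12 = x[0]*y[1] - x[1]*y[0]
l13 = x[0]*y[2] - x[2]*y[0]
l23 = x[1]*y[2] - x[2]*y[1]
assert dirac(l12, l13) - l23 == 0
\end{verbatim}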
Let $H=\frac{1}{2}\bm{y}\cdot\bm{y}$ be the Hamiltonian of the geodesic
flow on $S^{3}$. It is well known that this system is superintegrable
and separates in multiple coordinate systems \cite{Schoebel2014}.
The Hamilton Jacobi equation can be separated in general ellipsoidal
coordinates $(s_{1},s_{2},s_{3})$ given by \eqref{generalcoord} for $n=3$ as
\begin{equation}
\begin{aligned}x_{j}^{2} & =\frac{\left(s_{1}-e_{j}\right)\left(s_{2}-e_{j}\right)\left(s_{3}-e_{j}\right)}{\Pi_{i\ne j}\left(e_{i}-e_{j}\right)}, & & j=1,\dots,4\end{aligned}
\label{eq:def of ellipsoidal coordinates}
\end{equation}
where $0\le e_{1}\le s_{1}\le e_{2}\le s_{2}\le e_{3}\le s_{3}\le e_{4}$ and the $e_i$'s are all distinct.
In these coordinates, the geodesic Hamiltonian is (see e.g. \cite{DAVISON20072437,KKM18})
\begin{equation}
H=-2\sum_{i=1}^{3}\frac{\prod_{j=1}^{4}(s_{i}-e_{j})}{\prod_{k\ne i}(s_{i}-s_{k})}p_{i}^{2}.\label{eq:Ham in ellipsoidal}
\end{equation}
This system is Liouville integrable. To separate the Hamilton Jacobi
equation we use the following St\"{a}ckel matrix
\begin{equation}
\Phi_{el}=\frac{1}{4}\begin{pmatrix}-\frac{s_{1}^{2}}{A(s_{1})} & -\frac{s_{1}}{A(s_{1})} & -\frac{1}{A(s_{1})}\\
-\frac{s_{2}^{2}}{A(s_{2})} & -\frac{s_{2}}{A(s_{2})} & -\frac{1}{A(s_{2})}\\
-\frac{s_{3}^{2}}{A(s_{3})} & -\frac{s_{3}}{A(s_{3})} & -\frac{1}{A(s_{3})}
\end{pmatrix}.\label{eq:Stackel El S3-1}
\end{equation}
where $A(z)=\prod_{k=1}^{4}(z-e_{k})$. From (\ref{eq:sep constants-1}),
we compute $\Phi_{el}^{-1}\mathfrak{p}=(\eta_{0},-\eta_{1},\eta_{2})^{T}$
where
\begin{equation}
\begin{aligned}\eta_{0} & =-4\sum_{i=1}^{3}\frac{A(s_{i})}{D(s_{i})}p_{i}^{2}, & & & \eta_{1} & =-4\sum_{i=1}^{3}\left(\frac{A(s_{i})}{D(s_{i})}p_{i}^{2}\sum_{k\ne i}s_{k}\right), & & & \eta_{2} & =-4\sum_{i=1}^{3}\left(\frac{A(s_{i})}{D(s_{i})}p_{i}^{2}\prod_{k\ne i}s_{k}\right),\end{aligned}
\label{eq:separation constants}
\end{equation}
and $D(s_{i})=\prod_{k\ne i}(s_{i}-s_{k}).$ To express (\ref{eq:separation constants})
in terms of the angular momenta $\ell_{ij}\coloneqq x_{i}y_{j}-x_{j}y_{i}$,
we note that
\[
\ell_{ij}^{2}=\frac{x_{i}^{2}x_{j}^{2}}{4}\left(\sum_{k=1}^{3}\left(-\frac{1}{s_{k}-e_{i}}+\frac{1}{s_{k}-e_{j}}\right)p_{k}\right)^{2}
\]
with $x_{i}$ given by (\ref{eq:def of ellipsoidal coordinates}).
Let $\bm{L}\coloneqq(\ell_{12},\ell_{13},\ell_{14},\ell_{23},\ell_{24},\ell_{34})$; it can be verified that
\begin{equation}
\begin{aligned}\eta_{0}(\bm{L}) & =\sum_{i<j}\ell_{ij}^{2}=2H, & & & \eta_{1}(\bm{L}) & =\sum_{i<j}\left(\ell_{ij}^{2}\sum_{k\ne i,j}e_{k}\right), & & & \eta_{2}(\bm{L}) & =\sum_{i<j}\left(\ell_{ij}^{2}\prod_{k\ne i,j}e_{k}\right).\end{aligned}
\label{eq:Separation constants ellipsoidal}
\end{equation}
The separated equations are obtained by multiplying both sides of
(\ref{eq:sep constants-1}) by $\Phi_{el}$. This gives
\begin{equation}
p_{i}^{2}=\frac{-R(s_{i})}{4A(s_{i})}\label{eq:psq ellipsoidal-1}
\end{equation}
where $R(z)=2hz^{2}-\eta_{1}^{*}z+\eta_{2}^{*}$ and $(\eta_{1}^{*},\eta_{2}^{*})$
are the values of the integrals $(\eta_{1},\eta_{2})$. Thus,
the geodesic flow is separable on the hyperelliptic curve $w^{2}=-R(z)A(z)$
which has genus $3$.
It is known \cite{Moser:2261095} that a set of global polynomial
integrals for the geodesic flow are given by
\begin{equation}
F_{i}=\sum_{j=1,j\ne i}^{4}\frac{\ell_{ij}^{2}}{e_{i}-e_{j}},\label{eq:ulenbech integral def}
\end{equation}
where $i\in\{1,2,3,4\}.$ The $F_{i}$ are known as the Uhlenbeck
integrals and satisfy $\sum_{i=1}^{4}F_{i}=0$. The separation constants
$\eta_{1}$ and $\eta_{2}$ are related to the $F_{i}$ via the identity
\begin{equation}
\begin{aligned}\sum_{i=1}^{4}\frac{F_{i}}{z-e_{i}} & =\frac{2Hz^{2}-\eta_{1}z+\eta_{2}}{\prod_{k=1}^{4}(z-e_{k})}.\end{aligned}
\label{eq:R on A def}
\end{equation}
Since all integrals are polynomial, we can easily show that $\eta_{1},\eta_{2}$ and $H$ are functionally
independent almost everywhere and that $\{\eta_{1},H\}=\{\eta_{2},H\}=\{\eta_{1},\eta_{2}\}=0$.
This establishes that
the triple $(H,\eta_{1},\eta_{2})$ is an integrable system on $T^{*}S^{3}$.
We call this the ellipsoidal integrable system on $T^{*}S^{3}$.
This is a reformulation of the underlying St\"ackel system, whose commuting Killing tensors lead to quadratic (in momenta) integrals, which are also quadratic in angular momenta.
Since $H$ generates a global $S^{1}$ action on the energy surface where $2h=1$, we can use it to perform symplectic reduction. Doing so, we obtain a reduced system on the symplectic manifold $S^{2}\times S^{2}$. The integrals $(\eta_{1},\eta_{2})$ descend
to form an integrable system on this quotient space.
\section{Reduction by Geodesic Flow }\label{sec:S3-4}
The orbits of $H$ are oriented great circles on $S^{3}$. Since each
great circle is the intersection of a two dimensional plane through
the origin with $S^{3}$, the orbit space of $H$ is given by \textcolor{black}{
\[
T^{*}S^{3}/S^{1}|_{H=h}=U^{*}S^{3}/S^{1}\cong\widetilde{Gr}(2,4)
\]
}where the oriented Grassmanian $\widetilde{Gr}(2,4)$ is the set
of oriented two dimensional planes in $\mathbb{R}^{4}$ \textcolor{black}{and
$U^{*}S^{3}$ is the unit cotangent bundle of $S^{3}$}. One way to see that $\widetilde{Gr}(2,4)\cong S^{2}\times S^{2}$ is using the
Pl\"{u}cker embedding (for more details on this, see Appendix \ref{appen-gras}).
For our purposes, the reduction will be performed with invariants
of the geodesic Hamiltonian.
Invariants of $H$ are the six angular momenta $\bm{L}\coloneqq(\ell_{12},\ell_{13},\ell_{14},\ell_{23},\ell_{24},\ell_{34})$.
These form a closed set of invariants under the Dirac bracket $\{\cdot,\cdot\}$
from (\ref{eq:PB in QP}). The Poisson algebra of these invariants has the structure matrix
\begin{equation}
B_{\bm{L}}=\begin{pmatrix}
0 & \ell_{23} & \ell_{24} & -\ell_{13} & -\ell_{14} & 0 \\
-\ell_{23} & 0 & \ell_{34} & \ell_{12} & 0 & -\ell_{14} \\
-\ell_{24} & -\ell_{34} & 0 & 0 & \ell_{12} & \ell_{13} \\
\ell_{13} & -\ell_{12} & 0 & 0 & \ell_{34} & -\ell_{24} \\
\ell_{14} & 0 & -\ell_{12} & -\ell_{34} & 0 & \ell_{23} \\
0 & \ell_{14} & -\ell_{13} & \ell_{24} & -\ell_{23} & 0 \\
\end{pmatrix}
\end{equation}
with 2 Casimirs: $\mathcal{C}_{1}=2H=\sum_{j>i}\ell_{ij}^{2}$ and
$\mathcal{C}_{2}=\ell_{12}\ell_{34}-\ell_{13}\ell_{24}+\ell_{14}\ell_{23}$.
The first is the energy of the geodesic flow which we have the freedom
to set to an arbitrary value $2h$. The second Casimir is the Pl\"{u}cker
relation and must be zero since the angular
momenta $\bm{L}=\bm{x}\wedge\bm{y}$ where $\wedge$ is the wedge
operator. This means that $\bm{L}$ is a totally
decomposable bivector and so must satisfy $\mathcal{C}_{2}=\left|\bm{L}\wedge\bm{L}\right|=0$. The Lie-Poisson algebra of the $\ell_{ij}$'s is isomorphic to the Lie algebra $\mathfrak{so}(4)$.
Using the $\ell_{ij}$ as new coordinates, we obtain an explicit realisation
of $S^{2}\times S^{2}$ as
\begin{align}
\mathscr{C}_{1} & =\mathfrak{\mathcal{C}}_{1}+2\mathcal{C}_{2}=(\ell_{12}+\ell_{34})^{2}+(\ell_{13}-\ell_{24})^{2}+(\ell_{14}+\ell_{23})^{2}=2h,\label{eq:Cas in ls}\\
\mathscr{C}_{2} & =\mathcal{C}_{1}-2\mathcal{C}_{2}=(\ell_{12}-\ell_{34})^{2}+(\ell_{13}+\ell_{24})^{2}+(\ell_{14}-\ell_{23})^{2}=2h.\nonumber
\end{align}
The Poisson bracket of functions on $S^2\times S^2$, denoted by $\{\cdot,\cdot\}_{\bm L}$
is \begin{equation}
\{f,g\}_{\bm L}=(\nabla f)^{T}B_{\bm L}\nabla g\label{eq:XY pb}
\end{equation}
where $\nabla$ denotes the gradient with respect to $\bm L$.
A sometimes more convenient set of coordinates on $S^{2}\times S^{2}$ is obtained by applying the linear transformation \linebreak
$T:\bm{L}~\mapsto(\bm{X},\bm{Y})=~(X_1,X_2,X_3,Y_1,Y_2,Y_3)$ with
\begin{equation}
\begin{aligned}X_{1} & =\frac{1}{2}(\ell_{12}+\ell_{34}), & & & Y_{1} & =\frac{1}{2}(\ell_{12}-\ell_{34}),\\
X_{2} & =\frac{1}{2}(\ell_{13}-\ell_{24}), & & & Y_{2} & =-\frac{1}{2}(\ell_{13}+\ell_{24}),\\
X_{3} & =\frac{1}{2}(\ell_{14}+\ell_{23}), & & & Y_{3} & =\frac{1}{2}(\ell_{14}-\ell_{23}).
\end{aligned}
\label{eq:ls to Xy}
\end{equation}
In these variables we can rewrite (\ref{eq:Cas in ls}) as $\mathscr{C}_{1}=4\left|\bm{X}\right|^{2}$
and $\mathscr{C}_{2}=4\left|\bm{Y}\right|^{2}$ which both have functional
value $2h$. The Poisson structure \eqref{eq:BPQ-1-1} becomes block diagonal
\begin{equation}
B_{\bm{X},\bm{Y}}=\begin{pmatrix}\bm{\hat{X}} & \bm{0}\\
\bm{0} & \bm{\hat{Y}}
\end{pmatrix}\label{eq:Bxy}
\end{equation}
and is isomorphic to $\mathfrak{so}(3)\times\mathfrak{so}(3)$. The
notation in (\ref{eq:Bxy}) is such that for a vector $\bm{v}\in\mathbb{R}^{3}$
the corresponding antisymmetric hat matrix $\hat{\bm{v}}$ is defined
by
\[
\begin{aligned}\hat{\bm{v}}\bm{u}=\bm{v}\times\bm{u} & & \forall\bm{u}\in\mathbb{R}^{3}.\end{aligned}
\]
This reduction gives an integrable system on the reduced manifold
$S^{2}\times S^{2}$. While the integrals $(\eta_{1},\eta_{2})$ can be easily rewritten in terms of the $(\bm{X},\bm{Y})$ variables using \eqref{eq:ls to Xy}, they are simplest and most symmetric as functions of the $\ell_{ij}$'s as in \eqref{eq:Separation constants ellipsoidal}.
Under (\ref{eq:XY pb}), $(\eta_{1},\eta_{2})$ are commuting quadratic
functions on $S^{2}\times S^{2}$ and so we arrive at the following
result.
\begin{thm}
The integrable system $(H,\eta_{1},\eta_{2})$ on $T^{*}S^{3}$
descends to an integrable system $(\eta_{1}(\bm{L}),\eta_{2}(\bm{L}),\{\cdot,\cdot\}_{\bm L})$
on $S^{2}\times S^{2}$ with two degrees of freedom and integrals quadratic in $\ell_{ij}$.
\end{thm}
We call this integrable system the reduced ellipsoidal integrable system.
The symplectic reduction performed in this section also applies to integrable systems obtained from separating the geodesic flow in the degenerate coordinate systems shown in Figure~\ref{fig:Stasheff S3}.
Separating coordinate systems on $S^3$ are invariant under
affine transformations $e_i\mapsto \alpha e_i + \beta $ for $\alpha\neq0$. This allows us to normalise the ordered distinct parameters $(e_1, e_2, e_3, e_4)$ to $(0, 1, a, b)$ by first shifting with $-e_1$ and then scaling with $1/(e_2 - e_1)$. Thus the inside of
Figure~\ref{fig:Stasheff S3} can be thought of as the region $1 < a < b$.
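As a concrete example, $(e_{1},e_{2},e_{3},e_{4})=(2,4,6,10)$ is shifted to $(0,2,4,8)$ and then scaled to $(0,1,2,4)$, i.e. $(a,b)=(2,4)$.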
%
The affine transformations when applied to the family of corresponding integrable systems on $T^*S^3$ gives topologically equivalent integrable systems. This property can be observed directly in the reduced systems on $S^2\times S^2$.
\begin{lem} \label{eta lemma}
Affine transformation of the parameters $e_i\mapsto \alpha e_i + \beta $ for $\alpha\neq0$ when applied to the reduced system $(\eta_1(\bm{L}),\eta_2(\bm{L}))$ gives a topologically equivalent integrable system.
\end{lem}
\begin{proof}
Applying $e_i\mapsto \alpha e_i + \beta $ to $(\eta_1(\bm{L}),\eta_2(\bm{L}))$ induces the map
\[(\eta_1,\eta_2)\mapsto (\alpha \eta_1+4\beta h,\alpha^2\eta_2+\alpha\beta\eta_1+2\beta^2h)\]
which gives a topologically equivalent system, because it is a linear map of the original integrals, plus affine terms that add the Casimir $h$.
\end{proof}
This result illustrates nicely how the equivalence of separating coordinates leads to an equivalence of reduced integrable systems. It highlights the fact that the reduced system does not have a Hamiltonian (since we reduced by the flow of $H$) and hence it is natural to consider quadratic integrals up to linear transformations.
\begin{figure}
\begin{centering}
\includegraphics[width=7cm]{iaba.pdf}$\quad$
\includegraphics[width=7cm]{iabb.pdf}
\end{centering}
\caption{a) Parameter space of ellipsoidal coordinates $1 < a < b$ (red: prolate, blue: oblate, dashed: fixed set).
b) Blown up parameter space giving half the Stasheff polytope (green: Lam\'e).
\label{fig:iab}}
\end{figure}
There is another equivalence between separating coordinates which maps an ordered quadruple $(e_1, e_2, e_3, e_4)$ to an ordered quadruple $(-e_4, -e_3, -e_2, -e_1)$. After normalisation this maps $(0, 1, a, b)$ to $(0, 1, a', b')$
where $a' = (b-1)/(b-a)$ and $b' = b/(b-a)$. This map is an involution
that can be written as
\begin{equation} \label{eq:involution}
\frac{1}{b'} = 1 - \frac{a}{b}, \quad
\frac{a'}{b'} = 1 - \frac{1}{b}\,.
\end{equation}
The line of fixed points of the involution is $1/b + a/b = 1$.
This suggests to map the parameter region $1 < a < b$ to the triangle
$0 < a/b < 1/b < 1$ which is cut in half by the line of fixed points, see Figure~\ref{fig:iab}~a). The prolate case $a=1$ and the oblate case $a=b$ correspond to two edges of this triangle. To see the whole parameter space the $e_i$ need to be considered projectively. In particular the point $1=a=b$ representing the Lam\'e family needs to be blown up.
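As a concrete check of \eqref{eq:involution}: $(a,b)=(2,4)$ maps to $(a',b')=(3/2,2)$, and applying the involution again returns $(2,4)$, while $(a,b)=(2,3)$ satisfies $1/b+a/b=1$ and is indeed a fixed point.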
\begin{lem}
Define $q=1/b$ and $r = (a-1)/(b-1)$. There is a one-to-one correspondence between equivalence classes of reduced integrable systems on $S^2\times S^2$ and points in the region $0 \le q \le 1$, $0 \le r \le 1$, $q \ge (1-r)/(2-r)$.
\end{lem}
\begin{proof}
The map $r = (a-1)/(b-1)$ blows up the point $1=a=b$ to a line.
The parameter $r$ is the inverse of the slope of a straight line in $ab$-space through the point $a=b=1$. Due to $1 \le a \le b$ we have $0 \le r \le 1$. Consider the images of the edges of the triangle
$0 < a/b < 1/b < 1$.
The prolate line segment is mapped to $r=0$, $1/2 < q < 1$.
The oblate line segment is mapped to $r=1$, $0 < q < 1$.
The line of fixed points is mapped to $q = (1-r)/(2-r)$.
This establishes the claimed boundaries of the region.
The corners of the region are:
\begin{itemize}
\item $(q,r) = (1/2,0)$: symmetric prolate coordinates.
\item $(q,r) = (1,0)$: spherical coordinates.
\item $(q,r) = (1,1)$: spherical coordinates.
\item $(q,r) = (0,1)$: cylindrical coordinates.
\end{itemize}
\end{proof}
The region described in the Lemma in $(r,q)$ space is shown in Figure~\ref{fig:iab}~b).
It represents half of the Stasheff polytope Figure~\ref{fig:Stasheff S3}.
In the next section, we study the reduced ellipsoidal system in depth. We find the momentum map, compute the critical points and critical values. The integrable systems arising from the degenerate coordinate systems will be covered in detail in section \ref{sec:degen}.
\section{The Reduced Ellipsoidal Integrable System}\label{sec:S3-5}
To construct and study the bifurcation diagram for the ellipsoidal
integrable system we will employ techniques from \cite{Bolsinov-Borisov} and \cite{Bolsinov-Oshemkov} using
compatible Poisson structures.
\subsection{Compatible Poisson Structures}
In this section, we will be closely following Example B in \cite{Bolsinov-Oshemkov}. Let us consider the reduced system on a symplectic leaf of $\mathfrak{so}^{*}(4)$
defined by $\mathcal{C}_{1}=\bm{L}\cdot\bm{L}=2h$ and
$\mathcal{C}_{2}=\ell_{12}\ell_{34}-\ell_{13}\ell_{24}+\ell_{14}\ell_{23}=0$.
On $\mathfrak{so}(4)$ we have the standard bracket $[X,Y]=XY-YX$ and we can
identify elements $X\in \mathfrak{so}(4)$ with elements $X^{*}\in \mathfrak{so}^{*}(4)$
via
\begin{align*}
K(X,\cdot) & =X^{*}.
\end{align*}
Here $K$ is the Killing form defined as
\begin{align*}
K(X,Y) & =\text{Tr}(\text{ad}_{X}\circ\text{ad}_{Y})\in\mathbb{R}
\end{align*}
where $\text{ad}_{X}=[X,\cdot]$. Explicitly, let us define $X_{ij}$ to be the $4\times4$ matrix with $1$ in the $ij^\text{th}$ position, $-1$ in the $ji^\text{th}$ position and $0$ everywhere else. This gives us a basis of $\mathfrak{so}(4)$. An element $X\in \mathfrak{so}(4)$ of the form
\[X=\begin{pmatrix} 0 & \ell_{12}&\ell_{13}&\ell_{14}\\
-\ell_{12}&0&\ell_{23}&\ell_{24}\\
-\ell_{13}&-\ell_{23}&0&\ell_{34}\\
-\ell_{14}&-\ell_{24}&-\ell_{34}&0
\end{pmatrix}\]
can be written as $X=\sum_{j>i}\ell_{ij}X_{ij}$. This allows for the further identification of $T(\mathfrak{so}^*(4))\equiv \mathfrak{so}(4)$ with $\nabla_{\bm{L}}f\leftrightarrow \sum_{i<j}\frac{\partial f}{\partial\ell_{ij}} X_{ij}$. We can now express the integrals $\eta_1$ and $\eta_2$ as functions on $\mathfrak{so}(4)$ as
\begin{align*}
\begin{aligned}\eta_1 & =\text{Tr} (X^t\nabla_{\bm{L}} \eta_1) & & & \eta_2 & =\text{Tr} (X^t\nabla_{\bm{L}} \eta_2)
\end{aligned}
\end{align*}
with $\nabla_{\bm{L}} \eta_1\coloneqq A_1=\sum_{i<j}(e_m+e_n)\ell_{ij}X_{ij}$ and $\nabla_{\bm{L}} \eta_2\coloneqq A_2=\sum_{i<j}e_me_n\ell_{ij}X_{ij}$ where $i,j,m,n$ are all distinct. Dynamics on $\mathfrak{so}^*(4)$ with Hamiltonian $\eta_1$ or $\eta_2$ can be rewritten in Lax form as
$\dot{X}=[A_1,X]$ or $\dot{X}=[A_2,X]$, respectively. The traces of $X^2$ and $X^4$ recover the Casimirs: $\text{Tr}(X^{2})=-2\mathcal{C}_1$ and $\text{Tr}(X^{4})=2\mathcal{C}_1^2-4\mathcal{C}_2^2$.
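These trace identities are straightforward to verify symbolically; a short sketch (our own check, assuming sympy):
\begin{verbatim}
# Illustrative sympy check of Tr(X^2) = -2*C1 and
# Tr(X^4) = 2*C1^2 - 4*C2^2 for X in so(4).
import sympy as sp

l12, l13, l14, l23, l24, l34 = sp.symbols('l12 l13 l14 l23 l24 l34')
X = sp.Matrix([[0, l12, l13, l14],
               [-l12, 0, l23, l24],
               [-l13, -l23, 0, l34],
               [-l14, -l24, -l34, 0]])
C1 = l12**2 + l13**2 + l14**2 + l23**2 + l24**2 + l34**2
C2 = l12*l34 - l13*l24 + l14*l23  # Pfaffian of X

assert sp.expand((X**2).trace() + 2*C1) == 0
assert sp.expand((X**4).trace() - (2*C1**2 - 4*C2**2)) == 0
\end{verbatim}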
Let $C$ be a symmetric matrix and define the Lie bracket $[X,Y]_{C}\coloneqq XCY-YCX$. This lifts to the Poisson bracket $\{\cdot,\cdot\}_{C}$ on $\mathfrak{so}^*(4)$. We can
assume WLOG that $C$ is diagonal of the form $C=\text{diag}(c_{1},c_{2},c_{3},c_{4})$.
If $C$ is invertible, then there exists an isomorphism $\alpha$
from $\mathfrak{so}(4,\mathbb{C})$ to $\mathfrak{so}(4,[\cdot,\cdot]_{C})$ defined by
$\alpha:X\mapsto C^{1/2}XC^{1/2}$. This lifts to a linear map
\begin{align}
\gamma_C:\mathfrak{so}^{*}(4,\mathbb{C}) & \to \mathfrak{so}^{*}(4,\{\cdot,\cdot\}_{C})\label{eq:isomorphism}\\
\bm{L} & \mapsto M_C\bm{L}\nonumber
\end{align}
where $M_C=\text{diag}(\sqrt{c_{1}c_{2}},\sqrt{c_{1}c_{3}},\sqrt{c_{1}c_{4}},\sqrt{c_{2}c_{3}},\sqrt{c_{2}c_{4}},\sqrt{c_{3}c_{4}}).$
The Poisson matrix for $\mathfrak{so}^{*}(4,\{\cdot,\cdot\}_{C})$ in the basis
of $(\ell_{12},\ell_{13},\ell_{14},\ell_{23},\ell_{24},\ell_{34})$
is given by
\begin{align*}
B_{C} & =\left(\begin{array}{cccccc}
0 & -c_{1}\ell_{23} & -c_{1}\ell_{24} & c_{2}\ell_{13} & c_{2}\ell_{14} & 0\\
c_{1}\ell_{23} & 0 & -c_{1}\ell_{34} & -c_{3}\ell_{12} & 0 & c_{3}\ell_{14}\\
c_{1}\ell_{24} & c_{1}\ell_{34} & 0 & 0 & -c_{4}\ell_{12} & -c_{4}\ell_{13}\\
-c_{2}\ell_{13} & c_{3}\ell_{12} & 0 & 0 & -c_{2}\ell_{34} & c_{3}\ell_{24}\\
-c_{2}\ell_{14} & 0 & c_{4}\ell_{12} & c_{2}\ell_{34} & 0 & -c_{4}\ell_{23}\\
0 & -c_{3}\ell_{14} & c_{4}\ell_{13} & -c_{3}\ell_{24} & c_{4}\ell_{23} & 0
\end{array}\right).
\end{align*}
In \cite{Bolsinov-Oshemkov} it is shown that the Lie bundle $\lambda[\cdot,\cdot]-[\cdot,\cdot]_{C}=[\cdot,\cdot]_{\lambda I-C}$
where $\lambda\in\mathbb{R}\cup\infty$ is still a Lie bracket on
$\mathfrak{so}(4)$. Similarly, $\lambda\{\cdot,\cdot\}-\{\cdot,\cdot\}_{C}=\{\cdot,\cdot\}_{\lambda I-C}$
is also a Poisson bracket on $\mathfrak{so}^{*}(4)$ giving us a set of compatible
Poisson structures. Following Example B in \cite{Bolsinov-Oshemkov}, we can
now study our system from the perspective of the compatible Poisson
structures $\{\cdot,\cdot\}_{\lambda I-C}$ on $\mathfrak{so}^{*}(4)$. Expanding the Casimirs in terms of the parameter $\lambda$ gives commuting integrals \cite{Bolsinov-Oshemkov}.
\begin{prop}[\cite{Bolsinov-Oshemkov}]
\label{fact1}The integrals $(I_{0},I_{1},I_{2})$ for the Poisson structure
$(\mathfrak{so}^{*}(4),\{\cdot,\cdot\}_{\lambda I-C})$ with $C=\text{diag}(c_{1},c_{2},c_{3},c_{4})$
where $c_{i}$ are real distinct constants can be obtained from the coefficients
of the numerator of the rational function given by
\begin{align}
\psi(\lambda) & =\text{Tr}((X(\lambda I - C)^{-1})^{2})=2\frac{I_{0}\lambda^{2}+I_{1}\lambda+I_{2}}{(\lambda-c_{1})(\lambda-c_{2})(\lambda-c_{3})(\lambda-c_{4})}\label{eq:Trace formula-1}
\end{align}
where $X\in \mathfrak{so}(4)$. They are $I_{0}=-\sum_{i<j}\ell_{ij}^{2}=-2h,\ I_{1}=\sum_{i<j}(c_{n}+c_{m})\ell_{ij}^{2},\ I_{2}=-\sum_{i<j}c_{n}c_{m}\ell_{ij}^{2}$
where the indices $m,n,i,j$ are all distinct.
\end{prop}
Define $E=\text{diag}(e_1,e_2,e_3,e_4)$; using $C=E$
gives us the integrals $(I_{0},I_{1},I_{2})=(-2h,\eta_{1},-\eta_{2})$
obtained from separation of variables in (\ref{eq:Separation constants ellipsoidal}).
This allows us to study the ellipsoidal integrable system on the reduced
space $\mathfrak{so}^{*}(4)$ as a system of compatible Poisson structures
and (\ref{eq:Trace formula-1}) becomes
\begin{align}
\psi(\lambda) & =2\frac{-2h\lambda^{2}+\eta_{1}\lambda-\eta_{2}}{(\lambda-e_{1})(\lambda-e_{2})(\lambda-e_{3})(\lambda-e_{4})}.\label{eq:Trace formula-1-1}
\end{align}
This is precisely the equation for the separated momenta in (\ref{eq:psq ellipsoidal-1})
with $\psi(s_{i})=4p_{i}^{2}$.
\subsection{Critical Points}
To find critical points, consider the lift of the standard isomorphism
from $\mathfrak{so}(4,\mathbb{C})$ to $\mathfrak{so}(3)\oplus \mathfrak{so}(3)$ defined by $T:\bm{L}\mapsto(\bm{X},\bm{Y})$ given in \eqref{eq:ls to Xy}, where here $\bm{X}$ and $\bm{Y}$ are complex vectors.
We will be using the map $\gamma_{\lambda I-E}$ in (\ref{eq:isomorphism}) to construct
the map $T_{2}=\gamma_{\lambda I - E} T^{-1}$ from $[\mathfrak{so}(3)\oplus \mathfrak{so}(3)]^{*}$
to $\mathfrak{so}^{*}(4,\{\cdot,\cdot\}_{C})$. It is known from \cite{Bolsinov-Oshemkov} that the set of singular
points in $\mathfrak{so}(4,\mathbb{C})\equiv \mathfrak{so}(3)\oplus \mathfrak{so}(3)$ under the standard
bracket is given by $\{(\bm{X},0)\}\cup\{(0,\bm{Y})\}$. When the
matrix $\lambda I-E$ is invertible (that is $\lambda\neq e_{i}$), the map
$\gamma_{\lambda I - E}$ is a Poisson isomorphism between $\mathfrak{so}^{*}(4,\mathbb{C})$
and $\mathfrak{so}^{*}(4,\{\cdot,\cdot\}_{\lambda I -E})$ and so $T_{2}$ is also a Poisson
isomorphism. The set of singular points of $\mathfrak{so}^{*}(4,\{\cdot,\cdot\}_{\lambda I -E})$
is the image of the singular points of $\mathfrak{so}(4,\mathbb{C})$ under $T_{2}$,
that is $T_{2}(\{(\bm{X},0)\}\cup\{(0,\bm{Y})\})$. The set of critical
points of the ellipsoidal integrable system is precisely the set of
singular points of $\mathfrak{so}^{*}(4,\{\cdot,\cdot\}_{\lambda I -E})$ by Theorem 2 in \cite{Bolsinov-Oshemkov}. We have an analogous result:
\begin{prop}
\label{Bolsinov Theorem}An element $\bm{L}\in \mathfrak{so}^{*}(4)$ is critical
if
1. $\lambda\ne e_{i}$ and $\bm{L}\in\Re e(T_{2}(\bm{X},0)\cup T_{2}(0,\bm{Y}))$.
2. $\lambda=e_{i}$ and $\bm{L}$ is such that the Poisson bracket
$\{\cdot,\cdot\}_{C}$ drops rank.
\end{prop}
Using Proposition~\ref{Bolsinov Theorem} we start by finding the general
solutions for case 1 with $\lambda\ne e_{i}$. The critical points are given
by $\bm{L}\in\Re e(T_{2}(\bm{X},0)\cup T_{2}(0,\bm{Y}))$.
Let $\bm{z}=(z_1,z_2,z_3) \in \mathbb{C}^3$ and define $\text{Sing}_{+}=\{(\bm{z},0)\}$, $\text{Sing}_{-}=\{(0,\bm{z})\}$
and $\text{Sing}=\text{Sing}_{+}\cup\text{Sing}_{-}$.
We have
\begin{align}
T_{2}(\lambda)(\text{Sing}_{\pm})= & \left(\frac{z_{1}\sqrt{\lambda-e_{1}}\sqrt{\lambda-e_{2}}}{\sqrt{2}},\pm\frac{z_{2}\sqrt{\lambda-e_{1}}\sqrt{\lambda-e_{3}}}{\sqrt{2}},\frac{z_{3}\sqrt{\lambda-e_{1}}\sqrt{\lambda-e_{4}}}{\sqrt{2}},\right.\label{eq:t2sing}\\
& \left.\pm\frac{z_{3}\sqrt{\lambda-e_{2}}\sqrt{\lambda-e_{3}}}{\sqrt{2}},-\frac{z_{2}\sqrt{\lambda-e_{2}}\sqrt{\lambda-e_{4}}}{\sqrt{2}},\pm\frac{z_{1}\sqrt{\lambda-e_{3}}\sqrt{\lambda-e_{4}}}{\sqrt{2}}\right),\nonumber
\end{align}
where $z_{j}=a_{j}+ib_{j}$ with $a_j, b_j \in \mathbb{R}$. Substituting these into the integrals gives $(I_{1},I_{2})=(4h\lambda,2h\lambda^{2})$,
which is the curve $I_{2}=\frac{I_{1}^{2}}{8h}$. This is precisely the case when $\lambda$ is a double root of the numerator of \eqref{eq:Trace formula-1-1}. It is easily seen that the values of $\lambda$ that permit a double root
while keeping $\psi>0$ are in the interval $\lambda\in[e_{2},e_{3}]$.
After taking the real part this gives the critical points
\begin{equation}
\begin{aligned}\bm{L}_{\pm}= & \left(\frac{a_{1}\sqrt{\lambda-e_{1}}\sqrt{\lambda-e_{2}}}{\sqrt{2}},\mp\frac{b_{2}\sqrt{\lambda-e_{1}}\sqrt{e_{3}-\lambda}}{\sqrt{2}},-\frac{b_{3}\sqrt{\lambda-e_{1}}\sqrt{e_{4}-\lambda}}{\sqrt{2}},\right.\\
& \left.\mp\frac{b_{3}\sqrt{\lambda-e_{2}}\sqrt{e_{3}-\lambda}}{\sqrt{2}},\frac{b_{2}\sqrt{\lambda-e_{2}}\sqrt{e_{4}-\lambda}}{\sqrt{2}},\mp\frac{a_{1}\sqrt{e_{3}-\lambda}\sqrt{e_{4}-\lambda}}{\sqrt{2}}\right).
\end{aligned}
\label{eq:lcurcrit}
\end{equation}
We can verify that these are critical points of the system
$(\eta_{1},\eta_{2})$ by noting that $B(\nabla\eta_{0}-\lambda\nabla\eta_{1}-\nabla\eta_{2})|_{\bm{L=L_{\pm}}}=0$.
Substituting (\ref{eq:lcurcrit}) into the Pl\"ucker relation forces
\begin{equation}
a_{1}^{2}=b_{2}^{2}+b_{3}^{2}.\label{plu-el}
\end{equation}
Using $\bm{L\cdot L}=2h$ gives
the conic
\begin{equation}
(e_{1}e_{2}-e_{1}e_{3}-e_{2}e_{4}+e_{3}e_{4})b_{2}^{2}+(e_{1}e_{2}-e_{2}e_{3}-e_{1}e_{4}+e_{3}e_{4})b_{3}^{2}=4h.\label{ham-el}
\end{equation}
The conditions \eqref{plu-el} and \eqref{ham-el},
when combined with (\ref{eq:lcurcrit}), give the explicit parametrisation
of the 4 topological circles $S^{1}$ of critical points for case 1 with $\lambda\neq e_i$.
In case 2 where $\lambda=e_{i}$, the map $\gamma_{\lambda I - E}$ still exists
but it is not invertible and so $T_{2}(e_{i})(\text{Sing})$ is still a subset of
the critical points of the bracket $\{\cdot,\cdot\}_{C(e_{i})}$.
Let us consider $\lambda = e_1$; then we have
\begin{align*}
T_{2}(e_{1})(\text{Sing}_{\pm}) & =\left(0,0,0,\pm\frac{z_{3}\sqrt{e_{1}-e_{2}}\sqrt{e_{1}-e_{3}}}{\sqrt{2}},-\frac{z_{2}\sqrt{e_{1}-e_{2}}\sqrt{e_{1}-e_{4}}}{\sqrt{2}},\pm\frac{z_{1}\sqrt{e_{1}-e_{3}}\sqrt{e_{1}-e_{4}}}{\sqrt{2}}\right).
\end{align*}
These naturally satisfy the Pl\"{u}cker relation, giving us solutions
of the form $\bm{L}=(0,0,0,\ell_{23},\ell_{24},\ell_{34})$ after taking the real part, with the
constraint $\ell_{23}^{2}+\ell_{24}^{2}+\ell_{34}^{2}=2h$. This means
that the set of all critical points corresponding to case 2 with $\lambda=e_1$
is the sphere $\ell_{23}^{2}+\ell_{24}^{2}+\ell_{34}^{2}=2h$.
In order to show that these are all the critical points for $\lambda=e_1$, recall that the singular points of $\{\cdot , \cdot\}_{C(e_i)}$ occur where
$B_{C(e_{i})}$ drops rank. We perform the change of variables $(\bm{U},\bm{V})$
where $\bm{U}=(u_{12},u_{13},u_{14})$ and $\bm{V}=(v_{34},v_{24},v_{23})$
with $u_{ij}=\sqrt{(e_{i}-e_{k})(e_{i}-e_{m})}\ell_{ij}$, $v_{km}=\frac{1}{\sqrt{(e_{i}-e_{k})(e_{i}-e_{m})}}\ell_{km}$,
where $i,j,k,m$ are all distinct. This transforms $B_{e_1 I - E}$ into
the standard $\mathfrak{e}^{*}(3)$ algebra given by
\begin{align*}
B_{3} & =\begin{pmatrix}\bm{0} & \bm{\hat{U}}\\
\bm{\hat{U}} & \bm{\hat{V}}
\end{pmatrix}.
\end{align*}
Singular orbits of $B_{3}$ are given by $\bm{U}=\bm{0}$, i.e.\ $\ell_{12}=\ell_{13}=\ell_{14}=0$. These are the critical points described
above.
For $\lambda=e_{2},e_{3},e_{4}$ we have an isomorphism between
$B_{\lambda I - E}$ and $\mathfrak{e}^{*}(1,2),\ \mathfrak{e}^{*}(2,1),\ \mathfrak{e}^{*}(0,3)$ respectively,
all of which have singular orbits iff $\bm{U}=\bm{0}$ giving $\ell_{ik}=0$
for all $k\neq i$ if $\lambda=e_{i}$.
\subsection{Bifurcation diagram}
Using the critical points described in the previous section we have the following result for the critical values.
\begin{cor}
\label{biham-critical-values}The critical values of the integrals
$(\eta_1,\eta_2)$ occur when $\lambda$
is a real root of
$\psi(\lambda)$ so that $\psi(\lambda)\geq0$ and
\begin{enumerate}
\item $\lambda=e_{i}$ or
\item $\lambda$ is a double root of the numerator $2h\lambda^{2}-\eta_{1}\lambda+\eta_{2}$ of $\psi(\lambda)$
\end{enumerate}
\end{cor}
\begin{proof}
By the Cayley Hamilton theorem, it is known that
\begin{align*}
\text{Tr}(M^{2})-(\text{Tr}(M))^{2}+2\det(M) & =0.
\end{align*}
We observe that for $M=X(\lambda I-C)^{-1}$ we have $\text{Tr}(M)=0$ and $\det(M)=\frac{(X_{12}X_{34}-X_{13}X_{24}+X_{14}X_{23})^{2}}{(\lambda-c_{1})(\lambda-c_{2})(\lambda-c_{3})(\lambda-c_{4})}=\frac{\mathcal{C}_{2}^{2}}{(\lambda-c_{1})(\lambda-c_{2})(\lambda-c_{3})(\lambda-c_{4})}=0$
due to constraint on the Casimir. This implies that $\lambda$ has
to be a root of $\psi(\lambda)$ for valid motion. For critical points, we either have $\lambda=e_i$ or $\lambda$ such that $(I_{1},I_{2})=(4h\lambda,2h\lambda^{2})$, that is $\lambda$ is a double root of $\psi(\lambda)$.
\end{proof}
\begin{prop}
\label{The-bifurcation-diagram Lemma}The set of critical values for
the reduced ellipsoidal integrable system $(\eta_1,\eta_2):S^2\times S^2 \to \mathbb{R}^2$ is composed of 4 straight lines and a quadratic curve. The lines
are $\mathcal{L}_{i}:\eta_{2}-e_{i}(\eta_{1}-e_{i})=0$ for $i\in\{1,2,3,4\}$,
and the curve is the part of the parabola $\eta_{2}=\frac{\eta_{1}^{2}}{4}$ given
by $\mathcal{C}:(\eta_{1},\eta_{2})=\left(2t,t^2\right)$ for $e_{2}\le t\le e_{3}$.
There are $6$ transverse intersections of the lines $\mathcal{L}_{i}\cap\mathcal{L}_{j}$
which occur at $(\eta_{1},\eta_{2})=d_{ij}\coloneqq(e_{i}+e_{j},e_{i}e_{j})$
where $i\ne j$ and $i,j\in\{1,2,3,4\}$. The points $d_{i}\coloneqq (2e_i,e_i^2)$
where $i\in\{2,3\}$ correspond to the two tangential intersections
of $\mathcal{L}_{2}$ and $\mathcal{L}_{3}$ with $\mathcal{C}$. The bifurcation diagram with $2h=1$
is shown in Figure \ref{fig:root diagram and mm ellipsoidal} b).
\end{prop}
\begin{proof}
Using Proposition \ref{fact1} and Corollary \ref{biham-critical-values}, when $\lambda=e_{i}$
in (\ref{eq:Trace formula-1-1}) we must have $\eta_{2}=e_{i}(\eta_{1}-2he_{i})$
for the numerator of $\psi(\lambda)$ to vanish at $\lambda=e_{i}$. If
$\lambda$ is a double root, then taking the discriminant of the numerator
gives the curve $\eta_{2}=\frac{\eta_{1}^{2}}{8h}$. With $2h=1$, we obtain
the formulae for the lines $\mathcal{L}_{i}$ and the curve $\mathcal{C}$
which make up the boundary of the image of the momentum map. Since the bifurcation
diagram is necessarily compact, we must also determine the regions for which the momenta are real. To do this, recall that $\psi(s_{i})=4p_{i}^{2}$ can be
factored as follows
\begin{equation}
\psi(s_{i})=4p_{i}^{2}=-2\frac{(s_{i}-r_{1})(s_{i}-r_{2})}{(s_{i}-e_{1})(s_{i}-e_{2})(s_{i}-e_{3})(s_{i}-e_{4})},\label{eq:psq new}
\end{equation}
where $e_{1}\le r_{1}\le r_{2}\le e_{4}$, $\eta_{1}=r_{1}+r_{2}$
and $\eta_{2}=r_{1}r_{2}.$ The denominator of (\ref{eq:psq new})
defines $4$ poles at $e_{j}$ and so divides the interval $[e_{1},e_{4}]$
into three intervals $[e_{i},e_{i+1}]$ where $i\in\{1,2,3\}$. To
distribute the roots $r_{k}$, we require that $p_{i}^{2}$ takes
on non-negative values in each interval $[e_{i},e_{i+1}]$ for valid
motion. This gives $4$ regions of motion which we represent in Figure
\ref{fig:root diagram and mm ellipsoidal} a). We call this the
root diagram for the ellipsoidal system. The mapping from the root
diagram to the bifurcation diagram is smooth on the interior and all edges
of the root diagram except on the diagonal cyan segment where $r_1=r_2$. The image of
the momentum map is the region enclosed by the lines $\mathcal{L}_{i}$
and the curve $\mathcal{C}$ presented in Figure~\ref{fig:root diagram and mm ellipsoidal}~b).
From the root diagram, we find that each of the lines $\mathcal{L}_{i}$
is defined over $\eta_{1}\in[e_{1}+e_{j},e_{4}+e_{k}],\eta_{2}\in[e_{1}e_{j},e_{4}e_{k}]$
where $j=\max(2,i)$ and $k=\min(3,i)$. Hence the end points of $\mathcal{L}_i$ are $d_{1j}$ and $d_{k4}=d_{4k}$. The critical points on each line $\lambda=e_{i}$ described in the previous section represent the geodesic subflow on the great $2$-sphere $x_{i}=0$ under
elliptical coordinates on $S^2$ with axes given by the remaining $e_{k}$ with
$k\neq i$. Consider the case $\lambda=e_1$: for each critical value $(\eta_{1},\eta_{2})$ on the line $\mathcal{L}_1$, its set of
critical points is the intersection of the sphere $\ell_{23}^{2}+\ell_{24}^{2}+\ell_{34}^{2}=2h$ with the ellipsoid
$\eta_{2}(\bm{L})=e_{1}(e_{4}\ell_{23}^{2}+e_{3}\ell_{24}^{2}+e_{2}\ell_{34}^{2})=\eta_{2}$.
These are precisely the fibres of the geodesic flow on
$S^{2}$ when separation of variables is performed in the elliptical
coordinates on $S^2$ with semi-axes $(e_{2},e_{3},e_{4})$ (see Appendix \ref{s2ellip}). Indeed, when $\ell_{12}=\ell_{13}=\ell_{14}=0$,
we have $x_{1}=y_{1}=0$ and the geodesic motion is restricted to
the great 2-sphere $x_{1}=0$.
In the case where there is a double root in the numerator, i.e.
$t=r_{1}=r_{2}$, we obtain the curve $\mathcal{C}:(\eta_{1},\eta_{2})=(2t,t^{2})$
where $e_{2}\le t\le e_{3}$.
It is clear that the intersections between $\mathcal{L}_{i}$ and
$\mathcal{L}_{j}$ are transverse and are located at $(\eta_{1},\eta_{2})=(e_{i}+e_{j},e_{i}e_{j})=d_{ij}$
where $i\ne j$ and $i,j\in\{1,2,3,4\}$. Similarly, it is easy to
see by computing the tangents that only $\mathcal{L}_{2}$ and $\mathcal{L}_{3}$
intersect $\mathcal{C}$ tangentially at $d_2=(2e_{2},e_{2}^{2})$ and
$d_3=(2e_{3},e_{3}^{2})$ respectively.
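Indeed, with $2h=1$, eliminating $\eta_{2}$ between $\mathcal{L}_{2}$ and $\mathcal{C}$ gives
\[
\frac{\eta_{1}^{2}}{4}-e_{2}(\eta_{1}-e_{2})=\left(\frac{\eta_{1}}{2}-e_{2}\right)^{2}=0,
\]
a double root at $\eta_{1}=2e_{2}$, confirming the tangency at $d_{2}$; replacing $e_{2}$ by $e_{3}$ gives the tangency at $d_{3}$. For $i\in\{1,4\}$ the analogous double root $\eta_{1}=2e_{i}$ lies outside the range $[2e_{2},2e_{3}]$ of $\mathcal{C}$, so there is no tangential intersection.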
\end{proof}
\begin{cor}
The Uhlenbeck integral $F_{i}=0$ if and only if $\eta_{2}-e_{i}(\eta_{1}-e_{i})=0$,
i.e. $F_{i}$ vanishes along $\mathcal{L}_{i}$.
\end{cor}
\begin{proof}
Taking the residue at $e_{i}$ of both sides of (\ref{eq:R on A def})
gives
\[
F_{i}=-\frac{e_{i}^{2}-e_{i}\eta_{1}+\eta_{2}}{(e_{i}-e_{k})(e_{i}-e_{l})(e_{i}-e_{m})}
\]
where the indices $i,k,l,m$ are all distinct. Since we have assumed that
all semi-major axes are distinct, $F_{i}=0$ if and only if $(\eta_{1},\eta_{2})$
lie on $\mathcal{L}_{i}$.
\end{proof}
\begin{figure}
\begin{centering}
\includegraphics[width=14cm]{Ell-MM.pdf}
\par\end{centering}
\caption{a) Root diagram of the reduced ellipsoidal system with $(e_{1},e_{2},e_{3},e_{4})=(1,2,5,8)$.
b) Corresponding bifurcation diagram. The four chambers of the image of the
momentum map are labelled $I-IV$. The colours of the various lines are kept consistent in subsequent figures. \label{fig:root diagram and mm ellipsoidal}}
\end{figure}
To classify the nature of the critical points, we compute the eigenvalues
of the linearisation $\nabla[B(\lambda\nabla\eta_{1}+\nabla\eta_{2})]$.
The intersections $d_{12},d_{14}$ and $d_{34}$ are all elliptic-elliptic
critical values, $d_{13}$ and $d_{24}$ are of elliptic-hyperbolic
type and $d_{23}$ is hyperbolic-hyperbolic. The tangential intersections
$d_{2}$ and $d_{3}$ are degenerate. The lines $\mathcal{L}_{1},\mathcal{L}_{4}$,
the curve $\mathcal{C}$, as well as the yellow and purple parts of
$\mathcal{L}_{2}$ and $\mathcal{L}_{3}$ respectively have one pair
of imaginary eigenvalues and so are codimension one elliptic. The
magenta and orange segments of $\mathcal{L}_{3}$ and the grey and red segments of $\mathcal{L}_{2}$
give one pair of real eigenvalues and so are codimension one hyperbolic.
\subsection{Critical Fibres}
Unlike the critical points, the parametrisation of the critical fibre
cannot be computed algebraically. We will instead provide an informal description
and a topological classification of the fibres on $S^2\times S^2$.
Firstly, by the Liouville--Arnold theorem, the preimages of regular values
in the regions $I,II,III,IV$ are disjoint unions of tori $\mathbb{T}^{2}$ in $S^2\times S^2$.
Since $d_{12},d_{14},d_{34}$ are elliptic-elliptic, their preimages
on $S^2\times S^2$ consist of 2 points each, the critical points found earlier.
Next, consider the lines immediately connected to $d_{12},d_{14},d_{34}$.
These are the lines $\mathcal{L}_{1},\mathcal{L}_{4}$, as well as
the yellow and purple parts of $\mathcal{L}_{2}$ and $\mathcal{L}_{3}$
respectively. They are codimension one elliptic and so their fibres
are circles $S^1$ and only contain critical points.
The multiplicity of these circles is two as a result of extending the
multiplicities of $d_{12},d_{14}$ and $d_{34}$.
Similarly, $d_{13}$ and $d_{24}$ are elliptic-hyperbolic critical
values. There are 2 intersecting circles of critical points in their fibres. To obtain
the full fibre, we observe that as we move along $\mathcal{L}_{1}$
(resp. $\mathcal{L}_{4}$) and pass through $d_{13}$ (resp. $d_{24}$),
two $S^1$ bifurcate into two $S^1$. Such a bifurcation is represented
by the Fomenko atom $C_{2}$.
The magenta and grey segments of $\mathcal{L}_{3}$ and $\mathcal{L}_{2}$
are extensions of $d_{13}$ and $d_{24}$ respectively, the fibres
of these segments are $S^1\times C_{2}$.
Since the grey and magenta lines both have a $C_{2}$ type singularity,
we know that the fibre of $d_{23}$ is of $(C_{2},C_{2})$ type, an $l$-type singularity
of complexity $2$ with loop molecule number 17 in \cite{book}. The critical fibre is simpler: there are only 4 types after symmetry reduction \cite{DullinVuNgoc07}.
Since the $17$ saddle saddle singularity contains $4$ $B$ atoms, it follows that the fibres of the red
and orange lines are $2B\times S^1$.
The cyan curve is codimension one elliptic and so its fibre is a circle
$S^{1}$. To find the multiplicity of these $S^1$, we note that the
fibre of $d_{23}$ contains $4S^1$. Extending from $d_{23}$ into chamber
$IV$, each $S^{1}$ extends to a torus. Hence, the fibre of a regular value
in chamber $IV$ is $4T^{2}$. Continuing onto $\mathcal{C}$, we
see that the fibre along the curve must be $4S^1$.
The fibre type does not change at the degenerate points $d_{2}$ and $d_{3}$, hence their fibres are simply $2S^1$. Approaching $d_{2}$
and $d_{3}$ along the yellow and purple lines respectively, we see
$2S^1$ bifurcate into $4S^1$ (along the curve $\mathcal{C}$)
as well as a hyperbolic fibre ($2B\times S^1$). This means $d_{2}$
and $d_{3}$ are pitchfork bifurcations as described in \cite{book}.
Extending the codimension $1$ lines $\mathcal{L}_{1}$ and $\mathcal{L}_{4}$
into chambers $I-III$ gives the following corollary.
\begin{cor}
The fibre of a regular point on the momentum map is a torus $T^{2}$.
The multiplicity of the tori in chambers $I-III$ is $2$, while tori
in chamber $IV$ have multiplicity $4$.
\end{cor}
\subsection{The Action Map}\label{ell-act}
Recall that the Liouville tori of the ellipsoidal integrable system
are certain coverings of the real parts of the Jacobi variety of the genus
3 hyperelliptic curve defined by $w^{2}(z)=-R(z)A(z)$. The actions of
this system are the periods of the Abelian integral
\[
I_{j}=\frac{1}{2\pi}\oint_{\gamma_{j}}p_{j}(s)ds=\frac{1}{2\pi}\oint_{\gamma_{j}}\sqrt{\frac{-R(s)}{4A(s)}}ds=\frac{1}{2\pi}\oint_{\gamma_{j}}\frac{-R(s)}{2w(s)}ds.
\]
The actions $I_{j}$ are discontinuous on phase space
as we cross boundaries of chambers of the bifurcation diagram. To construct
a set of continuous actions, we perform discrete symmetry reduction by
the $2^{4}$ discrete symmetries generated by the reflections
\[
\sigma_{i}:(x_{i},p_{i})\to(-x_{i},-p_{i}).
\]
Following \cite{GURNeumanQuantum}, we can construct symmetry reduced actions
that are continuous across all chambers of the momentum map. A detailed
explanation can be found in \cite{DullinNeuman}.
\begin{lem}
\label{def:action}The continuous actions of the ellipsoidal integrable
system $(J_{1},J_{2},J_{3})$ are
\begin{equation}
\begin{aligned}J_{1}=\frac{2}{\pi}\int_{e_{1}}^{\min(r_{1},e_{2})}p(s)ds, & & J_{2}=\frac{2}{\pi}\int_{\max(r_{1},e_{2})}^{\min(r_{2},e_{3})}p(s)ds, & & J_{3}=\frac{2}{\pi}\int_{\max(r_{2},e_{3})}^{e_{4}}p(s)ds\end{aligned}
.\label{eq:ACtions ell dfef}
\end{equation}
This discrete symmetry reduction reduces the multiplicities of $T^2$ in all chambers to $1$ and this is the reason why the discrete symmetry reduced system has a globally continuous action map. The actions $J_{i}$ are independent on $T^*S^3$ but since $H$ is a superintegrable Hamiltonian, they are related on an energy surface, and hence become dependent for the reduced system on $S^2\times S^2$.
\end{lem}
\begin{lem}
\label{Action lemma} The continuous actions satisfy the relation
\begin{equation}
J_{1}+J_{2}+J_{3}=\sqrt{2h}.\label{eq:action cond el}
\end{equation}
\end{lem}
\begin{proof}
Let $\beta_{i}$ be cycles that enclose the intervals of integration of the $J_{i}$ respectively,
i.e. $\beta_{1}$ encloses the interval $[e_{1},\min(r_{1},e_{2})]$ and similarly
for $\beta_{2},\beta_{3}$. By deforming the cycles on the hyperelliptic
curve $w^{2}=-R(z)A(z)$ we have
\begin{equation}
J_{1}+J_{2}+J_{3}=-\frac{1}{2\pi}\oint_{\gamma}pdz\label{eq:sum of actions}
\end{equation}
where $\gamma$ is a cycle that encircles the point at infinity.
It is easily shown that
\begin{equation}
\text{Res}(p,\infty)=-\sqrt{-2h}.\label{eq:Residue at infinity}
\end{equation}
Combining (\ref{eq:Residue at infinity}) with (\ref{eq:sum of actions})
gives the desired result.
\end{proof}
\begin{thm}
The image of the action map \eqref{eq:ACtions ell dfef} is an equilateral triangle $\mathcal{T}$
(see Figure \ref{fig:Action map}).\label{action thm}
\end{thm}
\begin{proof}
From Lemma \ref{Action lemma}, we know that the image of the action map is constrained
to the plane $J_{1}+J_{2}+J_{3}=\sqrt{2h}$. This is bounded by $J_{i}\geq0$
and hence the image is contained in the intersection of the plane
$J_{1}+J_{2}+J_{3}=\sqrt{2h}$ with the positive octant. Since the
maps $J_{i}$ are continuous on the reduced phase space, every point
in the interior of $\mathcal{T}$ must be a point in the image of
the action map. On the boundary of $\mathcal{T},$ the lines $J_{1}=0$ and $J_3=0$ are the image of the lines $\mathcal{L}_1$ and $\mathcal{L}_4$ respectively while $J_2=0$ is the image of the cyan curve $\mathcal{C}$ together with the yellow and purple segments of $\mathcal{L}_2$ and $\mathcal{L}_3$. Thus, the triangle $\mathcal{T}$ formed
by intersecting the plane $J_{1}+J_{2}+J_{3}=\sqrt{2h}$ with the
positive octant is the image of the symmetry reduced phase space
under the action map.
\end{proof}
The action map calculated using (\ref{eq:ACtions ell dfef}) is shown
in Figure \ref{fig:Action map} for $h=\frac{1}{2}$. Let lines
$J_{i}=0$ be $\mathfrak{J}_{i}$ and call the interior
lines $\gamma_{1}$ (red and grey) and $\gamma_{2}$ (magenta and
orange). We denote by $A_{ij}$ the intersection of $\mathfrak{J}_{i}$
and $\gamma_{j}$. Similar polytopes are known to classify toric systems
(two dimensional integrable systems where both integrals are global
$S^{1}$ actions). For more details on this, see \cite{BSMF_1988__116_3_315_0}.
In our case, the $\gamma_{i}$ in the interior of the action map reflect
the non-toric nature of the ellipsoidal integrable system. Along these lines the hyperelliptic action integrals \eqref{eq:ACtions ell dfef} become elliptic. Let
\[
\mathcal{T}(u,v,\alpha)=\frac{(u-v)K(k)+(e_{4}-e_{1})\Pi(\alpha,k)}{\pi\sqrt{(e_{1}-e_{3})(e_{2}-e_{4})}}
\]
where $k^{2}=\frac{(e_{4}-e_{3})(e_{2}-e_{1})}{(e_{4}-e_{2})(e_{3}-e_{1})}$
and $K(k),\ \Pi(\alpha,k)$ are the complete elliptic integrals of the
first and third kind respectively. The tangential intersections of
$\mathfrak{J}_{2}$ with $\gamma_{1},\gamma_{2}$ occur at $A_{21}$
and $A_{22}$. These are given by
\[
\begin{aligned}A_{21} & =(\mathcal{T}(e_{2},e_{4},\alpha_{1}),0,\mathcal{T}(e_{1},e_{2},\alpha_{2})), & & A_{22} & =(\mathcal{T}(e_{3},e_{4},\alpha_{1}),0,\mathcal{T}(e_{1},e_{3},\alpha_{2}))\end{aligned}
\]
where $\alpha_{1}=\frac{e_{2}-e_{1}}{e_{2}-e_{4}}$ and $\alpha_{2}=\frac{e_{4}-e_{3}}{e_{1}-e_{3}}$.
Transverse intersections of $\mathfrak{J}_{i}$ with $\gamma_{j}$
occur at $A_{31}$ and $A_{12}$ given by
\[
\begin{aligned}A_{31} & =\frac{2}{\pi}(\sin^{-1}(u_{1}),\cos^{-1}(u_{1}),0), & & A_{12} & =\frac{2}{\pi}(0,\sin^{-1}(u_{2}),\cos^{-1}(u_{2}))\end{aligned}
\]
where $u_{1}=\sqrt{\frac{e_{1}-e_{2}}{e_{1}-e_{3}}}$ and $u_{2}=\sqrt{\frac{e_{2}-e_{3}}{e_{2}-e_{4}}}$.
The intersection of $\gamma_{1}$ and $\gamma_{2}$ is located at
\begin{equation} \label{eq:HHptaction}
(J_{1},J_{2},J_{3})=\frac{2}{\pi}(\sin^{-1}\left(v_{1}\right),\sin^{-1}\left(v_{2}\right)-\sin^{-1}\left(v_{1}\right),\cos^{-1}\left(v_{2}\right))
\end{equation}
where $v_{1}=\sqrt{\frac{e_{1}-e_{2}}{e_{1}-e_{4}}}$ and $v_{2}=\sqrt{\frac{e_{1}-e_{3}}{e_{1}-e_{4}}}$. Notice that these intersection points are invariant under affine transformations of $(e_1,e_2,e_3,e_4)$ as expected.
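This invariance can be seen directly: under $e_{i}\mapsto\alpha e_{i}+\beta$ with $\alpha\neq0$, every ratio of differences of the $e_{i}$ is unchanged, for example
\[
\frac{e_{1}-e_{2}}{e_{1}-e_{4}}\mapsto\frac{\alpha(e_{1}-e_{2})}{\alpha(e_{1}-e_{4})}=\frac{e_{1}-e_{2}}{e_{1}-e_{4}},
\]
so $u_{1},u_{2},v_{1},v_{2}$, and hence the intersection points above, are unchanged.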
\begin{figure}
\begin{centering}
\includegraphics[width=8cm]{Ell-AM.pdf} $\quad$
\includegraphics[width=7cm]{Ell-AM-reduced.png}
\par\end{centering}
\caption{a) Action map of the ellipsoidal integrable system with $(e_{1},e_{2},e_{3},e_{4})=(1,2,5,8)$. b) Corresponding reduced action map $(J_1,J_3)$ \label{fig:Action map}}
\end{figure}
The action map $(J_1,J_2,J_3)$ is equivalent to the map $(J_1,J_1+J_2+J_3,J_3)$ by a unimodular transformation. Since $J_1+J_2+J_3=1$ for the reduced system on $S^2\times S^2$, we have 2 actions only, i.e. $(J_1,J_3)$. By using similar transformations, any pair $(J_i,J_j)$ can be chosen as the actions for the reduced system. The resulting image of the corresponding action map is the projection of the action map in Figure \ref{fig:Action map} onto the $J_i J_j$ plane.
\begin{cor}
A possible set of actions for the reduced ellipsoidal integrable system is $(J_1,J_3)$. The image of the reduced space $S^2\times S^2$ under this map is a right-angled isosceles triangle obtained from projecting Figure \ref{fig:Action map} onto the $J_2=0$ plane.
\end{cor}
\begin{proof}
The reduction can be done in appropriate action variables directly. Since $J_1 + J_2 + J_3$ is a global action variable that is equal to the square root of twice the Hamiltonian, reduction means the following two things. Fix the action (i.e.~fix the energy), and quotient by its flow. The flow of this action only changes its conjugate angle, and so the quotient identifies this angle to a point. The remaining system with two degrees of freedom has action variables $J_1, J_3$, and the image of the action map of the reduced system is the projection of the ``spatial'' fixed energy triangle in Figure~\ref{fig:Action map}~a) onto the appropriate coordinate plane in Figure~\ref{fig:Action map}~b).
\end{proof}
The choice of which action variables to present the reduced system in is somewhat arbitrary, and we prefer not to make any choice and hence keep showing the ``spatial'' picture of the triangle in $J_1J_2J_3$-space in the following section, even when discussing the reduced system on $S^2\times S^2$. In general an integrable system can be represented by its energy surface in action space, meaning the surface $H(J_1, J_2, J_3) = h$, which will depend on $h$ and may not even be continuous. However, in our setting we have the simplest possible case of a maximally superintegrable system, for which $H$ depends on the actions only through the combination $J_1+J_2+J_3$, and so the triangle in action space occurs for every superintegrable system for which globally continuous actions can be defined.
The position of the hyperbolic-hyperbolic point in the image of the action map (the intersections of the lines $\gamma_1$ and $\gamma_2$) is uniquely determined by the reduced parameters $1 < a < b$ of the system. In fact, the map $(1/b, a/b) \mapsto \frac{2}{\pi}(\sin^{-1}\sqrt{1/b}, \cos^{-1}\sqrt{a/b})$ maps the triangle in parameter space Figure~\ref{fig:iab}~a) to the triangle in action space Figure~\ref{fig:Action map}~b). The involution in parameter space \eqref{eq:involution} becomes the reflection across the diagonal $(J_1, J_3) \mapsto (J_3, J_1)$.
Performing the affine transformation of $e_i$ directly in the action integral and applying the same transformation to the integration variable $s$ does change the integral. However, then also transforming $(\eta_1, \eta_2)$ according to Lemma~\ref{eta lemma} recovers the original integral, as expected.
\section{Degenerate Systems on $S^{2}\times S^{2}$}\label{sec:degen}
In this section, we study all systems arising from separating the
geodesic flow in degenerate coordinates on $S^{3}$. We begin by focusing
on the following $3$ systems: prolate $(1\ (2\ 3)\ 4)$, oblate $(1\ 2\ (3\ 4))$
and Lam\'{e} $(1\ (2\ 3\ 4))$. These correspond to the edges of
the Stasheff polytope. Further degenerations of these coordinates,
cylindrical $((1\ 2)(3\ 4))$ and two forms of spherical, $(1\ (2\ (3\ 4)))$ and $(1\ ((2\ 3)\ 4))$,
form the corners of the polytope.
It will be shown that the integrable systems (reduced and un-reduced) corresponding to these degenerate coordinate systems
can also be obtained by smoothly deforming their ellipsoidal counterparts. Thus, we will establish the analogue of the result by Sch\"{o}bel and Veselov \cite{Schoebel2014}, namely that the correct moduli space for this family of integrable systems is the Stasheff polytope.
The main feature of the degenerate systems corresponding to the lower-dimensional faces of the Stasheff polytope is the appearance of global symmetries, in the case of $S^3$ either $SO(2)$ or $SO(3)$.
The results of this section, described in detail below, can be summarised in the following theorem.
Consider the designation of a separable coordinate system on $S^3$ by pairs of nested brackets inserted into 4 objects as shown in Figure~\ref{fig:Stasheff S3}.
\begin{thm}
For each pair of brackets that enclose two adjacent members, the corresponding (reduced) integrable system has an $SO(2)$ symmetry. For each pair of brackets that enclose three adjacent members the corresponding (reduced) integrable system has a global $SO(3)$ symmetry.
The generic ellipsoidal integrable system with quadratic integrals degenerates to an integrable system with quadratic integrals. If there is an $SO(k)$ symmetry the corresponding quadratic integral
is replaced by its square root. For $SO(2)$ this gives a global $S^1$ action, while for $SO(3)$ this gives an almost global $S^1$ action.
\end{thm}
In particular, the oblate, prolate, and spherical systems have one global $S^1$ action each, the cylindrical system has two global $S^1$ actions, and the spherical and the Lam\'e system have an almost-global $S^1$ action each. In addition, we find that the prolate system is generalised semi-toric and the cylindrical system is toric, but the $S^1 \times S^1$ action is not effective. By an almost global $S^1$ action we mean an action that fails to be defined on the preimage of an isolated point of the image of the momentum map. The corresponding spherical singularity
in the spherical system \cite{RonanThesis} also appears in the Lam\'e system.
\subsection{Prolate Coordinates }
We begin by considering prolate coordinates. It will
be shown that the corresponding integrable system is generalised semi-toric and has non-trivial monodromy. In addition, the global action triangle is half of the semi-toric polygon invariant.
\subsubsection{Separation of Variables}
Prolate coordinates on $S^{3}$, denoted by $(1\ (2\ 3)\ 4)$, are
a degeneration of ellipsoidal coordinates arising from setting the
middle two semi-major axes equal, i.e. $e_{2}=e_{3}$. We normalise the $e_{j}$ according to $(e_{1},e_{2}=e_{3},e_{4})=(0,1,b)$, i.e. $a=1$.
From \cite{Kalnins1986}, an explicit representation of prolate coordinates
is
\[
\begin{aligned}x_{1}^{2} & =\frac{s_{1}s_{3}}{b}, & & x_{2}^{2}=-\frac{\left(s_{1}-1\right)s_{2}\left(s_{3}-1\right)}{b-1},\\
x_{3}^{2} & =\frac{\left(s_{1}-1\right)\left(s_{2}-1\right)\left(s_{3}-1\right)}{b-1}, & & x_{4}^{2}=\frac{\left(b-s_{1}\right)\left(b-s_{3}\right)}{(b-1)b},
\end{aligned}
\]
where $0\le s_{1},s_{2}\le1\le s_{3}\le b$. Since $s_2$ is an ignorable coordinate, the Hamilton--Jacobi equation can be separated easily to give the integrals
$(2H,G_{pro},\ell_{23})$ where $G_{pro}=b\ell_{12}^{2}+b\ell_{13}^{2}+\ell_{14}^{2}$.
The corresponding momenta are
\begin{equation}
\begin{aligned}p_{i}^{2} & =\frac{-2hs_{i}^{2}+(g+2h+(b-1)l^{2})s_{i}-g}{4s_{i}(s_{i}-b)(s_{i}-1)^{2}}, & & & p_{2}^{2} & =\frac{l^{2}}{4s_{2}(1-s_{2})}\end{aligned}
\label{eq:sep momenta prolate}
\end{equation}
where $i\in\{1,3\}$ and $(l,g)$ are the
values of $\ell_{23}$ and $G_{pro}$, respectively. We call the triple $(2H,\ell_{23},G_{pro})$ on
$T^{*}S^{3}$ the $1$-parameter family of prolate integrable systems. Similarly, $(\ell_{23},G_{pro})$ gives a $1$-parameter family of reduced prolate integrable systems on $S^2\times S^2$.
Note that
we have chosen $\ell_{23}$ as an integral since it is naturally a
global $S^{1}$ action, unlike its square.
The quadratic integrals can be obtained as a limit of the ellipsoidal integrable system.
\begin{lem}
\label{Prolate degen proof }The integrals $\left(\ell_{23}^2,G_{pro}\right)$
as well as the separated momenta \eqref{eq:sep momenta prolate} can
be obtained by smoothly degenerating their ellipsoidal counterparts
\eqref{eq:Separation constants ellipsoidal} and \eqref{eq:psq ellipsoidal-1}.
\end{lem}
\begin{proof}
The transformation from ellipsoidal to prolate coordinates
is given by
\begin{equation}
\begin{aligned}e_{3} & =e_{2}+\epsilon, & & & s_{2} & =e_{2}+\epsilon\tilde{s}_{2}, & & & p_2 & =\frac{\tilde{p}_2}{\epsilon}
\end{aligned}
\end{aligned}
\label{eq:legit pro trans}
\end{equation}
in the limit $\epsilon\to0$ where $\tilde{s}_{2}\in[0,1]$. The transformation from $(s_2,p_2)$ to $(\tilde{s}_{2},\tilde{p}_{2})$ is canonical.
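Indeed, the rescaling preserves the symplectic form:
\[
ds_{2}\wedge dp_{2}=\left(\epsilon\, d\tilde{s}_{2}\right)\wedge\left(\frac{1}{\epsilon}\,d\tilde{p}_{2}\right)=d\tilde{s}_{2}\wedge d\tilde{p}_{2}.
\]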
Let $(\tilde{\eta}_{1},\tilde{\eta}_{2})=\left.(\eta_{1},\eta_{2})\right|_{e_{3}=e_{2}+\epsilon}.$ Substituting (\ref{eq:legit pro trans}) into (\ref{eq:Separation constants ellipsoidal})
and taking the limit gives
\[
\begin{aligned}G_{pro} & =\tilde{\eta}_{2} & & & \ell_{23}^{2} & =\frac{1}{b-1}(\tilde{\eta}_{1}-\tilde{\eta}_{2}-2H)\end{aligned}
\]
where we have normalised the $e_{j}$ by setting $(e_{1},e_{2}=e_{3},e_{4})=(0,1,b)$.
For the separated equations, we insert (\ref{eq:legit pro trans})
into (\ref{eq:psq ellipsoidal-1}) and expand about $\epsilon=0$
to obtain
\begin{equation}
\begin{aligned}p_{i}^{2} & =\frac{\tilde{\eta}_{1}+s_{i}\left(s_{i}-\tilde{\eta}_{2}\right)}{4\left(e_{1}-s_{i}\right)\left(s_{i}-e_{2}\right){}^{2}\left(s_{i}-e_{4}\right)}+O(\epsilon), & & & \tilde{p}_{2}^{2} & =\frac{e_{2}(e_{2}-\tilde{\eta}_{2})+\tilde{\eta}_{1}}{4\tilde{s}_{2}(\tilde{s}_{2}-1)(e_{1}-e_{2})(e_{2}-e_{4})}+O(\epsilon).\end{aligned}
\label{eq:V pro}
\end{equation}
Taking the limit of (\ref{eq:V pro}) as $\epsilon\to0$ and dropping
the tildes gives (\ref{eq:sep momenta prolate}).
\end{proof}
In the prolate limit, $F_{2}$ and $F_{3}$ become singular. However,
multiplying both integrals by $(e_{3}-e_{2})$ gives
\[
\lim_{\epsilon\to0}(e_{3}-e_{2})F_{2}=-\lim_{\epsilon\to0}(e_{3}-e_{2})F_{3}=\ell_{23}^{2}.
\]
The other two integrals $F_{1}$ and $F_{4}$ degenerate smoothly
to
\[
\begin{aligned}F_{1,pro} & =\frac{-G_{pro}}{b}, & & F_{4,pro} & =\frac{2Hb-G_{pro}-b\ell_{23}^{2}}{(b-1)b}.\end{aligned}
\]
\subsubsection{Critical Points and Momentum Map}
Since the integrals of the prolate system are significantly simpler than those of the ellipsoidal system, we can easily compute the critical points and values directly. However, it is interesting to note that we can also use the method of compatible Poisson structures with the matrix
$C=\text{diag}(0,1,1+\epsilon,b)$ for $0<\epsilon<b-1$ for this computation.
Using Proposition \ref{fact1} with this $C$,
we get
\[
\psi_{pro}(\lambda)=2\frac{-2h\lambda^{2}+I_{1}\lambda+I_{2}}{\lambda(\lambda-1)(\lambda-1-\epsilon)(\lambda-b)}
\]
with $(I_{1},I_{2})=(G_{pro}+2h+(b-1)\ell_{23}^{2}+\epsilon(\ell_{12}^{2}+\ell_{14}^{2}+\ell_{24}^{2}),G_{pro}+\epsilon(b\ell_{12}^{2}+\ell_{14}^{2}))$. While the integrals $I_1$ and $I_2$ may appear complicated, the system $(I_1,I_2)$ is equivalent to the system $(G_{pro},\ell_{23}^2)$ in the limit as $\epsilon\to 0$. We can therefore find the critical points and values of the system $(I_1,I_2)$ and then take the
limit $\epsilon\to0$ at the end of the calculation to recover the correct results for the system $(G_{pro},\ell_{23}^2)$.
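Explicitly, setting $\epsilon=0$ in the expressions for $(I_{1},I_{2})$ above shows that the two systems are related by the invertible affine map
\[
G_{pro}=I_{2},\qquad\ell_{23}^{2}=\frac{I_{1}-I_{2}-2h}{b-1}.
\]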
\begin{figure}
\begin{centering}
\includegraphics[width=7cm,height=6cm]{Pro-MM-opened.png}\quad\includegraphics[width=7cm,height=6cm]{Pro-MM-unopened.png}\caption{a) ``Opened'' momentum map with $b=2.4$. b) ``Unopened'' momentum
map using $\ell_{23}^{2}$ as an integral. \label{fig:Prolate Root and Momentum map}}
\par\end{centering}
\end{figure}
\begin{prop}
The momentum map for the reduced prolate integrable system is the region bounded
by the curve $\mathcal{P}_{1}:G_{pro}=b(1-\ell_{23}^{2})$ and line
$\mathcal{P}_{2}:G_{pro}=0$ shown in Figure~\ref{fig:Prolate Root and Momentum map}~a). There is an isolated critical value at $(0,1)$.
\end{prop}
\begin{proof}
Critical points of the system $(\ell_{23}^2,G_{pro})$ can be computed directly or by applying Proposition~\ref{Bolsinov Theorem}:
\begin{enumerate}
\item The blue line $\mathcal{P}_{2}:G_{pro}=0$ has $\lambda=0$ and the critical
points are parametrised by $\bm{L}=(0,0,0,\ell_{23},\ell_{24},\ell_{34})$
with $\ell_{23}^{2}+\ell_{24}^{2}+\ell_{34}^{2}=1$ and $\ell_{23}=l$.
These are co-dimension 1 elliptic points.
\item The green line $\mathcal{P}_{1}:G_{pro}=b(1-\ell_{23}^{2})$ in Figure \ref{fig:Prolate Root and Momentum map} b) has $\lambda=b$
and the critical points are parametrised by $\bm{L}=(\ell_{12},\ell_{13},0,\ell_{23},0,0)$
with $\ell_{12}^{2}+\ell_{13}^{2}+\ell_{23}^{2}=1$ and $\ell_{23}=l$.
These are co-dimension 1 elliptic points.
\item The line $\ell_{23}^2=0$ corresponds to both $\lambda=1$ and $\lambda=1+\epsilon$, as well as to the curve $I_{2}=\frac{I_{1}^{2}}{8h}$ corresponding to the double root $\lambda\in[1,1+\epsilon]$. These are degenerate critical values of the system $(\ell_{23}^2,G_{pro})$.
\end{enumerate}
For the reduced prolate integrable system $(\ell_{23},G_{pro})$, we see that the line $\mathcal{P}_2$ remains critical and has $l\in[-1,1]$. The line $\mathcal{P}_1$ for $(\ell_{23}^2,G_{pro})$ becomes a parabola for $(\ell_{23},G_{pro})$. The line $\ell_{23}=0$ becomes a set of regular values after changing from $\ell_{23}^2$ to $\ell_{23}$, with the exception of the isolated point $(\ell_{23},G_{pro})=(0,1)$ which has critical
points $\ell_{14}=\pm 1$. This
is a focus-focus point and its fibre on $S^{2}\times S^{2}$ is a doubly
pinched torus.
\end{proof}
\subsubsection{Action Map and Monodromy}\label{pro-act}
From (\ref{eq:sep momenta prolate}) and the same reasoning used to
obtain (\ref{eq:ACtions ell dfef}), we have the following formulae
for the actions
\begin{equation}
\begin{aligned}J_{1}=\frac{2}{\pi}\int_{0}^{\min(r_{1},1)}p_{1}ds & & J_{2} & =\frac{2}{\pi}\int_{0}^{1}p_{2}ds & & & J_{3}=\frac{2}{\pi}\int_{\max(1,r_{2})}^{b}p_{3}ds\end{aligned}
\label{eq:action prol}
\end{equation}
where the $p_{k}$ are given in (\ref{eq:sep momenta prolate}).
Here $(r_{1},r_{2})$ are the roots of $p_{1,3}^{2}$ from (\ref{eq:sep momenta prolate})
where $0\le r_{1}\le1\le r_{2}\le b$. Note that $J_{2}$ simplifies
to $\left|\ell_{23}\right|$. Like for the ellipsoidal system, the
prolate actions also satisfy (\ref{eq:action cond el}). The action
map for the prolate system is shown in Figure \ref{fig:Action and polygon invariant}
a) where the black dot corresponding to the focus-focus point is located
at $\frac{2}{\pi}\left(\sin^{-1}\left(\frac{1}{\sqrt{b}}\right),0,\frac{\pi}{2}-\sin^{-1}\left(\frac{1}{\sqrt{b}}\right)\right)$.
\begin{figure}
\begin{centering}
\includegraphics[width=7cm,height=6cm]{Pro-AM.png}\quad\includegraphics[width=6.5cm,height=6cm]{Pro-AM-proj.png}
\par\end{centering}
\caption{a) Action map for the prolate system with $b=2.4$ where the black dot is the image of the focus-focus
point. b) Semi-toric polygon invariant for the reduced prolate system. \label{fig:Action and polygon invariant}}
\end{figure}
The reduced prolate system is a two degree of freedom integrable system
where one of the integrals is a global $S^{1}$ action and all singularities
are of either elliptic or focus-focus type. Thus, we have the following.
\begin{cor}
The reduced prolate system $(\ell_{23},G_{pro})$ on $S^{2}\times S^{2}$
is a generalised semi-toric system.
\end{cor}
Semi-toric systems have been globally classified using $5$ symplectic
invariants \cite{Pelayo2009}. One of these is the polygon invariant,
which is a family of rational convex polygons. This is a generalisation
of the Delzant polytope (see, e.g., \cite{BSMF_1988__116_3_315_0})
and allows us to compare the standard affine structure of $\mathbb{R}^{2}$
with that of the momentum map \cite{Alonso2019,Sepe2017}.
In Figure \ref{fig:Action and polygon invariant} b) we show one representative of the polygon invariant. This is simply the projection of the action map onto the $(\ell_{23},J_{1})$
axes with both signs of $\ell_{23}$ considered. The red vertex at
$(0,1)$ is a fake corner and is the result of ``opening up'' from $|\ell_{23}|$ to $\ell_{23}$.
For more information on the classification of semi-toric systems,
see \cite{Sepe2017,Alonso2019}.
Another symplectic invariant of a semi-toric system is the height invariant, which is the position of the focus-focus point in the image of the action map. Note that this is the limit of the image of the hyperbolic-hyperbolic point in the action map for the degeneration $a=1$. Specialising \eqref{eq:HHptaction} to this case gives $\frac{2}{\pi} \cos^{-1} \sqrt{1/b}$ for the height invariant.
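In detail, the degeneration $a=1$ corresponds to $(e_{1},e_{2},e_{3},e_{4})=(0,1,1,b)$, so $v_{1}=v_{2}=\sqrt{1/b}$ in \eqref{eq:HHptaction} and
\[
(J_{1},J_{2},J_{3})=\frac{2}{\pi}\left(\sin^{-1}\sqrt{1/b},\,0,\,\cos^{-1}\sqrt{1/b}\right),
\]
recovering the location of the focus-focus point in the action map and the height invariant stated above.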
Another property of semi-toric systems
is the non-trivial monodromy of the actions; the focus-focus
equilibrium implies that one can only locally construct a smooth set
of action variables. Monodromy has been well studied, both classically
and quantum mechanically, see e.g. \cite{Dawson2022,Dullin2016,DAVISON20072437,Chiscop_2019}.
The prolate system has non-trivial monodromy.
\begin{lem}
The reduced prolate system has non-trivial monodromy with monodromy matrix
\begin{equation}
\mathfrak{M}=\left(\begin{array}{ccc}
1 & 2 & 0\\
0 & 1 & 0\\
0 & -2 & 1
\end{array}\right).\label{eq:Monodromy matrix}
\end{equation}
\end{lem}
\begin{proof}
Let $C_{1}$ and $C_{3}$ be cycles that enclose the intervals $[0,\min(1,r_{1})]$
and $[\max(1,r_{2}),b]$ respectively. We rewrite (\ref{eq:action prol})
as
\begin{equation}
\begin{aligned}J_{1}\coloneqq\frac{1}{2\pi}\oint_{C_{1}}p_{1}ds_{1}, & & J_{3}\coloneqq\frac{1}{2\pi}\oint_{C_{3}}p_{3}ds_{3}\end{aligned}
.\label{eq:action prolate}
\end{equation}
We observe that if $g>1$ then $r_{1}\to1^{-}$ as $l\to0$,
while if $g<1$, then $r_{2}\to1^{+}$. We now show
that the actions \eqref{eq:action prolate}, while continuous everywhere, are not globally smooth.
Consider the slope of the action $J_i$ considered as a function of $l$,
\[
2\pi W_{i}\coloneqq\frac{\partial J_{i}}{\partial l}=\oint_{C_{i}}\frac{(b-1)l}{2(b-s_{i})(s_{i}-1)^{2}p(s_{i})}ds_{i}.
\]
In the limit $l\to0$, if $C_{i}$ encloses $s_{i}=1$ then $W_{i}\to-\text{sgn}(l)$,
otherwise it vanishes. From our analysis of $C_{1}$ and $C_{3}$
around $s_{i}=1$ we have
\begin{equation}
\begin{aligned}\lim_{l\to0}\frac{\partial J_{1}}{\partial l}= & -\kappa_{1}\text{sgn}(l) & & \lim_{l\to0}\frac{\partial J_{3}}{\partial l}=-\kappa_{3}\text{sgn}(l)\end{aligned}
\label{eq:K1 <-1}
\end{equation}
where $(\kappa_{1},\kappa_{3})=(1,0)$ when $g<1$, otherwise
$(\kappa_{1},\kappa_{3})=(0,1)$. Thus, the actions $J_{1}$ and $J_{3}$
are continuous but not differentiable at $l=0$.
For $l>0$, let $J_{+}=\left(J_{1},J_{2},J_{3}\right)^{t}$
and similarly for $J_{-}$. Note that $J_{1}$ and $J_{3}$ are even
functions of $l$ while $J_{2}$ is odd. This means $J_{-}(-l)=SJ_{+}(l)$
where $S=\text{diag}(1,-1,1)$. We are now interested in finding unimodular
matrices $M_{1},M_{2}\in SL(3,\mathbb{Z})$ such that $J_{+}$ and
$M_{i}J_{-}$ are locally smooth across $l=0$. To ensure continuity at $l=0$
we require
\[
\begin{aligned}J_{+}=M_{1}J_{-}=M_{1}SJ_{-}=M_{1}J_{+}, & & & g>1\\
J_{+}=M_{2}J_{-}=M_{2}SJ_{-}=M_{2}J_{+}, & & & g<1.
\end{aligned}
\]
The above relations imply that $\left(J_1,0,J_{3}\right)^{t}$ is
an eigenvector of both $M_{1}$ and $M_{2}$ with eigenvalue $+1$.
For arbitrary $J_{1},J_{3}$ the corresponding eigenvector equation implies
that $M_{i}$ has the form
\[
M_{i}=\begin{pmatrix}1 & \alpha_i & 0\\
0& 1 & 0\\
0& \beta_{i} & 1
\end{pmatrix}.
\]
For the actions to be smoothly joined when $g>1$ we require
\[
M_{1}\frac{\partial J_{-}}{\partial l}=\frac{\partial J_{+}}{\partial l}
\]
and similarly for $g<1$ and $M_{2}$. The limits in (\ref{eq:K1 <-1}) force
$(\alpha_{1},\beta_{1})=(0,-2)$ and $(\alpha_{2},\beta_{2})=(-2,0)$. The corresponding
monodromy matrix is given by $\mathfrak{M}=(M_{2}S)^{-1}(M_{1}S)$ which we compute
to be (\ref{eq:Monodromy matrix}).
\end{proof}
The monodromy of the reduced system with actions $(J_1,\ell_{23})$ is obtained by restriction to the top left $2\times2$ block of $\mathfrak{M}$.
It should be stressed that the integrable system with Hamiltonian $H$ does not have monodromy: it is superintegrable, and does not even have dynamically defined tori.
However, the fibration defined by the three commuting functions $(H, \ell_{23}, G_{pro})$
on $T^*S^3$ has monodromy as computed. Similarly, the commuting functions $(\ell_{23}, G_{pro})$ on $S^2 \times S^2$ have monodromy given by the top left block of $\mathfrak{M}$.
\subsection{Oblate Coordinates}
Even though the definition
of these coordinates is similar to the prolate case, the corresponding integrable
system is significantly different. In particular, even though it does have a global $S^1$ action, it is not semi-toric because of the appearance of hyperbolic and degenerate singularities.
\subsubsection{Separation of Variables}
Oblate coordinates, denoted by $(1\ 2\ (3\ 4))$ and $((1\ 2)\ 3\ 4)$,
lie on opposite sides of the dotted line in Figure \ref{fig:Stasheff S3}. They are equivalent, since flipping the ordering of the $e_i$ by applying $e_i\mapsto-e_i$ and then reordering maps one to the other.
Consequently, the corresponding integrable systems are equivalent. We focus on the $(1\ 2\ (3\ 4))$ coordinates
and normalise according to $(e_{1},e_{2},e_{3}=e_{4})=(0,1,a)$. This system is equivalent to the $((1\ 2)\ 3\ 4)$ with $(e_{1}=e_{2},e_{3},e_{4})=(0,1,\frac{a}{a-1})$. An
explicit definition of these coordinates is given by
\begin{equation}
\begin{aligned}x_{1}^{2} & =\frac{s_{1}s_{2}}{a}, & & x_{2}^{2}=\frac{-\left(s_{1}-1\right)\left(s_{2}-1\right)}{a-1},\\
x_{3}^{2} & =\frac{\left(s_{1}-a\right)\left(s_{2}-a\right)s_{3}}{a\left(a-1\right)}, & & x_{4}^{2}=\frac{\left(s_{1}-a\right)\left(s_{2}-a\right)\left(1-s_{3}\right)}{a\left(a-1\right)},
\end{aligned}
\label{eq:oblate def}
\end{equation}
where $0\le s_{1},s_{3}\le1\le s_{2}\le a$. The integrals are $(2H,\ell_{34},G_{obl})$
where $G_{obl}=a\ell_{12}^{2}+\ell_{13}^{2}+\ell_{14}^{2}$. Letting $(l,g)$ be the values of $\ell_{34}$ and
$G_{obl}$ respectively, we obtain the separated
momenta
\begin{equation}
\begin{aligned}p_{i}^{2} & =\frac{-2hs_{i}^{2}+(2ah+g-(a-1)l^{2})s_{i}-ag}{4s_{i}(s_{i}-1)(s_{i}-a)^{2}} & & & p_{3}^{2} & =\frac{l^{2}}{4s_{3}(1-s_{3})}\end{aligned}
\label{eq:psq oblate}
\end{equation}
where $i\in\{1,2\}$. We call the triple $(2H,\ell_{34},G_{obl})$ on $T^{*}S^{3}$ the
1-parameter family of oblate integrable systems and $(\ell_{34},G_{obl})$ the corresponding reduced oblate integrable system.
We have a similar result to Lemma \ref{Prolate degen proof } for the oblate system.
\begin{lem}
The integrals $(\ell_{34}^2,G_{obl})$
as well as the separated momenta \eqref{eq:psq oblate} can
be obtained by smoothly degenerating their ellipsoidal counterparts
\eqref{eq:Separation constants ellipsoidal} and \eqref{eq:psq ellipsoidal-1}.
\end{lem}
\begin{proof}
Using the transformation
\[(e_{4},s_{3},p_3)=(e_{3}+\epsilon,e_{3}+\epsilon\tilde{s}_{3}, \frac{\tilde{p}_3}{\epsilon})\]
where $\tilde{s}_{3}\in[0,1]$ and following the same procedure as Lemma \ref{Prolate degen proof } gives the result.
\end{proof}
In the oblate limit, the Uhlenbeck integrals $\tilde{F}_i$ are
\[
\begin{aligned}\tilde{F}_{1}=-\frac{G_{obl}}{a}, & & \tilde{F}_{2}=\frac{G_{obl}+\ell_{34}^2-2h}{a-1}, & & \tilde{F}_{3}=\tilde{F}_4=\ell_{34}^2.\end{aligned}
\]
\subsubsection{Critical Points and Momentum Map}
\begin{figure}
\begin{centering}
\includegraphics[width=7cm,height=6cm]{Obl-MM-opened.png}\quad\includegraphics[width=7cm,height=6cm]{Obl-MM-unopened.png}
\par\end{centering}
\caption{a) Momentum map for the oblate system $(\ell_{34},G_{obl})$ with $a=2.4$.
There are two chambers, labelled $I$ and $II$. b) ``Unopened''
momentum map taking $\ell_{34}^{2}$ as integral. \label{fig:Oblate root and MM}}
\end{figure}
We can also use the method of compatible Poisson structures to aid
in studying this system. Using the matrix $C=\text{\text{diag}}(0,1,a,a+\ensuremath{\epsilon})$
in Proposition \ref{fact1} gives the equation
\[
\psi_{obl}(\lambda)=2\frac{-2h\lambda^{2}+I_{1}\lambda+I_{2}}{\lambda(\lambda-1)(\lambda-a)(\lambda-a-\epsilon)}
\]
where $(I_{1},I_{2})=(G_{obl}+2ah-(a-1)\ell_{34}^{2}+\epsilon(\ell_{12}^{2}+\ell_{13}^{2}+\ell_{23}^{2}),-aG_{obl}-a\epsilon\ell_{12}^{2}-\epsilon\ell_{13}^{2})$.
According to Corollary~\ref{biham-critical-values}, the critical values
occur at the curve $I_{2}=\frac{I_{1}^{2}}{8h}$ as well as the lines
$I_{2}=0$, $I_{2}=2h-I_{1}$, $I_{2}=2ha^{2}-aI_{1}$ and $I_{2}=2h(a+\epsilon)^{2}-(a+\epsilon)I_{1}$
when $\lambda=0,1,a,a+\epsilon$ respectively.
\begin{prop}
The critical values of the momentum map for the reduced oblate integrable system are the curves $\mathcal{O}_{1}:G_{obl}=(\sqrt{a}-\sqrt{a-1}\left|\ell_{34}\right|)^{2},\mathcal{O}_{2}:G_{obl}=1-\ell_{34}^{2}$
and $\mathcal{O}_{3}:G_{obl}=0$. The momentum map is shown in Figure
\ref{fig:Oblate root and MM} a) with $2h=1$.
\end{prop}
\begin{proof} Applying Proposition~\ref{Bolsinov Theorem} and following a similar calculation as the ellipsoidal case we get:
\begin{enumerate}
\item The blue line $\mathcal{O}_{3}:G_{obl}=0$ has $\lambda=0$ and critical
points parametrised by $\bm{L}=(0,0,0,\ell_{23},\ell_{24},\ell_{34})$
with $\ell_{23}^{2}+\ell_{24}^{2}+\ell_{34}^{2}=1$ and $\ell_{34}=l_{obl}$.
\item The red and yellow curve
$\mathcal{O}_{2}:G_{obl}=1-\ell_{34}^{2}$ has $\lambda=1$
and the critical points are parametrised by $\bm{L}=(0,\ell_{13},\ell_{14},0,0,\ell_{34})$
with $\ell_{13}^{2}+\ell_{14}^{2}+\ell_{34}^{2}=1$ and $\ell_{34}=l_{obl}$.
\item The cyan curve $\mathcal{O}_{1}:G_{obl}=(\sqrt{a}-\sqrt{a-1}\left|\ell_{34}\right|)^{2}$
has $\lambda\in[1,a]$ and critical points
\[
\begin{aligned}\bm{L}_{\pm}= & \left(\frac{a_{1}\sqrt{\lambda}\sqrt{\lambda-1}}{\sqrt{2}},\mp\frac{b_{2}\sqrt{\lambda}\sqrt{a-\lambda}}{\sqrt{2}},-\frac{b_{3}\sqrt{\lambda}\sqrt{a-\lambda}}{\sqrt{2}},\right.\\
& \left.\mp\frac{b_{3}\sqrt{\lambda-1}\sqrt{a-\lambda}}{\sqrt{2}},\frac{b_{2}\sqrt{\lambda-1}\sqrt{a-\lambda}}{\sqrt{2}},\mp\frac{a_{1}(a-\lambda)}{\sqrt{2}}\right)
\end{aligned}
\]
with Pl\"{u}cker relation $a_{1}^{2}=b_{2}^{2}+b_{3}^{2}$ and $\bm{L}\cdot\bm{L}=\frac{1}{2}a(a-1)(b_{2}^{2}+b_{3}^{2})=1$.
This forces $a_{1}=\pm\frac{\sqrt{2}}{\sqrt{a(a-1)}}$ and gives a
parametrisation $(\ell_{34},G_{obl})=\left(\mp\frac{(a-\lambda)}{\sqrt{a(a-1)}},\frac{\lambda^{2}}{a}\right)$ in terms of $\lambda$. This curve exists only for $0<|\ell_{34}|<\sqrt{\frac{a-1}{a}}$
which corresponds to a double root $1<\lambda<a$.
\end{enumerate}
The lines $I_{2}=2ha^{2}-aI_{1}$ and $I_{2}=2h(a+\epsilon)^{2}-(a+\epsilon)I_{1}$
both give the line $\ell_{34}=0$ (orange and magenta segments in Figure \ref{fig:Oblate root and MM} b)), which are degenerate critical values
for the system $(\ell_{34}^{2},G_{obl})$ since the vector field
generated by $\ell_{34}^{2}$ vanishes at $\ell_{34}=0$. However,
for the oblate integrable system $(\ell_{34},G_{obl})$ the line
$\ell_{34}=0$ becomes a set of regular values (except at three points). Direct computation
using the vector fields of $(\ell_{34},G_{obl})$ confirms these results.
\end{proof}
Let the intersection of $\mathcal{O}_{i}$ and $\mathcal{O}_{j}$
be denoted by $o_{ij\pm}$ where the sign is determined by whether
the intersection occurs for a positive or negative value of $\ell_{34}$.
The intersections $o_{12\pm}$ at $(\ell_{34},G_{obl})=(\pm\sqrt{\frac{a-1}{a}},\frac{1}{a})$
are tangential. The other $3$ intersections at $o_{11}=(0,a)$ and
$o_{23\pm}=(\pm1,0)$ are transverse.
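These values are consistent with the parametrisation of $\mathcal{O}_{1}$ obtained in the proof above: setting $\lambda=1$ gives
\[
(\ell_{34},G_{obl})=\left(\mp\sqrt{\frac{a-1}{a}},\frac{1}{a}\right),
\]
i.e. the two tangential points $o_{12\pm}$, while $\lambda=a$ gives $(0,a)=o_{11}$.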
The tangential intersections $o_{12\pm}$ are degenerate pitchfork singularities and their fibres are single circles $S^{1}$ on $S^2\times S^2$.
The points $o_{23\pm}$ are of elliptic-elliptic type. The point $o_{11}$ is also elliptic-elliptic with 2 critical points $\bm{L}=(\pm 1,0,0,0,0,0)$.
The curves $\mathcal{O}_{1},\mathcal{O}_{3}$ as well as the yellow
parts of $\mathcal{O}_{2}$ are all codimension one elliptic. The
fibre of $\mathcal{O}_{3}$ and the yellow segments are single
$S^1$, while the fibre of $\mathcal{O}_{1}$ is $2S^1$. The
red part of $\mathcal{O}_{2}$ is codimension one hyperbolic and its
fibre is $B\times S^1$.
The fibre of a regular value in chamber $I$ is $T^{2}$ while the
fibre of a regular value in chamber $II$ is $2T^{2}$.
\subsubsection{Actions} \label{obl-act}
\begin{figure}
\begin{centering}
\includegraphics[width=8cm,height=7cm]{Obl-AM.png}
\par\end{centering}
\caption{Action Map for the oblate system with $a=2.4$.\label{fig:Action Map Oblate}}
\end{figure}
Like in the prolate case, one action for the oblate system, $J_{3,obl}\coloneqq\left|\ell_{34}\right|$,
is trivial. The other two non-trivial actions are
\[
\begin{aligned}J_{1}=\frac{2}{\pi}\int_{0}^{\min(r_{1},1)}p_{1}ds, & & & J_{2}=\frac{2}{\pi}\int_{\max(r_{1},1)}^{\min(r_{2},a)}p_{2}ds\end{aligned}
\]
where $r_{2}\ge1$ and $0\le r_{1}\le r_{2}\le a$. Theorem~\ref{action thm} also applies here. The action map is shown in Figure \ref{fig:Action Map Oblate}. For the interior (red)
curve $\gamma_{obl}$ we have $r_1=1$ and $r_2=a(1-l^2)\geq1$ with $g=1-l^2$. In terms of the angular momentum $l$, where $|l|\leq\sqrt{\frac{a-1}{a}}$, this curve is parametrised by
\[
\gamma_{obl}(l)=\left(\frac{2}{\pi}\left(\sin ^{-1}\left(t_1\right)+l \tan ^{-1}\left(t_2\right)\right)-|l|,1-\frac{2}{\pi}\left(\sin ^{-1}\left(t_1\right)+l \tan ^{-1}\left(t_2\right)\right),|l|\right)
\]
where $t_1=\frac{1}{\sqrt{a(1-l^2)}}$ and $t_2=\frac{\sqrt{a(1-l^2)-1}}{l}$.
Call the intersections of $\gamma_{obl}$ with the boundary of the
action map $\mathfrak{O}_{1}$ (yellow/cyan) and $\mathfrak{O}_{2}$
(magenta/orange). The point $\mathfrak{O}_{1}$ has coordinates $\gamma_{obl}(\sqrt{\frac{a-1}{a}})=(1-\sqrt{\frac{a-1}{a}},0,\sqrt{\frac{a-1}{a}})$,
while $\mathfrak{O}_{2}$ is located at $\gamma_{obl}(0)=\frac{2}{\pi}(\sin^{-1}\left(\frac{1}{\sqrt{a}}\right),\cos^{-1}\left(\frac{1}{\sqrt{a}}\right),0)$.
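The location of $\mathfrak{O}_{1}$ follows directly from the parametrisation: at $l=\sqrt{\frac{a-1}{a}}$ we have $a(1-l^{2})=1$, so $t_{1}=1$ and $t_{2}=0$, and hence
\[
\gamma_{obl}\left(\sqrt{\tfrac{a-1}{a}}\right)=\left(\frac{2}{\pi}\cdot\frac{\pi}{2}-l,\,1-\frac{2}{\pi}\cdot\frac{\pi}{2},\,l\right)=\left(1-\sqrt{\tfrac{a-1}{a}},\,0,\,\sqrt{\tfrac{a-1}{a}}\right).
\]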
\subsection{Lam\'{e} Coordinates}
The Lam\'e system is unusual in a number of ways. In this case we actually need to make use of the fact that the parameters $e_i$ live on the real projective line. This is the reason why this case is not visible in the original normalised $ab$-parameter space and a blow-up is required. Furthermore, it is a family that has the larger symmetry group $SO(3)$. A larger symmetry group is in some sense related to super-integrability; however, since we do not have a Hamiltonian in the reduced system, it is harder to define what this means. As we will see, after another reduction this system becomes the Euler top (see the Appendix).
\subsubsection{Separation of Variables and St\"{a}ckel System}
Lam\'{e} coordinates are an extension of ellipsoidal coordinates from $S^{2}$
onto $S^{3}$ and arise from limiting to three equal semi-major axes. As in the oblate case, there are two equivalent cases to consider: $(1\ (2\ 3\ 4))$
and $((1\ 2\ 3)\ 4)$. Here we only discuss the $(1\ (2\ 3\ 4))$
coordinates which are defined as follows
\begin{equation}
\begin{aligned}x_{1}^{2} & =s_{1}, & x_{3}^{2} & =\frac{\left(s_{1}-1\right)\left(f_{2}-s_{2}\right)\left(f_{2}-s_{3}\right)}{\left(f_{1}-f_{2}\right)\left(f_{2}-f_{3}\right)},\\
x_{2}^{2} & =-\frac{\left(s_{1}-1\right)\left(s_{2}-f_{1}\right)\left(s_{3}-f_{1}\right)}{\left(f_{2}-f_{1}\right)\left(f_{3}-f_{1}\right)}, & x_{4}^{2} & =\frac{\left(s_{1}-1\right)\left(f_{3}-s_{2}\right)\left(f_{3}-s_{3}\right)}{\left(f_{2}-f_{3}\right)\left(f_{3}-f_{1}\right)},
\end{aligned}
\label{eq:Lame coord def}
\end{equation}
where $0\le s_{1}\le1$ and $0\le f_{1}\le s_{2}\le f_{2}\le s_{3}\le f_{3}$.
A possible St\"{a}ckel matrix for these coordinates is
\begin{equation}
\Phi_{L}=\frac{1}{4}\left(\begin{array}{ccc}
-\frac{1}{\left(s_{1}-1\right)s_{1}} & -\frac{1}{\left(s_{1}-1\right){}^{2}s_{1}} & 0\\
0 & \frac{1}{\left(f_{3}-s_{2}\right)\left(s_{2}-f_{2}\right)} & \frac{1}{\left(f_{3}-s_{2}\right)\left(s_{2}-f_{1}\right)\left(s_{2}-f_{2}\right)}\\
0 & \frac{1}{\left(f_{3}-s_{3}\right)\left(s_{3}-f_{2}\right)} & \frac{1}{\left(f_{3}-s_{3}\right)\left(s_{3}-f_{1}\right)\left(s_{3}-f_{2}\right)}
\end{array}\right).\label{eq:Lame stack}
\end{equation}
with integrals $(2H,2H-F_{L},G_{L}-f_{1}(2H-F_{L}))$
where $(F_{L},G_{L})=(\ell_{12}^{2}+\ell_{13}^{2}+\ell_{14}^{2},f_{1}\ell_{34}^{2}+f_{2}\ell_{24}^{2}+f_{3}\ell_{23}^{2})$. From (\ref{eq:Lame stack}), the separated momenta are given by
\begin{equation}
\begin{aligned}p_{1}^{2} & =\frac{f_{L}-2hs_{1}}{4\left(s_{1}-1\right){}^{2}s_{1}}, & & & p_{k}^{2} & =-\frac{(f_{L}-2h)s_{k}+g_{L}}{4\left(f_{3}-s_{k}\right)\left(s_{k}-f_{1}\right)\left(s_{k}-f_{2}\right)},\end{aligned}
\label{eq:lame sep momenta}
\end{equation}
where $k=2,3$ and $(f_{L},g_{L})$ are functional values of $(F_{L},G_{L})$. We call the triple $(2H,F_{L},G_{L})$ on $T^{*}S^{3}$ the
Lam\'{e} integrable system and $(F_L,G_L)$ the corresponding reduced Lam\'{e} integrable system on $S^2\times S^2$.
The important feature of this case is the appearance of the integral $F_L$ with an $SO(3)$ symmetry given by the rotations generated by the $\ell_{1i}$ using $B_L$. Similar to the prolate and oblate families, the Lam\'{e} system can also be obtained as a limit of the ellipsoidal system.
\begin{lem}
The integrals $(F_{L},G_{L})$ and separated momenta (\ref{eq:lame sep momenta})
for the Lam\'{e} integrable system can be obtained from their ellipsoidal counterparts \eqref{eq:Separation constants ellipsoidal} and \eqref{eq:psq ellipsoidal-1}.
\end{lem}
\begin{proof}
A possible limiting process from ellipsoidal to Lam\'{e} coordinates is given in \cite{KKM18}. However in this case, it is much simpler to use the transformation
\begin{equation}
\begin{aligned}\left(e_{1},e_2,e_3,e_{4}\right) & =\left(-\frac{1}{\epsilon},f_1,f_2,f_3\right)\\
\left(s_{1},p_1\right) & =\left(f_2-\frac{\tilde{s}_1}{\epsilon},\epsilon \tilde{p}_1\right)
\end{aligned}
\label{eq:sub lame}
\end{equation}
for $\epsilon>0$ and $\tilde{s}_1\in [0,1]$. Applying \eqref{eq:sub lame} and taking the limit as $\epsilon\to 0$ immediately gives \[(\tilde{\eta}_1,\tilde{\eta}_2)=\left(-\frac{2H-F_L}{\epsilon},-\frac{G_L}{\epsilon}\right).\]
The separated momenta are obtained using the same method as in the prolate case.
\end{proof}
In the Lam\'{e} limit, the Uhlenbeck integrals $\tilde{F}_{i}$ are as
follows
\[
\begin{aligned}\tilde{F}_{1}=-F_{L}, & & \tilde{F}_{2} & =\frac{\ell_{23}^{2}}{f_{1}-f_{2}}+\frac{\ell_{24}^{2}}{f_{1}-f_{3}}, & & \tilde{F}_{3}=\frac{\ell_{23}^{2}}{f_{2}-f_{1}}+\frac{\ell_{34}^{2}}{f_{2}-f_{3}}, & & \tilde{F}_{4}=\frac{\ell_{24}^{2}}{f_{3}-f_{1}}+\frac{\ell_{34}^{2}}{f_{3}-f_{2}}.\end{aligned}
\]
Note that up to projective transformations, the system only has one parameter $\frac{f_3-f_1}{f_2-f_1}$, but we use $f_i$ to keep the higher symmetry.
The vector field of $G_L$ given by $B_{\bm{L}} \nabla G_L$ has a semi-direct product structure: the equations for $\ell_{23}, \ell_{24}, \ell_{34}$ decouple from the others, and they are in fact Euler's equations for the rigid body on $SO(3)$ with moments of inertia given by $f_i^{-1}$. The equations for the remaining three variables are linear with time-varying coefficients given by the solution of Euler's equations.
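For concreteness, adopting the sign convention $\{\ell_{23},\ell_{24}\}=\ell_{34}$, $\{\ell_{24},\ell_{34}\}=\ell_{23}$, $\{\ell_{34},\ell_{23}\}=\ell_{24}$ for the $\mathfrak{so}(3)$ subalgebra (the overall signs depend on the chosen orientation), the decoupled equations read
\[
\dot{\ell}_{23}=2(f_{2}-f_{1})\ell_{24}\ell_{34},\qquad
\dot{\ell}_{24}=2(f_{1}-f_{3})\ell_{34}\ell_{23},\qquad
\dot{\ell}_{34}=2(f_{3}-f_{2})\ell_{23}\ell_{24},
\]
which are Euler's equations for a free rigid body.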
\subsubsection{Critical Points and Momentum Map}
Using the matrix $C=\text{diag}(-\frac{1}{\epsilon},f_1,f_2,f_3)$
gives the integrals
\[
(I_{1},I_{2})=-\frac{1}{\epsilon}\left(2h-F_L+O(\epsilon),G_L+O(\epsilon)\right).
\]
We need to consider the $\frac{1}{\epsilon}$
order term, since the limit of $(I_{1},I_{2})$ as $\epsilon\to0$
is infinite.
\begin{prop}
The set of critical values of the momentum map for the Lam\'{e} system is composed of four straight lines $\mathfrak{L}_{j}:F_{L}=1-\frac{1}{f_{j}}G_{L}$
for $j=1,2,3$ and $\mathfrak{L}_{4}:F_{L}=0$. This is shown in Figure \ref{fig:Lame MM and Action Map} a).
\end{prop}
\begin{figure}
\begin{centering}
\includegraphics[width=8cm,height=7cm]{Lame-MM.png}\includegraphics[width=7cm,height=7cm]{Lame-AM.png}
\par\end{centering}
\caption{a) Momentum map for the Lam\'{e} system with $(f_{1},f_{2},f_{3})=(0.4,1.3,3.2)$.
b) Corresponding action map. \label{fig:Lame MM and Action Map}}
\end{figure}
\begin{proof}The critical points for the Lam\'{e} integrable system are similar to
those for the lines in the ellipsoidal system:
\begin{enumerate}
\item The line $\mathfrak{L}_{4}:F_{L}=0$ is the limit as $\epsilon\to 0$ of the line $I_2=-\frac{1}{\epsilon}(I_1+\frac{1}{\epsilon})$ with $\lambda=-\frac{1}{\epsilon}$ and the critical
points are parametrised by $\bm{L}=(0,0,0,\ell_{23},\ell_{24},\ell_{34})$
with $\ell_{23}^{2}+\ell_{24}^{2}+\ell_{34}^{2}=1$ and $f_{1}\ell_{34}^{2}+f_{2}\ell_{24}^{2}+f_{3}\ell_{23}^{2}=g_{L}$.
\item The line $\mathfrak{L}_{1}:F_{L}=1-\frac{1}{f_{1}}G_{L}$ is the limit as $\epsilon\to 0$ of the line $I_2=f_1(I_1-f_1)$ with
$\lambda=f_1$ and the critical points are parametrised by $\bm{L}=(0,\ell_{13},\ell_{14},0,0,\ell_{34})$
with $\ell_{13}^{2}+\ell_{14}^{2}+\ell_{34}^{2}=1$ and $f_{1}\ell_{34}^{2}=g_{L}$.
\item The line $\mathfrak{L}_{2}:F_{L}=1-\frac{1}{f_{2}}G_{L}$ is the limit as $\epsilon\to 0$ of the line $I_2=f_2(I_1-f_2)$ with
$\lambda=f_2$ and the critical points are parametrised
by $\bm{L}=(\ell_{12},0,\ell_{14},0,\ell_{24},0)$ with $\ell_{12}^{2}+\ell_{14}^{2}+\ell_{24}^{2}=1$
and $f_{2}\ell_{24}^{2}=g_{L}$.
\item The line $\mathfrak{L}_{3}:F_{L}=1-\frac{1}{f_{3}}G_{L}$ is the limit as $\epsilon\to 0$ of the line $I_2=f_3(I_1-f_3)$ with
$\lambda=f_3$ and the critical points are parametrised
by $\bm{L}=(\ell_{12},\ell_{13},0,\ell_{23},0,0)$ with $\ell_{12}^{2}+\ell_{13}^{2}+\ell_{23}^{2}=1$
and $f_{3}\ell_{23}^{2}=g_{L}$.
\end{enumerate}
The curve $I_{2}=\frac{I_{1}^{2}}{8h}$ for $\lambda\in[f_1,f_2]$
shrinks to the degenerate point $(F_{L},G_{L})=(1,0)$ in the limit $\epsilon\to 0$.
\end{proof}
Let the intersections of $\mathfrak{L}_{i}$ and $\mathfrak{L}_{j}$
be denoted by $t_{ij}$ and the three way intersection of $\mathfrak{L}_{1}$, $\mathfrak{L}_{2}$, $\mathfrak{L}_{3}$ by $T_{123}$.
The intersections $t_{14}$ and $t_{34}$ are elliptic-elliptic critical
values with 2 points in their fibres on $S^2\times S^2$. The point $t_{24}$
is elliptic-hyperbolic and its fibre is $C_{2}$.
\begin{lem}
The three-way intersection $T_{123}$ at $(F_{L},G_{L})=(1,0)$ is a degenerate singularity of spherical type.
\end{lem}
\begin{proof}
The linearisation of the vector field generated by $\alpha F_L+\beta G_L$ is $\nabla_{\bm L}\left[B_{\bm L}(\alpha \nabla_{\bm L}F_L + \beta \nabla_{\bm L} G_L)\right]$. At the three-way intersection $T_{123}$, this matrix becomes
\begin{equation}
\left(\begin{array}{cc}
\bm 0 & \begin{array}{ccc}
2(\alpha-f_3\beta)\ell_{13} & 2(\alpha-f_2\beta)\ell_{14} & 0\\
-2(\alpha-f_3\beta)\ell_{12} & 0 & 2(\alpha-f_1\beta)\ell_{14}\\
0 & -2(\alpha-f_2\beta)\ell_{12} & -2(\alpha-f_1\beta)\ell_{13}
\end{array}\\
\bm 0 & \bm 0
\end{array}\right)\label{linlame}
\end{equation}
where $\bm 0$ is a $3\times 3$ matrix of all zeros.
All six eigenvalues of \eqref{linlame} are zero, meaning that $T_{123}$ is a degenerate critical value. The rank of the differential of the momentum map drops by $1$ at $T_{123}$. This is known as a spherical type singularity, studied in the thesis \cite{RonanThesis}. Systems with a spherical type singularity are characterised by the presence of a globally defined, continuous but not smooth action. The fibre of a spherical type singularity is diffeomorphic to a product of spheres. In this case the preimage of $T_{123}$ is $S^{2}$ and every point in this fibre is critical.
\end{proof}
It was shown in \cite{RonanThesis} that the geodesic flow on $S^n$ in polyspherical coordinates gives rise to systems containing a spherical type singularity. While the Lam\'e coordinate system is not polyspherical, the reduced Lam\'e integrable system is an example of a system that has a spherical singularity that was not studied in \cite{RonanThesis}. Since the Lam\'e coordinate system is obtained by extending the ellipsoidal coordinates on $S^2$ to $S^3$, we can conjecture that systems originating from the geodesic flow on $S^n$ in coordinates obtained from extending a coordinate system from $S^k$ to $S^n$ where $2\leq k<n$ will contain a spherical type singularity.
\subsubsection{Actions}\label{lame-act}
The actions of the Lam\'{e} system are given by
\[
\begin{aligned}J_{1} & =\frac{2}{\pi}\int_{0}^{f_L}p_{1}ds & & & J_{2} & =\frac{2}{\pi}\int_{f_{1}}^{\min(r_{2},f_{2})}p_{2}ds & & & J_{3} & =\frac{2}{\pi}\int_{\max(f_{2},r_{2})}^{f_{3}}p_{3}ds\end{aligned}
\]
where $r_{2}=\frac{g_{L}}{1-f_{L}}$. Theorem~\ref{action thm} also applies to the Lam\'e system. The
first action evaluates to
\begin{equation}
J_{1}=1-\sqrt{1-f_{L}}.\label{eq:J1 Lame}
\end{equation}
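This evaluation is elementary: with the normalisation $2h=1$ used here, the substitution $s=f_{L}\sin^{2}\theta$ gives
\[
J_{1}=\frac{2}{\pi}\int_{0}^{f_{L}}\frac{\sqrt{f_{L}-s}}{2(1-s)\sqrt{s}}\,ds
=\frac{2f_{L}}{\pi}\int_{0}^{\pi/2}\frac{\cos^{2}\theta}{1-f_{L}\sin^{2}\theta}\,d\theta
=1-\sqrt{1-f_{L}}.
\]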
Notice that the vector field $B_{\bm L} \nabla F_L$ of $F_L$ has a flow that is the rotation of $(\ell_{12}, \ell_{13}, \ell_{14})$ about the fixed axis given by $(\ell_{34}, -\ell_{24}, \ell_{23})$. The frequency of this rotation is given by the length of the axis, and is hence not constant.
\begin{lem}
$J_{1} = 1 - \sqrt{ 1 - F_L}$ is an almost global $S^1$-action.
\end{lem}
\begin{proof}
The vector field $B_{\bm L}\nabla J_{1}$ has periodic flow which is given by the rotation about the same axis as the flow of $F_L$, but here the axis is normalised: we divide by $\sqrt{ 1 - F_L}$, which by the Casimir $2h=1$ equals the length of the axis. It is only ``almost'' global because when $2h - F_L = \ell_{34}^2 + \ell_{24}^2 + \ell_{23}^2 = 0$ the normalisation factor vanishes and the vector field is not defined. Because of $2h=1$ this occurs on the sphere $S^2$ given by $\ell_{12}^2 + \ell_{13}^2 + \ell_{14}^2 = 1$.
\end{proof}
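The normalisation can be made explicit (a short sketch; factors of $2$ depend on the convention for the rotation generators): by the chain rule
\[
B_{\bm L}\nabla J_{1}=\frac{1}{2\sqrt{1-F_L}}\,B_{\bm L}\nabla F_L ,
\]
and since the rotation axis $(\ell_{34},-\ell_{24},\ell_{23})$ has length $\sqrt{2h-F_L}=\sqrt{1-F_L}$, the rescaled flow rotates with constant frequency, hence is periodic with a fixed period wherever $F_L\neq 1$.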
Note that \eqref{eq:J1 Lame} means that lines of constant $f_{L}$ correspond to
lines of constant $J_{1}$ in action space. In general, we have
\begin{lem}
Straight lines in the image of the momentum map $(F_L,G_L)$ of the reduced Lam\'e system on $S^2\times S^2$ given by $F_L=1-\frac{1}{r_2}G_L$ map to straight lines in action space.
\end{lem}
\begin{proof}
Observe that
\[
J_{2}=\sqrt{1-f_L}\int_{f_1}^\gamma\sqrt{\frac{r_2-s}{(s-f_1)(s-f_2)(s-f_3)}}ds=(1-J_{1})\mathcal{F}(r_2)
\]
where $\gamma=\min(r_{2},f_{2})$,
$r_2=\frac{g_L}{1-f_L}$ and $\mathcal{F}$ is a function of $r_2$ only. Since $J_{3}=1-J_{1}-J_{2}$, this implies that the image of a straight line $F_L=1-\frac{1}{r_2}G_L$ with constant slope $\frac{1}{r_2}$ under the action map is again a straight line.
\end{proof}
Figure \ref{fig:Lame MM and Action Map} b) shows an example of
the action map. Let the magenta line in the interior of the action
map be denoted by $\mathfrak{L}_{M}$. This has parameterisation
\[
(J_{1},J_{2},J_{3})=(J_{1},\frac{2}{\pi}(1-J_{1})\sin^{-1}\Delta,\frac{2}{\pi}(1-J_{1})\cos^{-1}\Delta)
\]
where $\Delta=\sqrt{\frac{f_{1}-f_{2}}{f_{1}-f_{3}}}$.
The line intersects the boundary of the action map at $(1,0,0)$ and
$\frac{2}{\pi}(0,\sin^{-1}(\Delta),\cos^{-1}(\Delta))$.
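As a consistency check, using $\sin^{-1}\Delta+\cos^{-1}\Delta=\frac{\pi}{2}$ we find
\[
J_{1}+J_{2}+J_{3}=J_{1}+\frac{2}{\pi}(1-J_{1})\left(\sin^{-1}\Delta+\cos^{-1}\Delta\right)=1,
\]
so $\mathfrak{L}_{M}$ does lie in the plane $J_{1}+J_{2}+J_{3}=1$ containing the action map.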
The fact that lines of constant $f_L$ and lines through $T_{123}$ are mapped to straight lines does not imply that the action map is linear, because the map along these lines is determined by the non-linear function $\mathcal{F}$.
Since the action variable $J_{1}$ is defined away from $f_L = 1$, we can consider reduction with respect to the flow of $J_{1}$ on levels with $0 < f_L < 1$. Fixing the action and identifying the corresponding angle variable to a point gives the action $J_{2}$ for that constant value of $f_L$; up to an overall constant factor this is the action of the Euler top.
\subsection{Spherical Coordinates}
Spherical coordinates (or rather poly-spherical coordinates) correspond to the case where simultaneously there is an $SO(2)$ and an $SO(3)$ symmetry. Accordingly we do have a global $S^1$ action. However, the induced integrable system on $S^2 \times S^2$ is not semi-toric because it has a degenerate point, which corresponds to the critical values at which the almost global action is not differentiable.
\subsubsection{Separation of Variables}
The two forms of spherical coordinates are found by setting $f_{1}=f_{2}$
or $f_{2}=f_{3}$ in Lam\'{e} coordinates. We call these the $12$ and
$23$ spherical coordinates. The two systems are equivalent by a permutation of coordinates. These can also be obtained by setting
$a=1$ in prolate and oblate coordinates, respectively. The $23-$spherical
coordinate system $(1\ (2\ (3\ 4)))$ is defined by
\begin{equation}
\begin{aligned}x_{1}^{2} & =s_{1}, & & x_{2}^{2}=\left(1-s_{1}\right)s_{2},\\
x_{3}^{2} & =\left(1-s_{1}\right)\left(1-s_{2}\right)s_{3}, & & x_{4}^{2}=\left(1-s_{1}\right)\left(1-s_{2}\right)\left(1-s_{3}\right),
\end{aligned}
\label{eq:RR sph}
\end{equation}
where $0\le s_{k}\le1$ and $k=1,2,3$. Due to
the simplicity of these coordinates, we can manually separate the
corresponding Hamilton-Jacobi equation. The geodesic Hamiltonian can
be expressed as
\begin{equation}
H_{23}=\frac{2\left(p_{2}^{2}\left(s_{2}-1\right)s_{2}-p_{1}^{2}\left(s_{1}-1\right)^{2}s_{1}-\frac{p_{3}^{2}\left(s_{3}-1\right)s_{3}}{s_{2}-1}\right)}{s_{1}-1}.\label{eq:H23}
\end{equation}
The integrals are $(2H_{23},\ell_{34},G_{23})$ with separated momenta
\begin{equation}
\begin{aligned}p_{1}^{2} & =\frac{g_{23}-2hs_{1}}{4s_{1}\left(s_{1}-1\right)^{2}} & & & p_{2}^{2} & =\frac{(g_{23}-2h)(s_{2}-1)-l_{34}^{2}}{4s_{2}\left(s_{2}-1\right)^{2}} & & & p_{3}^{2} & =\frac{l_{34}^{2}}{4s_{3}\left(1-s_{3}\right)}\end{aligned}
\label{eq:Sep 23 system}
\end{equation}
where $G_{23}=\ell_{12}^{2}+\ell_{13}^{2}+\ell_{14}^{2}$ and $(l_{34},g_{23})$ are the functional values of $(\ell_{34},G_{23})$.
To obtain $(\ell_{34},G_{23})$ and (\ref{eq:Sep 23 system}) from
the Lam\'{e} system, we set $(f_{3},s_{3},p_3)=(f_{2}+\epsilon,f_{2}+\epsilon\tilde{s}_{3},\tilde{p}_3/\epsilon)$
where $\tilde{s}_{3}\in[0,1]$ and normalise $(f_{1},f_{2})=(0,1)$.
To obtain them from the oblate coordinates, we let $(a,s_{3},p_3)=(1+\epsilon,1+\epsilon\tilde{s}_{3},\tilde{p}_3/\epsilon)$.
\subsubsection{Critical Points and Momentum Map}
The critical points and values are easily obtained by direct computation to give
\begin{prop}
\label{Theorem SPh}The image of the momentum map for the $23-$spherical system
$(\ell_{34},G_{23})$ with $2h=1$ has critical values $\mathfrak{C}_{1}:G_{23}=1-\ell_{34}^{2}$
and $\mathfrak{C}_{2}:G_{23}=0$ which are both codimension
one elliptic (see Figure \ref{fig: RR sph }).
\end{prop}
\begin{proof} The computation of the critical points and values is straightforward for this system.
\end{proof}
The fibre of a regular value on $S^2\times S^2$ is a
torus $T^{2}$ with multiplicity one. The fibres along $\mathfrak{C}_{1}$ and $\mathfrak{C}_{2}$
are single $S^1$. The intersections of $\mathfrak{C}_{1}$
and $\mathfrak{C}_{2}$ are codimension 2 elliptic points and have 1 critical point in their fibres. The linearisation $\nabla_{\bm L}B_{\bm L}(\alpha \nabla_{\bm L}G_{23} + \beta \nabla_{\bm L} \ell_{34})$ has eigenvalues $(0,0,-i \beta ,i \beta ,-i\beta ,i\beta)$ at $D_{23}=(0,1)$, making the peak of the parabola, $D_{23}$, a degenerate singularity. As in the Lam\'e system, the rank of the differential of the momentum map drops by 1 at $D_{23}$ and its fibre is $S^2$. This is another example of a spherical type singularity. Spherical coordinates are a special case of polyspherical coordinates, which have been studied in detail in \cite{RonanThesis}.
In the limit $a\to1$, the bifurcation diagram for the oblate coordinates in
Figure~\ref{fig:Oblate root and MM}~a) degenerates to Figure~\ref{fig: RR sph }~a). In particular, the elliptic-elliptic point
$o_{11}$ collides with the hyperbolic line $\mathcal{O}_{2}$ and
becomes degenerate. Similarly, setting $f_{2}=f_{3}$ in the Lam\'{e}
system causes the elliptic-hyperbolic point $t_{24}$ to collide with
the elliptic-elliptic point at $t_{34}$ while $T_{123}$ remains
degenerate.
\subsubsection{Actions}\label{sph-act}
The action variables for the $23-$spherical system are
given by
\[
\begin{aligned} {J}_{1} & =\frac{2}{\pi}\int_{0}^{r_{1}}p_1(s)ds & & & {J}_{2} & =\frac{2}{\pi}\int_{0}^{r_{2}}p_2(s)ds & & & {J}_{3} & =|\ell_{34}|\end{aligned}
\]
where $0\le r_{1}=g_{23}\le1$ and $0\le r_{2}=1-\frac{l_{34}^{2}}{1-g_{23}}\le1$.
We can simplify the non-trivial actions to
\[
\begin{aligned} {J}_{1}=1-\sqrt{1-g_{23}} & & {J}_{2} & =\sqrt{1-g_{23}}-|l_{34}|.\end{aligned}
\]
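The first of these simplifications can be checked directly (a short sketch, using $2h=1$ and \eqref{eq:Sep 23 system}): substituting $s=g_{23}\sin^{2}\theta$ gives
\[
J_{1}=\frac{1}{\pi}\int_{0}^{g_{23}}\frac{\sqrt{g_{23}-s}}{\sqrt{s}\,(1-s)}\,ds
=\frac{2g_{23}}{\pi}\int_{0}^{\pi/2}\frac{\cos^{2}\theta}{1-g_{23}\sin^{2}\theta}\,d\theta
=1-\sqrt{1-g_{23}},
\]
where the last equality uses $\int_{0}^{\pi/2}(1-g\sin^{2}\theta)^{-1}\,d\theta=\frac{\pi}{2\sqrt{1-g}}$; the expression for $J_{2}$ is obtained in the same way.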
The action map is shown in Figure \ref{fig: RR sph } b). Note that $(\ell_{34},\sqrt{1-G_{23}})$ defines continuous global action variables that are not differentiable at $G_{23}=1$. This is a system obtained by toric degeneration; see \cite{RonanThesis}.
\begin{figure}
\begin{centering}
\includegraphics[width=8cm,height=7cm]{Sph-MM.png}\includegraphics[width=7cm,height=7cm]{Sph-AM.pdf}
\par\end{centering}
\caption{a) Momentum map and b) Action map for the $23$-spherical system.
\label{fig: RR sph }}
\end{figure}
\subsection{Cylindrical Coordinates}
The coordinate system with the highest symmetry has two global $S^1$ actions, and there is only a single point in the Stasheff polytope for which this happens. The corresponding reduced system on $S^2\times S^2$ is toric.
\subsubsection{Separation of Variables}
The cylindrical coordinates (also called Hopf coordinates) $((1\ 2)\ (3\ 4))$ are a further degeneration
of the oblate coordinates obtained by setting both $e_{1}=e_{2}$
and $e_{3}=e_{4}$. Specifically, the transformation
$(e_{2},e_{4})\to(e_{1}+\epsilon,e_{3}+\epsilon)$ along with $(s_{1},p_1,s_{3},p_3)\to(e_{1}+\epsilon\tilde{s}_{1},\tilde{p_1}/\epsilon,e_{3}+\epsilon\tilde{s}_{3},\tilde{p_3}/\epsilon)$
gives the following relationship between Cartesian coordinates and
cylindrical coordinates:
\[
\begin{aligned}x_{1}^{2} & =s_{1}s_{2}, & & x_{2}^{2}=s_{2}\left(1-s_{1}\right),\\
x_{3}^{2} & =s_{3}\left(1-s_{2}\right),& & x_{4}^{2}=\left(1-s_{2}\right)\left(1-s_{3}\right),
\end{aligned}
\]
where $0\le s_{k}\le1$ and $k=1,2,3$. The geodesic
Hamiltonian in these coordinates is
\[
H_{Cyl}=-\frac{2p_{1}^{2}\left(s_{1}-1\right)s_{1}}{s_{2}}-2p_{2}^{2}\left(s_{2}-1\right)s_{2}+\frac{2p_{3}^{2}\left(s_{3}-1\right)s_{3}}{s_{2}-1}
\]
which trivially separates to give the integrals $(H_{Cyl},\ell_{34}^2,\ell_{12}^2)$.
The separated equations are
\begin{equation}
\begin{aligned}p_{2}^{2} & =\frac{l_{12}^{2}(s_{2}-1)-s_{2}\left(2h(s_{2}-1)+l_{34}^{2}\right)}{4(s_{2}-1)^{2}s_{2}^{2}} & & & p_{k}^{2} & =\frac{l_{\nu}^{2}}{4s_{k}(1-s_{k})}\end{aligned}
\label{eq:sep cyl}
\end{equation}
where $\nu=12$ if $k=1$, $\nu=34$ if $k=3$, and $l_{\nu}$ denotes
the functional value of $\ell_{\nu}$.
\subsubsection{Critical Points and Momentum map}
\begin{prop}
The bifurcation diagram for the cylindrical system $(\ell_{12},\ell_{34})$ on $S^2\times S^2$ with $2h=1$ is
composed of $4$ straight lines $\ell_{34}=\pm(1\pm\ell_{12})$ which
intersect transversally at $(\pm1,0)$ and $(0,\pm1)$ (see Figure \ref{fig:Cylindrical-coordin}).
\end{prop}
The fibre of a regular
value on $S^2\times S^2$ is $T^2$ with multiplicity one. The lines
are all codimension one elliptic and their fibres are single $S^1$. The intersections of the lines are elliptic-elliptic
critical values with a single critical point in their fibres.
\subsubsection{Actions}\label{cyl-act}
The trivial actions for the cylindrical system
are $(J_{1},J_{3})=(\left|\ell_{12}\right|,\left|\ell_{34}\right|)$,
while the ``non-trivial'' action is easily determined by $J_{1}+J_{2}+J_{3}=1$.
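This relation can also be recovered from the separated equation \eqref{eq:sep cyl} (a sketch, with $2h=1$, evaluating the period integral by residues at $s=0$, $s=1$ and $s=\infty$):
\[
J_{2}=\frac{1}{\pi}\int_{s_{-}}^{s_{+}}\frac{\sqrt{s(1-s)-(1-s)l_{12}^{2}-s\,l_{34}^{2}}}{s(1-s)}\,ds=1-|l_{12}|-|l_{34}|,
\]
where $s_{\pm}$ denote the roots of the radicand.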
The action map is shown in Figure~\ref{fig:Cylindrical-coordin}~b).
The global $S^1$ actions are obtained from the symmetry reduced actions $|\ell_{12}|$ and $|\ell_{34}|$ by dropping the absolute value signs. In this way 4 copies of the right triangle in $J_1, J_3$ are glued together to form a diamond in $\ell_{12}, \ell_{34}$.
\begin{figure}
\begin{centering}
\includegraphics[width=7cm,height=7cm]{Cyl-MM.png}\quad\includegraphics[width=7cm,height=7cm]{Cyl-AM.pdf}\\
\par\end{centering}
\caption{(a) Momentum map and (b) action map of the cylindrical system.\label{fig:Cylindrical-coordin}}
\end{figure}
\begin{prop}
The system $(X_1,Y_1)$ is a toric system on $S^2\times S^2$ where $X_1$ and $Y_1$ are defined in \eqref{eq:ls to Xy}.
\end{prop}
\begin{proof}
Both $\ell_{12}$ and $\ell_{34}$ define smooth global $S^1$ actions on $S^2\times S^2$. However, the torus action $(\ell_{12},\ell_{34})$ is not effective. Note that $\ell_{ij}$ is the generator of the rotation in the $x_ix_j$-plane represented by $\exp({i t_{ij}\hat{\bm x}_i\wedge\hat{\bm x}_j})$ where $\hat{\bm x}_i$ is the unit vector along the $x_i$ axis. The action on the momenta $\bm y$ is the same. A rotation by $t_{ij}=\pi$ in the $x_ix_j$-plane has the effect
\begin{equation*}
\begin{aligned}
x_i&\mapsto -x_i, & & & y_i&\mapsto -y_i,\\
x_j&\mapsto -x_j, & & & y_j&\mapsto - y_j,
\end{aligned}
\end{equation*}
on $T^*S^3$. This induces the map
\begin{equation*}
\begin{aligned}
\ell_{ij}&\mapsto \ell_{ij}, & & & \ell_{mn}&\mapsto \ell_{mn},
\end{aligned}
\end{equation*}
for $i,j,m,n$ all distinct, and
\begin{equation*}
\begin{aligned}
\ell_{ik}&\mapsto -\ell_{ik}, & & & \ell_{kj}&\mapsto -\ell_{kj},
\end{aligned}
\end{equation*}
for all $k\neq i,j$. In particular, the actions with $t_{12}=\pi$ and with $t_{34}=\pi$ generate the same map
\begin{equation}
(\bm X,\bm Y)\mapsto(X_1,-X_2,-X_3,Y_1,-Y_2,-Y_3)
\label{l12l34}
\end{equation}
on $S^2\times S^2$. Since the flows of $\ell_{12}$ and $\ell_{34}$ commute and \eqref{l12l34} is an involution, we see that $(t_{12},t_{34})=(\pi,\pi)$ is the identity on $S^2\times S^2$, so the action is not effective. By taking half of the sum and difference, we see that $(X_1,Y_1)=\frac{1}{2}(\ell_{12}+\ell_{34},\ell_{12}-\ell_{34})$ is faithful with period $2\pi$, giving us a toric system on $S^2\times S^2$. The image of the momentum map of $(X_1,Y_1)$ is the unit square, the standard Delzant polytope for $S^2\times S^2$.
Note that the torus action of $(\ell_{12},\ell_{34})$ is effective on $T^*S^3$, as $(t_{12},t_{34})=(\pi,\pi)$ gives $\bm x\mapsto -\bm x$ and $\bm y\mapsto-\bm y$. However, the points $(\bm x,\bm y)$ and $(-\bm x, -\bm y)$ are antipodal points on the same great circle and thus become the same point on $S^2\times S^2$ after reduction.
\end{proof}
\section{Conclusion}\label{sec:S3-7}
The main novelty in this paper is the construction of a natural family of integrable systems on $S^2 \times S^2$ in section \ref{sec:S3-4}, and the analysis of its Liouville foliation in section \ref{sec:degen}. It turns out that many properties of the reduced system are already visible in one way or another in the original St\"ackel system on $T^*S^3$. However, it should be pointed out that the upstairs system does not even have a natural Liouville foliation because it is superintegrable, and hence dynamically does not possess invariant tori, but just periodic orbits.
After reduction by the flow of the Hamiltonian, which after extracting the square root is a global $S^1$ action, an integrable system on $S^2\times S^2$ is obtained.
The reduced system is Lie-Poisson with Lie-algebra $\mathfrak{so}(4)$.
Since the reduction is done by the flow of the Hamiltonian, the reduced system no longer has a Hamiltonian; it just has commuting integrals. The definition of a Liouville integrable system does not require a Hamiltonian, and the foliation into tori defined by the integrals is independent of any Hamiltonian.
What is missing is the possibility to define the Hamiltonian vector field which induces a flow on these tori. But this is not necessary in order to study the equivalence of Liouville foliations of integrable systems.
At first it may be surprising that in the Liouville-Arnold theorem the existence of action-angle variables near a regular torus does not need a Hamiltonian either. In fact, the action-angle variables are such that all of the integrals can be expressed as functions of the action variables. Moreover, composing the integrals with a diffeomorphism changes the integrals, but does not change the action variables. We saw this explicitly for a restricted class of transformations of the integrals in our case. Since after reduction by the Hamiltonian there is no distinguished function any more, the focus is fully on the action variables. For the foliation it makes sense to consider leaf-preserving homeomorphisms or diffeomorphisms, but from the point of view of the action variables the natural class is symplectomorphisms.
Since the reduced symplectic manifold is compact, the image of the momentum map is compact as well, and we have shown that the image of the action map (appropriately modified so that it is continuous!) is a right triangle. This triangle is rigid, which means that it is the same for the whole family. What does change are the position and organisation of the action values in the triangle that correspond to critical values of the momentum map. These play the role of the height invariant, and in fact for the prolate system, which is semi-toric, they turn into the height invariant.
The Liouville-Arnold theorem holds near regular tori, and can be extended to open subsets of phase space bounded by separatrices. Only in rare cases are there no separatrices; essentially this means that the system is toric. But most integrable systems do have singular fibres that are not just tori, and the classification of integrable systems needs to take these into account. It is crucial to note that the actions of the action-angle variables can in general not be extended globally in phase space. If this is possible we call them global $S^1$ actions. Instead of an $S^1$ action we may also speak of a global $SO(2)$ symmetry. A slightly less optimal situation occurs for a global $SO(3)$ symmetry, which leads to an almost global $S^1$ action together with a spherical type singularity, as described for the Lam\'e system. Examples of global $S^1$ actions do appear in our family through degenerations, and when they do appear they unfold the action map into the polygon invariant in the semi-toric case (prolate system) and into the Delzant polygon in the toric case (cylindrical system).
Thus, for our family we have some analogues of important symplectic invariants, namely a convex polygon and generalisations of the height invariant. Certainly the semi-global symplectic invariants would need to be added, and at least in principle this is understood for the hyperbolic-hyperbolic point in the ellipsoidal family \cite{DullinVuNgoc07}, and generalisations to elliptic-hyperbolic points, degenerate points and the rank 1 hyperbolic lines would need to be worked out.
The interesting question is what kind of global invariants (like the twisting index invariant for semi-toric systems) would need to be added to the list so that it becomes the complete list of global symplectic invariants.
\section{Introduction}
In this paper, we use the term ``graphs'' for multigraphs, which may contain multiple edges but contain no loop.
A multigraph with no parallel edges will be referred to as a simple graph.
Let $G$ be a graph.
Denote by $V(G)$ and $E(G)$ the vertex set and edge set of $G$,
respectively, and by $e(G)$ the cardinality of $E(G)$.
For $v\in V(G)$, $N_G(v)$ is the set of neighbors of $v$
in $G$, and
$d_G(v)$, the degree of $v$
in $G$, is the number of edges of $G$ that are incident with $v$.
The notation $\delta(G)$ and $\Delta(G)$ denote the minimum degree
and maximum degree of $G$, respectively.
For two integers $p,q$, let $[p,q]=\{ i\in \mathbb{Z} \,:\, p \le i \le q\}$.
For an integer $k\ge 0$, an \emph{edge $k$-coloring} of a multigraph $G$ is a mapping $\varphi$ from $E(G)$ to the set of integers
$[1,k]$, called \emph{colors}, such that no two adjacent edges receive the same color with respect to $\varphi$.
The \emph{chromatic index} of $G$, denoted $\chi'(G)$, is defined to be the smallest integer $k$ so that $G$ has an edge $k$-coloring. A color class of $\varphi$ is a set of edges of $G$ colored by
the same color under $\varphi$, which is a matching.
In the 1960's, Gupta~\cite{Gupta-67} and, independently, Vizing~\cite{Vizing-2-classes} proved
that for all graphs $G$, $\Delta(G) \le \chi'(G) \le \Delta(G)+\mu(G)$,
where $\mu(G)$, called the \emph{maximum multiplicity} of $G$, is the largest number of
edges joining two vertices of $G$.
We say a graph $G$ is \emph{overfull}
if $|E(G)|>\Delta(G) \lfloor |V(G)|/2\rfloor$.
A subgraph $H$ of $G$ is a \emph{$\Delta(G)$-overfull} subgraph
if $\Delta(H)=\Delta(G)$ and $H$ is overfull.
A subgraph $H$ of odd order of $G$ is \emph{$\Delta(G)$-full}
if $\Delta(H)=\Delta(G)$ and $|E(H)|=\Delta(H) \lfloor |V(H)|/2\rfloor$.
It is clear that if $G$ contains a $\Delta(G)$-overfull subgraph, then
$\chi'(G) \ge \Delta(G)+1$.
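This follows from a short counting argument: every color class of an edge coloring is a matching and hence contains at most $\lfloor |V(H)|/2\rfloor$ edges of a subgraph $H$, so if $H$ is $\Delta(G)$-overfull, then
\[
\chi'(G)\ge \chi'(H)\ge \frac{|E(H)|}{\lfloor |V(H)|/2\rfloor}>\Delta(G).
\]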
Conversely, for simple graphs,
Chetwynd and Hilton~\cite{MR848854,MR975994}, in 1986, proposed the following
conjecture.
\begin{CON}[Overfull Conjecture]\label{overfull-con}
Let $G$ be a simple graph with $\Delta(G)>\frac{1}{3}|V(G)|$. Then $\chi'(G)=\Delta(G)$ if and only if $G$ contains no $\Delta(G)$-overfull subgraph.
\end{CON}
The degree condition $\Delta(G)>\frac{1}{3}|V(G)|$ in the conjecture above is best possible as seen by
the simple graph $P^*$ obtained from the Petersen graph by deleting one vertex.
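A quick check of the numbers: the Petersen graph is $3$-regular with $15$ edges, so $e(P^*)=12=3\lfloor 9/2\rfloor$; thus $P^*$ is $3$-full but not overfull (and it contains no $3$-overfull subgraph), yet $\chi'(P^*)=4$ while $\Delta(P^*)=3=\frac{1}{3}|V(P^*)|$.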
Applying Edmonds' matching polytope theorem, Seymour~\cite{seymour79} showed that whether a graph $G$ contains an overfull subgraph of maximum degree $\Delta(G)$ can be determined in polynomial time. Thus if the Overfull Conjecture is true, then the NP-complete problem of
determining the chromatic index~\cite{Holyer} becomes polynomial-time solvable
for simple graphs $G$ with $\Delta(G)>\frac{1}{3}|V(G)|$.
There have been some fairly strong results supporting the Overfull Conjecture in the case when $G$ is regular or $G$ has
large minimum degree, for example, see~\cite{MR975994, MR3545109,MR1439301,MR4394718, 2105.05286, 2205.08564}.
Our goal in this paper is to study the multigraph version of the Overfull Conjecture, which can be stated as follows.
\begin{CON}[Multigraph Overfull Conjecture]\label{overfull-con2}
Let $G$ be a graph satisfying $\Delta(G)>\frac{1}{3} \mu(G)|V(G)|$. Then $\chi'(G)=\Delta(G)$ if and only if $G$ contains no $\Delta(G)$-overfull subgraph.
\end{CON}
Again, the degree condition $\Delta(G)>\frac{1}{3}\mu(G)|V(G)|$ in Conjecture~\ref{overfull-con2} is best possible.
To see this, let $r\ge 2$
be an integer, and
let $Q$ be obtained from the Petersen graph by duplicating each of
its edges $r-1$ times. Then $Q$ is $3r$-regular with maximum multiplicity $r$.
Let $Q^*$ be obtained from $Q$ by deleting a vertex. It is easy to see that $Q^*$
contains no $3r$-overfull subgraph. By a result of the first author and Tipnis~\cite[Theorem 2]{MR1483445},
we have $\chi'(Q^*)=3r+1$ if $r$ is odd. Thus
$Q^*$ is a sharpness example for the condition $\Delta(G)>\frac{1}{3} \mu(G)|V(G)|$. (When $r$ is even, $\chi'(Q)=\chi'(Q^*)=3r$.)
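The arithmetic behind this example: $Q$ has $15r$ edges, so $e(Q^*)=15r-3r=12r=3r\lfloor 9/2\rfloor$. Hence $Q^*$ is $3r$-full but not $3r$-overfull, while $\Delta(Q^*)=3r=\frac{1}{3}\mu(Q^*)|V(Q^*)|$, so $Q^*$ just fails the degree condition of Conjecture~\ref{overfull-con2}.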
It is easy to verify that when $G$ is regular with even order, $G$ has no $\Delta(G)$-overfull subgraphs if its vertex degrees are at least about $\frac{1}{2}|V(G)|$. Thus the well-known 1-Factorization Conjecture stated below is a special case of the Overfull Conjecture.
\begin{CON}[1-Factorization Conjecture]\label{con:1-factorization}
Let $G$ be a simple graph of even order $n$. If $G$ is $k$-regular for some $k\ge 2\lceil n/4\rceil-1$, then $G$ is 1-factorable; equivalently, $ \chi'(G) = \Delta(G)$.
\end{CON}
In 2016, Csaba, K\"uhn, Lo, Osthus and Treglown~\cite{MR3545109} verified Conjecture~\ref{con:1-factorization} for sufficiently large $n$.
\begin{THM}\label{thm:1-factorization-proof}
There exists an $n_0\in \mathbb{N}$ such that the following holds. Let $n, k\in \mathbb{N}$
be such that $n\ge n_0$ is even and $k\ge 2\lceil n/4\rceil-1$.
Then every $k$-regular simple graph on $n$ vertices has a 1-factorization.
\end{THM}
A natural extension of the 1-Factorization Conjecture to multigraphs is obtained by restricting the edge multiplicity.
\begin{CON}[Multigraph 1-Factorization Conjecture]\label{con:1-factorization-multi}
Let $G$ be a graph of even order $n$ and maximum multiplicity $r$. If $G$ is $k$-regular for some $k\ge r(2\lceil n/4\rceil-1)$, then $G$ is 1-factorable; equivalently, $ \chi'(G) = \Delta(G)$.
\end{CON}
Vaughan~\cite{MR2993074} proved Conjecture~\ref{con:1-factorization-multi}
for large graphs asymptotically.
In this paper, we prove a slightly weaker form of Conjecture~\ref{con:1-factorization-multi}.
\begin{THM}\label{thm:1-factorization}
There exists an $n_0\in \mathbb{N}$ such that the following holds. Let $n, k, r\in \mathbb{N}$
be such that $n\ge n_0$ is even and $k\ge r(n/2+18)$. Then every $k$-regular graph $G$ on
$n$ vertices with maximum multiplicity at most $r$ has a 1-factorization.
\end{THM}
For a
graph $H$ with an odd number of vertices $n\ge 3$,
we have $\chi'(H) \ge \frac{|E(H)|}{(n-1)/2}$, since each color
class is a matching and each matching can contain at most $(n-1)/2$ edges.
For any graph $G$ on at least three vertices, define the \emph{density} of $G$ as
$$
\omega(G)=\max\left\{\frac{|E(H)|}{(n-1)/2}: \text{$H\subseteq G$, $|V(H)|=n$, $n\ge 3$ and is odd}\right\}.
$$
It is clear that $\chi'(G) \ge \omega(G)$. Combining with the
lower bound $\Delta(G)$ on $\chi'(G)$,
$\chi_f'(G):=\max\{\Delta(G), \omega(G)\}$
is called the \emph{fractional chromatic index} of $G$. As the chromatic index is always an
integer, we have a general lower bound for the chromatic index given by
$\chi'(G) \ge \lceil \chi_f'(G) \rceil$.
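For instance, for the graph $P^*$ above one can check that $\omega(P^*)=\frac{12}{4}=3=\Delta(P^*)$, so $\lceil \chi_f'(P^*)\rceil=3$ while $\chi'(P^*)=4$; the bound $\chi'(G)\ge \lceil \chi_f'(G)\rceil$ can therefore be strict.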
We show that when $\Delta(G)$-overfullness or $\Delta(G)$-fullness is present, then the chromatic index of $G$ equals the integer round-up of the fractional chromatic index.
\begin{THM}\label{thm:overfull-present}
There exists an $n_0\in \mathbb{N}$ such that the following holds. Let $n, r\in \mathbb{N}$
be such that $n\ge n_0$ is even and $r=\mu(G)$.
If $G$ is a graph of order $n$, $\delta(G) \ge r(n/2+18)$, and $G$ contains a $\Delta(G)$-full
or $\Delta(G)$-overfull subgraph, then $\chi'(G)=\lceil \chi_f'(G)\rceil$.
As a consequence, $\chi'(G)=\Delta(G)$ if $G$ contains a $\Delta(G)$-full subgraph.
\end{THM}
Restricting the Overfull Conjecture to simple graphs of even order and large minimum degree, Theorem~\ref{thm:plantholt-shan} was proved by the authors. In this paper,
we also prove its analogue for multigraphs.
\begin{THM}[{\cite{2105.05286}}]\label{thm:plantholt-shan}
For all $0<\varepsilon <1$, there exists an $n_0\in \mathbb{N}$ such that the following holds. Let $n \in \mathbb{N}$
be such that $n\ge n_0$ is even.
If $G$ is a simple graph on $n$ vertices with $\delta(G) \ge (1+\varepsilon )n/2$, then $\chi'(G)=\Delta(G)$ if and only if $G$ contains no $\Delta(G)$-overfull subgraph.
\end{THM}
\begin{THM}\label{thm:1}
For all $0<\varepsilon <1$, there exists an $n_0\in \mathbb{N}$ such that the following holds. Let $n, r \in \mathbb{N}$
be such that $n\ge n_0$ is even and $r=\mu(G)$.
If $G$ is a graph on $n$ vertices with $\delta(G) \ge r(1+\varepsilon )n/2$, then $\chi'(G)=\Delta(G)$ if and only if $G$ contains no $\Delta(G)$-overfull subgraph.
\end{THM}
The remainder of this paper is organized as follows.
In the next section, we introduce some notation and preliminary results.
In Section 3, we prove Theorems~\ref{thm:1-factorization} and~\ref{thm:overfull-present}, and in Section 4,
we prove Theorem~\ref{thm:1}.
\section{Notation and preliminaries}
Let $G$ be a graph
and $A,
B\subseteq V(G)$ be two disjoint vertex sets. Then $E_G(A,B)$ is the set
of edges in $G$ with one end in $A$ and the other end in $B$, and $e_G(A,B):=|E_G(A,B)|$. We write $E_G(v,B)$ and $e_G(v,B)$
if $A=\{v\}$ is a singleton. For an edge $e\in E_G(u,v)$, if $e_G(u,v)=1$, we call $e$ a \emph{singleton edge}.
For
$S\subseteq V(G)$,
the subgraph of $G$ induced by $S$ is $G[S]$, and $G-S:=G[V(G)\setminus S]$.
If $F\subseteq E(G)$, then $G-F$ is obtained from $G$ by deleting all
the edges of $F$. Denote by $V_\Delta$ the set of maximum degree vertices of $G$.
A \emph{trail} is an alternating sequence of vertices and edges $v_0e_1v_1\ldots e_tv_t$ such that
$v_{i-1}$ and $v_i$ are the end vertices of $e_i$
for each $i\in [1,t]$, and the edges are all distinct (but there might be repetitions among the vertices). A trail is \emph{closed} if $v_0= v_t$, and is \emph{open} otherwise. An {\it Euler tour} of $G$
is a closed trail in $G$ that contains all the edges of $G$.
A graph is \emph{even} if all its vertex degrees are even.
We will need the following classic result of Euler.
\begin{THM}[Euler, 1736]\label{Euler}
A graph $G$ has an Euler tour if and only if
$G$ has at most one nontrivial component and $G$ is even.
\end{THM}
\begin{LEM}\label{cor:cycle-path-decomp}
If $G$ is a graph with in total $2\ell$ vertices of odd degree for some integer $\ell\ge 0$,
then $G$ can be decomposed into edge-disjoint cycles and paths, where there are exactly $\ell$
paths in the decomposition such that the set of the endvertices of the paths
is the same as the set of odd degree vertices of $G$ and that the union of the paths is a forest.
\end{LEM}
\noindent\textbf{Proof}.\quad First we iteratively delete the edges of cycles from $G$ until no cycles are left. The remaining graph $G^*$ is a forest and has the same set of odd degree vertices as does $G$. From a component of $G^*$ with at least one edge, remove a path between two leaves (which have odd degree); this reduces the number of odd degree vertices by two. Iterating this until $\ell$ paths have been removed, we obtain a decomposition with the desired properties.
\qed
We will also need the two classic results below on Hamilton cycles in simple graphs.
\begin{THM}[\cite{MR47308}]\label{thm:Dirac}
If $G$ is a simple graph on $n\ge 3$ vertices with $\delta(G) \ge \frac{n}{2}$, then $G$ has a Hamilton cycle.
\end{THM}
\begin{THM}[\cite{MR294155}] \label{lem:chvatal's-theorem}
Let $G$ be a simple graph on $n\ge 3$ vertices.
Suppose the degrees of $G$ are $d_1, \ldots, d_n$ with $d_1\le \ldots \le d_n$.
If $d_i\ge i+1$ or $d_{n-i} \ge n-i$
for all $i<\frac{n}{2}$, then $G$ has a Hamilton cycle.
\end{THM}
Let $f$ be a function from the vertices of a graph $G$ into the positive integers, and let $g$ be a map from the possible edges into the positive integers. An \emph{$fg$-coloring} of $G$ is a coloring of the edges so that each vertex $v$ has at most $f(v)$ incident edges assigned the same color, and for each pair $u,v\in V(G)$, there are at most $g(uv)$ edges of the same color joining $u$ and $v$. The results below provide upper
bounds on the number of colors needed for an $fg$-coloring.
\begin{LEM}[{\cite[Theorem 8]{MR1075068}}]\label{lem:fg-b}
Every bipartite graph $G$ has an $fg$-coloring in $$\max \{\max_{v\in V(G)} \lceil d_G(v)/f(v) \rceil, \max_{u,v\in V(G)} \lceil e_G(u,v)/g(uv)\rceil\}$$ colors.
\end{LEM}
\begin{LEM}[{\cite[Corollary 15]{MR1075068}}]\label{lem:fg}
If $f(v)$ and $g(uv)$ are all positive even integers for each vertex $v$ of
$G$ and each pair $u,v$ of $V(G)$, then $G$ has an $fg$-coloring in $$\max \{\max_{v\in V(G)} \lceil d_G(v)/f(v) \rceil, \max_{u,v\in V(G)} \lceil e_G(u,v)/g(uv)\rceil\}$$ colors.
\end{LEM}
\begin{LEM}\label{lem:numberofD}
Let $G$ be an $n$-vertex graph with maximum multiplicity $r$ such that all vertices of degree less than $\Delta(G)$ are mutually adjacent in $G$ with $r$ edges between them. Then $|V_\Delta|> \frac{n}{2}$.
\end{LEM}
\noindent\textbf{Proof}.\quad By contradiction. Let $A=V_\Delta$, $|A|=k$ for some integer $k\ge 1$, and suppose that $|V(G)\setminus A|=k+s$
for some integer $s\ge 0$. Let $H$ be obtained
from $G$ by deleting $s$ vertices from $V(G)\setminus A$, and $B=V(H)\setminus A$. Then
we have $|A|=|B|=k$.
As all the vertices of $B$ are mutually adjacent in $G$ with $r$ edges between them,
we have $\sum_{v\in B}d_H(v)=2r{k \choose 2}+e_H(A,B) \ge \sum_{v\in A}d_H(v)$.
Since $d_G(v)=d_H(v)+rs$ for each $v\in B$ and $d_G(v) \le d_H(v)+rs$ for each $v\in A$,
we then get
$\sum_{v\in B}d_G(v)\ge \sum_{v\in A}d_G(v)$,
a contradiction, since $|A|=|B|$ and every vertex of $B$ has degree less than $\Delta(G)$ while every vertex of $A$ has degree $\Delta(G)$.
\qed
\begin{LEM}\label{lemma:overfull-subgraph2}
Let $G$ be a graph of even order $n$. There is at most one vertex $v$,
of minimum degree,
such that $G-v$ is $\Delta(G)$-overfull; if $G$ is not regular, there are at most two
vertices $v, w$, both of minimum degree, such that $G-v$ and $G-w$ are $\Delta(G)$-full.
\end{LEM}
\noindent\textbf{Proof}.\quad Let $v, w\in V(G)$.
Suppose $G-v$ is $\Delta(G)$-full or $\Delta(G)$-overfull.
Note that $e(G) \le( \Delta(G) (n-2)+d_G(v)+d_G(w))/2$. Thus
\begin{equation}\label{eqn:overfull}
e(G-v)=e(G)-d_G(v) \le \frac{\Delta(G) (n-2)-d_G(v)+d_G(w)}{2}.
\end{equation}
From the inequality above, we have $d_G(w)>d_G(v)$ if
$G-v$ is $\Delta(G)$-overfull, i.e., $2e(G-v)/(n-2)>\Delta(G)$.
Therefore, $v$ is the only vertex of minimum degree in $G$.
Similarly, suppose $G-v$ is $\Delta(G)$-full. Then we have $d_G(w) \ge d_G(v)$
and so $v$ is a vertex of minimum degree in $G$. If $G$ is not regular and
$d_G(w)=d_G(v)$, we must have $d_G(u)=\Delta(G)$
for any $u\in V(G)\setminus\{v,w\}$ by~\eqref{eqn:overfull}.
Therefore, if $G$ is not regular, there are at most two
vertices $v, w$, both of minimum degree, such that $G-v$ and $G-w$ are $\Delta(G)$-full.
\qed
In the case of regular graphs of high degree, it is easy to show there can be no overfull subgraphs. The next lemma shows that overfullness has a rigid structure when the minimum degree is high.
\begin{LEM}\label{lemma:overfull-subgraph}
Let $G$ be a graph of even order $n$ and $\delta(G)> r\frac{n}{2}$, where $r=\mu(G)$.
Then every $\Delta(G)$-full or $\Delta(G)$-overfull subgraph is
obtained from $G$ by deleting a vertex of minimum degree.
\end{LEM}
\noindent\textbf{Proof}.\quad Suppose to the contrary that $G[X]$ is $\Delta(G)$-full or $\Delta(G)$-overfull for some $X\subseteq V(G)$
with $|X|$ odd and $3\le |X| \le n-3$. Then $e(G[X]) \ge \Delta(G) \frac{|X|-1}{2} >r\frac{n}{2} \frac{|X|-1}{2}$.
Since $e(G[X]) \le \frac{r|X|(|X|-1)}{2}$, we get $|X| >n/2$.
As $e(G[X]) \le r\frac{|X|(|X|-1)}{2}$ and so $\omega(G[X]) \le r|X|$,
we get $\Delta(G) \le r|X|$.
As $ 3\le |V(G)\setminus X| =n-|X|\le \frac{n-1}{2}$ and $n$ is even, we get $n\ge 8$.
Now since $(n-|X|)(\delta(G)-r(n-|X|-1))-r|X|$ is a quadratic concave down function in $|X|$
and so its minimum is achieved at the two boundary values of $|X|$, we get
\begin{eqnarray*}
e_G(V(G)\setminus X, X)-r|X| &\ge& (n-|X|)(\delta(G)-r(n-|X|-1))-r|X|\\
&\ge & \min\{3\delta(G)-6r-r(n-3), \frac{3r}{2}(n-1)/2-r(n+1)/2\} \\
& >& 0.
\end{eqnarray*}
Thus $e_G(V(G)\setminus X, X) > r|X| \ge \Delta(G)$.
This shows that $G[X]$ is neither $\Delta(G)$-full nor $\Delta(G)$-overfull, a contradiction.
The
lemma then follows easily by Lemma~\ref{lemma:overfull-subgraph2} and
the fact that
$e(G-u)\le e(G-v)$ for any $u,v\in V(G)$ with $d_G(v)=\delta(G)$.
\qed
\begin{LEM}[{\cite[Lemma 1]{MR1149003}}]\label{lem:spanning forest}
If $G$ is a connected even graph, then $G$ has a spanning tree $T$
such that $d_T(v) \le 1+\frac{1}{2}d_G(v)$ for each $v\in V(G)$.
\end{LEM}
\section{Proof of Theorems~\ref{thm:1-factorization} and~\ref{thm:overfull-present}}
We need the following result by the first author and Tipnis to
decompose a graph into simple graphs.
\begin{LEM}[{\cite[Theorem 1]{MR1149003}}]\label{lem:decomposition}
Let $G$ be a $k$-regular graph with even order $n$ and $k= rs \ge r(n/2 + 1)$, where $r=\mu(G)$ and $s$ is a positive integer. If $r$ is even,
then $G$ can be decomposed into $ r/2$ edge-disjoint Hamilton cycles and $r$ spanning simple graphs, each of which is $(s-1)$-regular.
\end{LEM}
For the graph $G$ described above, when $\mu(G)$ is odd, the first author and Tipnis~\cite{MR1149003}
conjectured that $G$ can be decomposed into $\mu(G)$ perfect matchings and $\mu(G)$ simple graphs that each are $(s-1)$-regular. We here prove a weaker version of the conjecture, which will also be used to prove Theorem~\ref{thm:1-factorization}.
\begin{LEM}\label{lem:decompose2}
Let $G$ be a $k$-regular graph with even order $n\ge 96$ and odd maximum multiplicity $r\ge 3$. Then the following statements
hold.
\begin{enumerate}[(a)]
\item If $k= r s \ge r(n /2+ 16)$ for some even integer $s$, then
$G$ can be decomposed into $r-2$ perfect matchings, $(r+1)/2$
Hamilton cycles, one spanning $(s-1)$-regular simple graph, and $(r-1)$ spanning $(s-2)$-regular simple graphs.
\item If $k = rs \ge r(n/2+ 17)$ for some odd integer $s$, then
$G$ can be decomposed into $2r-2$ perfect matchings, $(r+1)/2$
Hamilton cycles, one spanning $(s-2)$-regular simple graph, and $(r-1)$ spanning $(s-3)$-regular simple graphs.
\end{enumerate}
\end{LEM}
\noindent\textbf{Proof}.\quad For (b), the conclusion follows from Statement (a) by deleting $r$ edge-disjoint perfect matchings from $G$.
Thus we only need to prove Statement (a).
Our strategy is to double each edge of $G$ to get a $k^*$-regular graph $G^*$
with $k^* = 2r s \ge 2r(n/2 + 16)$ and then ``split off''
one Hamilton cycle $C$ of $G^*$ and one $(s-1)$-regular
subgraph $H_0$ of $G^*$ such that these two graphs are edge-disjoint and
they both are subgraphs of $G$ and $\mu(G-E(C\cup H_0)) \le r-1$.
Since $G-E(C\cup H_0)$ is $((r-1)s-1)$-regular, we can then take off
$r-2$ edge-disjoint perfect matchings from $G-E(C\cup H_0)$ and
apply Lemma~\ref{lem:decomposition} to the remaining graph.
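The degree count behind this outline (a quick check): $C\cup H_0$ is $(s+1)$-regular, so
\[
rs-(s+1)=(r-1)s-1,\qquad \big((r-1)s-1\big)-(r-2)=(r-1)(s-1),
\]
and $(r-1)(s-1)\ge (r-1)(n/2+15)\ge (r-1)(n/2+1)$, so Lemma~\ref{lem:decomposition} indeed applies to the remaining graph, which has maximum multiplicity at most $r-1$.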
We double all the edges of $G$ to get $G^*$. Then $G^*$ is $ 2r s$-regular with maximum multiplicity $2r$.
We claim that $G^*$ has an orientation $D$
such that
\begin{enumerate}[(1)]
\item $d_D^+(v)=d_D^-(v)$ for any $v\in V(D)$;
\item for any $u,v\in V(D)$, there are at most $r$ arcs of $D$ from $u$ to $v$;
\end{enumerate}
To get such an orientation of $G^*$, for each edge $e$ of $G$ that is not a singleton edge,
we pair up $e$ with its duplication $e'$ and orient them in opposite directions.
We let $R$ be the subgraph of $G$ induced by all its singleton edges.
Suppose $R$ has in total $2\ell$ vertices of odd degree for some integer $\ell \ge 0$. By Lemma~\ref{cor:cycle-path-decomp},
$R$ can be decomposed into edge-disjoint cycles, say $C_1, \ldots, C_m$, together
with $\ell$ edge-disjoint
paths $P_1, \ldots, P_\ell$, where the set of the endvertices of those paths is the same as the set of odd degree vertices of $R$
and the union of these paths is a forest.
We orient each $C_i$ to get a directed cycle. Now for each singleton edge $e\in E(G)$
such that $e$ is contained in some $C_i$ for $i\in [1,m]$, we orient $e$ and its duplication $e'$ in $G^*$
the same direction as it is on $C_i$.
For each singleton edge $e\in E(G)$ such that $e$ is contained in some $P_i$
for $i\in [1,\ell]$, we direct $e$ and its duplication $e'$ in $G^*$
in opposite directions. Now we have oriented all the edges of $G^*$. Call the resulting directed graph $D$.
By the construction, such an orientation satisfies the two properties above.
We then construct a bipartite graph $H$ associated with $D$.
Let $V(H)=\{v^+, v^-: v\in V(D)\}$. An arc $uv$ in $D$ is corresponding to an edge joining $u^+$ and $v^-$ in $H$.
By this construction, for any vertex $v\in V(D)$, we have
\begin{enumerate}[(1)]
\item $d_H(v^+)=d_D^+(v)=d_H(v^-)=d_D^-(v)=rs$ for any $v\in V(D)$;
\item $\mu(H) \le r$;
\end{enumerate}
Let $f(v)=s$ and $g(uv)=1$
for all $u, v\in V(H)$. As $H$ is $r s$-regular with $\mu(H) =r$,
we find an $fg$-coloring of $H$ using $r$ colors by Lemma~\ref{lem:fg-b}.
In each of the color classes, identifying $v^+$ and $v^-$ for each $v\in V(G^*)$
gives a decomposition of $G^*$ into $r$ spanning subgraphs $H_1, \ldots, H_r$,
where each $H_i$ is $2s$-regular with maximum multiplicity 2 (since the maximum multiplicity of $G^*$ is $2r$ and each color class of $H$ is a simple graph).
Let $T^*=P_1\cup \ldots \cup P_\ell$. As $T^*$
contains at most $n-1$ edges, at least one of those subgraphs $H_1, \ldots, H_r$ has fewer than $n/3$ edges of $T^*$.
Relabeling the $H_i$, if necessary, we can therefore assume $H_1$ contains fewer than $n/3$ edges of $T^*$.
Thus for every $v\in V(H_1)$,
$v$ is incident in $H_1$ with at most $n/3$ edges from $E(H_1)\cap E(T^*)$.
Ideally we would now like to partition the graph $H_{1}$ into two spanning simple graphs, each with maximum degree $s$. However, that may not be possible; for example, this fails if every edge of $H_{1}$ has multiplicity two except for 3 singleton edges that form a triangle on vertices of degree $2s$. However, we claim that we can find a decomposition of $ H_{1}$ into a Hamilton cycle (and thus two perfect matchings) and two spanning simple graphs, each with maximum degree $s-1$. For every singleton edge $e\in E(G)$, if both $e$ and its duplication $e'$
are contained in $H_{1} $, we delete both $e$ and $e'$ from $H_{1}$.
Denote the resulting graph by $H'_1$.
\begin{CLA}\label{claim:multiple-edges}
For each $v\in V(H_1')$, $v$ is incident in $H_1'$ to either at least $n/3+2$ singleton edges or at least $16$
multiple edges.
\end{CLA}
\noindent\textbf{Proof}.\quad Let $v\in V(H_1')$ be any vertex, and suppose $v$ is incident in $H_1'$ to at most $15$
multiple edges. Then, as $\delta(H_1) \ge n+32$,
we know that
$v$ is incident in $H'_1$ to at least $n+32-2(n/3+15)=n/3+2$ singleton edges
of $H_1'$.
\qed
\begin{CLA}\label{claim:H1-prime-con}
The graph $H_1'$ is connected.
\end{CLA}
\noindent\textbf{Proof}.\quad In fact, we show that $H_1'$ has a Hamilton cycle.
Let $J$ and $J^*$ be the underlying simple graphs of $H'_1$ and $H_1$, respectively.
Assume $d_J(v_1) \le \ldots \le d_J(v_{n})$, where $\{v_1,\ldots, v_{n}\}=V(H'_1)$. Suppose to the contrary that $J$ does not have a Hamilton cycle.
Then by Theorem~\ref{lem:chvatal's-theorem}, there exists $i\in[1,n/2-1]$ such that $d_J(v_i) \le i$ and $d_J(v_{n-i}) \le n-i-1$.
By Claim~\ref{claim:multiple-edges}, we know that in $H'_1$, each vertex is adjacent to at least $16$ distinct vertices.
Thus $\delta(J) \ge 16$ and so $i \ge 17$. Recall that $T^*$
is a forest of $G$. Thus
\begin{eqnarray*}
d_{J}(v_i) &\ge& \frac{1}{i} \sum\limits_{j=1}^id_J(v_j) \ge
\frac{1}{i}\left (\sum\limits_{j=1}^i d_{J^*}(v_j) -(2n-2) \right) \\
&\ge & (n/2+15)-\frac{1}{i}(2n-2).
\end{eqnarray*}
This implies $d_{J}(v_i) \ge \frac{3}{8}n$ as $i\ge 17$. Thus $i>\frac{3}{8}n$. Now by the same argument above we get
$$
d_{J}(v_i)>(n/2+16)-\frac{1}{i}(2n-2)>n/2,
$$
a contradiction.
\qed
Since $H_1$ is even and we only deleted some doubleton edges of $H_1$ to get $H_1'$,
we know that $H_1'$ is even. Thus the subgraph, call it $R_1'$, of $H_1'$ induced on its singleton edges
is even.
By applying Lemma~\ref{lem:spanning forest} to each component of $R_1'$, we can find a
forest $T$ of $H'_1$ that is formed by its singleton edges
and has as many edges as possible, so that $d_T(v) \le 1+\frac{1}{2}d_{R_1'}(v)$ for each $v\in V(R_1')$.
Thus $d_{R_1'-E(T)}(v) \ge \frac{1}{2}d_{R_1'}(v)-1 \ge n/6\ge 16$ whenever $d_{R_1'}(v) \ge n/3+2$, using $n\ge 96$.
Let $H_1^*=H'_1-E(T)$.
\begin{CLA}\label{claim:h-cycle}
The graph $ H_1^*$ has a Hamilton cycle.
\end{CLA}
\noindent\textbf{Proof}.\quad Let $J$ and $J^*$ be the underlying simple graphs of $H^*_1$ and $H_1$, respectively.
Assume $d_J(v_1) \le \ldots \le d_J(v_{n})$, where $\{v_1,\ldots, v_{n}\}=V(H^*_1)$. Suppose to the contrary that $J$ does not have a Hamilton cycle.
Then by Theorem~\ref{lem:chvatal's-theorem}, there exists $i\in[1,n/2-1]$ such that $d_J(v_i) \le i$ and $d_J(v_{n-i}) \le n-i-1$.
Note that $\delta(J) \ge 16$ by Claim~\ref{claim:multiple-edges} and the fact that $d_{R_1'-E(T)}(v) \ge n/6\ge 16 $ if $d_{R_1'}(v) \ge n/3+2$.
Thus $i \ge 17$. Recall that the union $T^*$ of $P_1, \ldots, P_\ell$
is a forest of $G$. Then
\begin{eqnarray*}
d_{J}(v_i) &\ge& \frac{1}{i} \sum\limits_{j=1}^id_J(v_j) \ge
\frac{1}{i}\left (\sum\limits_{j=1}^i d_{J^*}(v_j) -(4n-4) \right) \\
&\ge & (n/2+16)-\frac{1}{i}(4n-4).
\end{eqnarray*}
This implies $i>n/4$. Now by the same argument above we get
$$
d_{J}(v_i)>(n/2+16)-\frac{1}{i}(4n-4)>n/2,
$$
a contradiction.
\qed
Thus $H_1^*$ has a Hamilton cycle, call it $C$. Now consider $H_{1} - E(C)$. Partition its edges to form two simple graphs $H_{11}$ and $H_{12}$ as follows. First place one of each of the doubleton edges of $H_{1} - E(C) $ into these two graphs.
Let $R$ be the subgraph of $H_1-E(C)$ formed by all its singleton edges. Note that by our choice of $C$, the graph $R$ is connected.
This is because (1)
$E(T) \subseteq E(R)$; and (2) the edges of $H_1$ joining components of $T$ (if $T$ has more than one component) are all doubleton edges,
so in this case the cycle $C$ contains edges of $H_1$ joining the components of $T$, and deleting the edges of $C$ leaves singleton edges in $ H_{1} - E(T)$ that still join the components of $T$.
Then note that all the vertices of $R$ are of even degree. This is because $H_1$ is $2s$-regular, $C$
is 2-regular, and we only removed doubleton edges from $H_1-E(C)$ to get $R$.
As $R$ is connected with all vertices of even degree, $R$ has an Euler tour by Theorem~\ref{Euler}.
Since $n$ is even and $2s$ is even, we know that $e(H_1)$ is even and so
$R$ has an even number of edges.
Now placing the edges of $R$ from one of its Euler tours alternately into $H_{11}$ and $H_{12}$ gives the desired decomposition, with each graph being $(s-1)$-regular.
We now let $G_1=G-E(C)-E(H_{11})$. As $\mu(G^*)=2r$, we know that each $H_i$ contains two edges between any two vertices $u,v\in V(G^*)$
with $e_{G^*}(u,v)=2r$.
As $H_{12}$ is a simple graph, we know that $C\cup H_{11}$ contains at least one edge between any two vertices $u,v\in V(G^*)$
with $e_{G^*}(u,v)=2r$. Thus $C\cup H_{11}$ contains at least one edge between any two vertices $u,v\in V(G)$
with $e_{G}(u,v)=r$.
Thus $\mu(G_1) \le r-1$. As $C\cup H_{11}$ is spanning and $(s+1)$-regular, we know
that $G_1$ is $((r-1)s-1)$-regular. By Dirac's Theorem, the underlying simple graph of $G_1$ contains $r-2$ edge-disjoint perfect matchings.
Delete those $r-2$ perfect matchings from $G_1$ to get $G_2$. Then $G_2$
is $(r-1)(s-1)$-regular. Applying Lemma~\ref{lem:decomposition}, $G_2$
can be decomposed into $(r-1)/2$ edge-disjoint Hamilton cycles and $(r-1)$ spanning $(s-2)$-regular simple graphs.
Thus
$G$ can be decomposed into $r-2$ perfect matchings, $(r+1)/2$
Hamilton cycles, one spanning $(s-1)$-regular simple graph, and $(r-1)$ spanning $(s-2)$-regular simple graphs.
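As a consistency check, the degrees of the pieces add up correctly:
\[
(r-2)\cdot 1+\frac{r+1}{2}\cdot 2+(s-1)+(r-1)(s-2)=rs=k.
\]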
\qed
\proof[Proof of Theorem~\ref{thm:1-factorization}]
We choose $n_0$ to be the same as that stated in Theorem~\ref{thm:1-factorization-proof}. Let $G$ be a $k$-regular graph
with maximum multiplicity $r$ and $k\ge r(n/2+18)$. By applying Theorem~\ref{thm:Dirac}
to the underlying simple graph of $G$ and removing at most $r-1$ edge-disjoint perfect matchings from $G$ if necessary,
we may assume that $k=rs$ for some integer $s\ge n/2+17$.
If $r$ is even, applying Lemma~\ref{lem:decomposition}, we decompose $G$
into $r/2$ edge-disjoint Hamilton cycles and $r$ spanning $(s-1)$-regular simple graphs.
Each of the Hamilton cycles can be decomposed into $2$ perfect matchings, and each of the $r$
simple graphs has a 1-factorization by Theorem~\ref{thm:1-factorization-proof}.
Thus $G$ has a 1-factorization into $r+r(s-1)=rs$ 1-factors.
Now suppose that $r$ is odd. If $s$ is even, by Lemma~\ref{lem:decompose2}(a),
$G$ can be decomposed into $r-2$ perfect matchings, $(r+1)/2$
Hamilton cycles, one spanning $(s-1)$-regular simple graph, and $(r-1)$ spanning $(s-2)$-regular simple graphs.
If $s$ is odd, by Lemma~\ref{lem:decompose2}(b),
$G$ can be decomposed into $2r-2$ perfect matchings, $(r+1)/2$
Hamilton cycles, one spanning $(s-2)$-regular simple graph, and $(r-1)$ spanning $(s-3)$-regular simple graphs.
Again, as each of the Hamilton cycles can be decomposed into $2$ perfect matchings and each of the
simple graphs has a 1-factorization by Theorem~\ref{thm:1-factorization-proof}, we know that
$G$ has a 1-factorization.
\qed
\proof[Proof of Theorem~\ref{thm:overfull-present}]
We choose $n_0$ to be the same as that stated in Theorem~\ref{thm:1-factorization-proof}.
We assume that $G$ is not regular, else the result follows from Theorem~\ref{thm:1-factorization}. By Lemma~\ref{lemma:overfull-subgraph}, $G$ contains a vertex $v$ such that $\omega(G- v) = \omega(G)$. Let
$$
\omega(G -v) =\delta(G)+k+t,
$$
where $k\ge 0$ is an integer and $0 \le t<1$ is rational of the form $\frac{b}{n-2}$, where $0\le b<n-2$ is even.
Let $w$ be a vertex with minimum degree among all vertices other than $v$, which can be of degree $\Delta(G)$
if $v$ is the only vertex of degree less than $\Delta(G)$ in $G$.
By Dirac's Theorem, $G - \{v, w\}$ has a perfect matching; let $M$ be a subset of such a matching with exactly $t\cdot (n-2)/2 $ edges, and consider $G_0=G-M$.
The graph $G_0$ has $G_0- v$ as a $\Delta(G_0)$-full or $\Delta(G_0)$-overfull subgraph, and the number of edges of $G_0-v$ is now a multiple of $(n-2)/2$, specifically $(\delta(G)+k)(n-2)/2$. Let $w_0$ be a vertex of minimum degree in $G_0$ other than $v$. As before, $G_0-\{v,w_0\}$ has a perfect matching $M_0$; we remove that matching from $G_0$. As we continue this process, adjusting the choice of $w_0$ each time as necessary, by Lemmas~\ref{lemma:overfull-subgraph2} and~\ref{lemma:overfull-subgraph}, we have at most two vertices of minimum degree at each step, one of which must be the vertex $v$, whose degree does not change throughout the procedure.
After the removal of $k$ such perfect matchings the graph $G^*$ obtained has minimum degree $\delta(G)$, and $G^*-v$ has exactly $\delta(G)(n-2)/2$ edges. It is straightforward then to see that $G^*$ must be $\delta(G)$-regular. This graph has a 1-factorization by Theorem~\ref{thm:1-factorization}, and combining this 1-factorization with the previously removed matchings gives a decomposition of $G$ into matchings. The result follows as $\chi'(G) \ge \omega(G)$.
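A quick count of the matchings used (assuming $t>0$; when $t=0$ the matching $M$ is empty and $\delta(G)+k$ colors suffice): the decomposition consists of the $\delta(G)$ 1-factors of $G^*$, the $k$ removed perfect matchings, and $M$, so
\[
\chi'(G)\le \delta(G)+k+1=\lceil \delta(G)+k+t\rceil=\lceil \omega(G)\rceil=\lceil \chi_f'(G)\rceil,
\]
where the last equality holds because $\omega(G)\ge \Delta(G)$ in the presence of a $\Delta(G)$-full or $\Delta(G)$-overfull subgraph.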
The consequence part of the statement is clear as when $G$ contains a $\Delta(G)$-full subgraph, we have $\lceil \omega(G) \rceil =\Delta(G)$.
\qed
%
\section{Proof of Theorem~\ref{thm:1}}
\subsection{Special cases}
\begin{LEM}\label{lema:small-deficiency}
There exists an $n_0\in \mathbb{N}$ such that the following holds. Let $n, r \in \mathbb{N}$
be such that $n\ge n_0$ is even and $r=\mu(G)$.
If $G$ has a minimum degree vertex $v^{*}$ such that $e(G- v^{*}) = \Delta(G)(n-2) /2-s$
for some $s\in [0,6r]$ and $\delta(G) \ge r(n/2+7)+s$, then $\chi'(G) = \Delta(G) $.
\end{LEM}
\noindent\textbf{Proof}.\quad If $ e(G- v^{*}) = \Delta(G) (n-2)/2$, then $G - v^{*}$ is $ \Delta(G)$-full in $G$ and the result follows by Theorem~\ref{thm:overfull-present}. So we assume that $ e(G - v^{*})= \Delta(G)(n-2) /2- s$ for $s\in[1,6r]$.
Since
\begin{eqnarray*}
\sum_{v\in V(G)}(\Delta(G)-d_G(v))&=&\sum_{v\in V(G-v^*)}(\Delta(G)-d_{G-v^*}(v))-d_G(v^*)+(\Delta(G)-d_{G}(v^*))\\
&=&(n-1)\Delta(G)-2e(G-v^*)-d_G(v^*)+(\Delta(G)-d_{G}(v^*))\\
&=&\Delta(G)+2s-d_G(v^*)+(\Delta(G)-d_{G}(v^*)) \\
&=& 2(\Delta(G)-d_{G}(v^*))+2s,
\end{eqnarray*}
and $\Delta(G)-d_{G}(v^*)=\max\{\Delta(G)-d_{G}(v): v\in V(G)\}$, we know that
there exist at least two other vertices $u,w$ in $G$ with degree less than $\Delta(G)$. By Dirac's Theorem applied to the simple graph underlying $G$, there is a Hamilton cycle, and therefore a perfect matching $M$, in $G - u - w$. We remove $M$ from $G$ and let $G_1=G-M$.
Note that $\delta(G_1)=\delta(G)-1=d_{G_1}(v^*)$.
Furthermore, $e(G_1-v^*)=\Delta(G)(n-2)/2 - s-(n-4)/2=(\Delta(G)-1)(n-2)/2-(s-1)=\Delta(G_1)(n-2)/2-(s-1) \le \Delta(G_1)(n-2)/2$.
Since $\delta(G_1)=\delta(G)-1\ge r(n/2+7)+s-1$,
we can repeat the process with $G_1$ in the place of $G$.
In general, we repeat this process $s$ times to yield a graph $G_s$ of maximum degree $\Delta(G) - s$ in which $ G_s - v^{*} $ is $\Delta(G_s)$-full with $\delta(G_s) \ge r(n/2+7)$. By Theorem~\ref{thm:overfull-present},
we have $\chi'(G_s)=\Delta(G)-s$. As only a matching was removed at each repetition of the process, we then know that $\chi'(G) = \Delta(G)$.
\qed
\subsection{Proof of Theorem~\ref{thm:1}}
\noindent\textbf{Proof}.\quad We choose $n_0$ to be at least the maximum of the $n_0$ stated in Theorem~\ref{thm:1-factorization-proof} and
the $n_0$ with respect to $0.9\varepsilon $ as stated in Theorem~\ref{thm:plantholt-shan}, and such that $1/n_0\ll \varepsilon $.
Let $G$ be a graph of even order $n \ge n_{0}$, maximum multiplicity $r$, minimum degree $\delta > r(1+\varepsilon )n/2$, maximum degree $\Delta $. Denote by $V_\delta$ the set of minimum degree vertices of $G$.
By Lemma~\ref{lema:small-deficiency}, we assume
\begin{equation}\label{eqn:e-of-g-v1}
e(G - v) < \Delta(n-2)/2 - 6r \quad \text{for any $v\in V(G)$.}
\end{equation}
Therefore, if two vertices with degree less than $\Delta$ are not adjacent in $G$ with $r$ edges between them, we may add an edge between them without creating an overfull subgraph, or increasing $\Delta$. We iterate this edge-addition procedure. If at some point we create a $\Delta$-full subgraph, the result follows by Theorem~\ref{thm:overfull-present}. Otherwise, we reach a point where we
may now assume that in $G$ all vertices with degree less than $\Delta$ are mutually adjacent with $r$ edges between them.
Thus by Lemma~\ref{lem:numberofD}, we know that $|V_\Delta| \ge n/2+1$.
We may assume that the maximum degree $\Delta$ is a multiple of $r$, say
$$\Delta = rk \quad \text{for some positive integer $k$, and $k$ is even when $r$ is odd}.$$
For otherwise, when $r$ is even, we can remove at most $r-1$ edge-disjoint perfect matchings to achieve
the property; and
when $r$ is odd,
we can remove at most $2r-1$ edge-disjoint perfect matchings to achieve that property.
The existence of perfect matchings is guaranteed as the underlying simple graph of $G$ is hamiltonian by Dirac's Theorem.
Note that deleting a perfect matching maintains the
inequality in~\eqref{eqn:e-of-g-v1} with respect to the
resulting graph. The minimum degree $\delta$
satisfies $\delta \ge r(1+ \varepsilon )n/2-2r \ge r(1+ \varepsilon ')n/2$, where $\varepsilon '=0.9\varepsilon $.
We first form a supergraph $ G^{*}$ that contains $G$ as a subgraph as follows:
\begin{enumerate}[---]
\item Add two vertices $z_1$ and $w_1$ to $G$.
\item For any vertex $u$ of $G$ whose degree is not a multiple of $r$, add just enough parallel $uz_1$ edges so the degree of $u$ is a multiple of $r$ in $G^{*}$.
\item Finally, if necessary, add enough parallel edges (at most $r-1$ edges) between $ z_1$ and $w_1$ so that the degree of $z_1$ in $G^{*}$ is also a multiple of $r$.
\end{enumerate}
Since $|V_\Delta| \ge n/2+1$, we have
$$d_{G^*}(z_1) \le (r-1)(n/2-1)+r-1 \le (r-1)n/2 <\delta.$$
We now separate the proof into two cases according to whether or not $r$ is even.
\smallskip
{\bf \noindent Case 1: $r$ is even}.
\smallskip
We find an $fg$-coloring of $G^{*}$ by letting
\begin{numcases}{f(v)=}
\frac{d_{G^*}(v)}{r/2} & \text{if $v\in V(G^*)\setminus\{w_1\}$;} \nonumber \\
2 & \text{if $v=w_1$;} \nonumber
\end{numcases}
and
$$
g(uv)=2 \quad \text{for any pair $u,v\in V(G^*)$}.
$$
By Lemma~\ref{lem:fg}, since each $f, g $ value is even, we can obtain such a coloring using $\Delta/(\Delta / \frac {r} {2}) = r/2 $ colors. This gives us a partition of $G^{*}$ into $ r/2 $ subgraphs $H_{1}^{*}, H_{2}^{*}, ..., H_{r /2}^{*}$, each with maximum multiplicity at most 2, and maximum degree $2k$. Let $H_i=H_i^*-\{z_1,w_1\}$ for each $i\in [1,r/2]$.
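A quick check of the color count in Lemma~\ref{lem:fg}: for every $v\neq w_1$, $d_{G^*}(v)$ is a multiple of $r$ by construction, so $f(v)=2d_{G^*}(v)/r$ is an even integer, and since $\mu(G^*)\le r$ with $r$ even,
\[
\max\Big\{\max_{v\in V(G^*)}\Big\lceil \frac{d_{G^*}(v)}{f(v)}\Big\rceil,\ \max_{u,v\in V(G^*)}\Big\lceil \frac{e_{G^*}(u,v)}{2}\Big\rceil\Big\}
=\max\Big\{\frac{r}{2},\ \Big\lceil\frac{r}{2}\Big\rceil\Big\}=\frac{r}{2}.
\]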
We first claim that none of these graphs $H_i$ has a $2k$-overfull subgraph. To see this, note that $e(G) = e(G - v^{*}) + \delta \leq \Delta(n-2)/2 - 6r + \delta$ for any $v^*\in V_\delta$ by~\eqref{eqn:e-of-g-v1}.
Because in our decomposition each vertex $v$ of $H_{i}^{*} $ other than $w_1$ has degree $f(v)$, and $w_1$ has degree at most 2, it follows that for any $i,j$, the edge cardinalities of $H_{i}$ and $ H_{j} $ differ by at most 1. Thus each $H_{i}$ has at most $ \left \lceil ( e(G - v^{*} ) + \delta)/ (r /2) \right \rceil$ edges. In addition, since in the coloring $w_1$ has degree at most 2 in each color class, the degree of any vertex $v$ can differ by at most 2 in any two subgraphs $H_{i}, H_{j}$. It follows then that for any $H_{i}$, the number of edges in any vertex-deleted subgraph is at most
\begin{eqnarray*}
&&\left \lceil ( e(G - v^{*}) + \delta)/ (r /2) \right \rceil - (\delta/(r /2)) +2 \\
&\leq& (kr (n-2)/2- 6r )/ (r /2) + 2\delta/r +1 - 2\delta/r + 2 \leq 2k(n-2)/2 - 9.
\end{eqnarray*}
Thus $H_i$ contains no $2k$-overfull subgraph by Lemma~\ref{lemma:overfull-subgraph}. Each $H_i$ satisfies $\delta(H_i) \ge (1+\varepsilon ' ) n$.
Just as in the proof of Lemma~\ref{lem:decompose2}, we find a decomposition of $ H_{i}$ into a Hamilton cycle (and thus two perfect matchings) and two simple graphs, each with maximum degree $k-1$.
To get this partition, first let $T_{i}$ be a spanning forest of the subgraph of $H_{i}$ that is formed by its singleton edges
and has as many edges as possible. We show below that $ H_{i} - E(T_{i})$ has a Hamilton cycle.
\begin{CLA}\label{claim:h-cycle2}
Each graph $ H_{i} - E(T_{i})$ has a Hamilton cycle.
\end{CLA}
\noindent\textbf{Proof}.\quad Let $J^*$ and $J$ be the underlying simple graphs of $H_i$ and $H_i-E(T_i)$, respectively.
Assume $d_J(v_1) \le \ldots \le d_J(v_{n})$, where $\{v_1,\ldots, v_{n}\}=V(H_i)$. Suppose to the contrary that $J$ does not have a Hamilton cycle.
Then by Theorem~\ref{lem:chvatal's-theorem}, there exists $m\in[1,n/2-1]$ such that $d_J(v_m) \le m$ and $d_J(v_{n-m}) \le n-m-1$.
Since $\delta(H_i) \ge (1+\varepsilon ')n$, we know that in $H_i$, each vertex is incident with at least $\varepsilon ' n$ multiple edges.
Thus $\delta(J) \ge \varepsilon ' n$ and so $m >\varepsilon ' n$. Then as $\sum\limits_{j=1}^{n} d_{T_i}(v_j) \le 2n-2$, we get
\begin{eqnarray*}
d_{J}(v_m) &\ge& \frac{1}{m} \sum\limits_{j=1}^m d_J(v_j) \ge
\frac{1}{m}\left (\sum\limits_{j=1}^m d_{J^*}(v_j) -(2n-2) \right) \\
&\ge & ( 1+ \varepsilon ' )n/2-\frac{1}{m}(2n-2) >n/2,
\end{eqnarray*}
showing a contradiction to $d_J(v_m) \le m <n/2$.
\qed
Thus $ H_{i} - E(T_{i})$ has a Hamilton cycle, call it $C_i$. Now consider $H_{i} - E(C_i)$. Partition its edges to form two simple graphs $H_{i1}$ and $H_{i2}$ as follows. First place one of each of the doubleton edges of $H_{i} - E(C_i) $ into these two graphs.
Let $R_i$ be the subgraph of $H_i-E(C_i)$ formed by all its singleton edges. Note that by our choice of $C_i$, the graph $R_i$ is connected.
This is because (1)
$E(T_i) \subseteq E(R_i)$; and (2) the edges of $H_i$ joining components of $T_i$ (if $T_i$ has more than one component) are all doubleton edges,
thus in this case the cycle $C_i$ contains edges of $H_i$ that join components of $T_i$, and so deleting the edges of $C_i$ leaves singleton edges in $ H_{i} - E(T_{i})$ that still join the components of $T_{i}$.
We add a new vertex $y$ to $R_i$ and add an edge between $y$ and each odd degree vertex of $R_i$,
calling the new graph $R^{*}_i$. Then we find an Euler tour of $R^{*}_i$ that begins and ends at a vertex that is not of maximum degree in the graph $H_i$ ($H_i$ is not regular as any vertex-deleted subgraph of it has at most $2k(n-2)/2-9$ edges).
Placing the edges of $R_i$ from this tour alternately in $H_{i1}$ and $H_{i2}$ gives the desired decomposition, each graph having maximum degree $k-1$.
Finally we show that neither $H_{i1}$ nor $H_{i2}$ has a $(k-1)$-overfull subgraph. Recall from before that $H_{i}$ has at most $2k(n-2)/2 - 9$ edges in any vertex-deleted subgraph. Therefore $H_{i} - E(C_i)$ has at most $ (2k-2)(n-2) /2- 9$ edges in any vertex-deleted subgraph. By construction, the graphs $H_{i1}$ and $H_{i2}$ differ in size by at most 1, and in degree at any vertex by at most 2. Thus, arguing as before, the number of edges in any vertex-deleted subgraph of $H_{i1}$ or $H_{i2}$ is at most $(k-1)(n-2)/2-9 + 3 <(k-1)(n-2) /2$, so that $H_{i1}$ and $H_{i2}$ contain no $(k-1)$-overfull subgraph. It follows by Theorem~\ref{thm:plantholt-shan} that $ \chi'(H_{i1}) = \chi'(H_{i2}) = k-1$. Consequently $\chi'(G) = \Delta(G)$, and the result follows.
\smallskip
{\bf \noindent Case 2: $r$ is odd}.
\smallskip
We first form a supergraph $ G^{**}$ based on $G^*$:
\begin{enumerate}[---]
\item Add two vertices $z_2$ and $w_2$ to $G^*$.
\item For any vertex $u$ of $G^*$ with $u\ne w_1$ whose degree is not an even multiple of $r$ (so the degree of $u$ in $G^*$ is at most $\Delta-r$), add exactly $r$ parallel $uz_2$ edges so the degree of $u$ is an even multiple of $r$ in $G^{**}$.
\item Finally, if necessary, add $r$ parallel edges between $ z_2$ and $w_2$ so that the degree of $z_2$ in $G^{**}$ is also an even multiple of $r$.
\end{enumerate}
Since $|V_\Delta| \ge n/2+1$, we have
$$d_{G^{**}}(z_2) \le r(n/2-1)+r = r n/2 <\delta,$$
$d_{G^{**}}(w_1) \le r-1$ and $d_{G^{**}}(w_2) \le r$.
We double all the edges of $G^{**}$ to get $G'$. Then $G'$ has maximum multiplicity $2r$. By the same argument
as in the proof of Lemma~\ref{lem:decompose2},
we know that $G'$ has an orientation $D$
such that
\begin{enumerate}[(1)]
\item $d_D^+(v)=d_D^-(v)$ for any $v\in V(D)$;
\item for any $u,v\in V(D)$, there are at most $r$ arcs of $D$ from $u$ to $v$;
\item Let $E=\{e\in A(D): \text{ $e$ is singleton in $G$ and $e$ and its duplication $e'$ form a 2-cycle in $D$}\}$, where $A(D)$ is the set of arcs of $D$. Then $D[E]$
is a forest.
\end{enumerate}
We then construct a bipartite graph $H$ associated with $D$.
Let $V(H)=\{v^+, v^-: v\in V(D)\}$. An arc $uv$ in $D$ corresponds to an edge joining $u^+$ and $v^-$ in $H$.
By this construction, for any vertex $v\in V(D)$, we have
\begin{enumerate}[(1)]
\item $d_H(v^+)=d_D^+(v)=d_H(v^-)=d_D^-(v)$ for any $v\in V(D)$;
\item $\mu(H) \le r$;
\item The edges corresponding to $E$ form a forest $T^*$ in $H$.
\end{enumerate}
We find an $fg$-coloring of $H$ by letting
\begin{numcases}{f(v)=}
\frac{d_{H}(v)}{r} & \text{if $v\in V(H)\setminus\{w_1^+, w_1^-, w_2^+, w_2^-\}$;} \nonumber \\
1 & \text{if $v\in \{w_1^+, w_1^-, w_2^+, w_2^-\}$;} \nonumber
\end{numcases}
and
$$
g(uv)=1 \quad \text{for any pair $u,v\in V(H)$}.
$$
As $d_{H}(v)$ is a multiple of $r$ for any $v\in V(H)\setminus\{w_1^+, w_1^-, w_2^+, w_2^-\}$, $d_H(v) \le r$
for any $v\in \{w_1^+, w_1^-, w_2^+, w_2^-\}$, and $\mu(H)\le r$,
we can find an $fg$-coloring of $H$ using $r$ colors by Lemma~\ref{lem:fg-b}.
In each of the color classes, identifying $v^+$ and $v^-$ for each $v\in V(G^{**})$
gives a decomposition of $G^{**}$ into $r$ spanning subgraphs $H^*_1, \ldots, H^*_r$,
where each $H^*_i$ has maximum degree $2k$
and maximum multiplicity 2 (since the maximum multiplicity of $G^{**}$ is $2r$ and each color class of $H$ is a simple graph).
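Explicitly, since $D$ is balanced and $G'$ doubles the edges of $G^{**}$, we have $d_{H}(v^{+})=d_{H}(v^{-})=d_{G'}(v)/2=d_{G^{**}}(v)\le \Delta=rk$ for every $v\in V(G^{**})$, so after identifying $v^+$ and $v^-$, each $v\notin\{w_1,w_2\}$ satisfies
$$
d_{H_i^*}(v)\le f(v^+)+f(v^-)=\frac{d_{H}(v^{+})+d_{H}(v^{-})}{r}\le\frac{2rk}{r}=2k,
$$
while $w_1$ and $w_2$ have degree at most 2 in each class.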
Let $H_i=H_i^*-\{z_1,w_1,z_2,w_2\}$ for each $i\in [1,r]$.
We first claim that none of these graphs $H_i$ has a $2k$-overfull subgraph. To see this, note that $e(G) = e(G - v^{*}) + \delta \leq \Delta(n-2)/2 - 6r + \delta$ for any $v^*\in V_\delta$ by~\eqref{eqn:e-of-g-v1}.
Because in our decomposition each vertex $v$ of $H_{i}^{*} $ other than $w_1$ and $w_2$ has degree $f(v)$, and $w_1, w_2$ have degree at most 2, it follows that for any $i,j$, the edge cardinalities of $H_{i}$ and $ H_{j} $ differ by at most 2. Thus each $H_{i}$ has at most $ 2( e(G - v^{*} ) + \delta)/ r +2$ edges. In addition, since in the coloring $w_1,w_2$ have degree at most 2 in each color class, the degree of any vertex $v$ can differ by at most 4 in any two subgraphs $H_{i}, H_{j}$. It follows then that for any $H_{i}$, the number of edges in any vertex-deleted subgraph is at most
\begin{eqnarray}
&& 2( e(G - v^{*}) + \delta)/ r +2- (2\delta/r) +4 \nonumber \\
&\leq& 2(rk (n-2)/2 - 6r )/ r + 2\delta/r +2 - 2\delta/r + 4 \leq 2k(n-2)/2 - 6. \label{eqn:Hi-v}
\end{eqnarray}
Thus $H_i$ contains no $2k$-overfull subgraph by Lemma~\ref{lemma:overfull-subgraph}. Each $H_i$ satisfies $\delta(H_i) \ge (1+\varepsilon ' ) n$.
By the same argument as in the proof of Lemma~\ref{lem:decompose2},
for every $v\in V(H_1)$, we may assume that
$v$ is incident in $H_1$ with at most $n/3$ edges from $E(H_1)\cap E(T^*)$.
We claim that we can find a decomposition of $ H_{1}$ into a Hamilton cycle (and thus two perfect matchings) and two spanning simple graphs, each with maximum degree $k-1$. For every singleton edge $e\in E(G)$, if both $e$ and its duplication $e'$
are contained in $H_{1} $, we delete both $e$ and $e'$ from $H_{1}$.
Denote the resulting graph by $H'_1$.
\begin{CLA}\label{claim:multiple-edges2}
For each $v\in V(H_1')$, $v$ is incident in $H_1'$ to either at least $n/3+2$ singleton edges or at least $ \lfloor \varepsilon ' n \rfloor$
multiple edges.
\end{CLA}
\noindent\textbf{Proof}.\quad Let $v\in V(H_1')$ be any vertex, and suppose that $v$ is incident in $H_1'$ with at most $\varepsilon ' n-1$
multiple edges. Then, as $\delta(H_1) \ge n+2\varepsilon ' n$,
we know that
$v$ is incident in $H'_1$ with at least $n+2\varepsilon ' n-2(n/3+ \varepsilon ' n-1)=n/3+2$ singleton edges
of $H_1'$.
\qed
%
\begin{CLA}\label{claim:H1-prime-con2}
The graph $H_1'$ is connected.
\end{CLA}
\noindent\textbf{Proof}.\quad In fact, we show that $H_1'$ has a Hamilton cycle.
Let $J$ and $J^*$ be the underlying simple graphs of $H'_1$ and $H_1$, respectively.
Assume $d_J(v_1) \le \ldots \le d_J(v_{n})$, where $\{v_1,\ldots, v_{n}\}=V(H'_1)$. Suppose to the contrary that $J$ does not have a Hamilton cycle.
Then by Theorem~\ref{lem:chvatal's-theorem}, there exists $i\in [1,n/2-1]$ such that $d_J(v_i) \le i$ and $d_J(v_{n-i}) \le n-i-1$.
By Claim~\ref{claim:multiple-edges2}, we know that in $H'_1$, each vertex is adjacent to at least $ \lfloor \varepsilon ' n \rfloor$ distinct vertices.
Thus $\delta(J) \ge \lfloor \varepsilon ' n \rfloor$ and so $i \ge \lfloor \varepsilon ' n \rfloor+1$. Recall that $T^*$
is a forest of $G$. Thus
\begin{eqnarray*}
d_{J}(v_i) &\ge& \frac{1}{i} \sum\limits_{j=1}^id_J(v_j) \ge
\frac{1}{i}\left (\sum\limits_{j=1}^i d_{J^*}(v_j) -(2n-2) \right) \\
&\ge & (n/2+\varepsilon ' n)-\frac{1}{i}(2n-2)>n/2,
\end{eqnarray*}
a contradiction.
\qed
Let $R'$ be the subgraph of $H_1'$ induced on its singleton edges.
Since $H_1$ contains doubleton edges and singleton edges only,
the parities of the degrees of each vertex in $H_1$, $H_1'$, and in $R'$ are the same.
If $R'$ is even, applying Lemma~\ref{lem:spanning forest} to each component of $R'$, we can find a
forest $T$ of $H'_1$ that is formed by its singleton edges
and has as many edges as possible such that $d_{R'-E(T)}(v) \ge \varepsilon ' n$
if $d_{R'}(v) \ge n/3+2$.
If $R'$ is not even, then we add a new vertex $w$ to $R'$ and add an edge between $w$
and each odd degree vertex of $R'$. Denote the resulting graph by $R^{''}$.
Now applying Lemma~\ref{lem:spanning forest} to each component of $R^{''}$, we can find a
forest $T$ of $R^{''}$ with as many edges as possible such that $d_{R^{''}-E(T)}(v) \ge \varepsilon ' n$
if $d_{R^{''}}(v) \ge n/3+2$ for $v\in V(H_1')$. By adding edges of $R^{''}$ incident to $w$
and deleting other edges if necessary (this will maintain the property that $d_{R^{''}-E(T)}(v) \ge \varepsilon ' n$
if $d_{R^{''}}(v) \ge n/3+2$ for $v\in V(H_1')$), we may assume that all edges incident with $w$ in $R^{''}$
are contained in $T$.
Let $H_1^*=H'_1-E(T)$.
By a similar argument as in Case 1 but using $4n-4$ in the place of $2n-2$, we know that $H_1^*$ has a Hamilton cycle, call it $C$. Now consider $H_{1} - E(C)$. Partition its edges to form two simple graphs $H_{11}$ and $H_{12}$ as follows. First place one of each of the doubleton edges of $H_{1} - E(C) $ into these two graphs.
Let $R$ be the subgraph of $H_1-C$ formed by all its singleton edges together
with the edges of $R^{''}$ that are incident with $w$ if they exist.
Note that by our choice of $C$, the graph $R$ is connected.
This is because (1)
$E(T) \subseteq E(R)$; and (2) the edges of $H_1$ joining components of $T$ (if $T$ has more than one components) are all doubleton edges (recall that $T$ contains all edges incident with $w$ in $R^{''}$),
thus in this case the cycle $C$ contains edges of $H_1$ that join components of $T$, and so deleting the edges of $C$ leaves singleton edges in $ H_{1} - E(T)$ that still join the components of $T$.
We then claim that $R$ is even. Let $R^*=R'$ if $R'$ is even
and $R^*=R^{''}$ otherwise.
Note that at each vertex $v\in V(R)\cap V(H_1^*)$, there are three possibilities for the two edges of $C$ that
are incident with $v$: both are doubleton edges of $H_1^*$ ($d_R(v)=d_{R^*}(v)+2$ in this case),
one is a doubleton edge of $H_1^*$ and the other is a singleton edge of $H_1^*$ ($d_R(v)=d_{R^*}(v)$ in this case), or both are singleton edges of $H_1^*$ ($d_R(v)=d_{R^*}(v)-2$ in this case).
Thus $d_R(v)$ is even.
Since $C$ does not contain any edge of $R^*$ that is incident with $w$
and $d_{R^*}(w)$ is even, we know that $d_R(w)$ is also even.
Thus $R$ is even.
Then we find an Euler tour of $R$ that begins and ends at a vertex that is not of maximum degree in the graph $H_1$ ($H_1$ is not regular as any vertex-deleted subgraph of it has at most $2k(n-2)/2-6$ edges).
Placing the edges of $R$ from this tour alternately in $H_{11}$ and $H_{12}$ gives the desired decomposition, each graph having maximum degree $k-1$.
We show that neither $H_{11}$ nor $H_{12}$ has a $(k-1)$-overfull subgraph. Recall from before that $H_{1}$ has at most $2k(n-2)/2 - 6$ edges in any vertex-deleted subgraph. Therefore $H_{1} - E(C)$ has at most $ (2k-2)(n-2)/2 - 6$ edges in any vertex-deleted subgraph. By construction, the graphs $H_{11}$ and $H_{12}$ differ in size by at most 1, and in degree at any vertex by at most 2. Thus, arguing as before, the number of edges in any vertex-deleted subgraph of $H_{11}$ or $H_{12}$ is at most $(k-1)(n-2)/2-6 + 3 < (k-1)(n-2)/2 $, so that $H_{11}$ and $H_{12}$ contain no $(k-1)$-overfull subgraphs. It follows by Theorem~\ref{thm:plantholt-shan} that $ \chi'(H_{11}) = \chi'(H_{12}) = k-1$.
We now let $G_1=G-E(C)-E(H_{11})$. As $\mu(G^{**})=2r$, we know that each $H_i$ contains two edges between any two vertices $u,v\in V(G^{**})$
with $e_{G^{**}}(u,v)=2r$.
As $H_{12}$ is a simple graph, we know that $C\cup H_{11}$ contains at least one edge between any two vertices $u,v\in V(G^*)$
with $e_{G^*}(u,v)=2r$. Thus $C\cup H_{11}$ contains at least one edge between any two vertices $u,v\in V(G)$
with $e_{G}(u,v)=r$.
Thus $\mu(G_1) \le r-1$.
For each vertex $v$ of $G$, suppose $d_{G^{*}}(v)=r k_v$ for some positive integer $k_v$. If $d_{G^*}(v)=\Delta=rk$,
then $d_{G_1}(v) = (r-1)k-1$. Otherwise,
we have $d_{G^*}(v) \le \Delta-r$ as all the degrees of vertices from $V(G)$ in $G^*$ are multiples of $r$.
Then $d_{G_1}(v) \le \max\{k_v (r-1)+1, k_v(r-1)\} $, since the degree of $v$ in $H_{11}$ is at most $k_v+2$ and so
in $G_1$, the degree of $v$ is at most $rk_v -(k_v+1-2)=k_v (r-1)+1<(r-1)k-1$, as $r\ge 3$. Thus $\Delta(G_1)=(r-1)k-1$.
Finally, for any $v\in V(G_1)$, we show that $G_1-v$
contains no $\Delta(G_1)$-overfull subgraph.
Note that $e(G-v) =\frac{1}{2}\sum_{i=1}^re(H_i-v)$.
Recall from~\eqref{eqn:Hi-v} that $e(H_i-v) \le 2k(n-2)/2-6$. Also
$ | e(H_i-v)-e(H_j-v)| \le 6$ for any $i,j\in [1,r]$.
Thus $e(H_1-v) \ge \frac{1}{r} \sum_{i=1}^re(H_i-v)- 6$.
As $e(H_{11}-v) \ge \frac{1}{2}(e(H_1-v)-(n-2))-2$,
we know that $e(H_{11}-v) \ge \frac{1}{2}(\frac{1}{r} \sum_{i=1}^re(H_i-v)-6-(n-2))-2$. Thus
\begin{eqnarray*}
e(G_1 - v) &=& \frac{1}{2}\sum_{i=1}^re(H_i-v)-(n-2)-e(H_{11}-v) \\
& \le & (\frac{1}{2}-\frac{1}{2r})\sum_{i=1}^re(H_i-v)+3+\frac{n-2}{2}+2-(n-2) \\
&\le & (\frac{1}{2}-\frac{1}{2r}) r(2k(n-2)/2-6)-\frac{n-2}{2}+5 \\
&=&(k(r-1)-1)(n-2)/2-3r+8 <\Delta(G_1)(n-2)/2.
\end{eqnarray*}
Thus $G_1$ contains no $\Delta(G_1)$-overfull subgraph. By Case 1, we know that $\chi'(G_1)=\Delta(G_1)$.
Therefore $\chi'(G) \le \chi'(G_1)+\chi'(H_{11})+\chi'(C)=\Delta(G)$
and so $\chi'(G)=\Delta(G)$.
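Indeed, the color count is
$$
\chi'(G_1)+\chi'(H_{11})+\chi'(C)\le \left[(r-1)k-1\right]+(k-1)+2=rk=\Delta(G),
$$
where $\chi'(C)=2$ since the Hamilton cycle $C$ has even length.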
\qed
\section*{Acknowledgment}
Songling Shan was supported by NSF grant DMS-2153938.
\bibliographystyle{plain}
\section{Introduction}
The Covid-19 pandemic has necessitated people worldwide to adapt to new ways of doing things~\cite{beaunoyer2020covid}. With billions of people forced to conduct their daily activities online, including attending school, working from home, grocery shopping, banking, and other critical tasks~\cite{daniel2020education,aristovnik2020impacts,tazi2021parents,reddington2022development,monroe2021location,karmakar2021understanding}, the move to a fully digital world has been an inconvenience for some. Unfortunately, this drastic shift to online services has left many behind, particularly those who rely on usable, accessible, and inclusive services~\cite{scanlan2021reassessing,das2019security,das2020humans,das2019towards,das2018qualitative}. While the vulnerabilities of the disabled population have always existed, this sudden move to digital services has exacerbated existing problems~\cite{scanlan2021reassessing,bayor2018characterizing}, including privacy and security since vulnerable populations cannot use privacy and security tools and protocols successfully due to the disparities in usability and accessibility levels. Furthermore, these tools often fail to meet the specific requirements of the disabled population, even in fundamental areas such as authentication techniques~\cite{furnell2021disadvantaged,das2018johnny,das2019towards}.
Along with the usability and accessibility concerns, there are many data security and privacy concerns present, such as critical data access, smart home technology data usage, and inadequate authentication protocols. Additionally, the disabled population uses medical technology more than their non-disabled counterparts, but many of these tools and protocols are not accessible to users with different needs and abilities~\cite{liang2017understanding}. This makes accessing personal health records, and user accounts difficult for many users. Furthermore, the disabled population faces many difficulties online relating to authentication methods such as CAPTCHAs~\cite{helkala2012disabilities,das2020smart}. Most CAPTCHAs require a user to enter an alphanumeric code, which can be difficult or impossible for visually impaired users. This raises questions about if privacy and security tools are designed with different user populations in mind.
To provide a comprehensive understanding of the research undertaken in this area, we conducted a systematic literature review of $2,352$ research articles on the privacy and security of web services and the disabled populations. We screened these articles by title, abstract, and full text, selecting $63$ papers that focused on the privacy and security of web services as they relate to the disabled population. We then conducted a detailed thematic analysis of these papers, uncovering valuable solutions to address some privacy and security concerns of the disabled population. However, our analysis also revealed significant gaps in the research, highlighting the need for future work in this area. As far as we know, this is the first Systematization of Knowledge (SoK) paper to focus on the privacy and security challenges faced by the disabled community while accessing web services.
\section{Related Work}
\label{sec:related}
While still a relatively new and developing field, a growing collection of literature focuses on the privacy and security of people with disabilities using web services.
\subsection{Differing Tool Usage Perceptions: Web Services}
Both on and offline, the general population and disabled population have vastly different needs and abilities. As technology advances, many adults increasingly use online services such as banking, social media, email, and healthcare~\cite{gitlow2014technology,das2017celebrities,mitzner2010older,walsh2021my,marston2016technology,das2021does,dev2018privacy}. As a result of this increase in technology use, many of these users have privacy and security concerns related to web services and how their data is being used~\cite{awotunde2021privacy,mccole2010trust,markert2023transcontinental}. While these web services can benefit users greatly, researchers such as Mentis et al. have found that they also create various privacy and security risks for vulnerable populations. In addition, many adults who use these services have mild cognitive impairment and other disabilities that make it difficult to understand the implications of sharing personal information online, the importance of password management, and how to recognize scams~\cite{mentis2019upside,shrestha2022secureld,das2022sok,das2020non,das2020don,das2019don}. While these web services should make technology more accessible to all users, our SoK demonstrates that we need to perform an in-depth study to understand the needs of understudied populations.
\subsection{Privacy and Security Concerns}
When trying to understand more about how tool usage differs amongst these populations, the topic of authentication and CAPTCHA completion was at the forefront of six~\cite{helkala2012disabilities,shirali2009spoken,fuglerud2011secure,yan2008usability,ma2013investigating,kumar2020age} research papers. Authentication protocols are a hallmark of online privacy and security~\cite{das2019evaluating,jones2021literature,duezguen2020towards,majumdar2021sok}, necessary for all users to complete to gain access to their accounts or personal information. However, some authentication methods, such as CAPTCHAs, can be difficult or impossible for disabled users to complete since they rely heavily on visual outputs~\cite{patrick2022understanding,zhang2022building,mcleod2022challenges,jensen2021multi}. Therefore, Fuglerud et al. proposed a talking mobile one-time-password client that would provide users with both auditory and visual outputs~\cite{fuglerud2011secure}. This tool creates an environment where various types of users can complete authentication mechanisms without being overlooked based on their needs or abilities. However, our research reveals a scarcity of authentication tools and designs tailored to address the requirements of disabled populations.
\section{Methods}
\label{sec:methods}
Through this study, we aim to answer the following research questions (RQs):
\begin{itemize}
\item \textit{RQ1: What are the privacy and security concerns related to the disabled community when interacting with web services?}
\item \textit{RQ2: How can CAPTCHAs/authentication be improved to protect the privacy and security of people with disabilities for online communication?}
\item \textit{RQ3: How can universal design, design for privacy, and inclusive privacy and security be implemented in different web services?}
\end{itemize}
To answer these questions, our literature review included several steps: (i) database search, (ii) title screening, (iii) duplicate removal, (iv) abstract screening, (v) full-text screening, and (vi) thematic analysis. Papers were included if they met the following criteria: (1) published in a peer-reviewed publication; (2) published in English; (3) the technology discussed focuses on the privacy and/or security of web services; (4) the target population includes a significant portion of individuals with disabilities. The exclusion criteria include: (1) the technology discussed in the research work was not used primarily by people with disabilities; (2) the paper did not include a direct discussion of the privacy and security of users with disabilities for web services; (3) the paper was an abstract, poster, work-in-progress, or otherwise not a full paper; (4) the full text of the paper was not available even after searching through multiple databases or after contacting the authors. Our methodology was adapted from prior works by Stowell et al.~\cite{stowell2018designing}, Das et al.~\cite{das2019all}, Tazi et al.~\cite{tazi2022sok,tazi2022sok1}, Noah and Das~\cite{noah2021exploring}, and Shrestha et al.~\cite{shrestha2022sok,shrestha2022exploring}.
\subsection{Database Search and Title Screening}
We conducted our search by exploring five digital databases, namely: IEEE Xplore~\footnote{https://ieeexplore.ieee.org/Xplore}, SSRN~\footnote{https://www.ssrn.com}, Google Scholar~\footnote{https://scholar.google.com/}, Science Direct~\footnote{https://www.sciencedirect.com/}, and ACM Digital Library~\footnote{https://dl.acm.org/}. The data collection spanned from May to July 2021 and included any paper published before July 2021.
We collected $14$ papers from IEEE Xplore, $3$ papers from SSRN, $1000$ papers from Google Scholar, $991$ papers from Science Direct, and $344$ papers from ACM Digital Library. The keyword search for IEEE Xplore, SSRN, and Science Direct was "disability + privacy + security," and the "research articles" filter was applied. For ACM Digital Library, the keyword search used was "disability" AND "privacy," AND "security" with the "full text" filter applied. We used the Publish or Perish~\cite{harzing2010publish} software to review Google Scholar articles. The keyword search used in Publish or Perish was "privacy and security" + "disabled people." This search was limited to $1000$ results by the software. We reviewed a total of $2,352$ article titles from all five databases. A paper was at this point deemed pertinent if the title discussed web services for people with disabilities, including those with specific impairments like visual, hearing, or motor impairments. Additionally, the title was required to describe a study investigating either the privacy and security concerns of using web services for the disabled population or the usage of web services in general with respect to privacy or security. A paper was also only considered if it met the inclusion criteria. After duplicate removal, our corpus consisted of $138$ articles.
\subsection{Abstract and Full Text Screening }
We manually reviewed the abstracts of all $138$ papers in the research collection for relevance to our RQs.
We removed $27$ papers during abstract screening, leaving $111$ papers for full-text screening. On these $111$ papers, we conducted a full-text screening where we reviewed the methods, findings, analysis, and discussions.
After the full-text screening, $63$ relevant papers remained for the detailed thematic analysis.
\subsection{Data Extraction and Thematic Analysis}
For all $63$ papers remaining in our corpus, we extracted quantitative and qualitative findings to assess the web services' privacy and security perspectives on the disabled population-focused research conducted by prior studies. The extracted data included population samples, user experience, study design characteristics, and type of technology used (web services for our research). The results, discussion, and conclusion data from each paper were analyzed and coded according to themes identified by the first and third authors. The inter-coder reliability score for the coding was $89.4\%$. In places where the two authors could not agree, the second author decided. A random sample of $12$ papers was taken where the abstracts, methods, results, and discussions were reviewed. This resulted in themes such as:
\begin{itemize}
\item Type of disability: visual impairments, Down Syndrome, cognitive disabilities
\item Type of participant: some studies include both disabled and non-disabled people, while other studies include only disabled people
\item Difficulty using authentication interfaces
\item CAPTCHA completion can be hard or impossible for those who are blind, have low vision, or have a learning disability (dyslexia, ADHD).
\end{itemize}
The remaining papers were then evaluated by going through each and generating a complete codebook. This process yielded a codebook consisting of $33$ overarching codes, which were grouped into seven overarching themes: \lq\lq~Authentication Interface Issues~\rq\rq, \lq\lq~Privacy Concerns as Reasons for Non-Use~\rq\rq, \lq\lq~Critical Data Access~\rq\rq, \lq\lq~Online Vulnerability~\rq\rq, \lq\lq~Solutions to Authentication/CAPTCHA Issues~\rq\rq, \lq\lq~Universal Design~\rq\rq, and \lq\lq~Usability of Security Tools and Protocols~\rq\rq.
\section{Findings and Discussions}
\label{sec:findings}
In this section, we report on our findings while addressing the RQs mentioned in the previous section.
\subsection{RQ1: Privacy and Security Concerns of Disabled People for Web Services}
Our first research question addresses the privacy and security concerns of people with disabilities when interacting with web services. We addressed this RQ by analyzing the papers within the four themes related to this research question, namely: \lq\lq~Authentication Interface Issues~\rq\rq, \lq\lq~Privacy Concerns as Reasons for Non-Use~\rq\rq, \lq\lq~Critical Data Access~\rq\rq, and \lq\lq~Online Vulnerability~\rq\rq. Table~\ref{tab:RQ1} provides a snapshot of the distribution of the papers that cater to RQ1. In the following subsections, we provide more details about these themes.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ |p{3cm}|p{4cm}| }
\hline
Themes & Number of Papers\\
\hline \hline
Authentication Interface Issues & $4$ (6.35\%)~\cite{helkala2012disabilities,yan2008usability,ma2013investigating,bayor2018characterizing} \\
Privacy Concerns as Reasons for Non-Use & $27$ (42.86\%) ~\cite{kordzadeh2016antecedents,ermakova2015antecedents,roberts2021evaluating,lafky2011personal,yao2012adoption,maqbool2021challenges,liang2017understanding,kaplan2019alzheimer,lafky2008prospective,ziefle2011medical,tantidham2019emergency,el2013trust,novitzky2015review,vasco2020internet,hadian2017efficient,pacheco2016secure,rahman2018blockchain,ramli2014privacy,wang2012intelligent,ifrim2017art,dang2019survey,hussain2021security,anisha2020automated,fosch2021cybersecurity,mohan2016security,kumar2020age,ahmed2020threats} \\
Critical Data Access & $7$ (11.11\%)~\cite{solanas2013m,beach2009disability,hamidi2020using,singh2007password,hamidi2018should,grunwel2016delegation,distler2020acceptable} \\
Online Vulnerability & $14$ (22.22\%)~\cite{onyeaka2020understanding,ahmed2015privacy,lazar2011understanding,macmillan2020autistic,jattamart2020perspectives,hersh2017mobile,scanlan2021reassessing,mentis2019upside,munozproviding,giannoumis2018accessibility,hersh2018mobility,king2019becoming,kim2019conceptualizing,chalghoumi2019information} \\
\hline
\end{tabular}
\caption{The Distribution of Papers Across Themes Answering the RQ1}
\label{tab:RQ1}
\end{center}
\end{table}
\subsubsection{Authentication Interface Issues}
Authentication is a basis of security standards and protocols for web services. While CAPTCHA completion and authentication steps are often easy for non-disabled users, the disabled population faces countless difficulties accessing their online services. While analyzing papers on security concerns for people with disabilities, we found that issues with authentication interfaces were a commonly discussed theme. We found underlying sub-themes, such as difficulty using authentication due to technical hindrances and how each disability can affect a user's capability to complete authentication mechanisms. Four papers from the $63$ in our corpus~\cite{helkala2012disabilities,yan2008usability,ma2013investigating,bayor2018characterizing} relate to this category. One such paper discusses how the success of CAPTCHA completion depends on the disability; for most non-disabled users, CAPTCHA completion and other forms of authentication are an almost unnoticeable part of using web services.
However, users with any level of disability or impairment can find these same tasks to be difficult or impossible, as Helkala explains~\cite{helkala2012disabilities}. Through their work, Helkala explores how users with vastly different disabilities like Parkinson's disease, dyslexia, vision impairment, and upper extremity disabilities all experience different issues with CAPTCHA completion based on their abilities. In addition, this research raises important questions about how current authentication methods, such as static PIN codes, textual passwords, and one-time codes, can be altered better to fit different populations' needs and abilities.
Another equally important code within this theme is the difficulty of using authentication due to technical hindrances; the difficulties discussed were at both the conceptual and the adoption levels. This was detailed by Bayor et al. in their research analyzing interest in using social media amongst users with intellectual disabilities. Their findings suggest that a lack of accessible authentication methods for disabled users often hinders this interest. The authors also note that voice search, auto-login, and password retrieval protocols could serve as already-existing solutions for this user population~\cite{bayor2018characterizing}.
\subsubsection{Privacy Concerns as Reasons for Non-Use}
In reviewing research papers on the privacy and security concerns of the disabled population when using web services, we found that an overwhelming majority of users cited privacy concerns as reasons for non-use. Across services ranging from social media sites to healthcare technology, every user wants their account and data to be protected. Some of the most prevalent sub-themes related to non-use were found in connection with medical technology in smart homes and concerns about health information technology used frequently by people with disabilities. If users feel that their health information is not adequately protected, they often choose not to use the service at all. There are $27$ papers related to this theme, as detailed in Table~\ref{tab:RQ1}. One such paper analyzes the privacy and security concerns of disabled people regarding medical technology used in smart homes.
Ziefle et al. researched the attitudes of disabled users towards a video-based monitoring system in the smart home environments of elderly or disabled people. They found that users would only feel comfortable with this system in their homes if strict privacy protocols were followed, including anonymity in transferring medical data, password protection, discretion, and avoidance of stigmatization~\cite{ziefle2011medical}. Furthermore, many health information technologies are becoming popular amongst users, especially smartphone apps and websites that access medical data. Onyeaka et al. discuss how it may be difficult for some user populations, such as those with disabilities or mental health conditions, to use these smartphone apps and websites. The researchers found that many users with disabilities would withhold crucial medical information from their healthcare providers because of privacy and security concerns about how their data was being used by the healthcare apps and websites~\cite{onyeaka2020understanding}. Concerns exist that these privacy and security issues could lead to further stigmatization and non-use by the disabled population.
\subsubsection{Critical Data Access}
We classified papers within \lq\lq~Critical Data Access~\rq\rq~ if they discuss data sharing, specifically of medical data, and the privacy and security concerns of disabled people over their critical data. Through these papers, we determine that users have privacy and security concerns related to sharing personal health records with caretakers, healthcare providers, insurance companies, researchers, and governments. In particular, many people with disabilities feel there are privacy trade-offs in emergency situations when they do not have control over who has access to their personal medical data. Seven papers from our corpus were included in this theme~\cite{solanas2013m,beach2009disability,hamidi2020using,singh2007password,hamidi2018should,grunwel2016delegation,distler2020acceptable}. In one of these papers, Beach et al. discuss how technology aimed at enhancing independent living for people with disabilities is a growing field; however, there are still many privacy and security concerns to consider. This is particularly relevant because the researchers found that users with disabilities are significantly more accepting of the sharing and recording of personal medical information than non-disabled people~\cite{beach2009disability}. This raises concerns about how disabled people are more at risk of privacy and security failures than their non-disabled counterparts. On the other hand, Solanas et al. propose m-Carer, a smart mobile device that monitors patients' movements. The researchers hope to provide a way to track and find disabled users who become lost, disoriented, or need emergency medical attention~\cite{solanas2013m}. Although this new technology could help users in emergencies, it raises concerns about patient privacy invasions and how the tracking data is stored and transmitted.
\subsubsection{Online Vulnerability}
We classified papers that examine online vulnerabilities, particularly those that affect individuals with disabilities, under \lq\lq~Online Vulnerability~\rq\rq. More than $22\%$ of the papers in our corpus fall under this theme~\cite{onyeaka2020understanding,ahmed2015privacy,lazar2011understanding,macmillan2020autistic,jattamart2020perspectives,hersh2017mobile,scanlan2021reassessing,mentis2019upside,munozproviding,giannoumis2018accessibility,hersh2018mobility,king2019becoming,kim2019conceptualizing,chalghoumi2019information}, making it a prevalent one. Many disabled users are unaware of the ever-changing nature of online privacy and security issues and must rely on the assistance of a caregiver or family member to safeguard themselves. This raises concerns about the trade-offs between autonomy and privacy when disabled people use digital services. According to Chalghoumi et al., many disabled users are unaware of technology and web services' privacy and security concerns. The researchers found that the opinions of caregivers and family members of the disabled participants were significantly influential on the users' behavior toward online privacy~\cite{chalghoumi2019information}. This raises questions regarding how much of a disabled user's web services experience can be autonomous if caretakers substantially impact them.
\subsection{RQ2: Improving CAPTCHA/authentication}
The second RQ focuses on how CAPTCHAs/authentication can be improved to protect the privacy and security of people with disabilities when using web services. Some disabled users can find authentication completion impossible and are consequently unable to access their accounts. Three papers~\cite{shirali2009spoken,brown2010using,barbosa2016unipass} from our corpus focus on solutions for improving authentication and CAPTCHAs. Table~\ref{tab:RQ2} provides a snapshot of the distribution of these papers.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ |p{3cm}|p{4cm}| }
\hline
Theme & Number of Papers\\
\hline \hline
Solutions to authentication/CAPTCHA Issues & $3$ (4.76\%) ~\cite{shirali2009spoken,barbosa2016unipass,brown2010using} \\
\hline
\end{tabular}
\caption{The Distribution of Papers Across Themes Answering the RQ2}
\label{tab:RQ2}
\end{center}
\end{table}
Some papers relating to this theme provided solutions to authentication problems; one such solution is using passtones instead of passwords, as researched by Brown and Doswell. Rather than remembering alphanumeric sequences, Brown and Doswell propose a password alternative where users would remember a sequence of sounds~\cite{brown2010using}. The researchers explain how this tool has already been implemented using photos, but using auditory passwords would improve the experience of users with visual disabilities. While designed explicitly for visually impaired users, this solution could be widely implemented and used by people of all different needs and abilities. Similarly, accessible password managers are another solution to the issues with authentication that many users face. Barbosa et al. describe their implementation of UniPass, an accessible password manager for visually impaired users on a smart device. This tool includes features such as reading prompts and messages aloud, avoiding buttons and other graphical elements, and vibrating the device to signify the need for user input~\cite{barbosa2016unipass}. The researchers found that password managers are a promising solution for the difficulties visually impaired users face with authentication mechanisms. A different way to enhance the authentication experience of disabled users when interacting with web services is Spoken CAPTCHA. Shirali-Shahreza et al. discuss how most CAPTCHA methods currently only use visual patterns, making it impossible for blind users to complete them. The researchers propose a new CAPTCHA method, Spoken CAPTCHA, where users would hear a short sound clip asking them to say a word. The user then responds with a speech file, which can be checked to verify that it is not computer generated~\cite{shirali2009spoken}. This solution focuses on the visually impaired population and provides a way to improve authentication methods for all types of users.
\subsection{RQ3: Universal Design, Design for Privacy, and Inclusive Privacy and Security in Web Services}
The third RQ focuses on how universal design, design for privacy, and inclusive privacy and security can be implemented in different web services. These inclusive concepts provide design tools and protocols to make web services more accessible for various user populations, regardless of needs and abilities. We have gleaned two themes pertaining to this research question: \lq\lq~Universal Design~\rq\rq\ and \lq\lq~Usability of Security Tools and Protocols~\rq\rq. Table~\ref{tab:RQ3} provides a snapshot of the distribution of the papers that cater to RQ3.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ |p{3cm}|p{4cm}| }
\hline
Theme & Number of Papers\\
\hline \hline
Universal Design & $6$ (9.53\%) ~\cite{vales2008ucare,wang2018inclusive,wang2017third,o2017privacy,mcrae2020privacy,medley1998ethical} \\
Usability of Security Tools and Protocols & $2$ (3.17\%) ~\cite{han2018proximity,fuglerud2011secure} \\
\hline
\end{tabular}
\caption{The Distribution of Papers Across Themes Answering the RQ3}
\label{tab:RQ3}
\end{center}
\end{table}
\subsubsection{Universal Design}
The Universal Design concept describes how the design of all products and environments should be usable by all people without the need for adaptation or specialized design. Inclusive privacy and security and privacy by design are closely related to the overarching theme of universal design. Six papers~\cite{vales2008ucare,wang2018inclusive,wang2017third,o2017privacy,mcrae2020privacy,medley1998ethical} were included in this theme. These papers discuss the current privacy and security protocols that are most widely used and why they do not consider the needs and abilities of under-served populations such as children, older adults, people with disabilities, and people from non-Western populations. Wang et al. discuss how the implementation of inclusive privacy and security tools and protocols would prioritize the design of mechanisms that are inclusive of people with various characteristics, abilities, needs, and values~\cite{wang2018inclusive}. Similarly, we considered papers on privacy by design and how designers and technologists must put inclusive privacy and security tools/protocols at the forefront of their design. One of the most practical ways these designers can implement privacy by design is to increase digital citizen awareness surrounding consent for data processing and usage. O'Connor et al. discuss how users must have the information they need to make informed decisions about how their data is being used~\cite{o2017privacy}.
\subsubsection{Usability of Security Tools and Protocols}
The usability and accessibility of security tools and protocols are essential to the overarching theme of universal design. While the previous theme describes the theory of universal design, this theme explores implementations of the theory. The two papers related to this theme~\cite{fuglerud2011secure,han2018proximity} present inclusive password management and two-factor authentication solutions for various user populations.
Password protection is a hallmark of online security tools and protocols. However, complicated authentication procedures to access web services can be cumbersome, especially for people with disabilities or the elderly. According to Fuglerud et al., a secure and accessible multi-modal authentication method using a one-time password client could solve this problem. Users with impairments affecting their ability to complete authentication steps now have access to auditory and visual outputs from the password client~\cite{fuglerud2011secure}. This allows all users equal access to password management tools and protocols.
The second paper, by Han et al., describes how current 2FA solutions all require some form of user effort, which can negatively impact the experience of disabled or elderly users. Therefore, the researchers propose a new type of mobile 2FA, Proximity-Proof, that does not require user interactions and defends against the powerful man-in-the-middle attack~\cite{han2018proximity}. According to the authors, Proximity-Proof is as secure as other 2FA methods and provides innovative ways for 2FA techniques to become more usable and accessible for all users.
\section{Future Work and Limitation}
\label{sec:future}
In this paper, we conducted a systematic analysis to evaluate the research articles and peer-reviewed papers published in the field of security and privacy of web services for the disabled population. We collected papers from five digital databases and limited the papers to those available in English; as such, we may have missed papers not available in these databases. However, our extensive literature review provides a detailed overview of the current research on the security and privacy of web services for the disabled population. While this gives a broad understanding of the current research and methods used, there is limited in-depth research on individual user groups within the disabled population. For example, most of the papers relating to solutions for authentication issues offered solutions only for visually impaired users. Future analyses of the privacy and security concerns of the disabled population can provide valuable research into more specific subsections of the population, such as those with cognitive disabilities, mental illnesses, and different types of physical impairments.
\section{Conclusion}
\label{sec:conclusion}
For many disabled users, information technology and web services can be a way to enhance their autonomy and discover new interests or communities. However, disability can make the internet a challenging place, seeing as many disabled people have trouble writing, reading, and comprehending text information, making it hard for them to understand and use basic security and privacy measures such as passwords and CAPTCHAs. In that regard, we conducted a systematic literature review of $63$ papers focused on the privacy and security of web services for the disabled population. Our findings reveal valuable solutions to the privacy and security concerns of the disabled population, focused on universal design and inclusive privacy and security methods. Universal design, in particular, provides a way to create inclusive, accessible, and usable tools and protocols to protect the privacy and security of both the disabled and general populations online. These solutions would address issues such as authentication improvement, critical data access, online vulnerability, and the usability of tools and protocols. However, our findings also reveal gaps in the current research, such as a lack of implementation of these universal design methods and a need for solutions that focus on more subsections of the disabled population.
\section{Acknowledgement}
We would like to thank the Inclusive Security and Privacy focused Innovative Research in Information Technology (InSPIRIT) Laboratory at the University of Denver. This research has been funded by the Faculty Research Fund (FRF) at the University of Denver. Any opinions, findings, conclusions, or recommendations expressed in this material are solely those of the authors and not of the organization or the funding agency.
\bibliographystyle{IEEEtran}
\section{Introduction}
During the last decade, there have been considerable advancements in the use of magnons for storing, transmitting, and processing information.
This rapid progress has turned the emerging research field of magnonics into a promising candidate for innovating information processing technologies \cite{barman2021}.
The combination of magnonics with quantum information processing provides a highly interdisciplinary physical platform for studying various quantum phenomena in
spintronics, quantum electrodynamics, and quantum information science. Indeed, the quantum magnonics exhibits distinct quantum properties, which can be utilized for
multi-purpose quantum tasks \cite{awschalom2021, yli2020, lachance-quirion2019, clerk2020, yuan2022}.
Despite significant progress in quantum magnonics \cite{awschalom2021, yli2020, lachance-quirion2019, clerk2020, yuan2022, Lachance-Quirion2020, azimi-mousolou2020, azimi-mousolou2021, liu2022,li2018a, li2019, zhang2019, bossini2019, yuan2020a, yuan2020b, tabuchi2014, yuan2017, xiao2019, johansen2018}, there are still many features and challenges that need to be addressed in theory and in the laboratory. In particular, the experimental
verification of non-classical magnon states and quantum properties such as squeezed and entangled states would pave the way
for many possible research strategies.
The key point is interconnections between magnetic materials and electronic quantum systems. Superconducting qubits have been successfully used to detect magnons in ferromagnetic materials \cite{Lachance-Quirion2020}.
However, antiferromagnetic (AFM) materials are more sustainable for quantum applications as they offer lower magnetic susceptibility, faster dynamics, smaller device features and lower energy consumption compared to ferromagnetic materials \cite{barman2021}.
Recently, we have theoretically examined magnon-magnon entanglement and squeezing in AFMs \cite{azimi-mousolou2020, azimi-mousolou2021, liu2022}.
Here, we examine the possibility of combining the advantageous features of transmons and AFM materials. To this end, we demonstrate effective coupling between a superconducting transmon qubit and a bipartite AFM material. We show how the polarized (chiral) magnons and the bipartite magnon-magnon entanglement in the AFM can be detected through the measurement of the Rabi frequency of the transmon qubit. The proposed setup is suitable for the experimental study of the quantum properties of magnons in a wide range of crystalline and synthetic AFM materials, such as NiO and MnO, MnF$_2$ and FeF$_2$, two-dimensional Ising systems like MnPSe$_3$, YIG-based synthetic AFMs, and perovskite manganites \cite{Jie2018,Takashi2016,Haakon2019,Thuc2021,Sheng2021, Changting2021, Rini2007, Ulbrich2011, rezende2019}.
The outline of the paper is as follows: in Sec.~\ref{Magnon-Photon-Transmon-hybridization} we describe the magnon-photon-transmon hybridization and derive the interaction Hamiltonian. In Sec.~\ref{magnon-magnon-entanglement}, we discuss two-mode magnon entanglement in AFM materials. In Sec.~\ref{Sensing-magnons-quantum-characteristics-with-transmons}, we obtain an effective magnon-transmon coupling and show how this effective coupling mechanism makes it possible to experimentally study the quantum characteristics of magnons in antiferromagnetic materials. The paper ends with a conclusion in Sec.~\ref{conclusion}.
\section{Magnon-Photon-Transmon hybridization}
\label{Magnon-Photon-Transmon-hybridization}
\begin{figure}[h]
\begin{center}
\includegraphics[width=80mm]{MPQ.pdf}
\end{center}
\caption{(Color online) Schematic illustration of magnon-photon-transmon hybridization. A circularly polarized microwave cavity electromagnetic field, which is described by the vector potential $\mathbf{A}_{R; \mathbf{k}}(\mathbf{r}, t)$, can interact with magnons in an antiferromagnetic material and with a superconducting transmon qubit. The cavity walls are illustrated with yellow segments in the left panel. The antiferromagnetic material hosts two chiral magnons, which are shown with three-color balls in the cubic lattice inside the cavity. The two magnons are degenerate in the absence of a magnetic field, and a small external magnetic field $\mathbf{B}$ in the $z$ direction breaks this degeneracy (see also Fig. \ref{fig:energy}). While the coupling between the magnons and the cavity field is achieved through a magnetic-dipole interaction, an electric-dipole interaction describes the coupling between the cavity field and the transmon (right panel).}
\label{fig:model}
\end{figure}
In this section, we describe a photon-mediated coupling mechanism between a superconducting transmon qubit and polarized magnons in a bipartite AFM.
We assume a hybrid system composed of a single crystal or synthetic AFM, a transmon-type superconducting qubit, and a microwave cavity, as illustrated in Fig.~\ref{fig:model}. The system hosts four modes including two magnon modes in an AFM compound, a transmon qubit, and a microwave cavity electromagnetic mode. The dynamics of the hybridized magnon-photon-transmon system can be described by the Hamiltonian
\begin{eqnarray}
H=H_{\text{m}}+H_{\text{ph}}+H_{\text{m-ph}}+H_{\text{q}}+H_{\text{ph-q}},
\label{MH}
\end{eqnarray}
where the term $H_{\text{m}}$ describes the magnon subsystem,
$H_{\text{ph}}$ describes the microwave photon,
$H_{\text{m-ph}}$ describes the magnon-photon interaction,
$H_{\text{q}}$ describes the transmon and
$H_{\text{ph-q}}$ describes the photon-transmon interaction.
They are described in detail as follows:
{\it Two-mode magnon system}: $H_{\text{m}}$ represents a two-mode magnon Hamiltonian in a bipartite treatment of an AFM material.
Consider an AFM spin Hamiltonian $\sum_{i, j}\mathbf{S}_{i}\mathbb{I}_{ij}\mathbf{S}_{j}+\sum_{i}\mathbf{B}\cdot\mathbf{S}_{i}$, where $\mathbf{S}_{i}$ is the spin vector operator at lattice site $i$, $\mathbb{I}_{ij}$ is the bi-linear interaction tensor matrix between sites $i$ and $j$, and $\mathbf{B}$ is an external field. By applying the Holstein-Primakoff transformation at low temperature followed by the Fourier transformation to the AFM spin Hamiltonian, $H_{\text{m}}$ can be described in terms of a pair of interacting collective bosonic modes in the lattice momentum $\mathbf{k}$-space as \cite{azimi-mousolou2020, azimi-mousolou2021} (we assume $\hbar=1$ throughout the paper)
\begin{eqnarray}
H_{\text{m}}^{\mathbf{k}} &=&
\omega_{a_{\mathbf{k}}}a_{\mathbf{k}}^{\dagger} a_{\mathbf{k}} +
\omega_{b_{-\mathbf{k}}}b_{-\mathbf{k}}^{\dagger} b_{-\mathbf{k}}\nonumber\\
&& + g_{\text{m-m}}^{\mathbf{k}} a_{\mathbf{k}} b_{-\mathbf{k}} + \left(g_{\text{m-m}}^{\mathbf{k}}\right)^{*}a_{\mathbf{k}}^{\dagger}b_{-\mathbf{k}}^{\dagger}.
\label{MMH}
\end{eqnarray}
The $a_{\mathbf{k}}^{\dagger}$ ($a_{\mathbf{k}}$) and $b^{\dagger}_{-\mathbf{k}}$ ($b_{-\mathbf{k}}$) are bosonic creation (annihilation) operators on the two sublattices $A$ and $B$ with opposite magnetizations in the bipartite AFM.
Bosonic operators on opposite sublattices commute and define a pair of interacting magnons in the Kittel $(a, b)$ modes.
The Kittel modes can be hybridized into the diagonal magnon modes $(\alpha, \beta)$ through the SU(1,1) Bogoliubov transformation
\begin{eqnarray}
\left(
\begin{array}{cc}
a_{\mathbf{k}} \\
b_{-\mathbf{k}}^{\dagger}
\end{array}
\right)=\left(
\begin{array}{cc}
u_{\mathbf{k}}& v_{\mathbf{k}} \\
v_{\mathbf{k}}^{*}& u_{\mathbf{k}}^{*}
\end{array}
\right)\left(
\begin{array}{cc}
\alpha_{\mathbf{k}} \\
\beta_{-\mathbf{k}}^{\dagger}
\end{array}
\right),
\label{eq:FBT}
\end{eqnarray}
where $u_{\mathbf{k}} =\cosh(r_{\mathbf{k}})$ and $v_{\mathbf{k}} = \sinh(r_{\mathbf{k}})e^{i\phi_{\mathbf{k}}}$ with
\begin{eqnarray}
r_{\mathbf{k}}&=&\tanh^{-1}\left[\frac{1-\sqrt{1-|\Gamma_{\mathbf{k}}|^{2}}}{|\Gamma_{\mathbf{k}}|}\right]\ge 0,\nonumber\\
\phi_{\mathbf{k}}&=&\pi-\arg[\Gamma_{\mathbf{k}}],\ \ \ \
\Gamma_{\mathbf{k}}=\frac{2g_{\text{m-m}}^{\mathbf{k}}}{\omega_{a_{\mathbf{k}}}+\omega_{b_{-\mathbf{k}}}}.
\label{r-phi}
\end{eqnarray}
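Note that these Bogoliubov coefficients satisfy the SU(1,1) constraint
\begin{eqnarray}
|u_{\mathbf{k}}|^{2}-|v_{\mathbf{k}}|^{2}=\cosh^{2}(r_{\mathbf{k}})-\sinh^{2}(r_{\mathbf{k}})=1, \nonumber
\end{eqnarray}
which guarantees that the transformed operators obey the bosonic commutation relations $[\alpha_{\mathbf{k}}, \alpha_{\mathbf{k}}^{\dagger}]=[\beta_{-\mathbf{k}}, \beta_{-\mathbf{k}}^{\dagger}]=1$.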
In terms of the $(\alpha, \beta)$ modes, the magnon Hamiltonian $H_{\text{m}}^{\mathbf{k}}$ takes the diagonal form
\begin{eqnarray}
H_{\text{m}}^{\mathbf{k}} =
\omega_{\alpha_{\mathbf{k}}}\alpha_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}} +
\omega_{\beta_{-\mathbf{k}}}\beta_{-\mathbf{k}}^{\dagger} \beta_{-\mathbf{k}}.
\label{DMMH}
\end{eqnarray}
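For completeness, diagonalizing Eq.~(\ref{MMH}) with the transformation in Eq.~(\ref{eq:FBT}) gives the diagonal frequencies in the standard two-mode form
\begin{eqnarray}
\omega_{\alpha_{\mathbf{k}}}=\frac{\omega_{a_{\mathbf{k}}}-\omega_{b_{-\mathbf{k}}}}{2}+\frac{\omega_{a_{\mathbf{k}}}+\omega_{b_{-\mathbf{k}}}}{2}\sqrt{1-|\Gamma_{\mathbf{k}}|^{2}},\quad
\omega_{\beta_{-\mathbf{k}}}=\frac{\omega_{b_{-\mathbf{k}}}-\omega_{a_{\mathbf{k}}}}{2}+\frac{\omega_{a_{\mathbf{k}}}+\omega_{b_{-\mathbf{k}}}}{2}\sqrt{1-|\Gamma_{\mathbf{k}}|^{2}}, \nonumber
\end{eqnarray}
since the number difference $a_{\mathbf{k}}^{\dagger}a_{\mathbf{k}}-b_{-\mathbf{k}}^{\dagger}b_{-\mathbf{k}}=\alpha_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}}-\beta_{-\mathbf{k}}^{\dagger}\beta_{-\mathbf{k}}$ is invariant under the transformation.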
The bosonic diagonal modes $\alpha$ and $\beta$ describe two right and left circularly polarized (chiral) magnons \cite{barman2021, zhang2020}, which are degenerate in the absence of an external magnetic field. As shown in Fig.\ \ref{fig:energy}, for a system in which the interaction tensor $\mathbb{I}_{ij}$ has only diagonal components (equal to $J$), a magnetic field in the $z$ direction, i.e., parallel to the magnetization of the two sublattices, breaks the degeneracy \cite{azimi-mousolou2020, azimi-mousolou2021}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=90mm]{Energy.pdf}
\end{center}
\caption{(Color online) Magnon energy dispersions $\omega_{\alpha_{\mathbf{k}}}$ and $\omega_{\beta_{-\mathbf{k}}}$ in the first Brillouin zone of a square lattice with lattice constant $a=1$ for an easy-axis AFM.
As model parameters, we use $|J|=1$~meV for the antiferromagnetic Heisenberg exchange, $\mathcal{K}_z=0.01J$ for the uniaxial anisotropy, and $S=1/2$.
The two magnons are degenerate in the absence of an external magnetic field, $\mu_B B=0$ (left panel). A magnetic field $\mu_B B=1$~meV in the $z$ direction breaks the degeneracy (right panel).}
\label{fig:energy}
\end{figure}
{\it Microwave photon}: For the second term of the hybrid Hamiltonian in Eq.~\eqref{MH}, we assume a right circularly polarized microwave cavity electromagnetic field with the single cavity mode frequency $\omega_{c_{\mathbf{k}}}$ \cite{yuan2017, xiao2019, zhang2020, azimi-mousolou2021}. This is described by the vector potential
\begin{eqnarray}
\mathbf{A}_{R; \mathbf{k}}(\mathbf{r}, t)&=&A_{0}\left[\mathbf{e}_{R}c_{\mathbf{k}}e^{-i(\mathbf{k} \cdot \mathbf{r}+\omega_{c_{\mathbf{k}}} t)}+\mathbf{e}^{*}_{R}c^{\dagger}_{\mathbf{k}}e^{i(\mathbf{k} \cdot \mathbf{r}+\omega_{c_{\mathbf{k}}} t)}\right]
\nonumber\\
&=&e^{it\omega_{c_{\mathbf{k}}} c^{\dagger}_{\mathbf{k}}c_{\mathbf{k}}}\mathbf{A}_{R; \mathbf{k}}(\mathbf{r}, 0)e^{-it\omega_{c_{\mathbf{k}}} c^{\dagger}_{\mathbf{k}}c_{\mathbf{k}}}.
\end{eqnarray}
The vector $\mathbf{k}$ is the propagation direction of the electromagnetic wave, $A_{0}$ is the amplitude of the vector potential, and $c_{\mathbf{k}} (c^{\dagger}_{\mathbf{k}})$ is the annihilation (creation) operator of the right circularly polarized photon with unit vector $\mathbf{e}_{R}=\frac{1}{\sqrt{2}}(1, -i, 0)$. Both $\omega_{c_{\mathbf{k}}}$ and $A_{0}$ can be tuned by changing the volume of the cavity and the separation
distance between the two conductor plates in the cavity. Here, we focus on the lowest energy cavity mode and disregard contributions from the higher energy cavity modes. In the rotating frame, the photon contribution to the full Hamiltonian in Eq.~\eqref{MH} is
\begin{eqnarray}
H_{\text{ph}}^{\mathbf{k}} =\omega_{c_{\mathbf{k}}}c^{\dagger}_{\mathbf{k}}c_{\mathbf{k}},
\label{PhH}
\end{eqnarray}
for a given $\mathbf{k}$.
{\it Magnon-Photon interaction}: By turning on the electromagnetic field, the magnon modes start to interact with the cavity mode through the magnetic-dipole coupling. Explicitly, the electromagnetic field induces a magnetic field $\mathbf{B}_{\text{ph}}$,
which interacts with the total spin $\mathbf{S}$ of the AFM material through the Zeeman interaction term \cite{yuan2017, xiao2019, zhang2020, azimi-mousolou2021}
\begin{eqnarray}
H_{\text{m-ph}}=-\mathbf{B}_{\text{ph}}\cdot\mathbf{S}.
\end{eqnarray}
In the rotating frame, the photon-induced magnetic field is given by $\mathbf{B}_{\text{ph}}=\nabla\times \mathbf{A}_{\mathbf{k}}(\mathbf{r}, 0)$. Following the bosonization procedure used to derive the Hamiltonian $H_{\text{m}}^{\mathbf{k}}$, we obtain
the bosonized resonant magnon-photon interaction Hamiltonian
\begin{eqnarray}
H_{\text{m-ph}}^{\mathbf{k}}=-g_{\text{m-ph}}^{\mathbf{k}}c^{\dagger}_{\mathbf{k}}\alpha_{\mathbf{k}} + {\rm H.c.}
\label{BCH}
\end{eqnarray}
The
off-resonant interaction $(-g_{\text{m-ph}}^{\mathbf{k}}c_{\mathbf{k}}\beta_{-\mathbf{k}}+ {\rm H.c.})$ is neglected due to energy conservation.
Here, the magnon-photon exchange coupling is
\begin{eqnarray}
g_{\text{m-ph}}^{\mathbf{k}}=\lambda_{\mathbf{k}}(u_{\mathbf{k}}+v^{*}_{\mathbf{k}})
\end{eqnarray}
with $\lambda_{\mathbf{k}}=A_{0}k\sqrt{S}$ and we choose to study the case when $\mathbf{k}=(0, 0, k)$.
{\it Transmon qubit}: The third subsystem consists of a superconducting qubit described by the Hamiltonian \cite{koch2007}
\begin{eqnarray}
H_{\text{q}}=4E_{C}\hat{n}^{2}-E_{J}\cos\hat{\phi},
\label{TQhamiltonian}
\end{eqnarray}
where the first term corresponds to the kinetic energy contribution from a capacitor and the second term is the potential energy contribution by a Josephson junction. At a sufficiently large $E_{J}/E_{C}$, the superconducting system enters the transmon qubit regime. Following the ladder operator approach, one may represent the momentum, $\hat{n}$, and position, $\hat{\phi}$, operators in terms of bosonic annihilation (creation) operator $\eta$ ($\eta^{\dagger}$) as
\begin{eqnarray}
\hat{n}&=&i\left(\frac{E_{J}}{32E_{C}}\right)^{1/4}(\eta^{\dagger}-\eta),\nonumber\\
\hat{\phi}&=&\left(\frac{2E_{C}}{E_{J}}\right)^{1/4}(\eta^{\dagger}+\eta).
\end{eqnarray}
By using the ladder representation, one can write the Hamiltonian in Eq.\ \eqref{TQhamiltonian} in the form of the following anharmonic oscillator Hamiltonian
\begin{eqnarray}
H_{\text{q}}\approx\left[\omega_q+\frac{\xi}{2}\right]\eta^{\dagger}\eta-\frac{\xi}{2}(\eta^{\dagger}\eta)^{2}.
\end{eqnarray}
This follows from a Taylor expansion of the potential energy term in Eq.\ \eqref{TQhamiltonian} and a rotating wave approximation.
Here, $\omega_q=\sqrt{8E_{C}E_{J}}-E_{C}$ defines the Rabi transition frequency between the ground state $\ket{g}$ and the first excited state $\ket{e}$, and $\xi=E_{C}$ is the anharmonicity. In the transmon regime, the anharmonicity is negative and large enough in magnitude
to allow one to focus on the two lowest energy levels of the anharmonic oscillator as a transmon qubit, whose Hamiltonian can be conveniently reduced to
\begin{eqnarray}
H_{\text{q}}=\omega_{q}\eta^{\dagger}\eta.
\end{eqnarray}
{\it Photon-transmon interaction}: The large electric dipole of the superconducting qubit, $\hat{\mathbf{d}}=\mathbf{d}\eta^{\dagger}+\mathbf{d}^{*}\eta$, can strongly couple to the induced electric field of the microwave photon through electric-dipole coupling \cite{koch2007}
\begin{eqnarray}
H_{\text{ph-q}}=-\mathbf{E}_{\text{ph}}\cdot\hat{\mathbf{d}},
\end{eqnarray}
where $\mathbf{E}_{\text{ph}}=-\frac{\partial\mathbf{A}_{\mathbf{k}}(\mathbf{r}, t)}{\partial t}$ determines the photon-induced electric field. If we assume $\mathbf{d}||\mathbf{e}_{R}$, then, under the rotating wave approximation, the photon-qubit interaction is described by the Hamiltonian
\begin{eqnarray}
H_{\text{ph-q}}^{\mathbf{k}}=-g_{\text{ph-q}}^{\mathbf{k}}\eta^{\dagger}c_{\mathbf{k}}+{\rm H.c.},
\label{PhQintH}
\end{eqnarray}
where the photon-qubit exchange coupling is given by
\begin{eqnarray}
g_{\text{ph-q}}^{\mathbf{k}}=-id\omega_{c_{\mathbf{k}}}\exp[-i\mathbf{k} \cdot \mathbf{r}]
\end{eqnarray}
with $d=|\mathbf{d}|$ being the strength of the electric dipole of the superconducting transmon qubit.
Having specified each term in the Hamiltonian of Eq.~\eqref{MH}, we conclude that the magnon-photon-transmon hybrid system is explicitly described by the bosonized Hamiltonian
\begin{eqnarray}
H_{\mathbf{k}}&=&\omega_{\alpha_{\mathbf{k}}}\alpha_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}} +
\omega_{\beta_{-\mathbf{k}}}\beta_{-\mathbf{k}}^{\dagger} \beta_{-\mathbf{k}}
+\omega_{c_{\mathbf{k}}}c^{\dagger}_{\mathbf{k}}c_{\mathbf{k}}
+\omega_{q}\eta^{\dagger}\eta
\nonumber\\
&&
-\left[g_{\text{m-ph}}^{\mathbf{k}}c^{\dagger}_{\mathbf{k}}\alpha_{\mathbf{k}}
+g_{\text{ph-q}}^{\mathbf{k}}\eta^{\dagger}c_{\mathbf{k}}+ {\rm H.c.} \right],
\label{PhQH}
\end{eqnarray}
for a momentum vector $\mathbf{k}$ in the $z$ direction, the in-plane photon polarization vector $\mathbf{e}_{R}$, and the superconducting dipole $\mathbf{d}||\mathbf{e}_{R}$.
It is important to note that only the hybridized magnon in the $\alpha$ mode interacts with the photon and transmon modes in the Hamiltonian in
Eq.~\eqref{PhQH}. In other words, the $\beta$ magnon mode is effectively decoupled from the other modes in the system. This is due to the fact that we use a right circularly polarized microwave cavity electromagnetic field, which only couples to the magnon with the same polarization, the $\alpha$ mode. Conversely, if we use a left circularly polarized cavity field, it couples the $\beta$ magnon mode to the photon and transmon modes, and instead leaves the $\alpha$ magnon mode decoupled from the rest of the system.
The hybrid quantum system described by Eq.~\eqref{PhQH} provides a promising platform to observe and verify quantum effects in quantum magnonics and exploit them for new quantum applications. Below we employ this hybrid platform to propose a new experimental setup for observing polarized twin magnon modes as well as intrinsic two-mode magnon entanglement in bipartite AFM materials via a transmon qubit. In the next section we briefly describe the basic concepts of two-mode entanglement in AFMs.
\section{magnon-magnon entanglement}
\label{magnon-magnon-entanglement}
Let us focus on the two-mode magnon Hamiltonian described by $H_{\text{m}}^{\mathbf{k}}$ above. The coupling parameter $g_{\text{m-m}}^{\mathbf{k}}$ in Eq.~\eqref{MMH}, which is mainly given by the AFM coupling between the two opposite sublattices $A$ and $B$, introduces strong squeezing and entanglement between the bosonic magnon modes
in a way that all the eigenstates of $H_{\text{m}}^{\mathbf{k}}$ become entangled in the Kittel $(a, b)$ modes \cite{azimi-mousolou2020, azimi-mousolou2021}. Explicitly, the complete energy eigenbasis of the
Hamiltonian $H_{\text{m}}^{\mathbf{k}}$ can be expressed in the following form
\begin{eqnarray}
\ket{\psi_{xy}(r_{\mathbf{k}}, \phi_{\mathbf{k}})}&=&(\alpha^{\dagger}_{\mathbf{k}})^{x}(\beta^{\dagger}_{-\mathbf{k}})^{y}\ket{\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})}
\label{eq:MES}
\end{eqnarray}
for non-negative integers $x$ and $y$, and the two-mode squeezed vacuum ground state
\begin{eqnarray}
\ket{\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})}= \frac{1}{\cosh r_{\mathbf{k}}}
\sum_{n=0}^{\infty} e^{in\phi_{\mathbf{k}}} \tanh^{n} r_{\mathbf{k}}
\ket{n; a_{\mathbf{k}}}\ket{n; b_{-\mathbf{k}}}
\nonumber\\
\label{eq: two-mode squeezing ground state}
\end{eqnarray}
given in the Kittel $(a, b)$ magnon basis. Here, $x$ and $y$ represent the number of magnons in the hybridized magnon modes $\alpha_{\mathbf{k}}$ and $\beta_{-\mathbf{k}}$, respectively. Note that the hybridized magnon modes $(\alpha, \beta)$ are related to the Kittel magnon modes $(a, b)$ through Eq.~\eqref{eq:FBT}.
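As a minimal numerical sketch (our own illustration, with an arbitrary truncation $n_{\max}$ and squeezing value), the coefficients of the two-mode squeezed vacuum above can be tabulated and their normalization checked:
\begin{verbatim}
import numpy as np

def squeezed_vacuum_coeffs(r, phi, n_max=60):
    # p_n = e^{i n phi} tanh^n(r) / cosh(r), coefficient of |n;a>|n;b>
    n = np.arange(n_max)
    return np.exp(1j * n * phi) * np.tanh(r) ** n / np.cosh(r)

p = squeezed_vacuum_coeffs(r=1.0, phi=np.pi)
print(np.sum(np.abs(p) ** 2))  # -> 1 as n_max grows
\end{verbatim}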
\begin{figure}[h]
\begin{center}
\includegraphics[width=80mm, height=42mm]{Entanglement.pdf}
\end{center}
\caption{(Color online) Entanglement of magnon eigenstates corresponding to pairs of magnon numbers $(x, y)$ against the entanglement (squeezing) parameter $r_{\mathbf{k}}$.}
\label{fig:Entanglement}
\end{figure}
Fig.~\ref{fig:Entanglement}
illustrates the entropies of entanglement of the energy eigenbasis in Eq.~\eqref{eq:MES} for selected pairs of magnon numbers $(x, y)$ as functions of the squeezing parameter $r_{\mathbf{k}}$.
The squeezing parameter $r_{\mathbf{k}}$, which is given in Eq.\ \eqref{r-phi} by the ratio of the magnon-magnon coupling $g_{\text{m-m}}^{\mathbf{k}}$ to the average single magnon energies in the Kittel modes, is actually the only parameter that determines the entropies of entanglement of the complete energy eigenbasis. This follows from the fact that the states in Eq.~\eqref{eq:MES} are determined by $(r_{\mathbf{k}}, \phi_{\mathbf{k}})$ and $\phi_{\mathbf{k}}$ contributes only to the phase factors of the Schmidt coefficients in the Schmidt decompositions of these states.
We remind the reader that the entropy of entanglement for a bipartite state $\ket{\psi}\in H_A\otimes H_B$ is given by
\begin{eqnarray}
E\left[\ket{\psi}\right]&=&-\sum_{n}|\chi_{n}|^{2}\log|\chi_{n}|^{2}
\label{EEdef}
\end{eqnarray}
with $\chi_{n}$'s being the Schmidt coefficients in $\ket{\psi}=\sum_{n}\chi_{n}\ket{i_n; A}\ket{j_n; B}$, where $\ket{i_n; A}$ and $\ket{j_n; B}$ are orthonormal states in subsystem
$A$ and subsystem $B$, respectively \cite{nielsen2000}.
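As a small numerical illustration (our own sketch; truncation and squeezing values are arbitrary), Eq.~\eqref{EEdef} can be evaluated for the Schmidt coefficients of the two-mode squeezed vacuum:
\begin{verbatim}
import numpy as np

def entanglement_entropy(chi):
    # E = -sum_n |chi_n|^2 log |chi_n|^2 over the Schmidt coefficients
    p = np.abs(np.asarray(chi)) ** 2
    p = p[p > 0]  # convention: 0 log 0 = 0
    return -np.sum(p * np.log(p))

r, n = 1.0, np.arange(80)
chi = np.tanh(r) ** n / np.cosh(r)  # Schmidt coefficients of the vacuum
print(entanglement_entropy(chi))
\end{verbatim}
The result agrees with the closed form $\cosh^{2}(r)\log\cosh^{2}(r)-\sinh^{2}(r)\log\sinh^{2}(r)$ quoted later in the text.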
For the energy eigenstates in
Eq.~\eqref{eq:MES}, we obtain the following normalized Schmidt decompositions
\begin{eqnarray}
\ket{\psi_{xy}(r_{\mathbf{k}}, \phi_{\mathbf{k}})} =
\left\{
\begin{array}{ll}
\sum_{n=0}^{\infty} p^{(x, y)}_{n; \mathbf{k}}\ket{n+\delta; a_{\mathbf{k}}}
\ket{n; b_{-\mathbf{k}}}, & x\ge y \\[1ex]
\sum_{n=0}^{\infty} p^{(x, y)}_{n; \mathbf{k}}\ket{n; a_{\mathbf{k}}}
\ket{n+\delta; b_{-\mathbf{k}}}, & x\le y
\end{array}
\right.
\label{EES}
\end{eqnarray}
where $\delta=|x-y|$. Here, the Schmidt coefficients are given by
\begin{eqnarray}
p^{(x, y)}_{n; \mathbf{k}}=\frac{1}{\sqrt{x!y!}}
\left(\frac{1}{u_{\mathbf{k}}^{*}}\right)^{\delta}\left(\frac{1}{u_{\mathbf{k}}^{*}v_{\mathbf{k}}}\right)^{m}f^{(m, \delta)}_{n; \mathbf{k}}p_{n; \mathbf{k}},\ \ \
\label{SCES1}
\end{eqnarray}
for $m=\min\{x, y\}$, with
\begin{eqnarray}
p_{n; \mathbf{k}} = \frac{e^{in\phi_{\mathbf{k}}}}{\cosh
r_{\mathbf{k}}}\tanh^{n}r_{\mathbf{k}},
\label{SCES2}
\end{eqnarray}
and $f^{(m, \delta)}_{n; \mathbf{k}}$ satisfying the following recursion relations
\begin{eqnarray}
f^{(m, \delta>0)}_{n; \mathbf{k}}&=&|u_{\mathbf{k}}|^{2}\sqrt{n+\delta}f^{(m, \delta-1)}_{n; \mathbf{k}}
\nonumber \\ & & -|v_{\mathbf{k}}|^{2}\sqrt{n+1}f^{(m, \delta-1)}_{n+1; \mathbf{k}} , \nonumber\\
f^{(m>0, 0)}_{n; \mathbf{k}}&=&n|u_{\mathbf{k}}|^{4}f^{(m-1, 0)}_{n-1; \mathbf{k}}-(2n+1)|u_{\mathbf{k}}v_{\mathbf{k}}|^{2}f^{(m-1,0)}_{n; \mathbf{k}}\nonumber\\
&&+(n+1)|v_{\mathbf{k}}|^{4}f^{(m-1,0)}_{n+1; \mathbf{k}}
\nonumber\\
\label{SCES3}
\end{eqnarray}
with $f^{(0, 0)}_{n; \mathbf{k}}=1$ for each $n$. From Eqs. \eqref{SCES1}-\eqref{SCES3}, it is clear that the absolute value of the Schmidt coefficients $|p^{(x, y)}_{n; \mathbf{k}}|$,
and thus the entanglement entropies of all energy eigenbasis states in the Kittel magnon modes $(a, b)$, namely,
\begin{eqnarray}
E\left[(\alpha^{\dagger}_{\mathbf{k}})^{x}(\beta^{\dagger}_{-\mathbf{k}})^{y}\ket{\psi_{00}}\right]&=&-\sum_{n=0}^{\infty}|p^{(x, y)}_{n; \mathbf{k}}|^{2}\log|p^{(x, y)}_{n; \mathbf{k}}|^{2},\nonumber\\
\label{EEE}
\end{eqnarray}
are single-variable functions of the squeezing parameter $r_{\mathbf{k}}$. In other words, the squeezing parameter $r_{\mathbf{k}}$ is the only entanglement parameter that determines two-mode magnon entanglement in the AFM system described by $H_{\text{m}}^{\mathbf{k}}$.
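The recursion in Eq.~\eqref{SCES3} translates directly into code; the following memoized sketch (our own, unoptimized, with an explicit guard for the $n=0$ boundary term) computes the factors $f^{(m,\delta)}_{n;\mathbf{k}}$:
\begin{verbatim}
import numpy as np
from functools import lru_cache

def make_f(u, v):
    au2, av2 = abs(u) ** 2, abs(v) ** 2

    @lru_cache(maxsize=None)
    def f(m, delta, n):
        if m == 0 and delta == 0:
            return 1.0
        if delta > 0:
            return (au2 * np.sqrt(n + delta) * f(m, delta - 1, n)
                    - av2 * np.sqrt(n + 1) * f(m, delta - 1, n + 1))
        # m > 0, delta == 0; at n = 0 the first term vanishes
        first = 0.0 if n == 0 else n * au2 ** 2 * f(m - 1, 0, n - 1)
        return (first
                - (2 * n + 1) * au2 * av2 * f(m - 1, 0, n)
                + (n + 1) * av2 ** 2 * f(m - 1, 0, n + 1))

    return f

rk = 0.5  # arbitrary squeezing value
f = make_f(np.cosh(rk), np.sinh(rk))
print(f(1, 1, 0), f(2, 0, 3))  # sample factors
\end{verbatim}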
In the following we show how a superconducting transmon qubit can be used to observe different magnons and the squeezing/entanglement parameter $r_{\mathbf{k}}$. The latter allows us to quantify quantum characteristics such as two-mode squeezing and entanglement in AFM materials.
\section{Sensing magnons and their quantum characteristics with transmons}
\label{Sensing-magnons-quantum-characteristics-with-transmons}
\subsection{Magnon-transmon effective coupling}
The Hamiltonian in Eq.~\eqref{PhQH}, which allows for magnon-photon-transmon hybrid states,
provides an effective photon mediated magnon-transmon coupling. To determine this effective coupling rate one may use the Schrieffer–Wolff unitary transformation \cite{schrieffer1966},
\begin{eqnarray}
H^{\prime}_{\mathbf{k}}=e^{W_{\mathbf{k}}}H_{\mathbf{k}}e^{-W_{\mathbf{k}}}
\label{Schrieffer-Wolff}
\end{eqnarray}
to effectively decouple the photon mode from magnon and transmon modes in the hybrid Hamiltonian up to first order.
Consider the following decomposition of the hybrid Hamiltonian in
Eq.~\eqref{PhQH}
\begin{eqnarray}
H_{\mathbf{k}}&=&H_{\mathbf{k}; 0}+V_{\mathbf{k}},\nonumber\\
H_{\mathbf{k}; 0}&=&\omega_{\alpha_{\mathbf{k}}}\alpha_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}}
+\omega_{c_{\mathbf{k}}}c^{\dagger}_{\mathbf{k}}c_{\mathbf{k}}
+\omega_{q}\eta^{\dagger}\eta ,
\nonumber\\
V_{\mathbf{k}}&=&-g_{\text{m-ph}}^{\mathbf{k}}c^{\dagger}_{\mathbf{k}}\alpha_{\mathbf{k}}
-g_{\text{ph-q}}^{\mathbf{k}}\eta^{\dagger}c_{\mathbf{k}}+ {\rm H.c.},
\label{EffPhQH}
\end{eqnarray}
where we neglect the magnon $\beta$ mode as it is decoupled from the rest of the Hamiltonian $H_{\mathbf{k}}$.
By using the Baker--Campbell--Hausdorff formula,
the transformation in
Eq.~\eqref{Schrieffer-Wolff} can be expanded as
\begin{eqnarray}
H^{\prime}_{\mathbf{k}} & = & H_{\mathbf{k}; 0}+V_{\mathbf{k}}+[W_{\mathbf{k}}, H_{\mathbf{k}; 0}]+[W_{\mathbf{k}}, V_{\mathbf{k}}]
\nonumber\\
& & + \frac{1}{2}[W_{\mathbf{k}}, [W_{\mathbf{k}}, H_{\mathbf{k}; 0}]]+\frac{1}{2}[W_{\mathbf{k}}, [W_{\mathbf{k}}, V_{\mathbf{k}}]]+\ldots \ \ \ \
\label{Baker-Campbell-Haussdorf}
\end{eqnarray}
This three-mode Schrieffer–Wolff Hamiltonian can be made block diagonal, turning the system into a two-mode magnon-transmon subsystem decoupled from a single-mode photon subsystem,
by choosing the generator $W_{\mathbf{k}}$ such that
%
\begin{eqnarray}
V_{\mathbf{k}}+[W_{\mathbf{k}}, H_{\mathbf{k}; 0}]=0.
\label{generator}
\end{eqnarray}
%
By substituting the solution of Eq.\ \eqref{generator} into
Eq.~\eqref{Baker-Campbell-Haussdorf}, one can obtain the
standard form of the Schrieffer–Wolff Hamiltonian
%
\begin{eqnarray}
H^{\prime}_{\mathbf{k}}=
H_{\mathbf{k}; 0}+
{\frac{1}{2}}[W_{\mathbf{k}}, V_{\mathbf{k}}]+
O(V_{\mathbf{k}}^{3})
\end{eqnarray}
%
up to second order in the interaction term $V_{\mathbf{k}}$.
Equation \eqref{generator} always has a definite solution as the perturbative component $V_{\mathbf{k}}$ is off-diagonal in the eigenbasis of
$H_{\mathbf{k}; 0}$. By solving Eq.\ \eqref{generator}, we obtain the generator of the Schrieffer–Wolff transformation
\begin{eqnarray}
W_{\mathbf{k}}=\left[\frac{g_{\text{m-ph}}^{\mathbf{k}}}{\omega_{\alpha_{\mathbf{k}}}-\omega_{c_{\mathbf{k}}}}c_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}}
-\frac{g_{\text{ph-q}}^{\mathbf{k}}}{\omega_{q}-\omega_{c_{\mathbf{k}}}}\eta^{\dagger}c_{\mathbf{k}}\right]- {\rm H.c.}\ \ \
\end{eqnarray}
that leads to the following block diagonal hybrid Hamiltonian
\begin{eqnarray}
H^{\prime}_{\mathbf{k}}&=&H_{\mathbf{k}; 0}+{\frac {1}{2}}[W_{\mathbf{k}}, V_{\mathbf{k}}]
\nonumber\\
&=&\omega^{\prime}_{c_{\mathbf{k}}}c^{\dagger}_{\mathbf{k}}c_{\mathbf{k}}+\omega^{\prime}_{\alpha_{\mathbf{k}}}\alpha_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}}
+\omega^{\prime}_{q}\eta^{\dagger}\eta
\nonumber\\
&&+g_{\text{m-q}}^{\mathbf{k}}\eta^{\dagger}\alpha_{\mathbf{k}}
+\left(g_{\text{m-q}}^{\mathbf{k}}\right)^{*}\alpha_{\mathbf{k}}^{\dagger}\eta,
\label{SWPhQH}
\end{eqnarray}
where
\begin{eqnarray}
\omega^{\prime}_{c_{\mathbf{k}}}&=&\omega_{c_{\mathbf{k}}}-\frac{|g_{\text{m-ph}}^{\mathbf{k}}|^{2}}{\omega_{\alpha_{\mathbf{k}}}-\omega_{c_{\mathbf{k}}}}-\frac{|g_{\text{ph-q}}^{\mathbf{k}}|^{2}}{\omega_{q}-\omega_{c_{\mathbf{k}}}} ,
\nonumber\\
\omega^{\prime}_{\alpha_{\mathbf{k}}}&=&\omega_{\alpha_{\mathbf{k}}}+\frac{|g_{\text{m-ph}}^{\mathbf{k}}|^{2}}{\omega_{\alpha_{\mathbf{k}}}-\omega_{c_{\mathbf{k}}}} ,
\nonumber\\
\omega^{\prime}_{q}&=&\omega_{q}+\frac{|g_{\text{ph-q}}^{\mathbf{k}}|^{2}}{\omega_{q}-\omega_{c_{\mathbf{k}}}} ,
\nonumber\\
g_{\text{m-q}}^{\mathbf{k}}&=&g_{\text{m-ph}}^{\mathbf{k}}g_{\text{ph-q}}^{\mathbf{k}}\left[\frac{1}{\omega_{\alpha_{\mathbf{k}}}-\omega_{c_{\mathbf{k}}}}+\frac{1}{\omega_{q}-\omega_{c_{\mathbf{k}}}}\right].
\end{eqnarray}
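As a quick sketch (our own helper, with illustrative values in the dispersive regime where the couplings are small compared with the detunings), these shifted frequencies and the effective coupling can be computed as:
\begin{verbatim}
def effective_params(w_alpha, w_c, w_q, g_mph, g_phq):
    d_m = w_alpha - w_c  # magnon-photon detuning
    d_q = w_q - w_c      # qubit-photon detuning
    w_c_eff = w_c - abs(g_mph) ** 2 / d_m - abs(g_phq) ** 2 / d_q
    w_a_eff = w_alpha + abs(g_mph) ** 2 / d_m
    w_q_eff = w_q + abs(g_phq) ** 2 / d_q
    g_mq = g_mph * g_phq * (1.0 / d_m + 1.0 / d_q)
    return w_c_eff, w_a_eff, w_q_eff, g_mq

# Illustrative values (arbitrary units)
print(effective_params(w_alpha=1.0, w_c=0.05, w_q=1.0,
                       g_mph=0.05, g_phq=0.05))
\end{verbatim}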
As the photon mode is effectively decoupled from the rest of the Hamiltonian in Eq.~\eqref{SWPhQH}, the effective magnon-transmon interacting Hamiltonian reads
\begin{eqnarray}
H^{\mathbf{k}; \text{eff}}_{m-q}&=&\omega^{\prime}_{\alpha_{\mathbf{k}}}\alpha_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}}
+\omega^{\prime}_{q}\eta^{\dagger}\eta
\nonumber \\
& & +g_{\text{m-q}}^{\mathbf{k}}\eta^{\dagger}\alpha_{\mathbf{k}}
+\left(g_{\text{m-q}}^{\mathbf{k}}\right)^{*}\alpha_{\mathbf{k}}^{\dagger}\eta.
\label{eq:eff-m-q-Hamiltonian}
\end{eqnarray}
\subsection{Transmon-qubit to probe magnons and their quantum characteristics in AFMs}
The computational basis of the transmon qubit consists of the ground and first excited states, $\ket{0}\equiv \ket{g}$ and $\ket{1}\equiv \ket{e}$, respectively, of the anharmonic oscillator in the transmon regime.
In this case, the raising and lowering operators of the transmon qubit can be represented as $\eta^{\dagger}=\ket{1}\bra{0}$ and $\eta=\ket{0}\bra{1}$. The eigenstates of the number operator
\begin{eqnarray}
N_{\mathbf{k}}=\alpha_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}}
+\eta^{\dagger}\eta=\alpha_{\mathbf{k}}^{\dagger}\alpha_{\mathbf{k}} +\ket{1}\bra{1},
\end{eqnarray}
are $\{\ket{0, 0}; \ket{1, 0}, \ket{0, 1};...; \ket{n, 0}, \ket{n-1, 1};...\}$, where the first entry counts the number of magnons in the hybridized mode $\alpha$ and the second entry labels the qubit state. These eigenstates span the magnon-qubit Hilbert space. The number operator commutes with the effective Hamiltonian in Eq.~\eqref{eq:eff-m-q-Hamiltonian}, i.e.,
\begin{eqnarray}
[N_{\mathbf{k}}, H^{\mathbf{k}; \text{eff}}_{m-q}]=0.
\end{eqnarray}
This implies that the effective Hamiltonian takes the block diagonal form:
\begin{eqnarray}
H^{\mathbf{k}; \text{eff}}_{m-q}=\bigoplus_{n=0}H^{\mathbf{k}; n}_{m-q},
\end{eqnarray}
where $n$ is the eigenvalue of the number operator $N_{\mathbf{k}}$, i.e., counts the total number of magnon and transmon excitations.
Except for the case $n=0$, for which the Hamiltonian block is one-dimensional, the block Hamiltonians $H^{\mathbf{k}; n}_{m-q}$ for each $n>0$ are $2\times 2$ matrices of the form
\begin{eqnarray}
H^{\mathbf{k}; n}_{m-q}=\left(
\begin{array}{cc}
n \omega^{\prime}_{\alpha_{\mathbf{k}}}& \sqrt{n}\left(g_{\text{m-q}}^{\mathbf{k}}\right)^{*} \\
\sqrt{n}g_{\text{m-q}}^{\mathbf{k}} & n \omega^{\prime}_{\alpha_{\mathbf{k}}}-2\Delta_{\mathbf{k}}
\end{array}
\right),
\label{eq:block Hamiltonian}
\end{eqnarray}
with $\Delta_{\mathbf{k}}=\left(\omega^{\prime}_{\alpha_{\mathbf{k}}}-\omega^{\prime}_{q}\right)/2$ being the detuning between magnon and qubit frequencies.
By shifting the qubit energy levels $\ket{0}$ and $\ket{1}$ by the amount $\Delta_{\mathbf{k}}$, we may rewrite the Hamiltonian in
Eq.~\eqref{eq:block Hamiltonian} as an effective single transmon qubit Hamiltonian
\begin{eqnarray}
H^{\text{eff}}_{q}=n \omega^{\prime}_{\alpha_{\mathbf{k}}}\mathbb{I}+\sqrt{n}\Omega_{\mathbf{k}}^x\sigma_x
+\sqrt{n}\Omega_{\mathbf{k}}^y\sigma_y+\Delta_{\mathbf{k}}\sigma_z
\label{eq:effectiveQH}
\end{eqnarray}
for each $n$. Here, $\Omega_{\mathbf{k}}=\Omega_{\mathbf{k}}^x+i\Omega_{\mathbf{k}}^y=g_{\text{m-q}}^{\mathbf{k}}$ characterizes the Rabi frequency of the qubit, $\mathbb{I}$ is the $2\times 2$ identity matrix and $\sigma_l,\ \ l=x, y, z$, are the Pauli matrices in the ordered effective qubit basis $\{\ket{n, 0}, \ket{n-1, 1}\}$.
This Hamiltonian results in the following energy eigensystem:
\begin{eqnarray}
\epsilon_{\pm}&=&n \omega^{\prime}_{\alpha_{\mathbf{k}}}\pm\sqrt{\Delta_{\mathbf{k}}^2+n|\Omega_{\mathbf{k}}|^2},
\nonumber\\
\ket{\epsilon_{+}}&=&\cos\left(\frac{\theta_{\mathbf{k}}}{2}\right)\ket{n, 0}+ e^{i\phi_{\mathbf{k}}}\sin\left(\frac{\theta_{\mathbf{k}}}{2}\right)\ket{n-1, 1},
\nonumber\\
\ket{\epsilon_{-}}&=&\sin\left(\frac{\theta_{\mathbf{k}}}{2}\right)\ket{n, 0}- e^{i\phi_{\mathbf{k}}}\cos\left(\frac{\theta_{\mathbf{k}}}{2}\right)\ket{n-1, 1}\ \ \ \
\end{eqnarray}
with $\Omega_{\mathbf{k}}=|\Omega_{\mathbf{k}}|e^{i\phi_{\mathbf{k}}}$ and $\tan\theta_{\mathbf{k}}=\frac{\sqrt{n}|\Omega_{\mathbf{k}}|}{\Delta_{\mathbf{k}}}$.
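As a numerical sanity check (our own sketch with arbitrary parameter values), one can diagonalize the block in Eq.~\eqref{eq:block Hamiltonian} directly; the unshifted block has eigenvalues $n\omega^{\prime}_{\alpha_{\mathbf{k}}}-\Delta_{\mathbf{k}}\pm\sqrt{\Delta_{\mathbf{k}}^{2}+n|\Omega_{\mathbf{k}}|^{2}}$, which the shift by $\Delta_{\mathbf{k}}$ moves to $\epsilon_{\pm}$:
\begin{verbatim}
import numpy as np

n, w_eff, Delta = 1, 1.0, 0.1
Omega = 0.05 * np.exp(1j * 0.3)
H = np.array([[n * w_eff, np.sqrt(n) * np.conj(Omega)],
              [np.sqrt(n) * Omega, n * w_eff - 2 * Delta]])
gap = np.sqrt(Delta ** 2 + n * abs(Omega) ** 2)
print(np.sort(np.linalg.eigvalsh(H)))                    # numerical
print(n * w_eff - Delta - gap, n * w_eff - Delta + gap)  # analytic
\end{verbatim}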
Suppose the transmon qubit is initialized in the state $\ket{0}$ at time $t=0$ for a fixed $n$, for instance $n=1$, that is
$\ket{\psi(0)}=\ket{1,0}$. Governed by the effective qubit Hamiltonian in
Eq.~\eqref{eq:effectiveQH},
the initial state evolves to
\begin{eqnarray}
\ket{\psi(t)}&=&e^{-itH^{\text{eff}}_{q}}\ket{\psi(0)}\nonumber\\
&=&e^{-it\epsilon_{+}}\cos\left(\frac{\theta_{\mathbf{k}}}{2}\right)\ket{\epsilon_{+}}+e^{-it\epsilon_{-}}\sin\left(\frac{\theta_{\mathbf{k}}}{2}\right)\ket{\epsilon_{-}},\nonumber\\
\end{eqnarray}
after time $t$, which gives rise to the following Rabi oscillation
\begin{eqnarray}
P_{0\rightarrow 1}(t)&=&|\langle1|\psi(t)\rangle|^{2}=\sin^{2}\left(\theta_{\mathbf{k}}\right)\sin^{2}\left(\frac{(\epsilon_{+}-\epsilon_{-})t}{2}\right)\nonumber\\
&=&\frac{|\Omega_{\mathbf{k}}|^2}{\Delta_{\mathbf{k}}^2+|\Omega_{\mathbf{k}}|^2}\sin^{2}\left(\frac{(\epsilon_{+}-\epsilon_{-})t}{2}\right).
\end{eqnarray}
This indicates that the probability of finding the transmon qubit
in the state $\ket{1}$ after time $t$ oscillates with the angular frequency
\begin{eqnarray}
f_{\mathbf{k}}=\frac{(\epsilon_{+}-\epsilon_{-})}{2}=\sqrt{\Delta_{\mathbf{k}}^2+|\Omega_{\mathbf{k}}|^2},
\end{eqnarray}
and intensity
\begin{eqnarray}
I_{\mathbf{k}}=\frac{|\Omega_{\mathbf{k}}|^2}{\Delta_{\mathbf{k}}^2+|\Omega_{\mathbf{k}}|^2}.
\end{eqnarray}
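The oscillation can be simulated in a few lines (a sketch with arbitrary values of $\Delta_{\mathbf{k}}$ and $|\Omega_{\mathbf{k}}|$):
\begin{verbatim}
import numpy as np

Delta, Omega_abs = 0.1, 0.05
f = np.sqrt(Delta ** 2 + Omega_abs ** 2)            # frequency f_k
I = Omega_abs ** 2 / (Delta ** 2 + Omega_abs ** 2)  # intensity I_k
t = np.linspace(0.0, 4 * np.pi / f, 400)
P = I * np.sin(f * t) ** 2                          # P_{0->1}(t)
print(f, I, P.max())  # P.max() approaches I_k
\end{verbatim}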
Note that the maximum intensity $I_{\mathbf{k}}=1$ occurs at the zero detuning $\Delta_{\mathbf{k}}=0$, which is equivalent to the following qubit parameter tuning
\begin{eqnarray}
\omega_q&=&\omega_{\alpha_{\mathbf{k}}},\nonumber\\
|g_{\text{ph-q}}^{\mathbf{k}}|&=&|g_{\text{m-ph}}^{\mathbf{k}}|.
\label{QP-zerodetuning}
\end{eqnarray}
Zero detuning can be achieved, for instance, by appropriate adjustment of the photon frequency and the amplitude of the vector potential, as well as of an applied magnetic
field in the $z$ direction, as depicted in Fig.\ \ref{fig:model}.
As a result of zero detuning, the angular frequency of the Rabi oscillation becomes
\begin{eqnarray}
f_{\mathbf{k}}=\frac{2|g_{\text{m-ph}}^{\mathbf{k}}|^2}{|\omega_{q}-\omega_{c_{\mathbf{k}}}|}=\frac{2\lambda_{\mathbf{k}}^2}{|\omega_{q}-\omega_{c_{\mathbf{k}}}|}\Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right],
\label{AFofRO}
\end{eqnarray}
where
\begin{eqnarray}
\Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right]=\cosh 2r_{\mathbf{k}}+\sinh2r_{\mathbf{k}}\cos\phi_{\mathbf{k}}
\end{eqnarray}
is the Einstein-Podolsky-Rosen (EPR) function for the two-mode ground state
$\ket{\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})}$
given by Eq.~\eqref{eq: two-mode squeezing ground state} \cite{azimi-mousolou2020,azimi-mousolou2021} (see appendix for details about EPR).
The EPR function, which characterizes the Bell-type nonlocal correlations known as EPR nonlocality, is a highly relevant concept in the study of continuous variable entanglement \cite{giedke2003, fadel2020}.
We can always assume the parameter $\Gamma_{\mathbf{k}}$ in Eq.~\eqref{r-phi}
to be real-valued, in which case $\phi_{\mathbf{k}} = 0\ \text{or}\ \pi$ and thus
\begin{eqnarray}
\Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right] =
\left\{ \begin{array}{ll}
e^{2r_{\mathbf{k}}}, &\ \ \mathrm{if} \ \ \ \ \phi_{\mathbf{k}} = 0\ (\Gamma_{\mathbf{k}} < 0), \\
e^{-2r_{\mathbf{k}}}, &\ \ \mathrm{if} \ \ \ \ \phi_{\mathbf{k}} =\pi\ (\Gamma_{\mathbf{k}} > 0).
\end{array} \right. \nonumber\\
\label{EPRforEE}
\end{eqnarray}
Since the ground state EPR function and the magnon-magnon entanglement entropies all depend on the same entanglement (squeezing) parameter, one may observe the magnon-magnon entanglement through the EPR function
$\Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right]$
and in fact through the angular frequency of the qubit Rabi oscillation in Eq.~\eqref{AFofRO}. For instance, we obtain the entanglement entropy for the two-mode ground state
\begin{eqnarray}
E\left[\ket{\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})}\right] & = & \left[ \cosh^{2} (r_{\mathbf{k}}) \log \cosh^{2} (r_{\mathbf{k}}) \right.
\nonumber\\
& & \left. -\sinh^{2}(r_{\mathbf{k}})\log\sinh^{2}(r_{\mathbf{k}})\right],
\label{entropymeasure}
\end{eqnarray}
as a function of the qubit angular frequency through
\begin{eqnarray}
r_{\mathbf{k}} & = & \frac{e^{i\phi_{\mathbf{k}}}}{2}\ln\Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right]=
\frac{e^{i\phi_{\mathbf{k}}}}{2}\ln\left[\frac{|\omega_{q}-\omega_{c_{\mathbf{k}}}|}{2\lambda_{\mathbf{k}}^2}f_{\mathbf{k}}\right]
\label{SPforEE}
\end{eqnarray}
for $\phi_{\mathbf{k}}=0,\ \pi$.
Eq. \eqref{SPforEE} follows from Eqs. \eqref{AFofRO} and \eqref{EPRforEE}. The entanglement entropies of all magnon eigenbasis states given by Eq. \eqref{EEE} are actually functions of the qubit angular frequency through the relation in Eq. \eqref{SPforEE}.
In practice the entanglement entropy, Eq.\ \eqref{entropymeasure}, is a function of the parameter $r_{\mathbf{k}}$, which can be identified by Eq.\ \eqref{SPforEE} once the qubit angular frequency $f_{\mathbf{k}}$ has been determined experimentally.
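The full inversion, from a measured Rabi frequency to the squeezing parameter and the ground-state entanglement entropy via Eqs.\ \eqref{AFofRO}, \eqref{EPRforEE}, \eqref{SPforEE} and \eqref{entropymeasure}, can be sketched as follows (all numerical values are illustrative and the helper names are our own):
\begin{verbatim}
import numpy as np

def r_from_rabi(f_k, w_q, w_c, lam, phi):
    # invert f_k = (2 lam^2 / |w_q - w_c|) * Delta[psi_00]; phi is 0 or pi
    epr = f_k * abs(w_q - w_c) / (2.0 * lam ** 2)
    return 0.5 * np.log(epr) if phi == 0 else -0.5 * np.log(epr)

def ground_state_entropy(r):
    c2, s2 = np.cosh(r) ** 2, np.sinh(r) ** 2
    return c2 * np.log(c2) - s2 * np.log(s2)

r = r_from_rabi(f_k=0.02, w_q=1.0, w_c=0.05, lam=0.1, phi=np.pi)
print(r, ground_state_entropy(r))
\end{verbatim}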
Figure~\ref{fig:EvsEPR} illustrates, as an example, the two-mode magnon entanglement in the ground (vacuum) state and in a number of excited states against the EPR function $ \Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right]\propto f_{\mathbf{k}}$, for AFM spin lattices.
\begin{figure}[h]
\begin{center}
\includegraphics[width=80mm, height=50mm]{evsepr.pdf}
\end{center}
\caption{(Color online) Entanglement entropies of magnon eigenstates
corresponding to selected pairs of magnon numbers $(x, y)$ against the EPR function $ \Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right]$ for AFM spin lattices. Stronger entanglement is observed for non-local states associated with $\phi_{\mathbf{k}}=\pi$, whereas $\phi_{\mathbf{k}}=0$ represents a local state regime with weaker magnon-magnon entanglement.}
\label{fig:EvsEPR}
\end{figure}
Two distinct regions, the non-local bipartite entangled states at $\phi_{\mathbf{k}}=\pi$ and the local bipartite entangled states at $\phi_{\mathbf{k}}=0$, with a transition point at $ \Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right]=1$, can be distinguished in Fig.~\ref{fig:EvsEPR}. The region of stronger magnon-magnon entanglement, corresponding to non-local two-mode magnon states, is identified by the EPR uncertainty relation $ \Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right]<1$.
The clear relation between the EPR function and the two-mode magnon entanglement entropy allows for experimental quantification of magnon-magnon entanglement through the EPR function $ \Delta\left[\psi_{00}(r_{\mathbf{k}}, \phi_{\mathbf{k}})\right]$ and indeed the
frequency, $f_{\mathbf{k}}$, of Rabi oscillation of the transmon qubit. It is worth mentioning that the EPR nonlocality has been used for verification of entanglement in optical and atomic systems based on homodyne detection and types of interferometry setups \cite{gross2011, armstrong2015, peise2015, lee2016, kunkel2018, fadel2018, li2020}. However, these types of measurement setups are not realistic for magnon systems, since these technologies are mainly based on beam splitters that have limitations for characterizing magnon entanglement. We propose as a solution, a mechanism and measurement setup that rely on qubit-light-matter interaction as a probe to observe the EPR function and thus EPR nonlocality and the degree of magnon-magnon entanglement. Moreover, Eq.~\eqref{QP-zerodetuning} shows that at the zero
detuning, the magnon frequency in the hybridized $\alpha$ mode can also be observed through qubit frequency.
A similar procedure and formulation hold if we instead couple the transmon qubit to a bipartite AFM material through the magnon $\beta$ mode, for instance by using oppositely (left) circularly polarized light. Using different polarizations for the photon would thus allow one to detect the twin chiral magnon modes in bipartite AFM materials.
Fig.\ \ref{fig:RFDofDM} shows that the angular frequency $f_{\mathbf{k}}$ of the Rabi oscillation of a transmon qubit can be used to observe and distinguish the two hybridized magnon modes in the system, provided that appropriately polarized light is used. The figure also shows the correlation between the indistinguishability of the two hybridized magnon modes, EPR nonlocality, and the entanglement between Kittel
magnon modes. The higher the indistinguishability (around the zone center), the higher the non-locality and entanglement.
\begin{figure}[h]
\begin{center}
\includegraphics[width=86mm, height=33mm]{RFDofDM.pdf}
\end{center}
\caption{(Color online) Left panel: The angular frequency, $f_{\mathbf{k}}$, of the Rabi oscillation of the transmon qubit, depending on whether the transmon qubit is coupled to the magnon $\alpha$ mode (red) through a right circularly polarized photon or to the magnon $\beta$ mode (blue) through a left circularly polarized photon. The inset shows the corresponding dispersion energies for the two hybridized magnon modes $\alpha$ (red) and $\beta$ (blue). Right panel: EPR function (gray dashed curve) and
entanglement (black solid curve) between Kittel magnon modes in the vacuum ground state for different values of lattice momentum $k$.
Similar results can be obtained for excited states. We assume
uniaxial AFM materials \cite{azimi-mousolou2021} with a simple cubic lattice structure subjected to an external magnetic field in the $z$ direction. The lattice momentum $k$ takes its values along the $(0,0,1)$ direction with the lattice constant set to unity. We consider the nearest-neighbor Heisenberg interaction $J$ and the easy-axis anisotropy $\mathcal{K}_z$ with model parameter values $J=10~\mathrm{meV}$, $\mathcal{K}_z=0.01J$, $B=2.5~\mathrm{T}$ for the amplitude of the magnetic field in the $z$ direction, and $S=1/2$. For the microwave cavity photon we assume $A_0=1~\mathrm{meV}$ and $\omega_c=0.05~\mathrm{meV}$.}
\label{fig:RFDofDM}
\end{figure}
\section{conclusion}
\label{conclusion}
In conclusion, we demonstrate microwave cavity mediated hybridization
of a superconducting transmon qubit and chiral magnons in bipartite AFM materials. We derive analytical expressions for the hybridized Hamiltonian and the coupling strengths. This coupling allows us not only to identify magnons in AFM materials, but also to verify their chirality and to characterize the nonlocality and bipartite entanglement between Kittel magnon modes in the system. These are all observed through measurement of the angular frequency of the Rabi oscillation in the transmon qubit. We hope the present work opens up a new route to experimentally access the rich quantum properties of magnons in AFM materials.
The broad range of crystalline and synthetic AFM materials, such as the oxides NiO and MnO, the fluorides MnF$_2$ and FeF$_2$, 2D Ising systems like MnPSe$_3$, YIG-based synthetic AFMs, and perovskite manganites \cite{Jie2018,Takashi2016,Haakon2019,Thuc2021,Sheng2021, Changting2021, Rini2007, Ulbrich2011, rezende2019}, provide a space for experimental observation of the present results.
\section*{acknowledgments}
The authors acknowledge financial support from Knut and
Alice Wallenberg Foundation through Grant No. 2018.0060. A.D. acknowledges financial support from the Swedish Research Council (VR) through Grants No.~2016-05980 and VR 2019-05304. O.E. acknowledges support from the Swedish Research Council (VR), the Swedish Foundation for Strategic Research (SSF), the Swedish Energy Agency (Energimyndigheten), ERC (synergy grant FASTCORR, project 854843), eSSENCE, and STandUPP. D.T. acknowledges support from the Swedish Research
Council (VR) through Grant No. 2019-03666. E.S. acknowledges financial support from
the Swedish Research Council (VR) through Grant No. 2017-03832. Some of the
computations were performed on resources provided by the Swedish
National Infrastructure for Computing (SNIC) at the National Supercomputer Center (NSC),
Link\"oping University, the PDC Centre for High Performance Computing (PDC-HPC), KTH,
and the High Performance Computing Center North (HPC2N), Ume{\aa} University.
\section*{Appendix}
Here, for a general two-mode quantum state $\ket{\psi}$, the EPR function is quantified by \cite{giedke2003, fadel2020}
\begin{eqnarray}
\Delta(\psi) =
\frac{1}{2}[\text{Var}_{\psi}(X_{\mathbf{k}}^{A}+X_{\mathbf{k}}^{B})
+ \text{Var}_{\psi}(P_{\mathbf{k}}^{A}-P_{\mathbf{k}}^{B})],
\label{EPRR}
\end{eqnarray}
where $X_{\mathbf{k}}^{A} = \frac{a_{\mathbf{k}} + a_{\mathbf{k}}^{\dagger}}{\sqrt{2}}$ $\left(X_{\mathbf{k}}^{B} =
\frac{b_{\mathbf{k}}+b_{\mathbf{k}}^{\dagger}}{\sqrt{2}}\right)$ and
$P_{\mathbf{k}}^{A} = \frac{a_{\mathbf{k}} - a_{\mathbf{k}}^{\dagger}}{i\sqrt{2}}$ $\left(P_{\mathbf{k}}^{B} = \frac{b_{\mathbf{k}} -
b_{\mathbf{k}}^{\dagger}}{i\sqrt{2}}\right)$ are assumed to be the dimensionless
position and momentum quadratures for the $a_{\mathbf{k}}$ ($b_{\mathbf{k}}$) mode, respectively. Here,
$\text{Var}_{\psi}(V)$ is the variance of a Hermitian operator $V$ with respect to the state $\ket{\psi}$.
The uncertainty relation $\Delta(\psi)\ge 1$ is known to hold for any given bipartite separable state $\ket{\psi}$ \cite{fadel2020}.
Therefore, any violation of this inequality is an indication of the state $\ket{\psi}$ being nonlocal and indeed a bipartite entangled state.
Note that the EPR nonlocality specifies a stronger type of entanglement than a nonzero entropy of entanglement in the sense that there are states with nonzero entropy of entanglement which do not violate the uncertainty relation.
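As a numerical sanity check of Eq.~\eqref{EPRR} (our own sketch, in a truncated Fock space with arbitrary squeezing values), one can verify that the two-mode squeezed vacuum gives $\Delta(\psi_{00})=\cosh 2r_{\mathbf{k}}+\sinh 2r_{\mathbf{k}}\cos\phi_{\mathbf{k}}$:
\begin{verbatim}
import numpy as np

N = 40                                    # Fock truncation per mode
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
I2 = np.eye(N)
A, B = np.kron(a, I2), np.kron(I2, a)     # two-mode operators
X = lambda m: (m + m.conj().T) / np.sqrt(2)
P = lambda m: (m - m.conj().T) / (1j * np.sqrt(2))

r, phi = 0.8, np.pi
n = np.arange(N)
c = np.exp(1j * n * phi) * np.tanh(r) ** n / np.cosh(r)
psi = np.zeros(N * N, dtype=complex)
psi[n * N + n] = c                        # |n;a>|n;b> components

def var(op, state):
    mean = state.conj() @ op @ state
    return (state.conj() @ op @ op @ state - mean ** 2).real

epr = 0.5 * (var(X(A) + X(B), psi) + var(P(A) - P(B), psi))
print(epr, np.cosh(2 * r) + np.sinh(2 * r) * np.cos(phi))
\end{verbatim}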
\section{Introduction}\label{sec1}
Tropical cyclones (TCs) in coastal regions around the world result in fatalities and significant economic damage \cite{bib1}. Approximately 90 TCs occur worldwide each year \cite{bib2}. One of the main hazards associated with TCs is extreme wind, which not only causes property damage but also governs the strength of storm surges and other secondary hazards such as wind-driven ocean waves. The wind speed map of the return period is commonly used to delineate and quantify the statistical distribution of storm intensity and frequency for tropical cyclonic wind hazard \cite{bib3}. Typically, approaches like extreme value theory (EVT), based on historical meteorological records, are used to examine the frequency of extreme wind speeds at a single site \cite{bib4}. Ground measurements for local areas may be limited, necessitating the use of historical or stochastically simulated TC modelling based on the Monte-Carlo approach \cite{bib5,bib6}. Methods utilising basin-wide stochastic simulations of complete TC tracks have been developed for wider regions \cite{bib7, bib8}.
It is possible to simulate a TC wind field using numerical or parametric techniques. Although numerical models, such as the Weather Research and Forecasting (WRF) model, have been extensively used in wind field reconstruction and forecasting \cite{bib9, bib10}, parametric wind field models have become more popular in TC wind hazard assessment because they achieve satisfactory accuracy with only a limited set of TC parameters as inputs. Typically, the critical parameters can be derived from historical TC track datasets, and considerable efforts have been made to assemble the worldwide record of severe TCs \cite{bib2}. However, derived parameter values can occasionally have low reliability, and the accuracy of TC parameters may in turn affect the accuracy of parametric models.
Assessment of the TC wind hazard has progressed unevenly across nations and regions. Numerous TC-prone nations have developed and extensively used wind hazard models at the national level. For example, the HAZUS hurricane model has been modified since 1997 in the United States \cite{bib11}, and the Florida Office of Insurance Regulation has supported the creation of the Florida Public Hurricane Loss Model \cite{bib20}. The Comprehensive Approach to Probabilistic Risk Assessment (CAPRA) has been produced for numerous Central American nations \cite{bib12}, and the Tropical Cyclone Risk Model (TCRM) for Australia was created in 2008 \cite{bib13} and published in 2011 \cite{bib14}. A parametric wind field model (the Holland model) was used in 2011, with UNEP cooperation, to reconstruct the historical TC wind fields globally for all basins from 1970 to 2010 \cite{bib23,bib24}. Giuliani and Peduzzi \cite{bib15} developed global wind speed maps based on this research, and the methodologies were slightly enhanced in 2013 and 2015 \cite{bib16, bib25, bib26}.
Significant gaps remain in the assessment of worldwide TC wind hazard. Generally speaking, wind hazards are geographically variable, and local factors such as latitude, slope and aspect, land use, and land cover have a significant impact on TC wind profiles at the local scale \cite{bib17, bib11}. However, the majority of recent studies at the national or international level oversimplify or ignore the impact of local conditions on TC wind hazards \cite{bib7, bib13, bib23, bib24, bib25, bib26}. The modelling of TC wind fields is also severely hampered by the absence of measurements of critical TC characteristics such as the location of the TC centre, estimated central pressure, and estimated pressure drop at the centre \cite{bib2,bib18,bib19}. Because of this, few publicly accessible TC wind hazard products exist for the Northwest Pacific (NP), North Indian (NOI), and South Indian (SOI) basins, although a few proprietary models have been created and released in the form of a black box \cite{bib20}.
Using tropical cyclone parameters including the location of the TC centre, estimated central pressure, and estimated pressure drop at the centre, this research aims to determine the radius of maximum wind from historical observations or reconstructed TC parameters, while also considering the effects of geographical factors such as topography and geo-surface roughness. In this paper, Section \ref{sec2} describes the data and methodology. The results of the proposed method are shown in Section \ref{sec3} and are compared with those from the two expressions presented in \cite{bib21,bib22}. Section \ref{sec4} provides the conclusion, and Section \ref{sec5} indicates the study's future scope.
\section{Data and Methodology}\label{sec2}
The study utilized the best track dataset from the Regional Specialized Meteorological Center for Tropical Cyclones over the North Indian Ocean, generally known as the Indian Meteorological Department (IMD), to evaluate the suggested method. The datasets include TC metrics such as TC number, time (year, month, day, and hour), location (longitude and latitude of the TC centre), estimated central pressure ($P_{c}$), maximum wind speed ($V_{m}$), and estimated pressure drop at the centre ($P_{d}$), stored at a time interval of 3 hours. Additionally, we use the radius of maximum wind data from IMD bulletins for comparison with the suggested method.
The main objective of this study is to create a method for calculating the radius of maximum wind using TC characteristics, such as the latitude and longitude of the TC centre and the estimated pressure drop at the TC centre, while also considering the influence of geographical factors such as topography and geo-surface roughness.
After analysing all the variables that affect the radius of maximum wind ($r_{max}$), together with the historical observations, we arrive at the following relationship for determining it:
\begin{equation}
r_{max} = 10.16812 \times e^{(-0.213\sqrt{P_d} + 0.0169\phi)} + 23.817
\end{equation}
where $\phi$ is latitude of the TC centre and $P_d$ is estimated pressure drop at the TC centre.
This relationship, which determines the value of $r_{max}$, is valid for values of $P_d$ that are less than or equal to 12 hPa.
We compare the suggested method for determining $r_{max}$ with the expressions proposed by Willoughby et al.\ \cite{bib21} (Eq.~\ref{eq2}) and Tan and Fang \cite{bib22} (Eq.~\ref{eq3}), as well as with the best track data provided by the IMD.
\begin{equation}
r_{max} = 46.6 \times e^{(-0.015V_{max} + 0.0169 \phi)}
\label{eq2}
\end{equation}
\begin{equation}
r_{max} = -26.73 \times ln(1013.25 - P_c) + 142.41
\label{eq3}
\end{equation}
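For concreteness, the three expressions can be implemented as follows (a sketch; we assume $r_{max}$ in km, pressures in hPa, and latitude in degrees, with $V_{max}$ presumably in m/s for Eq.~\ref{eq2}, since the units are not stated explicitly here):
\begin{verbatim}
import numpy as np

def rmax_suggested(P_d, lat):
    # suggested method; stated valid for P_d <= 12 hPa
    return 10.16812 * np.exp(-0.213 * np.sqrt(P_d) + 0.0169 * lat) + 23.817

def rmax_willoughby(V_max, lat):  # Willoughby et al., Eq. (2)
    return 46.6 * np.exp(-0.015 * V_max + 0.0169 * lat)

def rmax_tan_fang(P_c):  # Tan and Fang, Eq. (3); needs P_c < 1013.25 hPa
    return -26.73 * np.log(1013.25 - P_c) + 142.41

print(rmax_suggested(P_d=10.0, lat=15.0))  # illustrative inputs
\end{verbatim}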
\section{Result}\label{sec3}
Two TC cases were used to evaluate the suggested method for determining the value of $r_{max}$. Only two TC cases are discussed in Section \ref{sec3} in order to keep the paper manageable; the supplementary material discusses five additional TC cases. To assess the accuracy of the suggested method, we calculate the error percentage between the best track data provided by the IMD and the suggested method, and verify whether the suggested method yields a lower error percentage than the other two expressions.
\begin{equation*}
\text{Error percentage} = \frac{\text{Experimental Value} - \text{Actual Value}}{\text{Actual Value}} \times 100\%
\end{equation*}
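A quick check of this formula against the first row of Table \ref{tab1} (our own sketch, using the tabulated values):
\begin{verbatim}
def error_pct(experimental, actual):
    return 100.0 * (experimental - actual) / actual

print(error_pct(31.8147, 40.0))  # ~ -20.46, matching E_1 for 14/06
\end{verbatim}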
Extremely Severe Cyclonic Storm Tauktae is the first TC case, shown in Table \ref{tab1}; we find that the error percentage of the suggested method ranges from $-50\%$ to $-9\%$, while those of the expressions of Willoughby et al. \cite{bib21} and Tan and Fang \cite{bib22} together range from $-51\%$ to $66\%$.
We conclude that the suggested method underestimates the value of $r_{max}$, whereas the other two expressions sometimes overestimate and sometimes underestimate it in the case of Extremely Severe Cyclonic Storm Tauktae.
\begin{table}[h]
\begin{tabular}{p{40pt}|p{20pt}|p{40pt}|p{40pt}|p{30pt}|p{25pt}|p{25pt}|p{25pt}}
\hline
Date/Time& IMD &Suggested &Willoughby et al. & Tan and Fang & $E_1$ (\%)& $E_2$ (\%) & $E_3$ (\%) \\
\hline
14/06&40&31.8147&38.571&66.288&-20.46&-3.57&65.72 \\
15/00&60&30.7164&31.69&60.7136&-48.80& -47.18&1.18\\
15/06&60&30.2973&29.65&58.30 &-49.50&-50.58&-2.83\\
15/12&32&29.4437&25.785&53.10& -7.98&-19.42&65.93\\
15/18&32&29.1063&24.207&50.4 &-9.04&-24.35&57.5\\
\hline
\end{tabular}
\caption{Results of the first TC case (Extremely Severe Cyclonic Storm Tauktae), where $E_1$ denotes the error percentage between the suggested method and IMD, $E_2$ the error percentage between Willoughby et al.'s expression and IMD, and $E_3$ the error percentage between Tan and Fang's expression and IMD.}\label{tab1}
\end{table}
Cyclonic Storm Gulab is the second case, shown in Table \ref{tab2}; we find that the error percentage of the suggested method ranges from $-42\%$ to $51\%$, while those of the expressions of Willoughby et al. \cite{bib21} and Tan and Fang \cite{bib22} range from $-26\%$ to $202\%$.
After examining these cases, we conclude that the accuracy of the suggested method is higher than that of the other two expressions.
\begin{table}[h]
\begin{tabular}{p{40pt}|p{20pt}|p{40pt}|p{40pt}|p{30pt}|p{25pt}|p{25pt}|p{25pt}}
\hline
Date/Time& IMD &Suggested&Willoughby et al. & Tan and Fang & $E_1$ (\%)& $E_2$ (\%) & $E_3$ (\%) \\
\hline
25/00&55&32.43&40.55&71.39&-41.02&-26.27&29.8 \\
25/06&55&32.05&40.55&71.39&-41.72&-26.27&29.8\\
25/12&24&31.70&37.55&65.28&32.09&56.45&174.5\\
25/18&24&31.70&37.55&66.28&32.09&56.45&176.16\\
26/00&30&31.40&34.84&63.35&4.67&16.13&111.16\\
26/06&24&30.89&32.38&60.71&28.71&34.91&152.95\\
26/12&24&30.89&32.38&60.71&28.71&34.91&152.95\\
26/18&21&31.71&37.62&63.35&51.02&79.14&201.66\\
\hline
\end{tabular}
\caption{Results of the second TC case (Cyclonic Storm Gulab), where $E_1$ denotes the error percentage between the suggested method and IMD, $E_2$ the error percentage between Willoughby et al.'s expression and IMD, and $E_3$ the error percentage between Tan and Fang's expression and IMD.}
\label{tab2}
\end{table}
\section{Conclusion} \label{sec4}
In this study, a new expression has been developed to find the value of $r_{max}$. The equation includes TC parameters such as the centre's position (latitude and longitude) and the estimated pressure drop at the centre, while also considering the influence of geographical factors such as topography and geo-surface roughness. This expression may produce results that are better than those obtained using the other expressions.
\begin{equation*}
r_{max} = 10.16812 \times e^{(-0.213\sqrt{P_d} + 0.0169\phi)} + 23.817
\end{equation*}
where $\phi$ is latitude of the TC centre and $P_d$ is estimated pressure drop at the centre.
This relationship, which determines the value of $r_{max}$, is valid for values of $P_d$ less than or equal to 12 hPa, with no condition on the latitude of the TC centre over the North Indian basin.
\section{Future Scope}\label{sec5}
We will investigate more TC cases and work to create an expression for estimated pressure drops at the centre greater than 12 hPa. We will also attempt to validate this expression for determining the value of $r_{max}$ on additional basins, such as the South Pacific and North Pacific basins, while considering the influence of geographical factors such as topography and surface roughness.
\section*{Acknowledgement}
The authors thank the Indian Meteorological Department for making the best track data and tropical cyclone characteristic metrics freely available.
\section*{Supplementary Material}
This paper's supplementary material includes five additional TC cases over the North Indian Basin, with the same notation as in the paper.
\section*{Declarations}
\subsection*{Conflict of Interest}
Conflict of interest is not declared by any of the authors.
\subsection*{Author Statement}
Research and manuscript preparation were done equally by each author.
\subsection*{Data Availability}
The datasets, which include TC metrics such as TC number, time (year, month, day, and hour), location (longitude and latitude of the TC centre), estimated central pressure ($P_{c}$), maximum wind speed ($V_{m}$), and estimated pressure drop at the centre ($P_{d}$), stored at a time interval of 3 hours, are freely available on the IMD website.
\def\excludegraphics#1{includegraphics:#1}
\long\def\skipover#1{}
\begin{abstract}
Classical semantics assumes that one can model reference, predication and quantification with respect to a fixed domain of possible referent objects. Non-logical terms and quantification are then interpreted in relation to this domain: constant names denote unique elements of the domain, predicates are associated with subsets of the domain and quantifiers ranging over all elements of the domain. The current paper explores the wide variety of different ways in which this classical picture of precisely referring terms can be generalised to account for variability of meaning due to factors such as vagueness, context and diversity of definitions or opinions.
Both predicative expressions and names can be given either multiple semantic referents or be associated with semantic referents that have some structure associated with variability.
A semantic framework \textit{Variable Reference Semantics\/} (VRS) will be presented that can accommodate several different modes of variability that may occur either separately or in combination.
Following this general analysis of semantic variability, the phenomenon of \emph{co-predication\/} will be considered. It will be found that this phenomenon is still problematic, even within the very flexible VRS framework.
\end{abstract}
\begin{keywords}
Vagueness, Polysemy, Predication, Quantification, Copredication
\end{keywords}
\maketitle
\section{Introduction}
The notion of reference and the operations of predication and
quantification are fundamental to classical first-order logic. The
standard semantics for this logic assumes a fixed domain of possible
referent objects, with naming constants referring to unique elements
of the domain, predicates being associated to subsets of the domain
and quantifiers ranging over all the elements of the domain. Thus, if
$\mathcal{D}$ is the domain of objects that can be referred to, then a
constant name, say $c$, will denote an object $\delta(c)$, such that
$\delta(c)\in\mathcal{D}$, and a predicate $P$ will be taken to denote
a set of objects $\delta(P)$, with $\delta(P)\subseteq\mathcal{D}$.
Then one can straightforwardly interpret the predicating expression
$P(c)$ as a proposition that is true if and only if
$\delta(c)\in\delta(P)$.
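For illustration only (a toy sketch of our own, not drawn from the literature under discussion), this classical picture can be rendered in a few lines of Python:
\begin{verbatim}
# fixed domain D, denotation map delta, predication P(c)
D = {"alice", "everest", "k2"}
delta = {
    "c1": "everest",                # a constant denotes an element of D
    "Mountain": {"everest", "k2"},  # a predicate denotes a subset of D
}

def holds(pred, const):
    # P(c) is true iff delta(c) is in delta(P)
    return delta[const] in delta[pred]

print(holds("Mountain", "c1"))  # True
\end{verbatim}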
But what if we are dealing with a language that is subject to
variability in the interpretation of its symbols: a name may not
always refer to a unique, precisely demarcated entity; a predicate
need not always correspond to a specific set of entities. Accounting
for such semantic variability requires some generalisation or other
modification of the classical denotational semantics.
The aim of the current paper is not to propose a single theory but
rather to explore some representational possibilities. In the first
part of this paper I consider different ways in which semantics can be
given to predication in the presence of semantic variability. I shall
first consider some general ideas regarding different views of
vagueness, in particular the \emph{de re\/} and \emph{de dicto\/}
accounts of this phenomenon. I shall suggest that these need not be
mutually exclusive, but could be describing different aspects of
semantic variability. I explore possible models of denotation,
predication and quantification in the presence of such variability,
first informally, with the aid of some diagrams and then in terms of a
formal framework, based on \emph{standpoint semantics\/} \cite{Bennett11lomorevi,gomez2018dealing,alvarez2022standpoint}, within which
variable references can be modelled. In the final section of the
paper I shall consider the problem of \emph{co-predication}. This has
been the subject of considerable debate in recent years and is a
phenomenon that throws up many examples that pose difficulties for
different approaches and has been used to try to support/reject models
of semantic variability.
\subsection{Types of Vagueness}
The literature on vagueness has generally assumed that
the phenomena of vagueness could arise from three potential sources
(see e.g.\ \citet{Barnes10a}):
\begin{enumerate}[itemsep=0ex]
\item indeterminacy of representation (linguistic a.k.a\ \emph{de dicto\/}
vagueness),
\item indeterminacy of things in the world (\emph{ontic\/} a.k.a.\ \emph{de
re\/} vagueness),
\item limitations of knowledge (\emph{epistemic\/} vagueness).
\end{enumerate}
The epistemic view of vagueness has some strong advocates
\citep{Williamson92vague+ig,Williamson94vague,Sorensen2001-SORVAC} and
many others take the view that the logic of multiple possible
interpretations takes a similar form to logics of knowledge and belief
(e.g.\ \citet{Lawry08approp,Lawry2009aij}). Indeed, the
\emph{standpoint semantics}, which will be used in our following
analysis, can be regarded as being of this form. However, the question
of whether this is a deep or superficial similarity is not relevant to
our current concerns; and the distinction between \emph{de dicto\/}
and \emph{de re\/} aspects of multiple reference would also arise
within an epistemic account. So in the current paper we shall not
further consider the epistemic view.
\subsection{\emph{De Dicto\/} Vagueness}
A widely held view is that all vagueness is essentially \emph{de
dicto}, and that any kind of vagueness that seems to come from
another source can actually be explained in \emph{de dicto\/} terms
\citep{Lewis86pluralityOfWorlds,Varzi01a}. A fairly typical version
of such an attitude is that of Varzi who (focusing on the domain of
geography, within which vagueness is pervasive) takes a strong
position against ontic vagueness. Varzi's view of the relationship
between vague terms and their referents is summarised in the following
quotation:
\begin{quote}
``[To] say that the referent of a geographic term is not sharply
demarcated is to say that the term vaguely designates an object, not
that it designates a vague object.'' \citep{Varzi01a}
\end{quote}
Advocates of exclusively \emph{de dicto\/} vagueness typically favour
some variety of \emph{supervaluationist\/} account of linguistic
vagueness, within which the meanings of vague terms are explained in
terms of a collection (a set or some more structured ensemble) of
possible precise interpretations (often called
\emph{precisifications}). An early proposal that vagueness can be
analysed in terms of multiple precise senses was made by Mehlberg
\shortcite{Mehlberg58a}, and a formal semantics based on a
multiplicity of classical interpretations was used by van Fraassen
\shortcite{Fraassen69a} to explain `the logic of presupposition'.
This kind of formal model was subsequently applied to the analysis of
vagueness by Fine \shortcite{Fine75a}, and thereafter has been one of
the more popular approaches to the semantics of vagueness adopted by
philosophers and logicians, and a somewhat similar approach was
proposed by \citet{Kamp75a}, which has been highly influential among
linguists.
What one might call \emph{absolutist\/} versions of supervaluationism
are those that, following Fine, hold to a doctrine of
\emph{super-truth}.\footnote{What I call `absolutist'
supervaluationism is what \citet{Williamson03vague+real} refers to
as `traditional' supervaluationism.} This is the tenet that the
truth of an assertion containing vague terminology should be equated
with it being true according to all \emph{admissible\/}
precisifications \citep{Fine75a,Keefe2000a,Keefe2008a}. In such
theories the set of admissible precisifications is generally taken as
primitive, just as is the set of possible worlds in Kripke semantics
for modal logics. Other, supervaluation inspired, theories propose a
similar semantics based on the idea that the truth of a vague
assertion can only be evaluated relative to a locally determined
precisification (or set of precisifications), but reject the idea that
the notion of super-truth is useful. One might call such theories
\emph{relativist\/} supervaluation theories. Such theories include
that of \citet{Shapiro06a} and my own
{\emph{standpoint semantics\/}
\cite{Bennett11lomorevi,gomez2018dealing}.\footnote{According to standpoint
semantics, vague statements are true or false relative to a
\emph{standpoint}. My notion of standpoint is formally identified
with a set of precisifications determined by a set of (consistent)
judgements that are accepted as true in a given context. This idea
is similar to that presented by \citet{Lewis79a}, and elaborated by
\cite{Gottschling04a}.}
}
Apart from providing a general framework for specifying a \emph{de
dicto\/} semantics of vagueness, the supervaluationist idea is also
attractive in that it can account for
\emph{penumbral connection\/} \citep{Fine75a}, which many believe to
be an essential ingredient of an adequate theory of vagueness. This is
the phenomenon whereby logical laws (such as the principle of
non-contradiction) and semantic constraints (such as mutual
exclusiveness of two properties --- e.g.\ `... is red' and `... is
orange') are maintained even for statements involving vague concepts.
The solution, in a nutshell, is that, even though words may have
multiple different interpretations, each admissible precisification of
a language makes precise all vocabulary in a way that ensures mutual
coherence of the interpretation of distinct but semantically related
terms.
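As a toy illustration of super-truth and penumbral connection (the hue values and thresholds below are invented for the example), a sentence can be evaluated across a set of admissible precisifications:
\begin{verbatim}
patch_hue = 15  # an invented borderline hue between red and orange

# each precisification makes all predicates precise coherently:
# "red" and "orange" are mutually exclusive in every precisification
precisifications = [
    {"red": lambda h: h < 10, "orange": lambda h: 10 <= h < 40},
    {"red": lambda h: h < 20, "orange": lambda h: 20 <= h < 40},
]

def super_true(sentence):
    # super-truth: true according to all admissible precisifications
    return all(sentence(p) for p in precisifications)

# penumbral connection: "not both red and orange" is super-true ...
print(super_true(lambda p: not (p["red"](patch_hue)
                                and p["orange"](patch_hue))))
# ... while "red" is neither super-true nor super-false here
print(super_true(lambda p: p["red"](patch_hue)),
      super_true(lambda p: not p["red"](patch_hue)))
\end{verbatim}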
\skipover{
With regard to the concerns of the current paper, it is worth noting
that supervaluationist accounts tend to work most smoothly when
applied to vague predicates, and face some problems in accounting for
vagueness that appears to reside within nominal expressions (see
e.g.\ \citet{Weatherson03a}). Later (in Section~\ref{SEC:dedicto}),
I shall consider various possibilities for recasting vagueness in
objects as vagueness in predicates. However, I shall argue that such a
reduction cannot fully account for our intuitive understanding of
certain sentences where a `vague object' is referred to multiple
times. In order to represent the meanings of such sentences I propose
a semantics in which indeterminacy affects the interpretation of both
predicative and nominal expressions.
}
\subsection{\emph{De Re\/} Vagueness}
In natural language, objects are often described as vague. We commonly
encounter sentences such as: `The smoke formed a vague patch on the
horizon'; `The vaccine injection left a vague mark on my arm'; `He saw
the vague outline of a building through the fog'. In this kind of
usage, `vague' means something like `indefinite in shape, form or
extension'. Of course, `indefinite' is almost synonymous with `vague'
so this definition is far from fully explanatory. However, if we take
sentences like the foregoing examples at face value, they seem to
indicate that vagueness can be associated with an object in virtue of
some characteristic of its spatial extension. One may then argue that
spatial extension is an intrinsic property of an object, and that, if
an object has a vague intrinsic property, this indicates vagueness `of
the thing'. Such vagueness may be called \emph{de re\/} or
\emph{ontic}.
The idea that vagueness of objects is primarily associated with
vagueness of \emph{spatial extension\/} has been endorsed and examined
by \citet{Tye90vague-objects}, who gives the following criterion for
identifying vague objects: ``A concrete object $o$ is vague if and
only if: $o$ has borderline spatio-temporal parts;
and there is no determinate fact of the matter about whether there
are objects that are neither parts, borderline parts, nor non-parts
of $o$.'' (The second, rather complex condition concerns the intuition
that we cannot definitely identify borderline parts of a vague object.
The current paper will not consider this second-order aspect of
vagueness.)
\subsection{Combining \emph{De Re\/} and \emph{De Dicto\/} and the Idea
of \emph{De Sensu\/}}
Rejecting the possibility of \emph{de re\/} vagueness requires one to
argue that the forms of language by which vagueness is apparently
ascribed to objects are in fact misleading idioms, whose correct
interpretation does not involve genuine ontological commitment to
vague objects.
However, contrary to what many proponents of one or other of the two
explanations of vagueness often maintain, accepting that vagueness may be
\emph{de re\/} does not require one to deny that vagueness is often
\emph{de dicto}. Indeed, \citet{Tye90vague-objects} suggests that
vagueness can be present both in predicates and also in objects. He
argues that the vagueness of objects cannot simply be explained by
saying that they are instances of vague predicates.
But the idea of vague objects in the physical world may be hard
to accept. Although at the quantum level of atomic particles we may
conceive that the position of an electron might be vague in a physical
sense, vagueness of macroscopic physical objects seems a very odd idea.
For example, in considering whether a particular twig is a part of
some pile of twigs, it seems unintuitive to consider the twig pile
as a vague physical object, which may or may not include certain twigs.
As a palliative to this worry, I propose the existence of \emph{de sensu\/}
vagueness.
\emph{De sensu\/} vagueness is a kind of indeterminacy, in the form of
a multi-faceted structure, that is located within the
sense, that is, in the semantic denotation of a term. In the
case of names and nominal variables, this means that they can refer
to semantic objects that are indeterminate with regard to any exact
physical entity. Both the model of vagueness proposed in fuzzy logic
\cite{Zadeh65a,Zadeh75a,Goguen69a} and
also the idea of \emph{dot categories\/} and \emph{dot objects\/}
proposed in some accounts of co-predication (e.g.\ \cite{arapinis2015plea})
can be regarded as based on a \emph{de sensu\/} conception of vagueness.
\section{Illustrating the Semantics of Predication}
Predication involves a predicate and a nominal expression. Both the
predicate and the nominal can be given a semantics that allows either
\emph{de dicto\/} vagueness or \emph{de sensu\/} vagueness or both of
these. \emph{De dicto\/} vagueness is modelled within that part of the
semantics that maps symbolic expressions (predicates and nominals) to
their referents. If there is no \emph{de sensu\/} vagueness then the
referents will be precise entities, and these will normally be taken
as being actual entities in the world. Thus, the reference of a
nominal will be a particular precisely demarcated material entity and
the reference of a property (or relation) predicate will be a
determinate set (or set of pairs) of material entities. If there is
\emph{de sensu\/} vagueness then the references of symbols are
semantic objects which can be vague in so far as their correspondence
with precisely demarcated material entities is not fully determinate.
Before specifying a formal semantic framework, it may be useful to
consider possible models of predication by means of diagrams. The
image in Figure~\ref{FIG:situation+classical}(a) depicts an aerial
view of a hilly region with rocky crags. As we see, the terrain is
irregular and there is no obvious unique way of dividing it into
separate `crag' objects. The name `Arg Crag' has been given to one of
the rocky outcrops. However, there may be different opinions regarding
exactly which outcrop is Arg Crag. Indeed, some people might use
the name to refer to the whole of this rocky area, whereas others
would consider that it refers to a more specific rock structure.
In the standard classical semantics each conceptual term denotes a
fixed set of entities and each name must refer to a single precise
entity. Thus, even before specifying denotations we will need to
divide the craggy region into specific individual objects, to make a
set of possible referents:
\begin{figure}[htb]
\centerline{
\parbox{2.7in}{\includegraphics[width=2.6in]{figures/the-situation.png}
}
~~~~~~~~
\parbox{2.3in}{\includegraphics[width=2.1in]{figures/classical-predication.png}
}
}
\caption{(a) Arg Crag and surrounding area.~~~~~~~ (b) Classical denotational
semantics.}
\label{FIG:situation+classical}
\end{figure}
The best-known approach to modelling vague concepts that has been
taken in computer science and AI is that of \textit{fuzzy sets}. A
fuzzy set is one in which, rather than having a definite true or false
membership condition, there are degrees of membership.
Figure~\ref{FIG:fuzzy+super}(a)
depicts the standard fuzzy logic model of vagueness: each singular
term denotes a single precise referent entity and each property
predicate denotes a fuzzy set of precise entities.
Figure~\ref{FIG:fuzzy+super}(b) depicts the form of semantics arising
for a
\emph{de dicto\/} theory of vagueness, where both properties and
objects are precise, but it is the predicate symbols and names of the
language that are referentially indeterminate.
\begin{figure}[ht]
\centerline{
\parbox{2.7in}{\includegraphics[width=2.6in]{figures/vague-predication-PsvNsp.pdf}\\
\mbox{}~~~~~~~~~~~~~~~~~~~~~~~~~~~\textbf{(a)} }
~~~
\parbox{2.3in}{\includegraphics[width=2.1in]{figures/multi-pred-name.png}\\
\mbox{}~~~~~~~~~~~~~~~~~~~~~~~~~~~\textbf{(b)} }
}
\caption{(a) Fuzzy model of a vague predicate. (b) Multiple denotation
model, such as supervaluation semantics.}
\label{FIG:fuzzy+super}
\end{figure}
One might assume that the blurry concept boundary of the fuzzy logic
model provides a model of predicate denotation that is radically different from
that given by the supervaluationist multi-denotation model. But
it can be argued that this difference is not greatly significant
because we can consider the fuzzy set as essentially equivalent to
a densely nested structure of classical set denotations (one for each
degree of membership value). However, the semantics for evaluating a
predicating expression, $\mathsf{C}(\mathsf{a})$, is significantly different, since
in fuzzy logic the truth value of $\mathsf{C}(\mathsf{a})$ would be the degree of
membership of the object denoted by $\mathsf{a}$ with respect to the fuzzy
set denoted by $\mathsf{C}$; whereas, in the multiple denotation model we
simply get different truth values for different choices and then
(as we shall see in detail below) can determine truth in relation to a
`standpoint' regarding what choices we consider admissible.
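To make the contrast concrete, consider a schematic comparison (the
membership degrees and precisifications here are invented purely for
illustration). Suppose $\mathsf{a}$ names a particular outcrop and
$\mathsf{C}$ is the predicate `crag':
\begin{itemize}
\item in the fuzzy model, the outcrop might belong to the fuzzy set
denoted by $\mathsf{C}$ to degree $0.7$, in which case the truth
value of $\mathsf{C}(\mathsf{a})$ is simply $0.7$;
\item in the multiple denotation model, there might be three
admissible precisifications of $\mathsf{C}$, under two of which the
outcrop falls within the denoted set; $\mathsf{C}(\mathsf{a})$ is
then true at two precisifications and false at one, rather than
receiving any intermediate degree of truth.
\end{itemize}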
A limitation of both the standard fuzzy model and the classical
multiple reference model is that only variability in the terms is
modelled, not variability in the objects referred to.
However, both approaches can be modified to incorporate vague
referents, such as objects with indeterminate physical boundaries.
For instance, objects with fuzzy extensions can be modelled as fuzzy sets of points.
And within non-fuzzy multiple-reference semantics, one could also
associate a set of different extensions for different precise versions
of an object (or perhaps maximal and minimal extensions, as in the
`egg-yolk' representation of \citet{Cohn96KRvague}).
\begin{figure}[htb]
\centerline{
\includegraphics[width=2.3in]{figures/full-multi.png}
}
\caption{Possible multiplicity of reference when predicates, names and objects
are all indeterminate.}
\label{FIG:full-multi}
\end{figure}
Figure~\ref{FIG:full-multi}
depicts an extended form of multi-reference semantics (\emph{The Full Multi\/}).
Here we see not only that the predicates and names can have variable
reference, but also that there can be multiple possible precise versions of
each reference object. This variability in the objects could
correspond to \emph{de re\/} vagueness, but I prefer to think of it as
a form of \emph{de sensu\/} vagueness, with the objects being vague
semantic objects (visualised as the small black discs) that correspond to multiple precise physical
objects.
It is worth noting that Figure~\ref{FIG:full-multi} still represents a
considerable simplification of the potential semantic variability that
might arise. In particular, I have assumed that the global set of vague
objects, together with their associations to precise entities,
remains fixed, even though the subset associated with predicate
$\mathsf{C}$ may vary. In other words, $\mathsf{C}$ only varies in how it
\emph{classifies\/} objects, not in how these objects are
\emph{individuated}.
The VRS semantics presented below is more general. It allows different
senses of sortal predicates to be associated with different ways of
individuating objects (for instance, under some interpretations of
`Crag', all three of the roundish
craggy objects within the innermost circle of the diagram might be
considered as parts of a single large crag).
% Notation macros for the standpoint logic sections.
\def\PropConst#1{\hbox{$P_{#1}$}}
\def\StandConst#1{\hbox{$s_{#1}$}}
\def\p#1{\hbox{$\pi_{#1}$}}
\def\s#1{\hbox{$s_{#1}$}}
\def\standb#1{\mathop\square\nolimits_{#1}}
\def\standd#1{\mathop\lozenge\nolimits_{#1}}
\def\standbi#1{\standb{s_{#1}}}
\def\standdi#1{\standd{s_{#1}}}
\def\accrelation#1{\hbox{$\s{#1}$}}
\def\PropStandpointLogic{{\mathbb{S}}}
\section{Semantic Analysis of Variable Reference}
We now consider what kind of semantics can account for the general
form of variable denotation illustrated in
Figure~\ref{FIG:full-multi}. \emph{Standpoint Semantics\/} provides a
quite general framework within which polysemy can be modelled in terms
of the symbols of a formal language having multiple possible
denotations. We shall start by specifying a simple propositional
standpoint logic and then elaborate this to a first-order formalism,
\emph{Variable Reference Semantics}, within which we can model
predication and quantification. But first we explain the basic idea of
standpoint semantics.
Standpoint Semantics is based on a formal structure that models
semantic variability in terms of the following two closely connected
aspects:
\begin{itemize}
\item A \emph{precisification\/} is a precise version of a vague
language. It is used as an index to assign precise denotations to
vague terms. The model is based on having a set of precisifications,
each corresponding to a consistent precise interpretation of the
language.
\item A \emph{standpoint\/} is modelled as a set of
\emph{precisifications}. Each standpoint corresponds to a range of
possible precise interpretations that are compatible with a
particular context of language understanding. It could capture
explicit specifications of terminology given by some organisation or implied
constraints on meanings that arise whenever some vague statement is
made in conversation (e.g. ``That rock formation is a crag, but this
is a boulder'').
\end{itemize}
\subsection{Propositional Standpoint Logic}
\subsubsection{Syntax}
The language of \emph{propositional standpoint logic}
${\mathbb{S}}$ is based on a non-empty finite set of propositional variables
\hbox{$\hbox{$\mathcal{P}$} = \{\PropConst{1}\,, \ldots, \PropConst{n} \}$}. It extends the usual syntax of
propositional logic by adding a set of standpoint
operators $\Sigma = \{
\standbi{1},
\dots,
\standbi{n},
\standb{*}\}$,
where $\hbox{$*$}$ is used to designate the \emph{universal
standpoint}.\footnote{The language is often augmented by also
specifying a (partial) order relation over the standpoint operators,
which indicates that one standpoint may be more specific or more
general than another. For the purposes of the current paper, we do
not consider this elaboration of the basic system. }
So the language of ${\mathbb{S}}$ is the smallest set
$S$ of formulae such that $\mathcal{P}\subseteq S$ and all
formulae of the forms
$\{\n\phi,\ (\phi\con\psi),\ \square_\sigma \phi\}\subseteq S$
for each $\sigma\in\Sigma$ and every $\phi,\psi \in S$.
One can easily extend the language by defining additional connectives
(e.g.\ $\vee$, $\to$) and the dual operators
($\standdi{1} \dots,
\standdi{n},
\mathop\lozenge_{\hbox{$*$}}$)
in the usual way.
\subsubsection{Semantics}
In order to characterise the semantics of \emph{standpoint logic\/}
${\mathbb{S}}$, we
specify a class $\mathfrak{M}_{\PropStandpointLogic}$ of structures of the form $\tuple{\Pi,S,\delta}$ where:
\begin{itemize}
\item $\Pi = \{\ldots, \pi_{i}\,, \ldots \}$ is a set of precisifications
(which are analogous to possible worlds),
\item $S = \{\accrelation{1},\ldots,\accrelation{n},
\accrelation{*}\}$ is a set of subsets of $\Pi$ (where
$\accrelation{*} = \Pi$ is the precisification set of the universal standpoint),
\item $\delta: \mathcal{P}\imp \wp(\Pi)$ is a function
mapping each propositional
variable $p\in\mathcal{P}$ to the set $\delta(p)\subseteq \Pi$ of precisifications
at which it is true. (Here, $\wp(\Pi)$ denotes the
powerset of $\Pi$.)
\end{itemize}
The most distinctive elements of the model are the $\accrelation{i}$,
which model the notion of standpoint via a set of precisifications
that are \emph{admissible\/} for that standpoint. Thus, if $\pi
\in \s{i}$ then all propositions that are unequivocally true according
to standpoint $\s{i}$ are true at precisification $\pi$.
For a model structure $\mathcal{M}\in \mathfrak{M}_{\PropStandpointLogic}$, we write $(\mathcal{M},
\pi) \models \phi$ to mean that formula $\phi$ is true at a
precisification $\pi\in\Pi$ in $\mathcal{M}$. For a model
$\mathcal{M} = \tuple{\Pi,S,\delta}$, this relationship is defined by:
\begin{itemize}
\item $(\mathcal{M}, \pi) \models P$\ \ if and only if\ \ $\pi \in \delta(P)$,
\item $(\mathcal{M}, \pi) \models \n \alpha $ \ if and only if\ $(\mathcal{M}, \pi) \nvDash \alpha$,
\item $(\mathcal{M}, \pi) \models \alpha \con \beta$
\ if and only if
$(\mathcal{M}, \pi) \models \alpha$ and $(\mathcal{M}, \pi) \models \beta$,
\item $(\mathcal{M}, \pi) \models \standbi{i} \alpha$
\ if and only if
$(\mathcal{M}, \hbox{$\pi'$}) \models \alpha$ for all
$\hbox{$\pi'$} \in \accrelation{i}$,
\item $(\mathcal{M}, \pi) \models \standb{*} \alpha$
\ if and only if
$(\mathcal{M}, \hbox{$\pi'$}) \models \alpha$ for all $\hbox{$\pi'$}\in\Pi$.
\end{itemize}
A formula $\phi$ is \emph{valid\/} if it is true at every
precisification of every model in $\mathfrak{M}_{\PropStandpointLogic}$. In this case we write
$\models_{{\mathbb{S}}} \phi$.
\vspace{1ex}
The logic ${\mathbb{S}}$ enables one to formalise the content of statements such
as: ``Arg is definitely either a crag or a boulder. This map labels it
a crag, but I would say it could be called either, although a boulder is
different from a crag.'' With {\sf Ba} and {\sf Ca} meaning respectively
`Arg is a boulder' and `Arg is a crag', one could write:
\def\pf#1{\mathsf{#1}}
$$
\mathop\square_{\hbox{$*$}}(\pf{B}\pf{a} \dis \pf{C}\pf{a})
\ \con \
\mathop\square_{\textit{map}}\pf{C}\pf{a}
\ \con \
\mathop\lozenge_{\textit{me}}\pf{B}\pf{a}
\con
\mathop\lozenge_{\textit{me}}\pf{C}\pf{a}
\con
\mathop\square_{\textit{me}}\n(\pf{B}\pf{a} \con \pf{C}\pf{a})
$$
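To see that this formalisation is satisfiable, here is a minimal model
sketch (the particular sets are invented purely for illustration).
Writing $p_1, p_2$ for two precisifications, let
$$
\Pi = \{p_1,p_2\}, \quad
s_{\textit{map}} = \{p_1\}, \quad
s_{\textit{me}} = \{p_1,p_2\}, \quad
\delta(\pf{C}\pf{a}) = \{p_1\}, \quad
\delta(\pf{B}\pf{a}) = \{p_2\}.
$$
Every precisification verifies $\pf{B}\pf{a} \dis \pf{C}\pf{a}$, so the
first conjunct holds; $\pf{C}\pf{a}$ holds throughout
$s_{\textit{map}}$; each of $\pf{B}\pf{a}$ and $\pf{C}\pf{a}$ holds at
some member of $s_{\textit{me}}$; and no precisification makes both
true, which verifies the final conjunct.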
\subsection{Variable Reference Logic}
\label{SEC:VRL}
We now generalise the standpoint semantics framework to
define a first-order variable reference logic ${\mathbb{V}_1}$
that can represent predication and quantification.\footnote{In this
presentation, we omit specifying the semantics for propositional
connectives and standpoint
operators and just give the semantics for predication and
quantification in terms of truth conditions at a particular
precisification in a given model. A more comprehensive semantics
could be given by incorporating, with slight modification, the
relevant specifications from the propositional semantics.}
\subsubsection{Syntax}
The language of ${\mathbb{V}_1}$ is built from a vocabulary
$\mathcal{V} =
\tuple{\mathcal{K},\mathcal{P},\mathcal{Q},\mathcal{N},\mathcal{X}}$,
comprising the following symbols:
\begin{itemize}[itemsep=0ex,label=$\bullet$]
\item $\mathcal{K} = \{ \ldots, \mathsf{K}_i, \ldots \}$ is a set of count-noun symbols (sortals),
\item $\mathcal{P} = \{ \ldots, \mathsf{P}_i, \ldots \}$ is a
set of individual property predicates,
\item $\mathcal{Q} = \{ \ldots, \mathsf{Q}_i, \ldots \}$
is a set of precise entity property predicates, \hfill
(e.g.\ exact spatial properties)
\item $\mathcal{N} = \{ \ldots, \mathsf{n}_i, \ldots \}$
is a set of proper name symbols.
\item $\mathcal{X} = \{ \ldots, x_i, \ldots \}$
is a set of nominal variable symbols.
\end{itemize}
The symbols of $\mathcal{K}$, $\mathcal{P}$ and $\mathcal{Q}$ can all
be applied as predicates, with the sortal symbols of $\mathcal{K}$
also being used to specify a range of quantification.
Symbols of both $\mathcal{N}$ and $\mathcal{X}$ can occur as
arguments of predicates, although the variable symbols of
$\mathcal{X}$ are only meaningful in the context of quantification.
The set $S$ of formulae of ${\mathbb{V}_1}$ is the smallest set such that:
\begin{enumerate}[label=$\bullet$,itemsep=0ex]
\item
$\{ \alpha(\tau) \ | \
\alpha\in (\mathcal{K}\!\cup\!\mathcal{P}\!\cup\!\mathcal{Q}),
\tau\in (\mathcal{N}\!\cup\!\mathcal{X})\} \subseteq S$
\hfill (contains all atomic predication formulae)
\item
$\{\n\phi,\ (\phi\con\psi),\ \mathop\square_{s_i} \phi,\ \mathop\square_{\hbox{$*$}}\phi \} \subseteq S$\ \ for every $\phi,\psi\in S$
\hfill (closed under connectives)
\item
$(\forall \pf{K}\!:\!x)[\phi] \in S$ for every $\pf{K}\in\mathcal{K}$, every
$x\in\mathcal{X}$ and every $\phi \in S$
\hfill (includes quantified formulae)
\end{enumerate}
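By way of illustration (the particular non-logical symbols here are
assumed for the example, not fixed by the definition): if
$\pf{Crag} \in \mathcal{K}$, $\pf{Rocky} \in \mathcal{P}$ and
$\pf{argcrag} \in \mathcal{N}$, then
$\pf{Rocky}(\pf{argcrag})$, $\standbi{1}\pf{Crag}(\pf{argcrag})$ and
$(\forall \pf{Crag}\!:\!x)[\pf{Rocky}(x)]$ are all formulae of ${\mathbb{V}_1}$.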
\subsubsection{Semantics}
The semantics for variable reference logic ${\mathbb{V}_1}$ will be
based on structures
$\tuple{ E, \Pi, \mathcal{V}, \delta }$
where:
\begin{enumerate}[label=$\bullet$,itemsep=0ex]
\item $E$ is the set of precise entities.
\item $\Pi$ is the set of precisifications.
\item $\mathcal{V} = \tuple{ \mathcal{K}, \mathcal{P}, \mathcal{Q},
\mathcal{N}, \mathcal{X} }$
is the non-logical vocabulary, as specified above,
\item $\delta = \tuple{
\delta_{\mathcal{K}},
\delta_{\mathcal{P}},
\delta_{\mathcal{Q}},
\delta_{\mathcal{N}},
\delta_{\mathcal{X}} }$ is a denotation function that can be
considered as divided into components specifying the denotations for
each type of non-logical symbol (see below).
\end{enumerate}
To simplify the explanation of the semantics, I first
define the semantic representation of an \emph{indefinite individual}.
(Here, and in the following, $B^A$ denotes the set of all functions from domain set $A$
into the range set $B$):
\begin{itemize}
\item $I = E^\Pi$ is the set of (indefinite) individuals, each of
which is a mapping from the set of precisification indices $\Pi$ to the
set of precise entities.
\end{itemize}
For each individual $i\in I$
and each precisification $\pi\in\Pi$, the value of $i(\pi)$ will be
a precise version of individual $i$ according to precisification $\pi$.
\vspace{1ex}
\noindent The denotation functions for all the non-logical
vocabulary of the language are defined as follows:
\begin{itemize}
\item $\delta_{\mathcal{K}} : \mathcal{K} \to \wp(I)^\Pi$ \ \ is a
function mapping each sortal concept (count noun) in
$\mathcal{K}$ to a function from precisifications to sets of indefinite individuals.
\end{itemize}
On the basis of $\delta_{\mathcal{K}}$ we define
$I_{\pi} = \bigcup \{ \delta_{\mathcal{K}}(\mathsf{K})(\pi) \ |
\ \mathsf{K} \in \mathcal{K}\}$,
the set of all individuals of any sort according to precisification $\pi$.
We can now define:
\begin{itemize}
\item $\delta_{\mathcal{P}} : \mathcal{P} \to \wp(I)^\Pi$, \
such that, for all $\mathsf{P} \in \mathcal{P}$\
we must have $\delta_{\mathcal{P}}(\mathsf{P})(\pi) \subseteq I_\pi$.
\item $\delta_{\mathcal{Q}} : \mathcal{Q} \to \wp(E)$ \ \
\ \ (each precise predicate is associated with a set of precise entities)
\item $\delta_{\mathcal{N}} : \mathcal{N} \to I^\Pi$, \
subject to the condition that, for all $\mathsf{n} \in \mathcal{N}$
we must have $\delta_{\mathcal{N}}(\mathsf{n})(\pi) \in
I_\pi$.
\item $\delta_{\mathcal{X}} : \mathcal{X} \to I$,
\ \ \ (but the semantically relevant denotations of variables are
determined by sortals occurring in quantifiers)
\end{itemize}
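To ground these definitions, here is a minimal instantiation (all of
the particular values are invented for illustration). Let
$\Pi = \{\pi_1, \pi_2\}$, let $E = \{e_1, e_2\}$ contain two precise
candidate demarcations of a rocky outcrop, and let $a \in I$ be the
individual with $a(\pi_1) = e_1$ and $a(\pi_2) = e_2$. We might then
set
$\delta_{\mathcal{K}}(\pf{Crag})(\pi_1) =
\delta_{\mathcal{K}}(\pf{Crag})(\pi_2) = \{a\}$
and
$\delta_{\mathcal{N}}(\pf{argcrag})(\pi_1) =
\delta_{\mathcal{N}}(\pf{argcrag})(\pi_2) = a$,
while a precise spatial property $\mathsf{Q}_1$ with
$\delta_{\mathcal{Q}}(\mathsf{Q}_1) = \{e_1\}$ holds of one
demarcation of the crag but not of the other.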
\subsubsection{Interpretation of Reference and Predication}
For a model $\mathcal{M} =\tuple{ E, \Pi, \mathcal{V}, \delta }$ and
precisification $\pi\in \Pi$, the truth conditions for atomic
predication formulae are as follows:
\begin{itemize}
\item $(\mathcal{M}, \pi) \models \pf{K}(\pf{n})$\ \ if and only if\ \
$(\delta_{\mathcal{N}}(\pf{n}))(\pi) \in (\delta_{\mathcal{K}}(\pf{K}))(\pi)$
\item $(\mathcal{M}, \pi) \models \pf{P}(\pf{n})$\ \ if and only if\ \
$(\delta_{\mathcal{N}}(\pf{n}))(\pi) \in (\delta_{\mathcal{P}}(\pf{P}))(\pi)$
\item $(\mathcal{M}, \pi) \models \pf{Q}(\pf{n})$\ \ if and only if\ \
$((\delta_{\mathcal{N}}(\pf{n}))(\pi))(\pi) \in \delta_{\mathcal{Q}}(\pf{Q})$
\item $(\mathcal{M}, \pi) \models \pf{K}(x)$\ \ if and only if\ \
$\delta_{\mathcal{X}}(x) \in (\delta_{\mathcal{K}}(\pf{K}))(\pi)$
\item $(\mathcal{M}, \pi) \models \pf{P}(x)$\ \ if and only if\ \
$\delta_{\mathcal{X}}(x) \in (\delta_{\mathcal{P}}(\pf{P}))(\pi)$
\item $(\mathcal{M}, \pi) \models \pf{Q}(x)$\ \ if and only if\ \
$(\delta_{\mathcal{X}}(x))(\pi) \in \delta_{\mathcal{Q}}(\pf{Q})$
\end{itemize}
To make sense of these specifications you need to be aware that
evaluation of a symbol may require zero, one, or two levels of
de-referencing in relation to the precisification index $\pi$. You
should first note that the $\pf{K}$ and $\pf{P}$ predications are
semantically equivalent. When applied to a name constant, {\sf n},
both the constant and the predicate get evaluated with respect to a
precisification, so that the name denotes a particular individual and
the predicate denotes a set of such individuals. When the argument is
a variable rather than a name constant, the variable directly denotes
an individual without any need for evaluation relative to a
precisification. In the case of (exact) $\pf{Q}$ predications,
individuals need to be further evaluated relative to the
precisification in order to obtain a precise entity, which can be
tested for membership of the precise set denoted by property
$\pf{Q}$. So although $\pf{Q}$ predicates are not themselves subject
to variation in relation to $\pi$, they impose an extra level of
variability in the interpretation of their argument symbol.
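As a worked illustration of these de-referencing levels (using the
schematic symbols above), consider evaluating $\pf{Q}(\pf{n})$ at a
precisification $\pi$:
\begin{enumerate}[itemsep=0ex]
\item de-reference the name: $(\delta_{\mathcal{N}}(\pf{n}))(\pi)$
yields an individual $i \in I$;
\item de-reference the individual: $i(\pi)$ yields a precise entity
$e \in E$;
\item test membership: $\pf{Q}(\pf{n})$ is true at $\pi$ just in case
$e \in \delta_{\mathcal{Q}}(\pf{Q})$.
\end{enumerate}
By contrast, evaluating $\pf{P}(\pf{n})$ stops after the first step,
and in $\pf{P}(x)$ the variable requires no de-referencing at all,
since $\delta_{\mathcal{X}}(x)$ is already an individual.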
It may seem curious that the same precisification index $\pi$ is used
both for mapping names (and predicates) to individuals (and sets of individuals), and
also for mapping from individuals to precise entities. Thus the
individual denoted by a name {\sf n} in precisification $\pi$
is $(\delta_{\mathcal{N}}(\pf{n}))(\pi)$ and according to $\pi$ it also refers to
the precise entity $((\delta_{\mathcal{N}}(\pf{n}))(\pi))(\pi)$. This slightly
simplifies the specification and does not
appear to place any constraint on the semantics.
\subsubsection{Interpretation of Quantification}
To facilitate specification of
the semantics for quantification, I
define a meta-level operation $\mathcal{M}^{(x_i\Rightarrow \xi)}$
on interpretation structures that enables us to replace the value of a
variable with a new value.
For
$\mathcal{M} =\tuple{ E, \Pi, \mathcal{V}, \delta }$,
with
$\mathcal{V} = \tuple{ \mathcal{K}, \mathcal{P}, \mathcal{Q}, \mathcal{N}, \mathcal{X} }$,
$\delta = \tuple{
\delta_{\mathcal{K}},
\delta_{\mathcal{P}},
\delta_{\mathcal{Q}},
\delta_{\mathcal{N}},
\delta_{\mathcal{X}} }$,
and $\xi\in E^\Pi$
let
$\mathcal{M}^{(x_i\Rightarrow \xi)}$ be the structure
$\mathcal{M}' =\tuple{ E, \Pi, \mathcal{V}, \delta' }$,
where
$\delta' = \tuple{
\delta_{\mathcal{K}},
\delta_{\mathcal{P}},
\delta_{\mathcal{Q}},
\delta_{\mathcal{N}},
\delta'_{\mathcal{X}} }$
and $\delta'_{\mathcal{X}}(x_j) = \delta_{\mathcal{X}}(x_j)$
for every $x_j\neq x_i$ and $\delta'_{\mathcal{X}}(x_i) = \xi$.
Finally we can specify the interpretation of a quantified formula:
\begin{itemize}
\item
$(\mathcal{M}, \pi)\ \models\ (\forall\,\pf{K}\!:\! x)[\phi(x)]
\hbox{~~~~if and only if~~~~}
(\mathcal{M}^{(x\Rightarrow \xi)}, \pi)\ \models\ \phi(x)
\hbox{~~for all}\ \xi \in \delta_{\mathcal{K}}(\pf{K})(\pi)$
\item
$(\mathcal{M}, \pi)\ \models\ (\mathcal{E}\,\pf{K}\!:\! x)[\phi(x)]
\hbox{~~~~if and only if~~~~}
(\mathcal{M}^{(x\Rightarrow \xi)}, \pi)\ \models\ \phi(x)
\hbox{~~for some}\ \xi \in \delta_{\mathcal{K}}(\pf{K})(\pi)$
\end{itemize}
This is much the same as how one would specify quantification in a
classical sorted logic. The universally quantified formula is true
just in case its immediate sub-formula is true for all possible values
of the variable, taken from the range of the sortal predicate.
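For instance (with assumed denotations), if
$\delta_{\mathcal{K}}(\pf{Crag})(\pi) = \{a, b\}$ for two individuals
$a, b \in I$, then $(\forall\,\pf{Crag}\!:\!x)[\pf{Rocky}(x)]$ is true
at $\pi$ just in case $\pf{Rocky}(x)$ holds at $\pi$ in both
$\mathcal{M}^{(x\Rightarrow a)}$ and $\mathcal{M}^{(x\Rightarrow b)}$,
that is, just in case both $a$ and $b$ belong to
$(\delta_{\mathcal{P}}(\pf{Rocky}))(\pi)$.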
\subsubsection{What Can ${\mathbb{V}_1}$ Express?}
So was the result worth all the work of defining that complicated
semantics? Does it help us understand and represent semantic
variability and its effect on reference, predication and
quantification? Yes, I think so. Although the final definitions of
the quantifiers seem simple, one needs to consider that significant
work has been done by the denotation functions for the different kinds
of symbol and the semantics given above for the different cases of
predication. Their effect is to allow quantification to operate at an
intermediate level in the interpretation of semantic variability of
reference. The \emph{individuation\/} of potential referents occurs
prior to quantification by establishing individuals in relation to a
given interpretation of sortal predicates. But these individuals can
still be indeterminate in that they may correspond to many exact
entities.
Consider the statement ``There is definitely a mountain in Ecuador
that some say is on the equator and others say is not''. This might
be represented as:
$$
\mathop\square_{\hbox{$*$}}(\mathcal{E}\, \pf{Mountain}\!:\!x)
[\ \pf{InEcuador}(x) \con
\mathop\lozenge_{\hbox{$*$}}\pf{OnEquator}(x) \ \con\ \mathop\lozenge_{\hbox{$*$}}\n\pf{OnEquator}(x)\ ]
$$
I believe that this kind of statement cannot be represented without
using a semantics that can account for both \emph{de dicto\/} and
\emph{de re\/} forms of vagueness. Its truth conditions require that
there is something that is definitely a mountain and definitely in
Ecuador, but whose extent is ill-defined, such that it could reasonably
be said either that it does or does not lie on the equator.
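To see how such a statement can come out true, consider a minimal
model sketch (the geography is invented for illustration): take two
precisifications $\pi_1, \pi_2$ and an individual $m$ whose precise
versions differ in extent, with $m(\pi_1)$ straddling the equator and
$m(\pi_2)$ lying wholly to its north. If $m$ belongs to both
$\delta_{\mathcal{K}}(\pf{Mountain})(\pi)$ and
$\delta_{\mathcal{P}}(\pf{InEcuador})(\pi)$ at both precisifications,
while $\pf{OnEquator}$ is a precise ($\mathcal{Q}$-type) predicate
with $m(\pi_1) \in \delta_{\mathcal{Q}}(\pf{OnEquator})$ and
$m(\pi_2) \notin \delta_{\mathcal{Q}}(\pf{OnEquator})$, then the
formula above is true at every precisification.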
\def\pf#1{\hbox{$\mathsf{#1}$}}
\section{Co-Predication and Deep Polysemy of Sortal Concepts}
I now turn to another issue involving polysemy and reference, one that I
admit I had not properly considered during the whole time I was
constructing the VRS logic described above: the issue of
co-predication
\cite{asher2011lexical,arapinis2015plea,gotham2017composing,liebesman2017copredication,viebahn2020copredication}.
When I first became aware that people were studying
this phenomenon as an issue in its own right, rather than just being
a particular case of polysemy, I was somewhat surprised. And even
after preliminary consideration, I assumed that it could be handled
without special difficulty within the double level model
of semantic indeterminacy provided by VRS. However, once I had got
more deeply
acquainted with the problem I came to realise that it does require
special attention. This section will be a largely informal
discussion of the topic.
\subsection{The Problem of Co-Predication}
A \emph{sortal\/} concept is one that carries with it criteria
for individuating and counting entities.
Hence one might expect that sortals should at least be unambiguous
when it comes to the fundamental criteria for being an instance,
or at least to the general ontological category of the objects that
can instantiate them.
However, this is not the case.
Consider the sentence ``There is a book by Olaf Stapledon on my bookshelf.'' With the simplification of treating
being on my bookshelf as a simple predicate we would get the following `naive' classical representation:
\begin{equation*}
\begin{split}
\mathcal{E} x [ \pf{Book}(x) \con \pf{By}(x,\pf{OS}) \con \pf{OnMyBookShelf}(x) ]
\end{split}
\end{equation*}
A problem with this representation is that the predicate \pf{Book} appears to be polysemous between the sense of being an informational artifact and that of being a physical object. And, moreover, whereas the authorship predicate, \pf{By}, applies to the informational sense, the predicate \pf{OnMyBookShelf} applies to the physical sense.
It is this application to a single referent of two predicates
that seem to describe very different kinds of object that is called
co-predication. The issue is not just that \pf{Book} has two senses, but that the predications seem to imply the existence of an object that is an instance of both senses of \pf{Book}.
One might expect that this problem could be avoided by adopting the view that books are a type of object that has both an informational and a physical aspect, and that different types of predicate that can be applied to books must be interpreted with respect to the aspect appropriate to the type of predicate (authorship applying to the informational aspect and physical properties to the physical aspect of a book). But, unfortunately, the idea of multi-aspect objects can only account for very simple examples. Consider the sentence:
``There are two books by Olaf Stapledon on my bookshelf.''
\begin{equation*}
\begin{split}
\mathcal{E} x \mathcal{E} y [x\neq y & \con \pf{Book}(x) \con \pf{Book}(y) \\
& \con \pf{By}(x,\pf{OS}) \con \pf{By}(y,\pf{OS}) \\
& \con \pf{OnMyBookShelf}(x) \con \pf{OnMyBookShelf}(y) ]
\end{split}
\end{equation*}
The problem is that determining how many books by Olaf Stapledon are
on my shelf depends upon whether I count in terms of informational
artifacts or physical volumes. I might have two copies of the same
book, or two book titles contained within the same volume (or some
more complex combination of volumes and contents). Thus, a claim
regarding the number of books can only have definite meaning once I
choose what kind of book object I wish to count. But accepting this
leads to a recapitulation of the original problem. Once I choose
between informational and physical books, I am no longer dealing with
entities that can support both the informational property of the book's
content originating from a particular author and also the physical
property of being a physical object located on a particular shelf.
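The divergence can at least be registered in standpoint notation (a
sketch, not a resolution of the problem): suppose the shelf holds a
single physical volume containing two works by Stapledon, and let
$\phi$ abbreviate the quantified formula above. Under precisifications
of \pf{Book} that individuate by informational artifact, $\phi$
comes out true; under those that individuate by physical volume, it
comes out false. Relative to a standpoint $s$ admitting both modes of
individuation we would then have $\standd{s}\phi \con \standd{s}\n\phi$,
so the count is equivocal. What this notation does not capture is the
co-predicational reading on which a single book object supports both
kinds of predicate at once.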
Before considering some proposals made in the literature regarding how
one might account for co-predication, let us first note that the
VRS theory of predicates and vague objects given above
(Section~\ref{SEC:VRL}) fails to provide a straightforward
explanation of typical co-predication examples.
That semantics both allows vagueness of sortal predicates that
categorise and individuate objects over which we may quantify and also
allows these objects to have properties that are not fully
determinate, such as a mountain, whose boundary is unclear.
So maybe the complicated VRS semantics could make sense of the
following formulation:
$$
(\exists\,\pf{Book}\!:\!x)[\pf{ByOS}(x) \wedge \pf{OnMyBookShelf}(x)]
$$
But for this to work, we would need to assume that the polysemy of
book is within the semantic object denoted by $x$ so that the two
predicates can be evaluated with respect to different aspects of some
multi-faceted book object that has both physical and informational
aspects.
However, the problem of individuation and counting still remains.
Although VRS allows that {\sf Book} can have different senses
both in terms of individuation and in terms of exact entities,
it is based on the assumption that any ambiguity in individuation
criteria is resolved prior to determining the set of individuals
over which the quantifier $(\exists\,\pf{Book}\!:\!x)$ will range.
Hence, although we can have both indeterminate counting criteria
and indeterminate individuals, we cannot have indeterminate individuals whose
precise correlates satisfy different counting criteria.
In the case of vague geographic features, for example, one might
say that `individuation must precede demarcation'.
\subsection{Towards a Solution}
Many approaches, some very clever and most rather complex, have been
proposed for accounting for co-predication. I will not consider these
in the current paper except to mention that some have proposed that
co-predication involves mechanisms for \emph{coercion\/} or
\emph{inheritance\/}
between different categories of entity that can be referred to by
polysemous nouns that exhibit co-predication phenomena
\cite{asher2011lexical,liebesman2017copredication},
some have proposed special
kinds of compound ontological categories
(e.g.\ \cite{arapinis2015plea})
and some have proposed complex semantics of individuation (e.g.\
\cite{gotham2017composing}). None of the proposals I have encountered
so far seems completely satisfactory to me.
My view is that different individuation conditions must be
associated with different precisifications of the count noun. I don't
think we can individuate objects unless we have restricted the
interpretation of the term to the extent that it enables individuation.
The VRS approach does allow different modes of individuation to be
associated with different senses of a sortal concept, which could
account for the radically different ontological types associated with
certain count nouns such as `book'. But if this is the case, we still
need to account for co-predication examples where it seems that
individual objects exhibit multiple ontologically diverse aspects.
Such possibilities would need to be specified in terms of type
conversions that can occur due to context, and are often unambiguous
due to corresponding entities of a different type being uniquely
determined in many cases (e.g.\ in many particular situations
a physical book instance uniquely determines an informational book,
and a reference to an informational book determines a unique physical book).
\section{Conclusions and Further Work}
The paper has explored the issue of multiple possible references of linguistic terms, which may arise due to vagueness of terms or differences in opinion on how they should be interpreted, and how such variability affects the semantics of reference, predication and quantification, as conceived within a denotational approach to semantics.
I have proposed the framework of \emph{variable reference semantics} to interpret a logical language in a way that can account for both \emph{de dicto\/} vagueness in predicates and also \emph{de re\/}, or, as I would prefer to call it, \emph{de sensu\/}, variability in the `objects' that are referred to.
The system presented is intended more as a proof of concept than a workable formal language. The semantics is rather complex, but when formulating representations within the object language of the logic ${\mathbb{V}_1}$, much of this complexity is hidden under the bonnet.
However, the utility of the system remains to be demonstrated beyond a few relatively simple examples. The main direction for further work would be to consider a much wider range of examples and evaluate the strengths and limitations of the proposed formalism. As for formal logic in general, I do not really envisage the system being used in its full blown form, but rather some aspects of the language and semantics might be used in a restricted form to support various kinds of application, such as, for example, querying of information systems.
The other direction in which I would like to extend the work is with further consideration of issues such as co-predication, which would test the limitations of the system in dealing with semantic phenomena related to semantic variability.
\end{document}
\section{Introduction}
The notion of reference and the operations of predication and
quantification are fundamental to classical first-order logic. Its
standard semantics assumes a fixed domain of
objects, with name constants referring to unique elements
of the domain, predicates associated with subsets of the domain
and quantifiers ranging over all the elements of the domain. Thus, if
$\mathcal{D}$ is the domain of objects, then a
constant name, say $\pf{c}$, will denote an object $\delta(\pf{c})$, such that
$\delta(\pf{c})\in\mathcal{D}$, and a predicate $P$ will be taken to denote
a set of objects $\delta(P)$, with $\delta(P)\subseteq\mathcal{D}$.
In this way, one can straightforwardly interpret the predicating expression
$P(\pf{c})$ as a proposition that is true if and only if
$\delta(\pf{c})\in\delta(P)$.
However, in natural languages there is considerable
variability in the interpretation of symbols: a name may not
always refer to a unique, precisely demarcated entity; a predicate
need not always correspond to a specific set of entities.
Building on \cite{Bennett2022jowo},
this paper explores how the classical semantics can be modified
to account for semantic variability. We shall
first consider some general ideas regarding the nature of vagueness
and its sources. The distinction between linguistic (\emph{de dicto\/}) and
ontological (\emph{de re\/}) vagueness will be considered, and we will
suggest that the sources of vagueness within the interaction
between language and the world are more diverse than is
often assumed.
We proceed to consider possible models of denotation,
predication and quantification in the presence of indeterminacy,
first informally, with the aid of some diagrams and then in terms of a
formal framework, based on \emph{standpoint semantics\/}
\cite{Bennett11lomorevi,gomez2018dealing-short,alvarez2022standpoint-short}, within which
variable references can be modelled.
\section{Theories of Vagueness}
The literature on vagueness has generally assumed that
the phenomena of vagueness could arise from three potential sources
\citet{Barnes10a-short}: (a) indeterminacy of representation (linguistic a.k.a\ \emph{de dicto\/}
vagueness), (b) indeterminacy of things in the world (known as \emph{ontic\/}
of \emph{de
re\/} vagueness) and (c) limitations of knowledge (\emph{epistemic\/} vagueness).
The epistemic view has some strong advocates
(e.g. \cite{Williamson92vague+ig-short})
and it is apparent that semantics of multiple possible
interpretations take a similar form to logics of knowledge and belief \cite{Lawry2009aij}.
Indeed, the
\emph{standpoint semantics}, which will be used in our following
analysis can be regarded as being of this form. But the question
of whether vagueness is an epistemic phenomena, or merely shares a similar
logic, does not seem to have any direct bearing on our semantic
analysis, so in the current paper we shall not further consider the epistemic view.
\subsection{Vagueness as exclusively \emph{De Dicto\/}}
A widely held view is that all vagueness is essentially \emph{de
dicto}. That is, it is entirely a property of language, not of the world:
it consists in the variability or lack of precision in the way that
words and phrases correspond to the precise entities and properties of the real world that they aim do describe.
According to this view, forms of natural language that seem to attribute vagueness directly to things in the world are misleading and can be paraphrased in \emph{de dicto\/} terms
\citep{Lewis86pluralityOfWorlds,Varzi01a}. A fairly typical version
of such an attitude is that of Varzi who (focusing on the domain of
geography, within which vagueness is pervasive) takes a strong
position against ontic vagueness. Varzi's view of the relationship
between vague terms and their referents is summarised in the following
quotation:
\begin{quote}
``[To] say that the referent of a geographic term is not sharply
demarcated is to say that the term vaguely designates an object, not
that it designates a vague object.'' \citep{Varzi01a}
\end{quote}
The view that vagueness is a linguistic phenomenon
leaves open many possibilities regarding which aspects of language are involved, and how one might model vagueness within a theoretical semantic framework. These will be explored in detail below.
\subsection{The possibility of \emph{De Re\/} or \emph{Ontic\/} Vagueness}
Natural language sentences commonly describe objects as `vague':
`The smoke formed a vague patch on the horizon';
`The vaccine injection left a vague mark on my arm';
`He saw the vague outline of a building through the fog'.
Here `vague' means something like `indefinite in shape, form or
extension'. If we take
such sentences at face value, they seem to
indicate that vagueness can be associated with an object in virtue of its spatio-temporal extension, its material constituency or even abstract characteristics.
One may then argue that such properties are intrinsic to objects, and that, if
an object has a vague intrinsic property, this indicates vagueness `of
the thing', often called \emph{de re\/} vagueness. And, in so far as
{\em things\/} are identified with entities existing in the world, such vagueness is often described as \emph{ontic\/} (or \emph{ontological\/}) meaning that it pertains to what exists in reality.
Note that the view that vagueness may be
\emph{de re\/} or \emph{ontic\/} does not require one to deny that vagueness is often
\emph{de dicto}. Even if the world consisted entirely of precisely demarcated objects with exact physical properties, there could still be vague terminology (e.g.\ adjectives such as `large') and hence vague ways to describe the world. Proponents of \textit{de re\/} vagueness typically only claim that \textit{some\/} kinds of vagueness are ontic \citep{Tye90vague-objects,Barnes10a}.
The idea that vagueness of objects is primarily associated with
vagueness of \emph{spatial extension\/} has been endorsed and examined
by Tye \shortcite{Tye90vague-objects}, who gives the following criterion for
identifying vague objects: ``A concrete object $o$ is vague if and
only if: $o$ has borderline spatio-temporal parts;
and there is no determinate fact of the matter about whether there
are objects that are neither parts, borderline parts, nor non-parts
of $o$.'' The second, rather complex condition concerns the intuition
that we cannot definitely identify borderline parts of a vague object.
The current paper will not consider this second-order aspect of
vagueness. However, we will be presenting a semantics in which there can be objects whose spatial extension, and hence material constituents, are indeterminate.
The case against \textit{ontic\/} vagueness is typically based on two contentions: (a) the idea of an object in the world being in itself indeterminate is mysterious and implausible; (b) statements that appear to imply \textit{ontic\/} vagueness are readily paraphrased into forms where the vagueness is evidently \emph{de dicto}.
While it is true that, at the level of atomic
particles, we may accept that the position of an electron might be indeterminate (as is strongly indicated by quantum mechanics), this kind of indeterminacy seems very far from what we would need to account for vagueness in the demarcation of macroscopic physical objects. Consider a pile of twigs upon a twig-strewn forest floor. The indeterminacy of whether a particular twig is part of the pile does not arise from any indeterminacy in the physical locations of twigs. Rather, it seems that, for exactly the same physical situation, there can be different judgements about which twigs should be counted as part of the pile. Nevertheless, it may still be reasonable (in accord with Tye) to take the view that the extension of the pile of twigs is indeterminate.
So perhaps it is reasonable to consider that a `thing' such as a pile of twigs could have an indeterminate extension, even though there is no \emph{ontic\/} indeterminacy in the reality within which the twigs are manifest. Hence, if a pile of twigs is considered a `thing', the indeterminacy of its extension would be \emph{de re}, though not {\em ontic}. Here, we see that there is an ambiguity in the meaning of `thing' (and hence in the term \emph{de re}). One can interpret it in the sense of an entity existing in a concrete reality or in the sense of an object of discourse, whose physical manifestation need not be completely determinate.
\skipover{
How one describes these possibilities depends on how one describes language and meaning in general. Here we make minimal assumptions regarding what we take to be a typical view of those working in formal semantics and ontology construction. We assume only that linguistic expressions (words, phrases, propositions) correspond to semantic entities of certain types and that these semantic entities are related in some way to entities and/or properties of the real world. Thus, in the simplest semantics for predicate logic, property predicates correspond to sets, and the members of these sets correspond directly to entities in the world; and for constant names the situation is even simpler, since a name may be taken to denote an entity that is identical to some entity in the world.
But even within this very simple picture of how language relates to reality, there are two different relationships that might be affected by indeterminacy: it might affect (a) the relation between words and their corresponding semantic designations (vagueness of reference), or (b) the relation between semantic designations and the world (vagueness in the `sense' of the semantic referent). These possibilities will be elaborated within our semantics, but to give a brief sketch of this distinction we can consider the difference between a concept word indeterminately referring to some set of entities, as compared to a concept word that uniquely refers to a \emph{fuzzy set}, such that the inclusion of entities of the world within the fuzzy set is indeterminate.
Most formal representations make a fundamental distinction between predicates and names, in terms of both their syntactic and semantic categories, and incorporate additional syntax and semantics for expressing quantification. Hence, a full explanation of \emph{de dicto\/} vagueness in relation to a formal semantics must consider how it may arise and operate in relation to several semantic categories and forms of expression. In the semantics presented below we explicitly model indeterminacy affecting both predicates and names and also quantification.
}
\subsection{Established Accounts of Vagueness: Fuzzy Logic and Supervaluationism}
Among computer scientists {\em fuzzy logic\/}
\cite{Zadeh65a}
is the dominant approach to modelling
vagueness. This theory modifies the classical
denotations of expressions.
Vague concepts denote {\em fuzzy sets\/} such that the degree of membership of an object in a fuzzy set is graded (typically represented by a real number in the range $[0,1]$). This degree of membership also determines fuzzy truth values in that the truth of $\pf{C}(c)$
is simply the degree of membership of the object denoted by $c$ in the fuzzy set denoted by $\pf{C}$ and the truth value of complex formulae is then determined by interpreting
logical operators in terms of fuzzy truth functions. Although the original presentation of
fuzzy logic does not treat objects as vague,
many researchers have also modelled vague objects
as fuzzy sets of points \cite{schockaert2009spatial}.
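To make these truth conditions concrete, the following minimal Python sketch (our own illustration; the predicates and membership degrees are invented for the example) evaluates graded membership with the standard Zadeh operators, where negation is $1-t$, conjunction is $\min$ and disjunction is $\max$:
\begin{verbatim}
# Minimal sketch of fuzzy truth evaluation with the standard Zadeh
# operators; the membership degrees below are illustrative assumptions.
desert = {"area1": 0.9, "area2": 0.4}   # fuzzy set: degree of desert-hood
arid   = {"area1": 1.0, "area2": 0.7}   # fuzzy set: degree of aridity

def f_not(t):      return 1.0 - t
def f_and(t1, t2): return min(t1, t2)
def f_or(t1, t2):  return max(t1, t2)

# Truth of Desert(area2) and Arid(area2) is min(0.4, 0.7) = 0.4
print(f_and(desert["area2"], arid["area2"]))
\end{verbatim}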
Many philosophers favour
some variety of \emph{supervaluationist\/} account, which models linguistic
vagueness in terms of variability in the relation between vocabulary terms
and their semantic reference. This is modelled in terms of a set of
possible precise interpretations (often called
\emph{precisifications}). An early proposal that vagueness can be
analysed in terms of multiple precise senses was made by Mehlberg
\shortcite{Mehlberg58a-short}, and a formal semantics based on a
multiplicity of classical interpretations was used by van Fraassen
\shortcite{Fraassen69a} to explain `the logic of presupposition'.
This kind of formal model was subsequently applied to the analysis of
vagueness by Fine \shortcite{Fine75a} and a similar approach was
proposed by \citet{Kamp75a}.
An attraction of supervaluationism is its account of
\emph{penumbral connections\/} \citep{Fine75a}, which many believe to
be an essential characteristic of vagueness. This is
the phenomenon whereby logical laws (such as the principle of
non-contradiction) and semantic constraints (such as mutual
exclusiveness of properties such as `... is short' and `... is
tall') are maintained even for statements involving vague concepts.
The solution, in a nutshell, is that, even though words may have
multiple different interpretations, each admissible precisification of
a language makes precise all vocabulary in a way that ensures mutual
coherence of the interpretation of distinct but semantically related
terms.
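For instance (a schematic illustration of our own), even though admissible precisifications may draw the boundary of `tall' differently, each precisification $\pi$ sharpens `tall' and `short' jointly, so that
\[ \pi \models \forall x\, \neg(\pf{Tall}(x) \wedge \pf{Short}(x)) \]
holds at every admissible $\pi$; the mutual exclusiveness of the two predicates is thus true on all precisifications (`supertrue') despite the vagueness of each.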
\section{Semantic Aspects of Predicates and Objects in Relation to Vagueness}
The semantic content of a predicate can be analysed into many
aspects. We pick out the following as being of particular relevance to the study of vagueness:
\begin{enumerate}[itemsep=0.5ex,labelwidth=0em,labelsep=0.5em,itemindent=0em,leftmargin=1em]
\item[]{\bf Classification:} Given a set of objects, a predicate classifies them
into members or non-members of the set.
\item[]{\bf Individuation:} Given relevant information about a state of the world (its spatial and material properties) the meaning of a {\em sortal\/} predicate provides
the criteria by which individual instances of the predicate are recognised. (In terms
of axioms these criteria can often be divided into {\em existential conditions\/} and
{\em identity criteria}.)
\item[]{\bf Demarcation/Constituency:} The meaning of {\em sortal} predicates also provides
criteria for spatially demarcating and determining the material constituents
of its instances.
\end{enumerate}
In classical logic `classification' is the most prominent,
perhaps even the only, aspect of predication.
This is because in the usual semantics for first-order logic,
the `universe' or `domain of quantification' is typically presented as if
ontologically prior to predication, in that we `interpret' predicates
in relation to a given domain.
However, in order to name an object or count instances of a category of object we need
to be able to individuate the object or objects to which we refer, and this implies
that individuation is necessary for (and one could say ontologically prior to) classification.
Furthermore, as illustrated by the `desert' examples below, individuating objects as instances
of a certain kind does not necessarily require that they be identified with precisely demarcated
portions of the real world. Hence, the objects of discourse referred to by names or by quantification may
not be completely definite in all their attributes. Accounting for the independence
between criteria for individuation, classification and demarcation in the presence of vagueness is a
motivation for the semantics that we shall present below.
\subsection{Classifying, Individuating and Demarcating Deserts}\label{sec:desert-example}
\def\desertpic#1{\includegraphics[width=1.3in,angle=270,trim={40 0 40 0},clip]{#1}}
\begin{figure}
\setlength{\tabcolsep}{1mm}
\def1.5in{1.5in}
\def2mm 0 0 0{2mm 0 0 0}
\begin{tabular}{cccc}
\desertpic{figures/desert_cooler.png} &
\desertpic{figures/desert_separated.png} &
\desertpic{figures/desert_single_close.png} &
\desertpic{figures/desert_close.png}
\\
\desertpic{figures/desert_cooler_contours.png} &
\desertpic{figures/desert_separated_contours.png} &
\desertpic{figures/desert_single_close_contours.png} &
\desertpic{figures/desert_close_contours.png} \\
(a) & (b) & (c) & (d)
\end{tabular}
\caption{Illustration of individuation, demarcation and classification
in relation to deserts.}
\label{FIG:deserts}
\end{figure}
To illustrate how the different aspects of predication operate,
we consider the concept `desert'. Several factors may be relevant to
deciding what is a desert, in particular: lack of precipitation (aridity), temperature and to a lesser extent vegetation, land form and land use. For simplicity, we consider a situation where a measure of aridity is taken to
be the only relevant factor (we may assume that other factors are either
uniform or co-vary with aridity within the region under consideration).
Consider the situations
illustrated in Fig.~\ref{FIG:deserts} depicting several states of a region with `my house'
indicated by a black icon:
\begin{itemize}
\item[(a)] The region was cool and fertile although two of the most arid areas were arguably deserts.
\emph{My house was near to one arid area that was arguably a desert.}
\item[(b)] The climate warmed and the arid areas became true deserts, but with a fertile valley between them. \emph{My house was then near to but unequivocally not part of a desert.}
\item[(c)] Eventually the desert areas expanded and merged into one big desert. \emph{Then, my house was at its boundaries, arguably though not unequivocally in the desert.}
\item[(d)] Recently a little moisture has returned along the valley. \emph{Some argue that the whole arid area is still a desert. Others refer to two deserts in that area, separated by the valley. Either way, my house is still arguably in a desert.}
\end{itemize}
The narrative sequence is not relevant to present concerns, except that it shows that situations of the same general type may exhibit different aspects of vagueness depending on specific details.
What we should particularly note is that: in (a) the classification of the arid area as desert is equivocal;
in (b) we definitely have two deserts, but each has an indeterminate boundary;
in (c) we definitely have exactly one desert, but it is debatable whether my house is within the desert area; and, in (d) the number of deserts present is indeterminate.
The semantics that we shall develop is motivated by considering situations
such as those depicted in Fig.~\ref{FIG:deserts}, where there is a separation between the aspects
of classification, individuation and demarcation with respect to vagueness.
\subsection{Possibilities for Semantic Modelling of Vague Predicates and Objects}
Vagueness of both predicates and names can be modelled
either in terms of the mapping from vocabulary to some form of `semantic denotation'
or in the correspondence between semantic denotations and the real world. Or, indeed,
each of these relationships could model a different aspect of vagueness.
Fig.~\ref{FIG:fuzzy+super} illustrates some possibilities.
Fig.~\ref{FIG:fuzzy+super}(a) depicts
a hilly region with rocky crags, where the terrain is
irregular and there is no clear way of dividing it into
separate `crag' objects. The name `Arg Crag' has been given to one of
the rocky outcrops. However, there may be different opinions regarding
exactly which outcrop is Arg Crag. Indeed, some people may use
the name to refer to the whole of this rocky area, whereas others
consider that it refers to a more specific rock structure.
\def1.2in{1.2in}
\begin{figure}[htb]
\hspace{-3ex}
\centerline{
\hbox{\raise10ex\hbox{(a)}}
\includegraphics[width=1.7in]{figures/the-situation.png}
\hbox{\raise10ex\hbox{(c)}}\hspace{-3ex}
\includegraphics[width=1.5in]{figures/vague-predication-PsvNsp.pdf}
\hbox{\raise10ex\hbox{(e)}}\hspace{-3ex}
\includegraphics[width=1.5in]{figures/variable-ref-to-fuzzy-sets.png}
}
\centerline{
\hbox{\raise15ex\hbox{(b)}}\hspace{-2em}
\raise3ex\hbox{\includegraphics[height=0.9in]{figures/classical-predication.png}}
~~~~
\hbox{\raise15ex\hbox{(d)}}\hspace{-2em}
\includegraphics[height=1.2in]{figures/multi-pred-name.png}
~~~~
\hbox{\raise17ex\hbox{(f)}}\hspace{-2em}
\includegraphics[height=1.4in]{figures/full-multi.png}
}
\caption{
(a) Arg Crag and surrounding area. (b) Classical semantics.
(c) Fuzzy model of a vague predicate. (d) Multiple denotation
model, such as supervaluation semantics. (e) Concepts can
have multiple extensions, each of which is a fuzzy set.
(f) multiple reference for predicates and also multiple extensions for
objects.}
\label{FIG:fuzzy+super}
\end{figure}
In classical semantics each conceptual term denotes a
fixed set of entities and each name refers to a single precise
entity. Thus, even before specifying denotations, we will need to
divide the craggy region into specific individual objects, to make a
set of possible referents.
Figs.~\ref{FIG:fuzzy+super}(c)--(f) depict various ways in which vagueness
of predicates and objects can be modelled. Sub-figures (c) and (d) show
fuzzy-logic and supervaluation semantics models.
In (c) {\sf C} denotes
a fuzzy set, whereas in (d) both the predicate and object have a variable reference
(with each {\em precisification\/} determining a unique reference).
Sub-figure (e) depicts a possible combination of the fuzzy and supervaluation approaches, where predicate symbols can have multiple references, each of which is a fuzzy set.
A limitation of models (c)--(e) is that fuzziness/variability is only associated
with the predicate and/or the reference relation but not the object referred to.
However, both approaches can be modified to incorporate vague
referents, that might correspond to objects with indeterminate physical boundaries.
In fuzzy logic, objects with indefinite extensions can be modelled as
fuzzy sets of points \cite{schockaert2009spatial}.
And within a multiple-reference semantics, one could also
associate a set of different extensions for different precise versions
of an object (see e.g.\ the
`egg-yolk' representation of \citet{Cohn96KRvague-short}).
Fig.~\ref{FIG:fuzzy+super}(f)
depicts an extended form of multi-reference semantics (\emph{The Full Multi\/}).
Here we see not only that the predicates and names can have variable
reference, but also there can be multiple possible precise versions of
each reference object. This variability in the objects could
be called \emph{de re\/} vagueness, but we regard it as semantic vagueness,
with the objects being vague semantic objects (visualised as the small black discs) that potentially correspond to many precise physical extensions.
It may appear that the two stages of multiple
reference are redundant. The relation goes first from the name {\sf a} to one of the indefinite objects
and then to a precise extension; but surely we would get the same potential precise referents for {\sf a} if we simply assign all the precise referents we could reach as possible precise referents of {\sf a}, with no need for an intervening indefinite object. However, this would force us to conflate the
different types of vagueness illustrated in the desert examples above. So we could not have something that was definitely a desert or a mountain but whose extension was indefinite. Moreover, we believe that quantification operates in relation
to vague individuals not their precise possible extensions. This will be made explicit in our formal semantics and underpins our solution to the `Problem of the Many', which we shall discuss in Sec.~\ref{SEC:phil-views}.
Figure~\ref{FIG:fuzzy+super}(f) still
considerably simplifies the potential semantic variability that
might arise. It presupposes that the global set of vague
objects, together with their associations to precise entities,
remains fixed, even though the subset associated with predicate
$\pf{C}$ may vary. In other words, $\pf{C}$ only varies in how it
\emph{classifies\/} objects, not in how it
\emph{individuates\/} them.
The logic ${\mathbb{V}_1}$ presented below is more general. It allows different
senses of sortal predicates to be associated with different ways of
individuating objects (for instance under some interpretations of
`Crag', all three of the roundish
craggy objects within the innermost circle of the diagram might be
considered as parts of a single large crag).
\def{\mathbb{S}}{{\mathbb{S}_0}}
\def{\mathbb{V}_1}{{\mathbb{V}_1}}
\def\hbox{$\mathcal{P}$}{\hbox{$\mathcal{P}$}}
\def\hbox{$\mathcal{S}$}{\hbox{$\mathcal{S}$}}
\def\PropConst#1{\hbox{$P_{#1}$}}
\def\StandConst#1{\hbox{$s_{#1}$}}
\def\hbox{$s$}{\hbox{$s$}}
\def\hbox{$s'$}{\hbox{$s'$}}
\def\pi{\hbox{$\pi$}}
\def\p#1{\hbox{$\pi_{#1}$}}
\def\hbox{$\pi'$}{\hbox{$\pi'$}}
\def\hbox{$\pi''$}{\hbox{$\pi''$}}
\def\hbox{$\spform{s'}$}{\hbox{$s'$}}
\def\hbox{$\spform{s}$}{\hbox{$s$}}
\def\s#1{\hbox{$s_{#1}$}}
\def\hbox{$*$}{\hbox{$*$}}
\def\{\PropConst{1}\,, \ldots, \PropConst{n} \}{\{\PropConst{1}\,, \ldots, \PropConst{n} \}}
\def\{\StandConst{1}\,, \ldots, \StandConst{m}, \star \}{\{\StandConst{1}\,, \ldots, \StandConst{m} \}}
\def\{\ldots, \pr_{i}\,, \ldots \}{\{\ldots, \pi_{i}\,, \ldots \}}
\def\Pi{\hbox{$\Pi$}}
\def\mbf{t}{\mbf{t}}
\def\mbf{f}{\mbf{f}}
\def\{\True, \False\}{\{\mbf{t}, \mbf{f}\}}
\def\standb#1{\mathop\square_{#1}}
\def\standb{s}{\standb{s}}
\def\standb{s'}{\standb{s'}}
\def\standbi#1{\standb{s_{#1}}}
\def\standd#1{\mathop\lozenge_{#1}}
\def\standd{s}{\standd{s}}
\def\standd{s'}{\standd{s'}}
\def\standdi#1{\standd{s_{#1}}}
\def\standb{*}{\mathop\square_{*}}
\def\standd{*}{\mathop\lozenge_{*}}
\def\standindef#1{\mathcal{I}_{#1}\,}
\def\mathcal{I}_{*}{\mathcal{I}_{*}\,}
\def\standdef#1{\mathcal{D}_{#1}\,}
\def\mathcal{D}_{*}{\mathcal{D}_{*}\,}
\def\standbi#1{\square_{s_{#1}}}
\def\standdi#1{\mathop\lozenge_{s_{#1}}}
\def\standb{*}{\square_{{*}}}
\def\mathfrak{M}_{\PropStandpointLogic}{\mathfrak{M}_{{\mathbb{S}}}}
\def\mathfrak{M}{\mathcal{M}}
\def\model_\Gamma{\mathfrak{M}_\Gamma}
\def\hbox{$\tuple{\Precs, \hbox{$\mathcal{\sigma}$}\xspace, \delta}\,$}{\hbox{$\tuple{\Pi, (\hbox{$\hbox{$\mathcal{\sigma}$}\xspace$},\preceq), \delta}\,$}}
\def\hbox{$\hbox{$\mathcal{\sigma}$}\xspace$}{\hbox{$S$}}
\def\accrelation#1{\hbox{$\s{#1}$}}
\def\hbox{$\hbox{$\mathcal{\sigma}$}\xspace(\star)$}{\hbox{$\hbox{$*$}$}}
\def\delta_{\mathcal{A}}{\delta_{\mathcal{A}}}
\def\delta_{\mathcal{K}}{\delta_{\mathcal{K}}}
\def\delta_{\mathcal{Q}}{\delta_{\mathcal{Q}}}
\def\delta_{\mathcal{R}}{\delta_{\mathcal{R}}}
\def\delta_{\mathcal{S}}{\delta_{\mathcal{S}}}
\def\delta_{\mathcal{T}}{\delta_{\mathcal{T}}}
\def\delta_{\mathcal{N}}{\delta_{\mathcal{N}}}
\def\delta_{\mathcal{X}}{\delta_{\mathcal{X}}}
\def\delta_{\mathcal{V}}{\delta_{\mathcal{V}}}
\def\delta_{\mathcal{F}}{\delta_{\mathcal{F}}}
\def\delta_{\mathcal{G}}{\delta_{\mathcal{G}}}
\def\delta_{\mathcal{Y}}{\delta_{\mathcal{Y}}}
\section{A Semantic Theory of Variable Reference}
We now consider what kind of semantics can account for the general
form of variable denotation illustrated in
Fig.~\ref{FIG:fuzzy+super}(f). \emph{Standpoint Semantics\/} provides a
reasonably general framework within which
semantic variability can be modelled in terms
of the symbols of a formal language having multiple possible
denotations. We shall start by introducing first-order standpoint logic
and then elaborate this to the formalism
\emph{Variable Reference Logic}, within which we can model
predication and quantification of vague objects.
\subsection{Standpoint Logic}
Standpoint Semantics is based on a formal structure that models
semantic variability in terms of the following two closely connected
components:
\begin{itemize}
\item A \emph{precisification\/} is a precise and consistent interpretation of a vague
language, and it coherently assigns precise denotations to all its
vague elements. A model for a vague sentence
contains a set of admissible precisifications of that sentence.
\item A \emph{standpoint\/} is modelled as a set of
\emph{precisifications} which are compatible with a
point of view or context of language understanding.
A standpoint can capture both
explicit specifications of terminology and implicit
constraints on meanings that arise in conversation
(e.g. ``That person is tall'' constrains the meaning of tallness).
\end{itemize}
\subsubsection{Syntax of First-Order Standpoint Logic}
First-order standpoint logic ($\PredStandpointLogic$)
is based on a \define{vocabulary} $\tuple{\mathcal{P}, \mathcal{N}, \mathcal{S}}$, consisting of
\define{predicate symbols} $\mathcal{P}$
(each associated with an arity \mbox{$n\in\mathbb{N}$}),
\define{constant symbols} $\mathcal{N}$ and \define{standpoint symbols}
$\mathcal{S}$, usually denoted with
$\sts,\sts'$, such that $\hbox{$*$}\in\mathcal{S}$, where $\hbox{$*$}$ is used to designate the \emph{universal
standpoint}.
There is also a set $\mathcal{X}=\{ x, y, \mydots\}$ of \define{variables},
and the set $\mathcal{T}=\mathcal{N}\cup\mathcal{X}$ of \define{terms} contains all constants and variables.
The set $\PredStandpointLogic$ of first-order standpoint \define{formulae} is given by
\\[1ex]
\centerline{
$ \phi,\psi := \pf{P}(t_1,\dots,t_k) \mid \n\phi \mid \phi\con\psi \mid \forall x\phi \mid \standb{s}\phi\ , $
}\\[1.2ex]
where \mbox{$\pf{P}\in\mathcal{P}$} is a $k$-ary predicate symbol, \mbox{$t_1,\dots,t_k\in\mathcal{T}$} are terms,
\mbox{$x\in\mathcal{X}$}, and \mbox{$\ste\in\Stands$}.
The definable connectives and operators $\mathbf{t}$, $\mathbf{f}$, \mbox{$\phi\lor\psi$}, \mbox{$\phi\rightarrow\psi$}, \mbox{$\exists x\phi$}, and \mbox{$\standd{\ste}\phi$} are defined as usual.
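For concreteness, these abbreviations follow the standard patterns, e.g.
\[ \phi\lor\psi \,:=\, \neg(\neg\phi\con\neg\psi), \qquad
\exists x\,\phi \,:=\, \neg\forall x\,\neg\phi, \qquad
\standd{\ste}\phi \,:=\, \neg\standb{\ste}\neg\phi. \]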
\subsubsection{Semantics of First-Order Standpoint Logic}\label{sec:sem-standpoint-logic}
Given a vocabulary $\tuple{\mathcal{P},\mathcal{N},\mathcal{S}}$, a \define{first-or\-der stand\-point structure\xspace} $\mathcal{M}$ is a tuple $\tuple{\Delta, \Pi, \sigma, \delta}$, where
$\Delta$ is the (non-empty)
\define{domain} of $\mathcal{M}$,
$\Pi$ is a non-empty set of \define{precisifications},
$\sigma$ is a function mapping each standpoint symbol from $\mathcal{S}$ to a set of precisifications (i.e., a subset of $\Pi$), and
$\delta$ is a function mapping each precisification from $\Pi$ to an ordinary first-order structure $\mathcal{I}$ over the domain $\Delta$, whose interpretation function $\cdot^{\struct}$ maps\/:
\vspace{-2ex}
\begin{itemize}
\item each predicate symbol $\pf{P}{\,\in\,}\mathcal{P}$ of arity $k$ to a $k$-ary relation \mbox{$\interprets{\pf{P}} {\,\subseteq\,} \Delta^k$},
\item each constant symbol $a{\,\in\,}\mathcal{N}$ to a domain element \mbox{$\interprets{a}{\,\in\,}\Delta$}.
\end{itemize}
The semantics of \cite{gomez-alvarez22howtoagree-short}
required that $\interpret{a}{\delta(\pi_1)}=\interpret{a}{\delta(\pi_2)}$ for any $\pi_1,\pi_2\in\Pi$ and
$a\in\mathcal{N}$. Here we allow \define{non-rigid constants} that may denote different objects in different precisifications.
But (in contrast to the more expressive ${\mathbb{V}_1}$ that will be developed below) all precisifications of a first-order standpoint structure will implicitly share the same interpretation domain $\Delta$; that is, the \emph{constant domain assumption} is adopted.
The most distinctive elements of the model are the sets $\sigma(\s{i})$,
which model the notion of a standpoint as a set of precisifications
that are \emph{admissible\/} for that standpoint.
A proposition is {\em unequivocally true\/} according
to standpoint $\s{i}$ iff it is true at every precisification $\pi\in \sigma(\s{i})$.
Let $\mathcal{M}=\tuple{\Delta,\Pi,\sigma,\delta}$ be a first-or\-der stand\-point structure\xspace for the vocabulary $\tuple{\mathcal{P},\mathcal{N},\mathcal{S}}$ and $\mathcal{X}$ be a set of variables.
%
A \define{variable assignment} is a function $v:\mathcal{X}\to\Delta$ mapping variables to domain elements.
Given a variable assignment $v$, we denote by $v_{\set{x\mapsto\epsilon}}$ the function mapping $x$ to $\epsilon\in\Delta$ and any other variable $y$ to $v(y)$.
\noindent
An interpretation function $\cdot^{\struct}$ and a variable assignment specify how to interpret terms as domain elements\/:\footnote{ Here, and in the following, $\interpret{\sigma}{\gamma}$
is a concise notation for $\gamma(\sigma)$.
It gives the semantic denotation of symbol $\sigma$ according to a function $\gamma$.}
\vspace{-5ex}
\begin{gather*}
\interpret{t}{\mathcal{I},v} =
\begin{cases}
v(x) & \text{if } t=x\in\mathcal{X}, \\
\interprets{a} & \text{if } t=a\in\mathcal{N}.
\end{cases}
\end{gather*}
\vspace{-0.5ex}
\noindent Then, let $\pi\in\Pi$ and $v:\mathcal{X}\to\Delta$ be a variable assignment.
The satisfaction relation $\models$ is:%
\vspace{-1ex}
\addtolength{\jot}{-0.4em}
\begin{align*}
& \mathcal{M},\!\pi,\!v\ \models\ \pf{P}(t_1,\ldots,t_k)\!\! & \text{iff~~}\ \ & (\interpret{t_1}{\delta(\pi), v},\ldots,\interpret{t_k}{\delta(\pi),v}) \in \interpret{\pf{P}}{\delta(\pi)} \\
& \mathcal{M},\!\pi,\!v\ \models\ \neg\phi & \text{iff~~}\ \ & \mathcal{M},\!\pi,\!v\not\models\phi \\
& \mathcal{M},\!\pi,\!v\ \models\ \phi\land\psi & \text{iff~~}\ \ & \mathcal{M},\!\pi,\!v\ \models\ \phi \text{ and } \mathcal{M},\pi,v\models\psi \\
& \mathcal{M},\!\pi,\!v\ \models\ \forall x\phi & \text{iff~~}\ \ & \mathcal{M},\!\pi,\!v_{\set{x\mapsto\epsilon}}\ \models\ \phi \text{ for all } \epsilon{\,\in\,}\Delta \\
& \mathcal{M},\!\pi,\!v\ \models\ \standb{\ste}\phi & \text{iff~~}\ \ & \mathcal{M},\!\pi'\!,v\ \models\ \phi \text{ for all } \pi'\in\sigma(\ste) \\
& ~~~\mathcal{M},\!\pi\ \models\ \phi & \text{iff~~}\ \ & \mathcal{M},\!\pi,\!v\ \models\ \phi \text{ for all } v:\mathcal{X}\to\Delta \\
& ~~~~~~ \,\mathcal{M}\ \models\ \phi & \text{iff~~}\ \ & \mathcal{M},\!\pi\ \models\ \phi \text{ for all } \pi\in\Pi
\end{align*}
\noindent When \mbox{$\mathcal{M}\models\phi$} we say that $\mathcal{M}$ is a \define{model} for $\phi$.
\vspace{1ex}
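To make the satisfaction relation concrete, here is a small self-contained Python sketch (our own illustration, not an implementation accompanying this paper) that checks $\PredStandpointLogic$ formulae over a finite structure. Formulae are encoded as nested tuples, precisifications as indices into a list of classical interpretations, and the toy model (a hypothetical encoding of the `arguably a desert' situation) has two precisifications that disagree on the extension of $\pf{Desert}$:
\begin{verbatim}
# Sketch: model checking first-order standpoint formulae over a finite
# structure.  A formula is a nested tuple, e.g. ("box", "*", ("Desert", "a")).
def holds(model, pi, v, phi):
    domain, Pi, sigma, delta = model   # delta[pi]: names -> elements,
    op = phi[0]                        #   predicate names -> sets of tuples
    if op == "not":
        return not holds(model, pi, v, phi[1])
    if op == "and":
        return holds(model, pi, v, phi[1]) and holds(model, pi, v, phi[2])
    if op == "forall":
        _, x, sub = phi
        return all(holds(model, pi, {**v, x: e}, sub) for e in domain)
    if op == "box":                    # box_s: true at all pi' in sigma(s)
        _, s, sub = phi
        return all(holds(model, p2, v, sub) for p2 in sigma[s])
    pred, *terms = phi                 # atomic P(t1, ..., tk)
    args = tuple(v[t] if t in v else delta[pi][t] for t in terms)
    return args in delta[pi][pred]

Pi = [0, 1]
delta = [{"a": "a1", "Desert": {("a1",)}},   # pi = 0: a1 is a desert
         {"a": "a1", "Desert": set()}]       # pi = 1: it is not
model = (["a1"], Pi, {"*": set(Pi)}, delta)
arguably = ("not", ("box", "*", ("not", ("Desert", "a"))))   # diamond_*
assert holds(model, 0, {}, arguably)                         # arguably a desert
assert not holds(model, 0, {}, ("box", "*", ("Desert", "a")))  # not unequivocally
\end{verbatim}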
The logic $\PredStandpointLogic$ enables one to formalise the content of some statements discussed in Section \ref{sec:desert-example}, namely (a), ``\emph{My house was near to one arid area that was arguably a desert}'' and (b), ``\emph{My house was then near to but unequivocally not part of a desert}''. We assume that $\pf{Desert}$ and $\pf{AridArea}$ are vague predicates, but (my) $\pf{house}$ has definite coordinates and $\pf{PartOf}$ is a definite predicate.
\vspace{-2em}
\begin{gather}
\exists\, a\ \mathop\square_{\hbox{$*$}}(\pf{AridArea}(a)\con \pf{Near}(\pf{house},a)) \con \mathop\lozenge_{\hbox{$*$}}\pf{Desert}(a) \tag{a} \\
\mathop\square_{\hbox{$*$}}\exists\, a\ (\pf{AridArea}(a)\con \pf{Near}(\pf{house},a) \con \neg\pf{PartOf}(\pf{house},a) \con \pf{Desert}(a))\tag{b}
\end{gather}
\vspace{-0.5em}
It should however be apparent that addressing examples involving vague objects with $\PredStandpointLogic$ becomes less straightforward and would involve, at least, a reasonable amount of paraphrasing. For instance, one may be inclined to formalise statement (c) as
\vspace{-2em}
\begin{gather}
\exists\, a\ \mathop\square_{\hbox{$*$}}(\pf{AridArea}(a)\!\con\! \pf{Desert}(a))\! \con\! \mathop\lozenge_{\hbox{$*$}}\pf{PartOf}(\pf{house},a)\! \con\! \neg\mathop\square_{\hbox{$*$}}\pf{PartOf}(\pf{house},a)\tag{c}
\end{gather}
\vspace{-0.5ex}
However, if the domain only contains precise individuals (e.g. all possible areas), each specific area $a$ has the same extension in all precisifications, so $\pf{house}$ cannot be arguably but not definitely part of it. Hence, this formula is unsatisfiable in $\PredStandpointLogic$, unless \pf{PartOf} can be indefinite, which is
contrary to its intended (purely geometrical) meaning.
\subsection{Variable Reference Logic}
\label{SEC:VRL}
We now generalise standpoint semantics to
define variable reference logic, ${\mathbb{V}_1}$,
that can represent predication and quantification over indefinite objects.
The most significant complication of ${\mathbb{V}_1}$ in relation to $\PredStandpointLogic$
is that, rather than the domain comprising only atomic entities, we also introduce
`indefinite individuals' that are modelled as functions from precisifications
to precise entities. Another complication is that predicates are divided into:
`sortals' (corresponding to count nouns such as `desert' or `cat')
that determine subdomains of the domain of indefinite individuals,
`indefinite predicates' that classify or relate indefinite individuals, and
`precise entity predicates' that represent properties or relations of exact spatio-temporal
and/or material extensions in reality.
With respect to sortals, individual properties and quantification, indefinite individuals are treated as basic objects (e.g.\ a particular mountain or fluffy cat). But in the application of a `precise entity predicate' (e.g.\ `... contains $7.3{\times}10^{26}$ atoms') the individual is evaluated with respect to a precisification to obtain a precise entity (the individual's extension) that determines the truth of the precise predicate. Although our semantics allows indefinite individuals to be any function from precisifications to precise entities, instances of sortals will typically be highly constrained by
axioms expressing conditions arising from the meaning of the sortal. For example, if a desert is a maximal area of the Earth's surface satisfying certain conditions (e.g.\ low precipitation) then, even if exact thresholds on the parameters of these conditions are not given, the set of indefinite individuals that could correspond to a desert is hugely restricted.
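For example (an illustrative meaning postulate of our own, not part of the formal definitions), one might require at every precisification that whatever is classified as a desert is also an arid area:
\[ \mathop\square_{*}\, \forall x\, (\pf{Desert}(x) \rightarrow \pf{AridArea}(x)) \]
Such axioms cut down the admissible interpretations of a sortal without fixing exact thresholds for its parameters.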
\subsubsection{Syntax}
The language of ${\mathbb{V}_1}$ is built from a vocabulary
$\mathcal{V} =
\tuple{\mathcal{K},\mathcal{A},\mathcal{Q},\mathcal{N},
\mathcal{S}}$,
comprising:
\begin{itemize}[leftmargin=2em, itemsep=0ex,label={~$\bullet$~}]
\item $\mathcal{K} = \{ \ldots, \mathsf{K}_i, \ldots \}$, ~~ sortal predicates\ (e.g.\ \pf{Desert}, \pf{Mountain}, \pf{Cat}),
\item $\mathcal{A} = \{ \ldots, \mathsf{A}_i, \ldots \}$, ~~ indefinite predicates\ (e.g.\ \pf{Dry}, \pf{Fluffy}, \pf{Climbed}),
\item $\mathcal{Q} = \{ \ldots, \mathsf{Q}_i, \ldots \}$, ~~ precise entity predicates\
(e.g.\ exact spatial properties),
\item $\mathcal{N} = \{ \ldots, \mathsf{n}_i, \ldots \}$, ~~ proper name symbols\
(e.g.\ \pf{everest}, \pf{tibbles}).
\item $\mathcal{S}\, = \{ \ldots, s_i, \ldots, \hbox{$*$} \}$, ~~ standpoint symbols.
\end{itemize}
Let $\mathcal{P}=\mathcal{K}\cup\mathcal{A}\cup\mathcal{Q}$ be the set of predicate symbols, containing all sortals, indefinite predicates and precise entity predicates.
Again, there is a set of variables $\mathcal{X}$, and the set $\mathcal{T}= \mathcal{N}\cup\mathcal{X}$ contains all proper name symbols and variables.
Then, the set ${\mathbb{V}_1}$ of variable reference logic \define{formulae} is given by\\[1ex]
\centerline{$
\phi,\psi \ \ := \ \ \pf{P}(\tau_1,\dots,\tau_k) \ \mid\ \n\phi \ \mid\ \phi\!\con\!\psi \ \mid\ \forall x\, \phi \ \mid\ \standb{s}\phi \ ,
$}
\\[1.2ex]
where \mbox{$\pf{P}\in\mathcal{P}$} is a $k$-ary predicate symbol, \mbox{$\tau_1,\dots,\tau_k\in\mathcal{T}$} are terms,
\mbox{$x\in\mathcal{X}$} and \mbox{$\ste\in\Stands$}.
\subsubsection{Semantics}
Given a vocabulary $\mathcal{V}= \tuple{ \mathcal{K}, \mathcal{A}, \mathcal{Q},
\mathcal{N}, \mathcal{S} }$, a \emph{variable reference logic structure} is a tuple
$\tuple{ E, \Pi, \sigma, \delta}$
where:
\begin{enumerate}[label=$\bullet$,itemsep=0ex]
\item $E$\ is the set of precise entities.
\item $\Pi$\ is the set of precisifications.
\item $\sigma$\
is a function mapping standpoint symbols to sets of precisifications, and
\item $\delta = \tuple{
\delta_{\mathcal{K}},
\delta_{\mathcal{A}},
\delta_{\mathcal{Q}},
\delta_{\mathcal{N}}}$ is a denotation function divided into components specifying the denotations for
each type of non-logical symbol (see below).
\end{enumerate}
\defI_{*}{I_{*}}
We define the set of \textit{indefinite individuals} $I_{*}=\mathrm{Maps}(\Pi,E)$ as the set of all mappings from precisifications to precise entities.
For each indefinite individual $i\in I_{*}$ and $\pi\in\Pi$, $i(\pi)$ will be an element of $E$, which constitutes the precise version of individual $i$ according to precisification $\pi$.
At each precisification $\pi\in\Pi$, each name symbol $\mathsf{n}\in\mathcal{N}$ denotes some element of $I_{*}$
and each sortal $\mathsf{K}\in\mathcal{K}$ and individual attribute predicate $\mathsf{A}\in\mathcal{A}$ denotes a subset of $I_{*}$.
The set of indefinite individuals at a precisification $\pi$ depends on the interpretation of the sortal predicates at $\pi$. The denotation functions map:
\begin{enumerate}[itemsep=1ex,label=$\bullet$]
\item $\delta_{\mathcal{K}} : $ each precisification $\pi\in\Pi$ into a function from each sortal symbol $\pf{K}{\,\in\,}\mathcal{K}$ to a subset $\interpret{\pf{K}}{\delta_{\mathcal{K}}(\pi)}\subseteqI_{*}$ --- i.e.\ a set of indefinite individuals.
\\[0.5ex]
On the basis of $\delta_{\mathcal{K}}$ we define
$I_{\pi} = \bigcup \{ \interpret{\mathsf{K}}{{\delta_{\mathcal{K}}(\pi)}} \ |
\ \mathsf{K} \in \mathcal{K}\}$, the domain of (indefinite) individuals according to
precisification $\pi$.
$I_{\pi}$ is
the set of all individuals of any sort, where the sortals are interpreted according to precisification $\pi$.
\item $\delta_{\mathcal{A}} : $ each precisification $\pi\in\Pi$ into a function from each indefinite predicate symbol $\pf{A}{\,\in\,}\mathcal{A}$ of arity $k$ to a $k$-ary relation $\interpret{\pf{A}}{\delta_{\mathcal{A}}(\pi)} {\,\subseteq\,} I_{\pi}^k$,
\item $\delta_{\mathcal{Q}} : $ each precise entity predicate symbol $\pf{Q}{\,\in\,}\mathcal{Q}$ of arity $k$ to a $k$-ary relation $\interpret{\pf{Q}}{\delta_{\mathcal{Q}}} {\,\subseteq\,} E^k$,
\item $\delta_{\mathcal{N}} : $ each precisification $\pi\in\Pi$ into a function from each name symbol $\pf{n}{\,\in\,}\mathcal{N}$ to an indefinite individual $\interpret{\mathsf{n}}{{\delta_{\mathcal{N}}(\pi)}}\in I_\pi$.
\end{enumerate}
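Continuing the Python sketch style used earlier (again our own illustration, with invented names and values), an indefinite individual can be represented as a mapping from precisifications to precise entities; sortal and indefinite predicates then apply to the individual itself, while precise entity predicates apply to its extension at the current precisification:
\begin{verbatim}
# Sketch: indefinite individuals as maps from precisifications to
# precise entities.  All names and values are illustrative assumptions.
pile = {0: "ext_small", 1: "ext_large"}   # one indefinite individual
contains_house = {"ext_large"}            # precise entity predicate Q
desert_at = {0: [pile], 1: [pile]}        # sortal K at each precisification

def holds_K(k_at, individual, pi):        # classify the individual itself
    return individual in k_at[pi]

def holds_Q(q, individual, pi):           # evaluate its extension at pi
    return individual[pi] in q

assert holds_K(desert_at, pile, 0)            # a desert at both precisifications
assert not holds_Q(contains_house, pile, 0)   # contains the house only at
assert holds_Q(contains_house, pile, 1)       # some precisifications
\end{verbatim}
This two-level evaluation is what allows something to be unequivocally a desert while its precise extension varies across precisifications.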
Let us now deal with the variable assignments in \emph{variable reference logic}. Let $\mathcal{M} =\tuple{ E, \Pi, \sigma, \delta }$ be a model for the vocabulary $\tuple{ \mathcal{K}, \mathcal{A}, \mathcal{Q}, \mathcal{N}, \mathcal{S} }$ and $\mathcal{X}$ be a set of variables.
A \define{variable assignment} is a function $v:\mathcal{X}\toI_{*}$ mapping variables to individuals.
Given a variable assignment $v$, we denote by $v_{\set{x\mapsto i}}$ the function mapping $x$ to $i \inI_{*}$ and any other variable $y$ to $v(y)$.
The denotation function $\delta_{\mathcal{N}}$ and a variable assignment specify how to interpret terms as indefinite individuals in each precisification $\pi\in\Pi$\/:
\vspace{-1ex}
\begin{gather*}
\interpret{\tau}{\pi,v} =
\begin{cases}
v(x) & \text{if } \tau=x\in\mathcal{X}, \\
\interpret{\pf{n}}{\delta_{\mathcal{N}}(\pi)} & \text{if } \tau=\mathsf{n}\in\mathcal{N}.
\end{cases}
\end{gather*}
\noindent Then, let $\pi\in\Pi$ and $v:\mathcal{X}\to I_{*}$ be a variable assignment.
The satisfaction relation $\models$ is:%
\vspace{-3ex}
\begin{align*}
& \mathcal{M},\!\pi,\!v \models \pf{K}(\tau)\!\! & \text{iff}\ \ & \interpret{\tau}{\pi,v} \in \interpret{\pf{K}}{\delta_{\mathcal{K}}(\pi)}\\
& \mathcal{M},\!\pi,\!v \models \pf{A}(\tau_1,\ldots,\tau_k)\!\! & \text{iff}\ \ & \langle\interpret{\tau_1}{\pi,v},\ldots,\interpret{\tau_k}{\pi,v}\rangle \in \interpret{\pf{A}}{\delta_{\mathcal{A}}(\pi)}\\
& \mathcal{M},\!\pi,\!v \models \pf{Q}(\tau_1,\ldots,\tau_k)\!\! & \text{iff}\ \ & \langle\interpret{\tau_1}{\pi,v}(\pi),\ldots,\interpret{\tau_k}{\pi,v}(\pi)\rangle\in \interpret{\pf{Q}}{\delta_{\mathcal{Q}}} \\
& \mathcal{M},\!\pi,\!v \models \neg\phi & \text{iff}\ \ & \mathcal{M},\!\pi,\!v\not\models\phi \\
& \mathcal{M},\!\pi,\!v \models \phi\land\psi & \text{iff}\ \ & \mathcal{M},\!\pi,\!v\models\phi \text{ and } \mathcal{M},\pi,v\models\psi \\
&\mathcal{M},\!\pi,\!v \models \forall x\,\phi(x) & \text{iff}\ \ & \mathcal{M},\!\pi,\!v_{\set{x\mapsto i}} \models\ \phi(x)
\hbox{~~for all}\ i \in I_\pi \\
& \mathcal{M},\!\pi,\!v \models \standb{\ste}\phi & \text{iff}\ \ & \mathcal{M},\!\pi'\!,v \models \phi \text{ for all } \pi'\in\sigma(\ste)
\end{align*}
To understand the interpretation of atomic
predications, one must be aware that the
evaluation of a symbol may require
one or two levels of
de-referencing in relation to the precisification index $\pi$. First, notice
that the $\pf{K}$ and $\pf{A}$ predications are interpreted in the same way. When applied to a name symbol {\sf n},
both the constant and the predicate get evaluated with respect to a
precisification, so that the names denote particular individuals and
the predicate denotes a set of such individuals. When the argument is
a variable rather than a name symbol, the variable directly denotes
an individual without any need for evaluation relative to a
precisification. In the case of (exact) $\pf{Q}$ predications,
individuals need to be further evaluated relative to the
precisification in order to obtain precise entities, which can be
tested for membership of the precise set denoted by the property
$\pf{Q}$. So, although $\pf{Q}$ predicates are not themselves subject
to variation in relation to $\pi$,
they require an extra level of
disambiguation in the interpretation of their argument symbol.
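Spelling out the $\pf{Q}$ case for a name $\pf{n}$, the two levels of de-referencing compose as follows:
\[ \mathcal{M},\pi,v \,\models\, \pf{Q}(\pf{n})
\quad\text{iff}\quad
\big(\interpret{\pf{n}}{\delta_{\mathcal{N}}(\pi)}\big)(\pi) \,\in\, \interpret{\pf{Q}}{\delta_{\mathcal{Q}}}, \]
where the first application of $\pi$ selects the individual denoted by $\pf{n}$ and the second maps that individual to its precise extension, which is then tested against the fixed denotation of $\pf{Q}$.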
\subsubsection{Classification, Individuation and Demarcation with ${\mathbb{V}_1}$}
The elaboration of the denotation functions for the different kinds
of symbol, and the semantics given above for the different cases of
predication, allow quantification to operate at an
intermediate level in the interpretation of
reference. The \emph{individuation\/} of potential referents occurs
prior to quantification by establishing individuals in relation to a
given interpretation of sortal predicates. But these individuals can
still be indeterminate in that they may correspond to different exact
entities.
The formulas (a) and (b) in Section \ref{sec:sem-standpoint-logic} are also satisfiable ${\mathbb{V}_1}$ formulas, and correctly formalise the scenarios described.
Furthermore, the formula (c), which was unsatisfiable in $\PredStandpointLogic$,
is satisfiable in ${\mathbb{V}_1}$, because the area is modelled as an indefinite individual
that is unequivocally classified as a $\pf{Desert}$ and that, at some but not all precisifications, is associated with a precise extension containing $\pf{house}$.
Finally, statement (d), ``\emph{Some argue that the whole arid area is still a desert. Others refer to two deserts in that area, separated by the valley. Either way, my house is still arguably in a desert}'', can be formalised as follows:\\[-3ex]
\begin{gather*}
\exists\, a\ (\mathop\square_{\hbox{$*$}}\pf{AridArea}(a)\con \mathop\lozenge_{some}\pf{Desert}(a) \\
\con \mathop\lozenge_{others}\exists\, b,c\ ((b \neq c) \con \pf{PartOf}(b,a) \con \pf{PartOf}(c,a) \con \pf{Desert}(b)\con \pf{Desert}(c)))\\
\con \mathop\lozenge_{*}\exists\, x\ (\pf{Desert}(x) \con \pf{PartOf}(\pf{house},x))
\end{gather*}
\vspace{-1ex}
Thus, all statements in our example involving the classification, individuation and demarcation of vague objects can be easily represented in ${\mathbb{V}_1}$.
\section{From ${\mathbb{V}_1}$ back to First Order Standpoint Logic}
We next turn to the question of the expressive power of ${\mathbb{V}_1}$, specifically in comparison to $\PredStandpointLogic$.
We will provide a translation from ${\mathbb{V}_1}$ into $\PredStandpointLogic$. This not only settles the above question but is also interesting in its own right, since it spells out the restrictions that ${\mathbb{V}_1}$ imposes on the semantics and facilitates discussion of the ontological commitments that are arguably implicit when favouring the {\em de dicto\/} translation of a ${\mathbb{V}_1}$ formula.
The function $\mathrm{trans} : {\mathbb{V}_1} \to \PredStandpointLogic$, mapping ${\mathbb{V}_1}$ formulae to formulae in first-order standpoint logic, is recursively defined as follows:\\[-1ex]
\noindent
\parbox{\textwidth}{
\begin{align*}
&\trans{\pf{K}(\tau)} = \pf{K}(\tau)&\trans{\n\phi} = \n \trans{\phi} &\\
&\trans{\pf{A}(\vec\tau)} = \pf{A}(\vec\tau) &\trans{\phi_1\! \con\! \phi_2} = \trans{\phi_1} \wedge \trans{\phi_2}& \\
&\trans{\forall x\, \phi} = \forall x\ (\pf{ink}(x) \rightarrow \trans{\phi}) &\trans{\standb{s}\phi} = \standb{s} \trans{\phi}&\\
\end{align*}
\vspace{-10.7ex}
\begin{gather*}
\trans{\pf{Q}(\tau_{1},\mydots,\tau_{k})} =
\exists e_{1},\mydots,e_{k}\,(\pf{Q}(e_{1},\mydots,e_{k})
\con \pf{prec}(\tau_{1},e_{1}) \con \mydots \con \pf{prec}(\tau_{k},e_{k}))
\end{gather*}
}
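As a simple instance (obtained by unfolding the clauses above), a precise entity predication under the universal standpoint, and a quantified indefinite predication, translate as follows:
\[ \trans{\mathop\square_{*}\pf{Q}(\pf{n})} \,=\, \mathop\square_{*}\exists e\, (\pf{Q}(e) \con \pf{prec}(\pf{n},e)),
\qquad
\trans{\forall x\, \mathop\lozenge_{s}\pf{A}(x)} \,=\, \forall x\, (\pf{ink}(x) \rightarrow \mathop\lozenge_{s}\pf{A}(x)). \]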
\noindent Then, we consider the set $\Phi$ of axioms in $\PredStandpointLogic$ such that:
\begin{align}
\Phi=&\{\ \forall x\, (\pf{ind}(x) \leftrightarrow \neg\pf{ext}(x)),&
&\forall x\, ((\pf{ind}(x)\! \leftrightarrow \!\mathop\square_{*}\pf{ind}(x))\! \land\! (\pf{ext}(x)\! \leftrightarrow\! \mathop\square_{*}\pf{ext}(x))), \\
&\bigwedge\textstyle_{\scaleobj{1}{\mathsf{Q}\in\mathcal{Q}}}\forall \vec x\, ( \pf{Q}(\vec x) \leftrightarrow \mathop\square_{*}\pf{Q}(\vec x) ), &
&\forall x,y\, (\pf{prec}(x,y) \rightarrow (\pf{ind}(x) \land \pf{ext}(y))),\ \\
&\forall x \exists y\, (\pf{ind}(x)\!\rightarrow \!\pf{prec}(x,y) ), &
&\forall x,y,z\, (\pf{prec}(x,y) \con \pf{prec}(x,z) \rightarrow y=z), \\
&&& \!\!\!\!\!\!\!
\bigwedge\textstyle_{\scaleobj{0.75}{\mathsf{A}\in\mathcal{A}}}\forall x_1,..,x_k\, (\pf{A}(x_1,..,x_k)\! \rightarrow\! \pf{ink}(x_1)\! \con\!\mydots\! \con\!\pf{ink}(x_k) ),\\
&&& \!\!\!\!\!\!\!
\forall x\, ((\pf{ink}(x)\! \rightarrow\! \pf{ind}(x)) \wedge ({\pf{ink}}(x)\! \leftrightarrow \! \bigvee\textstyle_{\scaleobj{0.75}{\mathsf{K}\in\mathcal{K}}} \pf{K}(x))) \ \}
\end{align}
These axioms state that every domain element is either an individual or an extension but not both, with $\pf{ind}$ and $\pf{ext}$ rigid predicates (line 1); that every precise entity property predicate is rigid, and that $\pf{prec}(x,y)$ relates an individual $x$ to a precise entity $y$ (line 2); that every individual $x$ has (at each precisification) a unique precise extension $y$ with $\pf{prec}(x,y)$ (line 3); that every element to which an indefinite predicate $\pf{A}$ applies must be an individual at that precisification ($\pf{ink}$) (line 4); and finally that every individual at a precisification is an individual and belongs to some sort $\pf{K}\in\mathcal{K}$ (line 5).
Finally, for a ${\mathbb{V}_1}$ formula $\psi$, we specify:
$$
\mathrm{Trans}(\psi):=\trans{\psi}\wedge\bigwedge_{\phi\in\Phi}\standb{*}\phi$$
A ${\mathbb{V}_1}$ formula $\psi$ and its translation $\mathrm{Trans}(\psi)$ are equisatisfiable, that is, $\psi$ is ${\mathbb{V}_1}$-satisfiable if and only if $\mathrm{Trans}(\psi)$ is $\PredStandpointLogic$-satisfiable. Correctness of the translation can be shown by establishing a correspondence between the two kinds of models in each direction. The proof is by induction and is straightforward, yet lengthy, so it is omitted from the current manuscript.
It is worth remarking that the translation is linear in the size of the formula and preserves the monodic fragment of standpoint logic, where modalities occur only in front of formulas with at most one free variable. The monodic fragment of first-order modal logic is known to be decidable \cite{gabbay03many-dimensional}, and tight complexity bounds have been shown for monodic fragments of standpoint logic \cite{Gomez23StandpointEL}. This highlights the usefulness of the translation, since reasoning support for ${\mathbb{V}_1}$ can be provided through its implementation.
\iffalse
\begin{proof}[Proof sketch]
($\Rightarrow$) Assuming the satisfiability of $\psi$ with vocabulary $\mathcal{V}^{\scaleobj{0.75}{\mathbb{V}}}= \tuple{ \mathcal{K}, \mathcal{A}, \mathcal{Q},
\mathcal{N}, \mathcal{S} }$, consider some model $\mathfrak{M}^{\scaleobj{0.75}{\mathbb{V}}}=\tuple{ E^{\scaleobj{0.75}{\mathbb{V}}}, \Pi^{\scaleobj{0.75}{\mathbb{V}}}, \sigma^{\scaleobj{0.75}{\mathbb{V}}}, \delta^{\scaleobj{0.75}{\mathbb{V}}}}$ of it. Let now $\mathfrak{M}=\tuple{ \Delta, \Pi, \sigma, \delta}$ denote the $\PredStandpointLogic$-interpretation for the vocabulary $\mathcal{V}= \tuple{ \mathcal{K}\cup \mathcal{A}\cup \mathcal{Q}, \mathcal{N}, \mathcal{S} }$ such that $\Delta = E^{\scaleobj{0.75}{\mathbb{V}}}\cup I_{*}$, i.e. the domain is the union of precise entities and indefinite individuals, $\Pi$ and $\sigma$ are as in $\mathfrak{M}^{\scaleobj{0.75}{\mathbb{V}}}$ and the interpretation function $\delta$ has $\delta^{\scaleobj{0.75}{\mathbb{V}}}_{\mathcal{K}} \cup \delta^{\scaleobj{0.75}{\mathbb{V}}}_{\mathcal{A}} \cup \delta^{\scaleobj{0.75}{\mathbb{V}}}_{\mathcal{N}}\subseteq\delta$, $\pf{Q}^{\delta(\pi)}=\pf{Q}^{\delta^{\scaleobj{0.75}{\mathbb{V}}}_{\mathcal{Q}}}$ for each $\pi\in\Pi$ and $\pf{Q}\in\mathcal{Q}$, and for each $\pi\in\Pi$ interprets the predicates introduced by the translation in the following way:
$\pf{ext}^{\delta(\pi)}=E^{\scaleobj{0.75}{\mathbb{V}}}$, $\pf{ind}^{\delta(\pi)}=I_{*}$, $\pf{ink}^{\delta(\pi)}=I_{\pi}$,
and $\pf{prec}^{\delta(\pi)}=\{(i,e)\mid i\in I_{*}, i(\pi)=e\}$.
Then it can be shown by a straightforward structural induction over $\psi$ that, for every $\pi\in\Pi$, $(\mathfrak{M}^{\scaleobj{0.75}{\mathbb{V}}}, \pi, v) \models \psi$ if and only $\mathfrak{M}, \pi, v \models \Trans{\psi}$.
($\Leftarrow$) Consider a $\PredStandpointLogic$-model $\mathfrak{M}^{\scaleobj{0.75}{\mathbb{S}}}$ of $\mathrm{Trans}(\psi)$ with vocabulary $\mathcal{V}^{\scaleobj{0.75}{\mathbb{S}}}= \tuple{ \mathcal{P},\mathcal{N}, \mathcal{S} }$. Define the vocabulary $\mathcal{V}= \tuple{ \mathcal{K}, \mathcal{A}, \mathcal{Q},
\mathcal{N}, \mathcal{S} }$, by letting $\mathcal{K}=\{\pf{P}\in\mathcal{P}\backslash \{\pf{ind}\} \mid \pf{P}^{\delta(\pi)} \cap \pf{ind}^{\delta(\pi)}\neq\emptyset\}$, $\mathcal{A}=\emptyset$ and $\mathcal{Q}=\{\pf{P}\in\mathcal{P}\backslash \{\pf{ext}\} \mid \pf{P}^{\delta(\pi)} \cap \pf{ext}^{\delta(\pi)}\neq\emptyset\}$. We now define the ${\mathbb{V}_1}$ model $\mathfrak{M}=\tuple{ E, \Pi, \sigma, \delta}$ with vocabulary $\mathcal{V}$, by letting $\Pi$ and $\sigma$ be as in $\mathfrak{M}^{\scaleobj{0.75}{\mathbb{S}}}$,
the domain $E=\pf{ext}^{\delta^{\scaleobj{0.75}{\mathbb{S}}}(\pi)}$ for some $\pi\in\Pi$, the set of indefinite individuals $I_{*}=Maps(\Pi,E)$, noting that $\pf{ind}^{\delta^{\scaleobj{0.75}{\mathbb{S}}}(\pi)}\subseteqI_{*}$ where every $i\in \pf{ind}^{\delta^{\scaleobj{0.75}{\mathbb{S}}}(\pi)}$ translates to an indefinite individual such that $i(\pi)=e$ for some $e\in E$ iff $(i,e)\in\pf{precs}^{\delta^{\scaleobj{0.75}{\mathbb{S}}}(\pi)}$ with $i,e\in E^{\scaleobj{0.75}{\mathbb{S}}}$, the individuals of $\pi$, $I_{\pi}=\pf{ink}^{\delta^{\scaleobj{0.75}{\mathbb{S}}}(\pi)}$ and the interpretation of the predicates $\pf{K}^{\delta_\mathcal{K}(\pi)} = \pf{K}^{\delta^{\scaleobj{0.75}{\mathbb{S}}}(\pi)}$ and $\pf{N}^{\delta_\mathcal{N}(\pi)} = \pf{N}^{\delta^{\scaleobj{0.75}{\mathbb{S}}}(\pi)}$ for all $\pi \in \Pi$, $\pf{K}\in\mathcal{K}$ and $\pf{N}\in\mathcal{N}$,
and $\pf{Q}^{\delta_\mathcal{Q}} = \pf{Q}^{\delta^{\scaleobj{0.75}{\mathbb{S}}}(\pi)}$ for any $\pi \in \Pi$ and all $\pf{Q}\in\mathcal{Q}$.
It is easily checked that, due to the second conjunct of $\mathrm{Trans}$, the structure thus defined is indeed a ${\mathbb{V}_1}$ model. In order to show that $\mathfrak{M}$ is a model of $\psi$, we can proceed as before and show by structural induction over $\psi$ that, for every $\pi\in\Pi$ and variable assignment $v$, $\mathfrak{M}^{\scaleobj{0.75}{\mathbb{S}}}, \pi, v \models \psi$ if and only $\mathfrak{M}, \pi, v \models \psi$. Thus, the established modelhood of $\mathfrak{M}$ ensures satisfiability of $\phi$.
\end{proof}
\fi
\def\pi#1{\hbox{$\mathsf{#1}$}}
\section{Some Philosophical Views Considered in relation to the ${\mathbb{V}_1}$ Semantics}
\label{SEC:phil-views}
From the translation, we may observe that we move from having a domain with only precise entities to also having a potentially large collection of \emph{indefinite entities} that are abstract in nature and only relate to their extension via the predicate $\pf{prec}$.
The issue of whether multiple precise versions of vague things (especially fuzzy-boundaried objects such as clouds) can exist simultaneously is known to philosophers as `The Problem of the Many'
\cite{Unger80many}, and has been the subject of considerable debate \cite{Weatherson09sep-short}.
One view is that the issue can be resolved by modifying the identity relation.
According to Geach \cite{Geach80r+g}, the different precise versions of a cat are identical relative to the
high-level count noun `cat', whereas Lewis \cite{Lewis93a} suggests that physical objects that are almost the same
with respect to spatial extension and material composition are ordinarily regarded as identical. Our semantics has some affinity
with Geach's approach in that we consider that it is count nouns that determine individuation and hence the sets of
possible precise versions of individuals. However, in our theory there is no need to modify identity, since quantification
is over individuals not their possible precise extensions.
We believe that Lewis's identity condition cannot account for many examples of objects with vague boundaries, where there
can be a large difference between what we may take to be their spatial extension. For instance, small variations in our interpretation of `desert' may in certain circumstances result in very different spatial extensions being ascribed.
Lewis, along with Varzi \cite{Varzi01a}, also suggests a supervaluationist account where the
different candidate extensions of a vague object correspond to their extension in different precisifications.
So, at a given precisification
there is always a single physical extension of each object.
But what is distinctive about our semantics is that we model variability in the extension of an individual
as an additional kind of vagueness, not subsumed by the indeterminacy of a name (or sortal) in referring to
an individual (or class of individuals).
Lowe \cite{Lowe82a} posits an ontological difference between objects of a high-level category such as `cat' and physical portions of feline tissue present in the world, which he calls `cat-constituters'. This is very much in line with our semantics, especially as presented in the flat (translated) form that contains both definite and indefinite
basic elements. Critics of Lowe (e.g.\ Lewis) have argued that the existence of both cats and cat-constituters is ontologically profligate (or incoherent).
We consider instances of the high-level categories to be semantic objects modelling the way language operates rather than carrying ontological commitment.
This is clearest in the semantics of ${\mathbb{V}_1}$, where modelling indefinite objects as functions from precisifications to extensions
is a way of packaging indeterminacy within a semantic object. In the flat version of our semantics, indefinite objects are present (along with precise extensions)
as basic elements of the domain; however we still consider them semantic rather than {\em ontic\/} elements.
\section{Conclusions and Further Work}
We have analysed the phenomenon of vagueness in both predicates and objects, and considered examples suggesting that the aspects of individuation, classification and demarcation associated with predication need to be
considered separately.
We have proposed the framework of \emph{variable reference logic}, which takes the general form of a supervaluation semantics but allows variability both in the set of individuals falling under a predicate and also
in the spatial extension and physical constitution of `objects' denoted by names
or quantified variables. This separation not only supports flexible expressive capabilities of our formal language, but enables the representation to capture definite aspects of information despite other aspects being affected by vagueness.
There remains much interesting work to be done investigating what can be expressed
and inferred by means of ${\mathbb{V}_1}$.
We envisage reasoning support for ${\mathbb{V}_1}$ being of practical interest for specific kinds of application, such as querying of geographical information systems,
and plan to develop a case study in this area.
The linear translation to flat standpoint logic and ongoing progress on implementations for that logic are a strong indicator that query answering functionality may be feasible.
\vspace{-0.7ex}
\section{Introduction}
This document provides instructions for style and layout and how to submit the final
version. Although it was written for individual authors contributing to IOS Press books,
it can also be used by the author/editor preparing a monograph or an edited volume.
Authors should realize that the manuscript submitted by the volume editor to IOS
Press will be almost identical to the final, published version that appears in the book,
except for the pagination and the insertion of running headlines. Proofreading as
regards technical content and English usage is the responsibility of the author.
A template file for \LaTeX2e is available from
https://www.iospress.com/book-article-instructions. \LaTeX{} styles required for the \LaTeX{} template are also
available.\footnote{For authors using MS Word separate Instructions as well
as a Word template are available from https://www.iospress.com/book-article-instructions.}
\section{Typographical Style and Layout}
\subsection{Type Area}
The text output area is automatically set within an area 12.4 cm
horizontally and 20 cm vertically. Please do not use any
\LaTeX{} or \TeX{} commands that affect the layout or formatting of
your document (i.e. commands like \verb|\textheight|,
\verb|\textwidth|, etc.).
\subsection{Font}
The font type for running text (body text) is 10~point Times New Roman.
There is no need to code normal type (roman text). For literal text, please use
\texttt{type\-writer} (\verb|\texttt{}|)
or \textsf{sans serif} (\verb|\textsf{}|). \emph{Italic} (\verb|\emph{}|)
or \textbf{boldface} (\verb|\textbf{}|) should be used for emphasis.
\subsection{General Layout}
Use single (1.0) line spacing throughout the document. For the main
body of the paper use the commands of the standard \LaTeX{}
``article'' class. You can add packages or declare new \LaTeX{}
functions if and only if there is no conflict between your packages
and the \texttt{IOS-Book-Article}.
Always give a \verb|\label| where possible and use \verb|\ref| for cross-referencing.
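For instance, a minimal cross-referencing pattern (with a hypothetical label name) is:
\begin{verbatim}
\section{Evaluation}\label{sec:evaluation}
...
As discussed in Section~\ref{sec:evaluation}, ...
\end{verbatim}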
\subsection{(Sub-)Section Headings}
Use the standard \LaTeX{} commands for headings: {\small \verb|\section|, \verb|\subsection|, \verb|\subsubsection|, \verb|\paragraph|}.
Headings will be automatically numbered.
Use initial capitals in the headings, except for articles (a, an, the), coordinate
conjunctions (and, or, nor), and prepositions, unless they appear at the beginning
of the heading.
\subsection{Footnotes and Endnotes}
Please keep footnotes to a minimum. If they take up more space than roughly 10\% of
the type area, list them as endnotes, before the References. Footnotes and endnotes
should both be numbered in arabic numerals and, in the case of endnotes, preceded by
the heading ``Endnotes''.
\subsection{References}
Please use the Vancouver citing \& reference system, and the National Library of
Medicine (NLM) style, and include the Digital Object Identifier (DOI) if known.
Place citations as numbers in square brackets in the text. All publications cited in
the text should be presented in a list of references at the end of the manuscript.
List the references in the order in which they appear in the text. Some examples of
the NLM style:
\medskip
\noindent\ul{Journal article:}\par\noindent
Petitti DB, Crooks VC, Buckwalter JG, Chiu V. Blood pressure levels before dementia.
Arch Neurol. 2005 Jan;62(1):112-6, doi: ....
\medskip
\noindent\ul{Paper from a proceedings:}\par\noindent
Rice AS, Farquhar-Smith WP, Bridges D, Brooks JW. Canabinoids and pain. In: Dostorovsky
JO, Carr DB, Koltzenburg M, editors. Proceedings of the 10th World Congress on Pain;
2002 Aug 17-22; San Diego, CA. Seattle (WA): IASP Press; c2003. p. 437-68, doi: ....
\medskip
\noindent\ul{Contributed chapter in a book:}\par\noindent
Whiteside TL, Heberman RB. Effectors of immunity and rationale for immunotherapy. In:
Kufe DW, Pollock RE, Weichselbaum RR, Bast RC Jr, Gansler TS, Holland JF, Frei~E~3rd,
editors. Cancer medicine 6. Hamilton (ON): BC Decker Inc; 2003. p. 221-8, doi: ....
\smallskip
\noindent\ul{Book by author(s):}\par\noindent
Jenkins PF. Making sense of the chest x-ray: a hands-on guide. New York: Oxford
University Press; 2005. 194 p., doi: ....
\smallskip
\noindent\ul{Edited book:}\par\noindent
Izzo JL Jr, Black HR, editors. Hypertension primer: the essentials of high blood pressure.
3rd ed. Philadelphia: Lippincott Williams \& Wilkins; c2003. 532 p., doi: ....
\smallskip
\noindent\ul{Proceedings:}\par\noindent
Ferreira de Oliveira MJ, editor. Accessibility and quality of health services. Proceedings of
the 28th Meeting of the European Working Group on Operational Research Applied to Health
Services (ORAHS); 2002 Jul 28-Aug 2; Rio de Janeiro, Brazil. Frankfurt (Germany): Peter Lang;
c2004. 287 p., doi: ....
\vspace*{6pt}
If your bibliography is structured in the BibTeX format, loading your *.bib file and the provided BibTeX style vancouver.bst allows you to get the final format of the bibliography. Please note that the bibtex program should be used to generate the *.bbl file.
\section{Illustrations}
\subsection{General Remarks on Illustrations}
The text should include references to all illustrations. Refer to illustrations in the
text as Table~1, Table~2, Figure~1, Figure~2, etc., not with the section or chapter number
included, e.g. Table 3.2, Figure 4.3, etc. Do not use the words ``below'' or ``above''
referring to the tables, figures, etc.
Do not collect illustrations at the back of your article, but incorporate them in the
text. Position tables and figures with at least 2 lines
extra space between them and the running text.
Illustrations should be centered on the page, except for small figures that can fit
side by side inside the type area. Tables and figures should not have text wrapped
alongside.
Place figure captions \textit{below} the figure, table captions \textit{above} the table.
Use bold for table/figure labels and numbers, e.g.: \textbf{Table 1.}, \textbf{Figure 2.},
and roman for the text of the caption. Keep table and figure captions justified. Center
short figure captions only.
The minimum \textit{font size} for characters in tables is 8 points, and for lettering in other
illustrations, 6 points.
On maps and other figures where a \textit{scale} is needed, use bar scales rather than
numerical ones of the type 1:10,000.
\subsection{Quality of Illustrations}
Embed the fonts used if the application provides that option.
Ensure consistency by using similar sizes and fonts for a group of small figures.
To add lettering to figures, it is best to use Helvetica or Arial (sans serif fonts)
to avoid effects such as shading, outline letters, etc.
Avoid using illustrations
taken from the Web. The resolution of images intended for viewing on a screen is
not sufficient for the printed version of the book.
If you are incorporating screen captures, keep in mind that the text may not be
legible after reproduction.
\subsection{Color Illustrations}
Please note, that illustrations will only be printed in color if the volume editor agrees to
pay the production costs for color printing. Color in illustrations will be retained
in the online (ebook) edition.
\section{Equations}
Number equations consecutively, not section-wise. Place the numbers in parentheses at
the right-hand margin, level with the last line of the equation. Refer to equations in the
text as Eq. (1), Eqs. (3) and (5).
\section{Fine Tuning}
\subsection{Type Area}
\textbf{Check once more that all the text and illustrations are inside the type area and
that the type area is used to the maximum.} You may of course end a page with one
or more blank lines to avoid `widow' headings, or at the end of a chapter.
\subsection{Capitalization}
Use initial capitals in the title and headings, except for articles (a, an, the), coordinate
conjunctions (and, or, nor), and prepositions, unless they appear at the beginning of the
title or heading.
\subsection{Page Numbers and Running Headlines}
You do not need to include page numbers or running headlines. These elements will be
added by the publisher.
\section{Submitting the Manuscript}
Submit the following to the volume editor:
\begin{enumerate}
\item The main source file, and any other required files. Do not submit more than
one version of any item.
\item The source files should compile without errors with pdflatex or latex.
\item Figures should be submitted in EPS, PDF, PNG or JPG format.
\item A high resolution PDF file generated from the source files you submit.
\end{enumerate}
|
{
"arxiv_id": "2302.13207",
"language": "en",
"timestamp": "2023-02-28T02:13:06",
"url": "https://arxiv.org/abs/2302.13207",
"yymm": "2302"
} | \section{Introduction}
\IEEEPARstart{X}{-ray} Computed Tomography (XCT) is a powerful volumetric imaging technique, widely applied in medical and industrial applications to reveal the internal structure of an object by collecting thousands of projection images from different views around the object. The achievable image quality depends on the number of acquisitions. With the help of advanced algorithms and using regularisation functions such as Total Variation (TV) that enforce certain image smoothness properties \cite{song2007sparseness}, or learned, data-driven regularisers \cite{yuan2018sipid}, \cite{lee2018deep}, \cite{pelt2022cycloidal}, \cite{ernst2021sinogram}, \cite{adler2018learned}, a substantial reduction in the number of measurements is possible without a significant reduction in image quality. In fact, a range of recent papers has shown that even single projection images are sufficient in certain settings to identify one known image out of a small selection of possible reconstructions (see for example \cite{li20213}), though this only works by imposing very strong prior constraints on the reconstruction, which is only possible if the objects are extremely predictable in their 3D shape.
Our work takes a different approach. Instead of trying to reconstruct full tomographic images from limited observations, which require very strong prior information, we only aim at the recovery of the 3D location of point and line like features. We are here interested in time-sensitive applications, where the internal structure of an object changes rapidly, and where we might only be able to take a single image at each time-step. Inspired by computer stereo vision systems, we assume an imaging setup with two or three X-ray source/detector pairs as shown in Fig. \ref{Two X-ray Sources Schematic}.
This allows us to take two or three projection images at the same time, but, without significantly restricting the volumetric images we are inspecting, this will not allow us to compute a full volumetric image as the two or three projection images simply do not contain enough information. Instead, we propose to only identify and locate specific internal features. In fact, we show that we are able to identify point and line like features and estimate their location in 3D space. To achieve this, we develop two deep neural networks, one that detects relevant features in the individual 2D projection images and one that matches these features and places them into 3D space. Whilst this will not provide the same level of information as full tomographic imaging, in several applications, this technique can nevertheless provide valuable dynamic information that is not accessible with traditional methods.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{pdf/2.png}\\
\caption{For stereo X-ray tomographic imaging with two views, two X-ray projection images are taken from an object from two different directions.}\label{Two X-ray Sources Schematic}
\end{center}
\end{figure}
Our work is inspired by stereo vision applications, where two visible-light cameras are used to infer the location of visible objects in 3D space. With visible light, for non-transparent objects, the amount of light measured at any point on a camera's image plane is assumed to be due to the reflection of light from a single point in the 3D scene. Thus, if we take two images of a scene from slightly different directions and are able to match points that come from the same point in the 3D scene between the two images, then a depth map for each image can be computed from simple geometric principles, mapping each point in each image to a unique point in 3D space. The main issue in stereo vision is thus one of matching points between the two images of a stereo pair \cite{horn1986robot}.
Unlike stereo vision with visible light, in X-ray imaging, individual points on the imaging plane represent X-ray attenuation values along the entire X-ray path, that is, along the straight line from the X-ray source to the pixel on the X-ray detector \cite{kak2001principles}.
Therefore, individual points on one imaging plane no longer correspond to a single point in 3D space, but to an entire line. For a full tomographic reconstruction, point matching based approaches are thus not possible. However, if there are point features in 3D space, such as the end of a linear feature, the corner of an object or an object that is of a similar size to the resolution of the system, then, this feature can be mapped in 3D space even from as few as two tomographic projections. To achieve this we need to 1) identify all point like features in the two X-ray views and 2) match these features between the two views. Once found and matched, mapping the features into 3D space uses the same geometric considerations as stereo vision. This approach can also be extended to line like features, by treating each line as a string of point like features, though here, matching can become more difficult.
\subsection{Proposed method}
Whilst there are many existing feature detection methods \cite{bay2006surf,bay2008speeded,lowe2004sift,viswanathan2009features,derpanis2004harris}, these have generally been developed for standard imaging applications and we found they do not work well on X-ray transmission images. In this paper, we thus propose a novel, deep learning based feature detector that can be trained on specific point and line like features. The particular challenge here is for the method to ignore the other image information that clutters X-ray projection images, but that is not related to point and line like features. The proposed framework is shown in Fig. \ref{Extracted features reconstruction frameworks}. After extracting the point and line like features in the 2D X-ray projection images, we then match the features by using the filtered back-projection (FBP) \cite{sagara2010abdominal}, \cite{hoffman1979quantitation} method to generate a 3D volumetric image of the extracted features. The back-projections of features from the two images that are due to the same point in space will overlap in 3D space. If there are few features and these features are randomly located in space, then it is likely that this intersection is unique, which will then lead to a unique match; however, the more features we have, the more likely it becomes that lines back-projected from more than one point in each image intersect in 3D space. In this case, an exact match is not possible from two observations, though by adding a third observation, this issue can be overcome. Another issue that can arise if there are many features in 3D space is that two features are aligned such that they are both measured at a single point on one of the imaging planes. To increase the robustness to the exact localisation of features on the two imaging planes, instead of simply looking at intersections in 3D space after back-projection, we use a simple deep neural network to process the back-projected volumetric image to generate a 3D image containing the point like features.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{pdf/1.png}\\
\caption{The proposed feature reconstruction framework. The sampling process, shown with the green arrow, represents the imaging process which generates a few projection images. Our approach then implements a feature detection step (blue arrow) and, once features are detected, a 3D mapping step (red arrow).}\label{Extracted features reconstruction frameworks}
\end{center}
\end{figure}
\subsection{Contributions}
The main contributions of this paper are:
1) We design a convolutional neural network to identify point and line features in individual X-ray projection images that work even if the images show additional complex object features.
2) We design a deep neural network that uses pairs of feature images to map the features to their spatial location in 3D. This network in effect matches the features and uses geometric information to compute spatial locations.
3) We evaluate the methods on two datasets, a synthetic dataset and a real X-ray imaging data-set.
\section{Methodology}
\subsection{Feature Detection}
There are many feature detection methods that have been developed for 2D images; common methods include the Speeded Up Robust Feature (SURF) method \cite{bay2006surf}, \cite{bay2008speeded}, the Scale Invariant Feature Transform (SIFT) method \cite{lowe2004sift}, the Features from Accelerated Segment Test (FAST) method \cite{viswanathan2009features} and the Harris corner detection method \cite{derpanis2004harris}. However, these methods search for features commonly found in photographic images. They have not been designed to detect point and line like features in tomographic data and are thus not suitable for our task. Not only are they often unable to detect point and line features, they are also prone to detecting other image structure in the tomographic background, which is not of interest here. Therefore, we design a binary classification neural network to detect features of interest. The feature detection problem can be summarized as finding parameters $\theta$ such that
\begin{equation}
\mathbf{y}_{mask} = f\left ( \mathbf{x}_{raw};\theta \right )\label{eq:1.1}
\end{equation}
is a mapping from a raw 2D X-ray projection image $\mathbf{x}_{raw}\in \mathbb{R}^{m\times n}$ to a binary feature mask $\mathbf{y}_{mask}\in \mathbb{B}^{m\times n}$. The parameters are learned by training on a dataset $\left \{ \left ( \mathbf{x}^{i}_{raw},\mathbf{y}^{i}_{mask} \right ) \right \}^N_{i=1}$ which comprises $N$ sample pairs.
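As a concrete illustration, the sketch below shows a toy TensorFlow model of such a binary per-pixel classifier. It is a minimal encoder--decoder with a single skip connection, not the exact u-net used in our experiments, and all layer widths are illustrative assumptions.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_feature_detector(input_shape=(256, 256, 1)):
    # Approximates the mapping f(x_raw; theta) from a projection
    # block to a per-pixel feature probability map.
    inp = tf.keras.Input(shape=input_shape)
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)
    m1 = layers.Concatenate()([u1, c1])  # u-net style skip connection
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)
    return tf.keras.Model(inp, out)

model = build_feature_detector()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy")
\end{verbatim}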
\subsection{Mapping Features into 3D space }
Once we have identified the features in the 2D projections, we could try and match points in projection pairs in the same way in which this is done in stereo vision. This is a two stage process with the difficult step being the matching of points in one projection to those in the other projection. In stereo vision, this matching is typically done by looking at the similarity of the image around a point and matching the points if their neighbourhoods are similar. This matching is further constrained by the fact that a point in one image can only match with points along a line in the other image, a constraint known as the epipolar constraint of the stereo camera model. In fact, if there are few points that need to be matched between two images, this constraint often means that there is a unique match as long as there is no more than one point feature on each epipolar plane in space. This process can thus also be used for matching points in our setting. Note that, for a trinocular X-ray stereo system, that is, for a system with 3 X-ray sources and detectors, where none of the epipolar planes of each of the three detector pairs is parallel, unique matching becomes increasingly likely, as now, not only do matching points lie on one epipolar plane, but on three epipolar planes \cite{trinocular}. To match linear features, we can start by matching the endpoints of a linear feature and then track the line from its endpoints. Once matched, point features can then be mapped into 3D space using knowledge of the camera geometry.
The similarity between the two geometric depth estimation problems is illustrated in Fig.~\ref{point matching principles}. The left panel shows the principle of visible-light stereo matching and depth estimation. The right panel shows the geometry of stereo X-ray tomographic imaging with two projections, from which the spatial position of a point can likewise be computed from matched points in the two projections.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{pdf/3.png}\\
\caption{The geometry of visible-light stereo matching of point features using a pinhole camera model and X-ray projection point matching. }\label{point matching principles}
\end{center}
\end{figure}
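For reference, the sketch below shows the underlying geometric computation in the simplest case: once a feature has been matched between the two views, its 3D position can be estimated as the midpoint of the common perpendicular between the two back-projection rays (a least-squares ray intersection). Source and pixel positions are assumed to be given as 3D coordinates in a common world frame; the function name is our own.
\begin{verbatim}
import numpy as np

def triangulate_feature(src1, pix1, src2, pix2):
    # Rays x = src + t * d from each X-ray source through the
    # matched feature pixel on the corresponding detector.
    d1, d2 = pix1 - src1, pix2 - src2
    w = src2 - src1
    # Least squares: minimise |src1 + t1*d1 - (src2 + t2*d2)|^2.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([d1 @ w, d2 @ w])
    t1, t2 = np.linalg.solve(A, b)
    # Midpoint of the common perpendicular between the two rays.
    return 0.5 * ((src1 + t1 * d1) + (src2 + t2 * d2))
\end{verbatim}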
To increase the robustness of this approach to errors in the localisation of point features, we here instead use a learned feature matching and 3D mapping method. We train a 3D U-net to process the back-projected volumetric image of the detected feature maps to generate a 3D image containing the point and line features only. Formally, if $B_L(\cdot)$ and $B_R(\cdot)$ are the filtered backprojection operators \cite{feldkamp1984practical} for the two projection images, then we train a mapping
\begin{equation}
\mathbf{y}_{vol} = g\left (B_L( \widehat{\mathbf{y}}_{mask}^L)+ B_R(\widehat{\mathbf{y}}_{mask}^R);\theta \right ) \label{eq:1.2}
\end{equation}
from two extracted feature maps $\widehat{\mathbf{y}}_{mask}^{L}$ and $\widehat{\mathbf{y}}_{mask}^{R}$ to a 3D tomographic volume $\mathbf{y}_{vol}\in \mathbb{R}^{m\times n \times o}$.
\subsection{Proposed Framework}
Our proposed approach is summarised in Fig. \ref{overview of framework}. We employ a standard 2D u-net \cite{ronneberger2015u} as the feature detector $f(\cdot;\theta)$, and a similar standard 3D u-net \cite{cciccek20163d} to model the mapping $g(\cdot;\theta)$ from the extracted feature maps to the 3D feature locations. Both networks act as multi-label classification networks, so we use a sigmoid output non-linearity together with the binary cross-entropy loss function.
\begin{figure*}
\begin{center}
\includegraphics[width=6.5in]{pdf/4.png}\\
\caption{Overview of the proposed framework. The input is a pair of X-ray projection images which contain the X-ray projections of objects as well as the projections of line and point like features. Each projection is fed independently into the same 2D u-net to compute two different feature maps, where the background is removed. The two feature maps are then back-projected into a 3D volume using the FDK algorithm, which is further processed using a 3D u-net to generate the 3D spatial feature maps (the blue line and points are the 3D feature locations and the red marks are their projections onto the YZ/XZ planes). Note that our networks differ from the standard 2D/3D u-nets in their input sizes ($256*256$ for the 2D u-net, $512*512*512$ for the 3D u-net) and in the use of the binary cross-entropy loss function for the classification problem.}\label{overview of framework}
\end{center}
\end{figure*}
\section{Datasets}
To train and test our approach, we use a synthetic and a real XCT dataset. The synthetic dataset uses randomly located point and line features superimposed over randomly placed polyhedra and spheres, each with varying attenuation. The real XCT data was generated during a proof of concept experiment to study the consolidation during carbon fibre tape layup, where thin copper wires were embedded in a carbon fibre tape to act as fiducial markers.
\subsection{The Synthetic Dataset}
Our synthetic dataset consists of 100 3D images, generated by randomly generating 10 shapes (either polyhedra or ellipsoids), each with a random orientation, random dimensions and random attenuation values. To generate the volume, the attenuation of overlapping shapes is added to generate the attenuation in the overlapping region. The shapes are restricted to lie within the cylinder that is covered by the X-ray cone-beam by removing the parts of each object that lie outside the cylinder. 3 random points and one random line feature are then added. Point and line features are generated with 10 different attenuation values for each volume so that we have 1000 different volumes. Each volume is then projected twice using a cone-beam geometry generated using the Astra Toolbox \cite{van2015astra}, where the two projection directions are rotated by 90 degrees relative to each other. A 3D rendering of an example is shown in Fig. \ref{one sample}.
The original 3D volumes have $512*512*512$ voxels whilst the projections have $1024*1024$ pixels, from which we can generate 144 overlapping blocks of $256*256$ pixels per projection. We show three small blocks of randomly selected 2D training data pairs in Fig. \ref{training samples}. To make the synthetic data more realistic, each image uses different intensities for the point and line features, and the gray values of the point and line features are smoothed with a Gaussian blur. All attenuation values are drawn from a uniform random distribution with values between 0 and 1.
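For illustration, point features could be superimposed on a volume with a short helper like the following sketch; the function and its parameter values are our own illustrative assumptions, not the exact generation code.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def add_point_features(volume, n_points=3, sigma=1.0, seed=0):
    # Superimpose point features with random attenuation in [0, 1),
    # blurred with a Gaussian to mimic finite system resolution.
    rng = np.random.default_rng(seed)
    feats = np.zeros_like(volume)
    for _ in range(n_points):
        z, y, x = rng.integers(0, volume.shape)
        feats[z, y, x] = rng.uniform(0.0, 1.0)
    return volume + gaussian_filter(feats, sigma=sigma)
\end{verbatim}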
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{pdf/5.png}\\
\caption{A rendering of an example of the 3D volumetric data showing polyhedra and ellipsoids as well as point and line features. }\label{one sample}
\end{center}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tabular}{lll}
$\includegraphics[width=0.9in]{pdf/6a1.png}$ & $\includegraphics[width=0.9in]{pdf/6b1.png}$ & $\includegraphics[width=0.9in]{pdf/6c1.png}$ \\
$\includegraphics[width=0.9in]{pdf/6a2.png}$ & $\includegraphics[width=0.9in]{pdf/6b2.png}$ & $\includegraphics[width=0.9in]{pdf/6c2.png}$
\end{tabular}\caption{Three sets of training samples (small blocks) for the feature detection method. The top images show the input data and the bottom images show the feature masks, which are used as the ground truth for feature detection.}\label{training samples}
\end{figure}
\subsection{The Carbon Fibre XCT dataset}
The real data was generated for a carbon fibre layup and consolidation experiment, where carbon fibre tape was deposited over a layup tool. The experiment was conducted in an X-ray scanner, and we collected X-ray projections at 60 different time points during the consolidation process. To track 3D deformation from stereo projections, two projection images were taken at each time point. 25 micrometer thick copper wires were embedded into the carbon fibre tape to act as fiducial markers for 3D tracking of movement. The experiment was conducted with a bespoke test rig placed in a Nikon XTH225 X-ray tomography system. Images were acquired with a magnification of 8, giving a field of view of about 25mm, so that each detector pixel covers about 25 micrometers squared. To allow fast imaging and to reduce noise, the data from the 2000 by 2000 pixel detector was binned into 1000 by 1000 pixels. The images are pre-processed by converting the measured X-ray intensity into attenuation values \cite{kak2001principles}.
The dataset thus consists of 60 pairs of projection images. The lines were annotated in each projection by hand. Two different projection images are shown on the left in Fig. \ref{2 large image samples}, where we also show the hand-annotated locations of three linear features on the right. As the copper wires appear extremely faint in the original projections, we also generated augmented datasets, in which we changed the attenuation values along these features by either doubling or halving them.
\begin{figure}
\centering
\subfigure[Projection samples]{
\begin{minipage}[b]{0.43\linewidth}
\includegraphics[width=1\linewidth]{pdf/8_1.png}\vspace{4pt}
\includegraphics[width=1\linewidth]{pdf/8_2.png}
\end{minipage}}
\subfigure[Ground truth samples]{
\begin{minipage}[b]{0.43\linewidth}
\includegraphics[width=1\linewidth]{pdf/8_11.png}\vspace{4pt}
\includegraphics[width=1\linewidth]{pdf/8_22.png}
\end{minipage}}
\caption{A pair of x-ray stereo projection images from a carbon fibre layup experiment. Two projections are shown on the left and hand-annotated images that show the feature locations are shown on the right. We limit the evaluation of the confusion matrix to the yellow box to reduce the number of non-feature pixels and thus present the evaluation results more clearly.
}\label{2 large image samples}
\end{figure}
\section{Experimental evaluation}
We evaluate the feature detection and the 3D feature mapping steps independently.
\subsection{General training and evaluation approach}
The 2D classification network for feature detection is trained using the projections as inputs and the binary images showing point locations as outputs. For the 3D volumetric reconstruction experiment, following Eq. \ref{eq:1.2}, the inputs are the filtered back-projection volume images generated from the feature maps detected with the feature detection network.
Both networks are implemented using TensorFlow 2.0 and trained on an Nvidia Titan XP graphics card. We use the Adam optimiser with a learning rate of $10^{-4}$, run for 100 epochs.
\subsection{Feature Detection}
Our first feature detection experiment uses the synthetic data-set. We split the data into test and train sets so that we use 95 of the volumes for training and the projections from the remaining 5 volumes for testing. Note that for each of these volumes, we have 10 different feature intensities and two projections each, so that we have 1900 projections for training and 100 independently generated projections for testing.
Once trained, the performance of the method on 3 of the test samples (small blocks) is demonstrated in Fig. \ref{Feature result s-s}, where we show three examples, with the original projections on the left, the predicted feature locations in the centre and the ground truth locations on the right. To evaluate the performance numerically, we show the normalised confusion matrix in Table \ref{synthetic test data CM}. We also show the receiver operating characteristic (ROC) curve in Fig. \ref{ROC}. The test curve is close to that of a perfect classifier, as indicated by an area under the curve (AUC) close to 1.
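For clarity, the entries and rates reported in the confusion-matrix tables are computed per pixel as in the following simple helper (our own sketch, assuming binary ground-truth masks):
\begin{verbatim}
import numpy as np

def confusion_rates(pred_prob, truth, thr=0.5):
    # Per-pixel confusion entries for binary feature masks.
    p, t = pred_prob >= thr, truth.astype(bool)
    tp = np.sum(p & t); fn = np.sum(~p & t)
    fp = np.sum(p & ~t); tn = np.sum(~p & ~t)
    tpr = tp / (tp + fn)  # true positive rate
    fpr = fp / (fp + tn)  # false positive rate
    ppv = tp / (tp + fp)  # positive predictive value
    return tpr, fpr, ppv
\end{verbatim}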
\begin{figure}[htbp]
\centering
\begin{tabular}{ccc}
Input & Output & Truth \\
$\includegraphics[width=0.9in]{pdf/9a1.png}$ & $\includegraphics[width=0.9in]{pdf/9a2.png}$ & $\includegraphics[width=0.9in]{pdf/9a3.png}$ \\
$\includegraphics[width=0.9in]{pdf/9b1.png}$ & $\includegraphics[width=0.9in]{pdf/9b2.png}$ & $\includegraphics[width=0.9in]{pdf/9b3.png}$ \\
$\includegraphics[width=0.9in]{pdf/9c1.png}$ & $\includegraphics[width=0.9in]{pdf/9c2.png}$ & $\includegraphics[width=0.9in]{pdf/9c3.png}$
\end{tabular}\caption{Three test samples with features at different gray-levels. The first column shows the 2D projection, the second column is the estimated feature locations and the last column is the ground truth.}\label{Feature result s-s}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|l|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{Confusion matrix}} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}\\Predicted condition\\\end{tabular}} & \\
\cline{3-4}
\multicolumn{2}{|c|}{} & \begin{tabular}[c]{@{}c@{}}\\Positive\\\end{tabular} & Negative & \\
\hline
\multirow{2}{*}{Actual condition} & Positive & \begin{tabular}[c]{@{}c@{}}\\TP = 719\\\end{tabular} & \begin{tabular}[c]{@{}c@{}}\\FN = 52\\\end{tabular} & \multicolumn{1}{c|}{TPR=TP/(TP+FN)=0.933} \\
\cline{2-5}
& Negative & \begin{tabular}[c]{@{}c@{}}\\FP = 37\\\end{tabular} & TN = 64728 & \multicolumn{1}{c|}{FPR=FP/(FP+TN)=0.0006} \\
\hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & \begin{tabular}[c]{@{}c@{}}\\PPV = \\TP/(TP+FP) = 0.951\\\end{tabular} & \begin{tabular}[c]{@{}c@{}}FOR =\\FN/(FN+TN) = 0.0008\end{tabular} & \\
\hline
\end{tabular}\caption{Confusion matrix on synthetic test data.}\label{synthetic test data CM}
\end{table*}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{pdf/11.png}\\
\caption{ROC curve for our test data for the synthetic data-set. The AUC is 0.9937.}\label{ROC}
\end{center}
\end{figure}
As true X-ray data have larger projection images but fewer training examples, we train the network on image blocks. From the 120 projection images, we thus generate 11520 overlapping blocks of $256*256$ pixels and use these as our training set.
We split the data into test and train sets by taking the projections from one direction as training examples and the projections from the other direction as test samples. Example results are shown in Fig. \ref{cf2 result samples}, where we again show predicted feature locations (left), ground truth (middle) as well as the projections (right). The confusion matrix is shown in Table \ref{real test data on CM} and the corresponding ROC curve (in blue) in Fig. \ref{ROC comparsion}.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{pdf/12.png}\\
\caption{Results on two test samples. The first column is the predicted feature location, the second column is the ground truth feature location, and the last column is the original projection image.}\label{cf2 result samples}
\end{center}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|l|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{Confusion matrix}} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}\\Predicted condition\\\end{tabular}} & \\
\cline{3-4}
\multicolumn{2}{|c|}{} & \begin{tabular}[c]{@{}c@{}}\\Positive\\\end{tabular} & Negative & \\
\hline
\multirow{2}{*}{Actual condition} & Positive & \begin{tabular}[c]{@{}c@{}}\\TP = 1214\\\end{tabular} & \begin{tabular}[c]{@{}c@{}}\\FN = 89\\\end{tabular} & \multicolumn{1}{c|}{TPR=TP/(TP+FN)=0.932} \\
\cline{2-5}
& Negative & \begin{tabular}[c]{@{}c@{}}\\FP = 264\\\end{tabular} & TN = 129505 & \multicolumn{1}{c|}{FPR=FP/(FP+TN)=0.002} \\
\hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & \begin{tabular}[c]{@{}c@{}}\\PPV = \\TP/(TP+FP) = 0.821\\\end{tabular} & \begin{tabular}[c]{@{}c@{}}FOR =\\FN/(FN+TN) = 0.0001\end{tabular} & \\
\hline
\end{tabular}\caption{Confusion matrix on real test data.}\label{real test data on CM}
\end{table*}
To further explore the influence of the intensity of the features relative to the background attenuation in the projection image, we compare the performance of the model when trained on modified datasets in which the features are 0.5 or 1.5 times as strongly attenuating. Example images with features of different strengths are shown in Fig. \ref{intensity comparsion}. Here we use 0.5 times to represent weak intensity, 1 times for normal intensity and 1.5 times for strong intensity. Classification performance is shown in Fig. \ref{ROC comparsion}, where we show the ROC curves for the different feature intensities. As long as the features are not too weak, our model correctly detects them.
\begin{figure}
\centering
\begin{tabular}{rccc}
\begin{sideways}Intensity\end{sideways} & 0.5x & 1x & 1.5x \\
\begin{sideways}Input\end{sideways} & $\includegraphics[width=0.9in]{pdf/15_1.png}$ & $\includegraphics[width=0.9in]{pdf/15_2.png}$ & $\includegraphics[width=0.9in]{pdf/15_3.png}$ \\
\begin{sideways}Output\end{sideways} & $\includegraphics[width=0.9in]{pdf/15_4.png}$ & $\includegraphics[width=0.9in]{pdf/15_5.png}$ & $\includegraphics[width=0.9in]{pdf/15_6.png}$ \\
& (a) & (b) & (c)
\end{tabular}\caption{A comparison of the model's performance with features of different strengths. From left to right, we show features that are 0.5 time, 1 times and 1.5 times as strongly attenuating relative to the features in the original data. The top row shows the same block from one projection image but with different feature attenuation and the bottom row shows the detected features.}\label{intensity comparsion}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{pdf/16.png}\\
\caption{ROC curve for the real data but with varying feature attenuation. The red curve shows features that are 1.5 times stronger than those in the original data, the blue curve is the original feature strength and the green curve is the performance if features have half the attenuation. The stronger features lead to perfect recovery, whilst the AUC for the less strong features drops to 0.9230.}\label{ROC comparsion}
\end{center}
\end{figure}
\subsection{Experiments of Feature Volumetric Reconstruction}
To evaluate our approach to feature matching and 3D location estimation, we use the synthetic data-set only, as we do not have the ground truth for the real dataset. We train the 3D U-net on the estimated feature maps from the training dataset, using the ground truth 3D location of feature as the target. We then evaluate the method using the test data.
Fig. \ref{occulsion case} shows an example where the geometry does not allow us to find unique 3D mappings: in the second projection image, a point feature is occluded by the line feature, so that the point feature from the first view can only be matched with its corresponding point in the other view if we assume that the two features overlap in one image. We find that even in this difficult case, the neural network can handle the uncertainty; the rendering of the true and estimated 3D feature locations for this example is shown in Fig. \ref{mapping result and gt}. We report the average confusion matrix over 10 of our test samples in Table \ref{3D mapping CM}, and numerically evaluate the performance of this approach by comparing the absolute differences between the estimated and true centre positions of the point features and of the line feature endpoints for the 10 test samples. We found that the average absolute difference is lower than 1.5 pixels.
\begin{figure}
\centering
\begin{tabular}{cc}
First view & Second view \\
$\includegraphics[width=1.6in]{pdf/17a.png}$ & $\includegraphics[width=1.6in]{pdf/17b.png}$ \end{tabular}\caption{A pair of projections from a synthetic test sample. The first view shows all three point features as well as the line feature, whilst in the second view, a point feature overlaps with the line feature and is thus not visible.}\label{occulsion case}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
3D mapping & Ground truth \\
$\includegraphics[width=1.6in]{pdf/18a.png}$ & $\includegraphics[width=1.6in]{pdf/18b.png}$ \end{tabular}\caption{Rendering of the 3D location of the features reconstructed from the estimated feature positions in the projections as shown in Fig. \ref{occulsion case}, showing that the method still reconstructs the features, even though they are not all visible in both projection views.}\label{mapping result and gt}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|l|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{Confusion matrix}} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}\\Predicted condition\\\end{tabular}} & \\
\cline{3-4}
\multicolumn{2}{|c|}{} & \begin{tabular}[c]{@{}c@{}}\\Positive\\\end{tabular} & Negative & \\
\hline
\multirow{2}{*}{Actual condition} & Positive & \begin{tabular}[c]{@{}c@{}}\\TP = 12206\\\end{tabular} & \begin{tabular}[c]{@{}c@{}}\\FN = 2273\\\end{tabular} & \multicolumn{1}{c|}{TPR=TP/(TP+FN)=0.843} \\
\cline{2-5}
& Negative & \begin{tabular}[c]{@{}c@{}}\\FP = 2119\\\end{tabular} & TN = \num{1.34e8} & \multicolumn{1}{c|}{FPR=FP/(FP+TN)=\num{1e-5}} \\
\hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & \begin{tabular}[c]{@{}c@{}}\\PPV = \\TP/(TP+FP) = 0.852\\\end{tabular} & \begin{tabular}[c]{@{}c@{}}FOR =\\FN/(FN+TN) = \num{1e-5}\end{tabular} & \\
\hline
\end{tabular}\caption{Confusion matrix for 3D mapping result.}\label{3D mapping CM}
\end{table*}
To demonstrate the improved performance of our neural network approach, we also match the features of Fig. \ref{occulsion case} and map them into 3D space via the epipolar geometric information as a reference. In this epipolar geometric method, the intersection point in 3D space of the back-projections generated by the two single views is the position of the feature point in 3D space, which is mathematically exact. However, given the finite voxel and pixel sizes in the 3D volume and the 2D projections, there will inevitably be errors when selecting the centre of a feature point. As shown in Fig. \ref{epipolar geometric 3D mapping}, the absolute pixel error for the three point features and the two endpoints of the line feature is 1. Comparing Fig. \ref{epipolar geometric 3D mapping} with Fig. \ref{mapping result and gt}, our neural network method performs as well as the epipolar geometric method, but crucially also works in some of the cases where the epipolar method fails due to point occlusion.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{pdf/19.png}\\
\caption{Features mapped into 3D space from the two feature-detected projections by the epipolar geometric method, with a replaced projection for the second view because a point feature is occluded by the line feature.}\label{epipolar geometric 3D mapping}
\end{center}
\end{figure}
\section{Discussion and conclusions}
In this paper, we introduce the concept of stereo X-ray tomography, where we use two (or possibly three) X-ray projection images to estimate the spatial locations of features in 3D space. Whilst we are not able to reconstruct arbitrary objects in 3D without additional prior information, we show that it is possible to reconstruct the locations of point and line features in 3D. This can have many applications in tomographic imaging, especially when we are unable to collect full tomographic projections, which is useful when mapping dynamic processes that are too fast for full tomographic acquisition. For these imaging systems, setups with two (or three) X-ray source and detector systems can be envisaged that inspect an object at roughly 90-degree angles. In this setting, the main challenge is the identification and matching of points in the individual projections. Methods used in stereo vision, which typically rely on feature matching methods that match entire pixel neighbourhoods, do not work in transmission tomography. Instead, we propose the use of a learned feature detector together with a feature matching method that exploits epipolar geometry constraints. We have shown the robustness of the feature detection method and could demonstrate that, for problems with few features where unique matching is possible, a simple 3D U-net can map back-projected feature maps into 3D locations. It should be stressed that for two projection images (binocular stereo CT), feature matching using the epipolar constraint can become a challenge when we have larger numbers of features, as matching is not uniquely possible if two features lie on the same epipolar plane in 3D. In this case, matching is however much more likely to succeed if we have three projections (trinocular stereo CT), such that none of the epipolar planes are parallel.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13178",
"language": "en",
"timestamp": "2023-02-28T02:11:58",
"url": "https://arxiv.org/abs/2302.13178",
"yymm": "2302"
} | \section{Introduction}
The use of \acp{ELAA} to further boost the potential of the massive \ac{MIMO} concept is a key ingredient to enable new use cases and emerging applications in the evolution towards 6G \cite{Emil2019}. In these systems, also referred to as \ac{XL}-\ac{MIMO} systems, the number of antennas at the \ac{BS} is in the order of hundreds, which enables to serve a much larger set of users. One of the crucial challenges when considering the case of XL-MIMO is that, when the user distances to the \ac{BS} are comparable to the \ac{BS} size, the consideration of \ac{SW} propagation is instrumental to exploit the features of this regime \cite{LuZe22,Cui23}.
The promising attributes of \ac{XL}-\ac{MIMO} systems in multi-user set-ups come hand in hand with important physical layer challenges: first, the cost of training overhead for channel estimation is proportional to the number of users and antennas, which translates into a larger complexity and also penalizes the achievable throughput \cite{Marzetta2010}. Similarly, the complexity of the precoding design and the selection of the optimal set of users for transmission grows with the problem dimension (i.e., antennas and users) \cite{Muller2016,YoGo06}. Besides, a precise channel characterization beyond the simplified \ac{LoS} and one-ring approaches conventionally assumed in this context \cite{LuZe22,Han2020} is required to realistically predict network performance \cite{Chen20}. Finally, although the assumption of perfect \ac{CSI} availability is in widespread use in the \ac{XL}-\ac{MIMO} literature \cite{Marinello2020,GoLoCa21,Filho2022}, the channel \textit{does} vary due to microscopic perturbations affecting \textit{both} the dominant specular components and the diffusely propagating multipath waves \cite{Demir22}, which introduces \ac{CSI} uncertainty and reduces the system performance \cite{TrHe13}.
Aiming to address the aforementioned challenges in the context of \ac{XL}-\ac{MIMO} systems, we analyze the problem of joint user scheduling and precoding in a realistic set-up. Specifically, we investigate the achievable \ac{SE} in a \ac{DL} multi-user scenario with a very large number of \ac{BS} antennas and users, in which user selection and precoding design are performed using outdated and noisy channel estimations. SW propagation features are integrated with a general channel model that combines the characteristics of the one-ring model with those of spatially correlated diffuse components \cite{Demir22}. We propose a channel training procedure in which only a subset of users participates in the training stage, thus reducing the training overhead and improving the achievable \ac{SE}. We design a joint user scheduling and precoding algorithm using imperfect and outdated channel estimates, and analyze the performance loss incurred due to such imperfect \ac{CSI}. Results show that the proposed algorithm outperforms other benchmark approaches based on \ac{SUS} when the cost of training overhead is considered.
\textit{Notation:} Lower and upper case bold letters denote vectors and matrices, while $\mathbb{C}^M$ is the complex vector space of dimension $M$; $\left ( \cdot \right )^{\Transpose}$, $\left ( \cdot \right )^{\Hermitian}$, and $\trace\left ( \cdot \right )$ denote the transpose, Hermitian transpose, and trace operations; $\Real\{\cdot\}$ and $\Imag\{\cdot\}$ represent the real and the imaginary parts of a complex number; the symbol $\sim$ reads as \emph{statistically distributed as}, and $\Exp\big[\cdot\big]$ is the statistical expectation; $\mathcal{N}_\mathbb{C}(\mathbf{0},\B{C})$ is a zero-mean complex Gaussian distribution with covariance $\B{C}$, and $\mathcal{U}[a,b)$ is a uniform distribution on the interval $[a,b)$; $\|\cdot\|$ is the Euclidean norm, and $|\cdot|$ a set cardinality.
\section{System model}
\label{sec:model}
Let us consider the \ac{DL} of an \ac{XL}-\ac{MIMO} setup, where the \ac{BS} equipped with an \ac{ELAA} with $M\gg 1$ elements sends data to $K$ single-antenna users.
We assume a 2-D scenario where the \ac{BS} \ac{ELAA} is an \ac{ULA} centered at the origin of a circular coordinate system. In this scenario, each position is determined by a given radius $r_k$, which represents the distance to the center of the \ac{ULA}, and a given angle $\theta_k$. The signal arrives at the users through a \ac{LoS} channel component, and/or after several reflections. This is captured by the general channel model in \cite{Demir22}, so that the $k$-th user channel vector $\B{h}_{\rm k}\in\mathbb{C}^M$ is expressed as
\begin{equation}
\B{h}_{\rm k}=\sum\nolimits_{s=1}^{S_k}{\text{e}}^{{\text{j}} \varphi_{k,s}}\bar{\B{h}}_{\rm k,\rm s}+\tilde{\B{h}}_{\rm k},
\label{eq:channelModel}
\end{equation}
where $\varphi_{k,s}$ represents the phase shift due to microscopic perturbations; $\tilde{\B{h}}_{\rm k}$ is the contribution of diffuse scattering, i.e., the summation of numerous weak multipath waves; $\bar{\B{h}}_{\rm k,\rm s}$, $s=1,\ldots,S_k$, represent the dominant specular components\footnote{When using very large \ac{MIMO} and/or highly directive steerable antennas, specular paths need to be considered for realistic channel modeling \cite{metis2015}.} for user $k$, with $s=1$ denoting the \ac{LoS} component and $s=2,\ldots,S_k$ indicating the additional specular waves -- which are regarded as \ac{NLoS} paths. Assuming a block fading channel model of length $\tau_c$, smaller than the channel coherence time, both $\varphi_{k,s}$ and $\bar{\B{h}}_{\rm k,\rm s}$ are constant $\forall k,s$ within each block.
On the contrary, $\varphi_{k,s}$ and $\tilde{\B{h}}_{\rm k}$ take independent realizations from one coherence block to another, but their statistics are available at the transmitter as follows: $\varphi_{k,s}\sim\mathcal{U}[0,2\pi)$ and $\tilde{\B{h}}_{\rm k}\sim\mathcal{N}_\mathbb{C}(\B{0},\B{R}_{\rm k})$, with $\B{R}_{\rm k}$ capturing the spatial correlation of the antenna array and the long-term channel effects \cite{Demir22}. The powers of the specular and diffuse components are $\Omega_{\rm s}\approx\sum_{s=1}^{S_k}\|\bar{\B{h}}_{\rm k,\rm s}\|^2$ and $\Omega_{\rm d}=\trace(\B{R}_{\rm k})$, respectively, with the power ratio defined as $\kappa=\frac{\Omega_{\rm s}}{\Omega_{\rm d}}$, analogous to the Rician $K$ factor.
Let $r_{k,s}$ and $\theta_{k,s}$ be the radius and the angle corresponding to either the user $k$ location for a \ac{LoS} component, or to the last reflection for a \ac{NLoS} specular path.
Hence, the response vector $\bar{\B{h}}_{\rm k,\rm s}$ in \eqref{eq:channelModel} reads as
\begin{equation}
\bar{\B{h}}_{\rm k,\rm s}=\rho_{k,s}[{\text{e}}^{-{\text{j}}\frac{2\pi}{\lambda}r_{k,s,1}},{\text{e}}^{-{\text{j}}\frac{2\pi}{\lambda}r_{k,s,2}},\ldots,{\text{e}}^{-{\text{j}}\frac{2\pi}{\lambda}r_{k,s,M}}]^{\Transpose},
\end{equation}
where $\lambda$ is the signal wavelength and $\rho_{k,s}\in(0,1]$ is the attenuation of the specular component due to physical effects like path-loss, reflection or absorption. Finally, $r_{k,s,m}$ stands for the distance corresponding to the $m$-th element of the antenna array. Considering a spherical wave-front, this distance is
\begin{align}
r_{k,s,m}=r_{k,s}\sqrt{1-2m{d}_{k,s}\sin\theta_{k,s}+{d}_{k,s}^2m^2},
\label{eq:antennaDistance}
\end{align}
with $m\in\left[-\tfrac{M}{2},\tfrac{M}{2}\right]$, $d_{k,s}=\frac{d}{r_{k,s}}$, and $d$ being the \ac{ULA} inter-antenna distance.
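For concreteness, the channel model in \eqref{eq:channelModel}--\eqref{eq:antennaDistance} can be prototyped numerically. The following Python sketch (illustrative only; the function names and random-number handling are our own choices, not part of the system model) draws one channel realization by combining the spherical-wavefront specular components with a spatially correlated diffuse term:
\begin{verbatim}
import numpy as np

def specular_component(M, lam, d, r_ks, theta_ks, rho_ks):
    # Response vector of one specular path with a spherical
    # wave-front, following Eqs. (2)-(3)
    m = np.arange(M) - M / 2            # indices m in [-M/2, M/2)
    d_ks = d / r_ks
    r_m = r_ks * np.sqrt(1 - 2 * m * d_ks * np.sin(theta_ks)
                         + (d_ks * m) ** 2)
    return rho_ks * np.exp(-1j * 2 * np.pi / lam * r_m)

def draw_channel(M, lam, d, paths, R, rng):
    # One realization of Eq. (1): specular terms with i.i.d.
    # uniform phases plus a diffuse term h_tilde ~ CN(0, R).
    # `paths` is a list of (r_ks, theta_ks, rho_ks) tuples.
    h = np.zeros(M, dtype=complex)
    for r_ks, theta_ks, rho_ks in paths:
        phi = rng.uniform(0, 2 * np.pi)
        h += np.exp(1j * phi) * specular_component(
            M, lam, d, r_ks, theta_ks, rho_ks)
    L = np.linalg.cholesky(R + 1e-12 * np.eye(M))  # CN(0, R) sample
    w = (rng.standard_normal(M)
         + 1j * rng.standard_normal(M)) / np.sqrt(2)
    return h + L @ w
\end{verbatim}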
The data symbols to be transmitted, denoted as $s_k\sim\mathcal{N}_\mathbb{C}(0,1)$, $k=1,\ldots,K$, are processed using linear precoding vectors ${\bf{p}}_{\rm k}\in\mathbb{C}^M$ that satisfy the power constraint $\sum_{k=1}^K\|{\bf{p}}_{\rm k}\|_2^2\leq P_\text{TX}$, with $P_\text{TX}$ being the available transmit power at the \ac{BS}. At user $k$, the received signal is
\begin{equation}
\label{eq4}
y_k=s_k{\bf{p}}_{\rm k}^{\Hermitian}{\bf{h}}_{\rm k}+\sum\nolimits_{j\neq k}s_j{\bf{p}}_{\rm j}^{\Hermitian}{\bf{h}}_{\rm k}+n_k,
\end{equation}
where the intended symbol $s_k$ is affected by \ac{IUI} and \ac{AWGN} represented by $n_k\sim\mathcal{N}_\mathbb{C}(0,\sigma^2_n)$. By considering the \ac{IUI} (second term in \eqref{eq4}) as noise and assuming Gaussian signaling, the achievable \ac{SE} for user $k$ is defined as \cite{YoGo06}
\begin{align}
\label{eq:sumRate}
R_k=\left(1-\frac{\tau_p}{\tau_c}\right)\log_2\Bigg(1+\tfrac{|{\bf{p}}_{\rm k}^{\Hermitian}{\bf{h}}_{\rm k}|^2}{\sigma^2_n+\sum_{j\in\mathcal{S},j\neq k}|{\bf{p}}_{\rm j}^{\Hermitian}{\bf{h}}_{\rm k}|^2}\Bigg),
\end{align}
where $\tau_p$ is the number of channel uses employed for training on each coherence block.
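As a minimal numerical sketch (with our own, hypothetical function name), the per-user achievable \ac{SE} in \eqref{eq:sumRate} can be evaluated from the precoders and channels as follows:
\begin{verbatim}
import numpy as np

def achievable_se(P, H, sigma2, tau_p, tau_c):
    # P: K x M precoders (rows p_k); H: K x M channels (rows h_k)
    G = np.abs(np.conj(P) @ H.T) ** 2   # G[j, k] = |p_j^H h_k|^2
    sig = np.diag(G)                    # desired-signal powers
    iui = G.sum(axis=0) - sig           # inter-user interference
    return (1 - tau_p / tau_c) * np.log2(1 + sig / (sigma2 + iui))
\end{verbatim}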
The goal is to find the set of users $\mathcal{S}\subseteq\{1,\ldots,K\}$, and their corresponding precoders, that can be served in the same time-frequency resource, so that the achievable \ac{SE} is maximized, i.e.,
\begin{equation}
\argmax_{\{{\bf{p}}_{\rm k}\}_{k\in\mathcal{S}}} \sum\nolimits_{k\in\mathcal{S}}R_k\quad\text{s.t.}\quad\sum\nolimits_{k\in\mathcal{S}}\|{\bf{p}}_{\rm k}\|_2^2\leq P_\text{TX}.
\label{eq:problemForm}
\end{equation}
\section{User scheduling with imperfect CSI}
It is clear from \eqref{eq:problemForm} that the system performance strongly depends on the determination of the user set $\mathcal{S}$. While the consideration of instantaneous \ac{CSI} fully available at the BS is a typical assumption in the literature \cite{GoLoCa21,Filho2022,Souza2022,Ribeiro2021}, channel uncertainties due to channel aging and imperfect estimation also impact this choice \cite{Chen20}. Besides, the cost of channel training reduces the achievable \ac{SE} as a pre-log factor in \eqref{eq:sumRate}, an issue usually neglected in the related literature. With all these aspects in mind, we tackle the problem of user scheduling under imperfect \ac{CSI}. This is accomplished in two steps: first, we define an \textit{equivalent channel gain} metric to estimate user priorities based on the channel estimates available at the \ac{BS}; next, we proceed to the joint precoding and scheduling design based on imperfect \ac{CSI}.
\subsection{Equivalent channel gains}
According to the block fading channel model described in the previous section, the diffuse component $\tilde{\B{h}}_{\rm k}$ and phase shifts $\varphi_{k,s}$ in \eqref{eq:channelModel} are unknown to the BS \cite{Demir22}. Hence, the expected channel gain for user $k$ is estimated as:
\begin{align}
&\Exp\big[\|\B{h}_{\rm k}\|_2^2\big]=\sum\nolimits_{s=1}^{S_k}\|\bar{\B{h}}_{\rm k,\rm s}\|_2^2+2\sum\nolimits_{s=1}^{S_k}\Exp\big[\Real\{\tilde{\B{h}}_{\rm k}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}{\text{e}}^{{\text{j}} \varphi_{k,s}}\}\big]\nonumber\\
&+2\sum\nolimits_{z>s}^{S_k}\Exp\big[\Real\{\bar{\B{h}}_{\rm k,\rm z}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}{\text{e}}^{{\text{j}} (\varphi_{k,s}-\varphi_{k,z})}\}\big]+\Exp\big[\|\tilde{\B{h}}_{\rm k}\|_2^2\big]\nonumber\\
&\overset{(a)}{=}\sum\nolimits_{s=1}^{S_k}\|\bar{\B{h}}_{\rm k,\rm s}\|_2^2+2\sum\nolimits_{s=1}^{S_k}\Exp\big[\Real\{\tilde{\B{h}}_{\rm k}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}{\text{e}}^{{\text{j}} \varphi_{k,s}}\}\big]+\trace(\B{R}_{\rm k})\nonumber\\
&\overset{(b)}{=}\sum\nolimits_{s=1}^{S_k}\|\bar{\B{h}}_{\rm k,\rm s}\|_2^2+\trace(\B{R}_{\rm k}).
\label{eq:avgChannelGain}
\end{align}
In \eqref{eq:avgChannelGain}, equality (\textit{a}) follows because the expectations of the cross-products are approximately zero due to the phase uncertainty: first, small values of $\bar{\B{h}}_{\rm k,\rm z}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}$, $s\neq z$, are likely; second, even if these products are non-negligible, we obtain
\begin{align}
\Exp\big[\Real\{\bar{\B{h}}_{\rm k,\rm z}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}&{\text{e}}^{{\text{j}} (\varphi_{k,s}-\varphi_{k,z})}\}\big]=\notag\\
&\Real\{\bar{\B{h}}_{\rm k,\rm z}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}\}\Exp\big[\Real\{{\text{e}}^{{\text{j}} (\varphi_{k,s}-\varphi_{k,z})}\}\big]\label{eq:gainTerm}\\
&-\Imag\{\bar{\B{h}}_{\rm k,\rm z}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}\}\Exp\big[\Imag\{{\text{e}}^{{\text{j}} (\varphi_{k,s}-\varphi_{k,z})}\}\big]\notag.
\end{align}
Focusing now on the first term in \eqref{eq:gainTerm} (a similar rationale applies to the second one), we obtain
\begin{align*}
&\Exp\big[\Real\{{\text{e}}^{{\text{j}} (\varphi_{k,s}-\varphi_{k,z})}\}\big]=\Exp\big[\cos(\varphi_{k,s})\cos(\varphi_{k,z})\big]\\
&+\Exp\big[\sin(\varphi_{k,s})\sin(\varphi_{k,z})\big]=\\
&\left(\tfrac{1}{2\pi}\int_{0}^{2\pi}\cos(\varphi)d\varphi\right)^2+\left(\tfrac{1}{2\pi}\int_{0}^{2\pi}\sin(\varphi)d\varphi\right)^2=0,
\end{align*}
where the last equality comes from the i.i.d. uniform distribution of the phase shifts related to the microscopic perturbations of the channel paths in \eqref{eq:channelModel}. For equality (\textit{b}) in \eqref{eq:avgChannelGain} we use
\begin{align}
\Exp\big[\Real\{\tilde{\B{h}}_{\rm k}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}{\text{e}}^{{\text{j}} \varphi_{k,s}}\}\big]&=
\Exp\big[\Real\{\tilde{\B{h}}_{\rm k}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}\}\big]\Exp\big[\cos(\varphi_{k,s})\big]\notag\\
&-\Exp\big[\Imag\{\tilde{\B{h}}_{\rm k}^{\Hermitian}\bar{\B{h}}_{\rm k,\rm s}\}\big]\Exp\big[\sin(\varphi_{k,s})\big]=0,\notag
\end{align}
which is a consequence of the independence between $\tilde{\B{h}}_{\rm k}\sim\mathcal{N}_\mathbb{C}(\B{0},\B{R}_{\rm k})$ and $\varphi_{k,s}\sim\mathcal{U}[0,2\pi)$.
Now, when the number of users $K$ and antennas $M$ are large, determining $ \Exp\big[\|\B{h}_{\rm k}\|_2^2\big]$ $\forall k$ is costly. Therefore, we propose to approximate \eqref{eq:avgChannelGain} by the following expression
\begin{equation}
\Exp\big[\|\B{h}_{\rm k}\|_2^2\big]\approx M\left(1+\frac{1}{\kappa}\right)\sum\nolimits_{s=1}^{S_k}\rho_{k,s}^2=g_k.
\label{eq:gain}
\end{equation}
Note that the \textit{equivalent} channel gain $g_k$ provides an approximation to the expected channel gains based on the long-term \ac{CSI} available at the \ac{BS}. Building on this idea, we will develop an iterative method to jointly select the users to be served by the \ac{BS} and design their precoding vectors for \ac{DL} transmission.
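As an illustration, the metric in \eqref{eq:gain} reduces to a one-line computation from the long-term parameters; a minimal Python sketch under the stated model assumptions (the function name is ours) is:
\begin{verbatim}
import numpy as np

def equivalent_gain(M, kappa, rho):
    # Eq. (8): rho holds the attenuations rho_{k,s} of the
    # S_k specular paths of user k
    return M * (1 + 1 / kappa) * np.sum(np.asarray(rho) ** 2)
\end{verbatim}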
\subsection{Imperfect CSI scheduling and precoding}
Alg.~\ref{alg:Scheduler} presents the proposed imperfect-CSI scheduling and precoding (ISP) algorithm, which is based on the following rationale: during data transmission over the $n$-th coherence block, the users to be served in block $n+1$ are scheduled one by one. The algorithm starts by determining the expected channel gains $g_k$ $\forall k \in \{1,\ldots,K\}$ as in \eqref{eq:gain}, using the long-term channel statistics, and scheduling the one with the largest gain. In subsequent iterations, the $g_k$ values are refined with a penalty factor accounting for the \ac{IUI} caused by previously scheduled users and, again, the user with the largest gain is scheduled. This penalty factor also considers the \ac{CSI} available at the \ac{BS} for the $n$-th coherence block. To further detail Alg.~\ref{alg:Scheduler}, let $\mathcal{S}^{(\ell)}$ be the set of scheduled users at iteration $\ell$, with $|\mathcal{S}^{(\ell)}|=\ell$. Therefore, the {equivalent} channel gain for a user $k\notin \mathcal{S}^{(\ell)}$ at iteration $\ell$, accounting for \ac{IUI}, is given by
\begin{equation}
g_k^{(\ell)}=g_k-\sum\nolimits_{s=1}^{S_k}\bar{\B{h}}_{\rm k,\rm s}^{\Hermitian}\B{F}^{(\ell)}\bar{\B{h}}_{\rm k,\rm s}-\trace\left(\B{R}_{\rm k}\B{F}^{(\ell)}\right),
\label{eq:gupadte}
\end{equation}
where $\B{F}^{(\ell)}=\sum_{i=1}^\ell\B{f}_{\rm k_i}^{(i)}(\B{f}_{\rm k_i}^{(i)})^{{\Hermitian}}$, with $\B{f}_{\rm k_i}^{(i)}$ the unit-norm precoder for user $k_i\in\mathcal{S}^{(\ell)}$ selected at iteration $i$. The update of $g_k^{(\ell)}$ in \eqref{eq:gupadte} relies on computations similar to those in \eqref{eq:avgChannelGain}, but the spatial correlation matrix $\B{R}_{\rm k}$ and the matrix of precoders $\B{F}^{(\ell)}$ must be determined based on imperfect \ac{CSI}. An expression for the spatial correlation matrix $\B{R}_{\rm k}$ is given in the following lemma. For the sake of notational simplicity, the index $k$ is dropped.
\begin{lemma}\label{thm}
Let $[\B{R}]_{m,n}$ be the element at row $m$ and column $n$ of the spatial correlation matrix $\B{R}$ given by
\begin{equation}
\label{eqR}
[\B{R}]_{m,n}=\beta\int {\text{e}}^{-{\text{j}}\frac{2\pi}{\lambda}r_m}{\text{e}}^{{\text{j}}\frac{2\pi}{\lambda}r_n}f(\theta)d\theta,
\end{equation}
where $\beta$ is the average gain, $r_m$ and $r_n$ are the distances corresponding to the $m$-th and $n$-th antennas, and $f(\theta)$ is the probability density function of the angle $\theta$. Then, $[\B{R}]_{m,n}$ can be computed in closed-form as
\begin{align}
[\B{R}]_{m,n}&=\tfrac{\beta{\text{e}}^{{\text{j}}\left(a-\frac{b^2}{4c}\right)}\sqrt{\pi}(1+{\text{j}})}{4\varphi\sqrt{2c}}\bigg[\phi\left(\tfrac{1-{\text{j}}}{\sqrt{2}}\big(\sqrt{c}\varphi+\tfrac{b}{2\sqrt{c}}\big)\right)\notag\\
&-\phi\left(\tfrac{1-{\text{j}}}{\sqrt{2}}\big(-\sqrt{c}\varphi+\tfrac{b}{2\sqrt{c}}\big)\right)\bigg],
\label{eq:covUniform}
\end{align}
where $\phi(\cdot)$ is the error function, $\varphi$ is the half-width of the uniform angular distribution around the nominal angle $\vartheta$ (cf. the Appendix), and
\begin{align*}
a&=\tfrac{2\pi}{\lambda}\big[(m-n)d\sin(\vartheta)+\tfrac{(n^2-m^2)d^2}{2r}\cos^2(\vartheta)\big],\\
b&=\tfrac{2\pi}{\lambda}\cos(\vartheta)[(m-n)d-\tfrac{(n^2-m^2)d^2}{r}\sin(\vartheta)],\\
c&=\tfrac{2\pi}{\lambda}\tfrac{(n^2-m^2)d^2}{2r}\sin^2(\vartheta).
\end{align*}
\end{lemma}
\begin{proof}\label{proof-th1}
See Appendix.
\end{proof}
To compute the matrix of precoders $\B{F}^{(\ell)}$ in \eqref{eq:gupadte}, we model the $k$-th user's current channel state in terms of a past state plus an innovation component \cite{TrHe13}, as follows
\begin{equation}
\B{h}_{\rm k}[n+1]=\alpha\B{h}_{\rm k}[n]+\B{z}_{\rm k}[n],
\label{eq:errorModel}
\end{equation}
where $\B{h}_{\rm k}[n]$ is the channel state at the $n$-th coherence block, $\alpha$ is a temporal correlation parameter, and $\B{z}_{\rm k}[n]\sim\mathcal{N}_\mathbb{C}(\B{0},(1-\alpha^2)\B{R}_{\rm \B{z}})$ is a channel innovation term uncorrelated with $\B{h}_{\rm k}[n]$.
By considering the estimation error incurred during the $n$-th block, $\breve{\B{h}}_{\rm k}[n]$, such that $\B{h}_{\rm k}[n]=\hat{\B{h}}_{\rm k}[n]+\breve{\B{h}}_{\rm k}[n]$, \eqref{eq:errorModel} results in
\begin{equation}
\B{h}_{\rm k}[n+1]=\alpha\hat{\B{h}}_{\rm k}[n]+\alpha\breve{\B{h}}_{\rm k}[n]+\B{z}_{\rm k}[n]=\alpha\hat{\B{h}}_{\rm k}[n]+\B{e}_{\rm k}[n],
\label{eq:himperfect}
\end{equation}
where $\B{e}_{\rm k}[n]$ is a zero-mean error term. Assuming least-squares channel estimation with an orthogonal training sequence, the covariance matrix of $\B{e}_{\rm k}[n]$ is $\B{R}_{\rm \B{e}}=\alpha^2\frac{\sigma_n^2}{ P_\text{TX}}\B{I}+(1-\alpha^2)\B{R}_{\rm \B{z}}$. Finally, for the spatial correlation of the innovation term, we derive $\Exp\big[\B{h}_{\rm k}\B{h}_{\rm k}^{\Hermitian}\big]$ in a way similar to \eqref{eq:avgChannelGain}, i.e.,
\begin{equation}
\B{R}_{\rm \B{z}}=\sum\nolimits_{s=1}^{S_k}\bar{\B{h}}_{\rm k,\rm s}\bar{\B{h}}_{\rm k,\rm s}^{\Hermitian}+\B{R}_{\rm k}.
\end{equation}
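A minimal sketch of this \ac{CSI}-uncertainty model (assuming, as above, circularly symmetric Gaussian innovations; the function names are illustrative) is:
\begin{verbatim}
import numpy as np

def age_channel(h_n, alpha, R_z, rng):
    # Eq. (10): h[n+1] = alpha*h[n] + z[n],
    # with z ~ CN(0, (1 - alpha^2) R_z)
    M = h_n.size
    L = np.linalg.cholesky((1 - alpha ** 2) * R_z
                           + 1e-12 * np.eye(M))
    w = (rng.standard_normal(M)
         + 1j * rng.standard_normal(M)) / np.sqrt(2)
    return alpha * h_n + L @ w

def error_covariance(alpha, sigma2, P_tx, R_z):
    # Covariance of e_k[n] in Eq. (11), assuming least-squares
    # estimation with orthogonal training sequences
    M = R_z.shape[0]
    return alpha ** 2 * (sigma2 / P_tx) * np.eye(M) \
           + (1 - alpha ** 2) * R_z
\end{verbatim}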
\begin{algorithm}[t]
\caption{I-CSI Scheduling-Precoding (ISP)}\label{alg:Scheduler}
\begin{algorithmic}[1]
\small
\STATEx \textbf{During data transmission stage for coherence block $n$}
\STATE $\mathcal{S}^{(0)}\gets\emptyset$, $\ell \gets 0$
\STATE $g_k$, $\forall k \gets$ initialization with \eqref{eq:gain}
\REPEAT
\REPEAT
\STATE $k\gets \max_{i\notin\mathcal{S}^{(\ell)}}g_{i}^{(\ell)}$
\STATE $g_{k}^{(\ell)}\gets$ update using \eqref{eq:gupadte}
\STATE $q\gets \max_{i\notin\mathcal{S}^{(\ell)}}g_{i}^{(\ell)}$
\UNTIL{$q=k$}
\STATE $\ell \gets \ell+1$
\STATE $\mathcal{S}^{(\ell)}\gets\mathcal{S}^{(\ell-1)}\cup\{k\}$
\STATE $\B{f}^{(\ell)}_{\rm k}\gets$ Compute ZF precoders \eqref{eq:ZF-ICSI}, $\forall k\in\mathcal{S}^{(\ell)}$
\STATE ${\bf{p}}^{(\ell)}_{\rm k}\gets\B{f}^{(\ell)}_{\rm k}p_{\rm k}^{(\ell)} $, with waterfilling power allocation
\STATE $\tau_p^{(\ell)}=(|\mathcal{S}^{(\ell)}|+|\mathcal{G}|)\dot\tau$
\STATE Compute $\sum_{k\in\mathcal{S}^{(\ell)}}R_k$ with $\B{p}^{(\ell)}_{\rm k},\hat{\B{h}}_{\rm k}[n],\forall k$, and $\tau_p^{(\ell)}$ [cf. \eqref{eq:sumRate}]
\UNTIL{$|\mathcal{S}^{(\ell)}|=K$ or performance metric decreases}
\STATE Determine $\mathcal{G}$, $\mathcal{S}\gets\mathcal{S}^{(\ell)}$
\STATEx \textbf{Training stage for coherence block $n+1$}
\STATE Estimate channels for users $k\in\mathcal{S}\cup\mathcal{G}$
\STATE Compute ZF precoders $\forall k\in\mathcal{S}$, with $\hat{\B{h}}_{\rm k}[n+1]$
\STATEx \textbf{Data transmission stage for coherence block $n+1$}
\end{algorithmic}
\end{algorithm}
Based on this \ac{CSI} uncertainty model, the \ac{ZF} precoding vectors in \eqref{eq:gupadte} $\forall k\in\mathcal{S}^{(\ell)}$ are computed as
\begin{equation}
[\tilde{\B{f}}_{\rm k_1}^{(\ell)},\ldots,\tilde{\B{f}}_{\rm k_\ell}^{(\ell)}]^{\Transpose}=(\hat{\B{H}}^{(\ell)}(\hat{\B{H}}^{(\ell)})^{\Hermitian})^{-1}\hat{\B{H}}^{(\ell)},
\label{eq:ZF-ICSI}
\end{equation}
where $\hat{\B{H}}^{(\ell)}=[\hat{\B{h}}_{\rm k_1}[n],\ldots,\hat{\B{h}}_{\rm k_\ell}[n]]^{\Transpose}$ contains the outdated channel estimates. Then, the unit-norm precoders $\B{f}_{\rm k}^{(\ell)}$ are readily obtained as $\B{f}_{\rm k}^{(\ell)}=\tilde{\B{f}}_{\rm k}^{(\ell)}/\|\tilde{\B{f}}_{\rm k}^{(\ell)}\|$. These precoders, together with the updated power allocation $p_k$, $\forall k$, and pre-log factors, are used to compute the achievable \ac{SE} in \eqref{eq:sumRate} at each iteration. ISP iterations end when selecting a new user does not improve the achievable \ac{SE}, returning the final scheduling set $\mathcal{S}$. Interestingly, as only the long-term channel conditions and the $n$-th coherence block \ac{CSI} are required for the proposed procedure, we determine the set $\mathcal{S}$ \textit{prior} to the training stage of the $(n+1)$-th coherence block. We also propose to update the \ac{CSI} at the $(n+1)$-th coherence block training stage only for those users with good channel conditions, i.e., either those scheduled for transmission ($k\in\mathcal{S}$) or those potential candidates for being scheduled ($k\in\mathcal{G}$), such that $\mathcal{S}\cap \mathcal{G}=\emptyset$. Candidate users in $\mathcal{G}$ may be selected according to different criteria; for instance, users satisfying $g_k\geq \nu$, where $\nu$ is a threshold for the expected channel gains. This reduces the training overhead and thus increases the pre-log factor in \eqref{eq:sumRate}; specifically, if $\tau_p$ scales linearly with the number of trained users \cite{Demir22}, then
\begin{equation}
\frac{\tau_c-\tau_p}{\tau_c}=\frac{\tau_c-(|\mathcal{S}|+|\mathcal{G}|)\dot\tau}{\tau_c},
\end{equation}
where $\dot\tau$ is the number of channel uses employed to acquire the \ac{CSI} of an individual user.
Once the channels for the users belonging to $\mathcal{S}$ are estimated, the \ac{BS} computes the \ac{ZF} precoders based on the available imperfect \ac{CSI}, i.e., $\hat{\B{h}}_{\rm k}[n+1]$. Overall, the key benefits of ISP are the reduction of the training overhead, and the removal of the scheduling procedure from the data transmission stage.
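To make the procedure concrete, the following Python sketch mirrors the greedy structure of Alg.~\ref{alg:Scheduler}. It is a simplified prototype rather than the exact implementation: equal power allocation stands in for waterfilling, the gains are refined lazily as in lines 5--8 of the algorithm, and a fixed \texttt{n\_extra} plays the role of $|\mathcal{G}|$ in the pre-log factor.
\begin{verbatim}
import numpy as np

def sum_se(P, H, sigma2, tau_p, tau_c):
    # Sum of Eq. (5) over the scheduled users
    G = np.abs(np.conj(P) @ H.T) ** 2
    sig = np.diag(G)
    sinr = sig / (sigma2 + G.sum(axis=0) - sig)
    return (1 - tau_p / tau_c) * np.sum(np.log2(1 + sinr))

def isp(g0, h_bar, R, H_hat, sigma2, P_tx, tau_c, tau_dot, n_extra):
    # g0[k]: long-term gains (Eq. 8); h_bar[k]: specular vectors;
    # R[k]: correlation matrices; H_hat: K x M outdated estimates
    K, M = H_hat.shape
    S, F = [], np.zeros((M, M), dtype=complex)
    g = np.array(g0, dtype=float)
    best_se, best_S = 0.0, []
    while len(S) < K:
        cand = [k for k in range(K) if k not in S]
        k = max(cand, key=lambda i: g[i])
        while True:  # refine gains (Eq. 9) until the choice is stable
            g[k] = g0[k] \
                - sum(np.real(np.vdot(h, F @ h)) for h in h_bar[k]) \
                - np.real(np.trace(R[k] @ F))
            q = max(cand, key=lambda i: g[i])
            if q == k:
                break
            k = q
        S.append(k)
        Hs = H_hat[S, :]
        Ft = np.linalg.solve(Hs @ Hs.conj().T, Hs)  # ZF, Eq. (13)
        Ft /= np.linalg.norm(Ft, axis=1, keepdims=True)
        F = Ft.T @ Ft.conj()     # sum of precoder outer products
        P = np.sqrt(P_tx / len(S)) * Ft   # equal power split
        tau_p = (len(S) + n_extra) * tau_dot
        se = sum_se(P, Hs, sigma2, tau_p, tau_c)
        if se <= best_se:        # stop when adding a user hurts
            S.pop()
            break
        best_se, best_S = se, list(S)
    return best_S, best_se
\end{verbatim}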
\section{Numerical Results}
The performance of ISP has been assessed through computer experiments. Unless stated otherwise, the simulation parameters are those in Table \ref{tab:Sim1}, which correspond to an urban environment with users moving at a relatively high speed. For this setup, the \textit{critical distance} is $135$\,m \cite{GoLoCa21}, and we consider a temporal correlation factor $\alpha=J_0(2\pi f_d T_s\tau_s)$, with $J_0(\cdot)$ the Bessel function of the first kind and zeroth order, $f_d$ the Doppler frequency, $T_s$ the sampling period, and $\tau_s$ the \ac{CSI} delay in samples \cite{Cl68}.
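For reference, with the Table~\ref{tab:Sim1} values and assuming the maximum Doppler shift $f_d=v/\lambda$, the temporal correlation factor can be evaluated with the following small snippet (illustrative only):
\begin{verbatim}
import numpy as np
from scipy.special import j0

v     = 30e3 / 3600   # user speed, 30 km/h in m/s
lam   = 0.15          # wavelength [m]
f_d   = v / lam       # Doppler frequency, assuming f_d = v/lambda
T_s   = 1 / 1e6       # sampling period for f_s = 1 MHz
tau_s = 10000         # CSI delay in samples
alpha = j0(2 * np.pi * f_d * T_s * tau_s)
\end{verbatim}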
\begin{table}[t]
\centering
\caption{{Simulation parameter settings.}}\label{tab:Sim1}
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.2}
\vspace*{-0.2cm}
\begin{tabular}{|l|l|l|l|}
\hline
{\textbf{Parameter}} & {\textbf{Value}} & {\textbf{Parameter}} & {\textbf{Value}} \\ \hline\hline
Channel realiz. & 100 &
$\#$ of users & $K=200$ \\ \hline
Specular comp. & $S_k=4$ &
Wavelength & $\lambda=0.15$ m \\ \hline
Sampling freq. & $f_s=1$ MHz
& Antenna dist. & $d=\frac{\lambda}{2}$ \\ \hline
$\#$ of antennas & $M=200$ &
Power ratio & $\kappa=2$ \\ \hline
Angular range & $[-\frac{\pi}{4}, \frac{\pi}{4}]$ rad &
Distance range & $[40,230]$ m \\ \hline
Angular std dev. & $\sigma_\delta=10^\circ$ &
Block len. (samp.) & $\tau_c=10000$ \\ \hline
CSI delay (samp.) & $\tau_s=10000$ &
User speed & $v=30$ km/h\\ \hline
\end{tabular}
\end{table}
We first evaluate in Fig. \ref{fig:diffuse} the impact of different channel configuration parameters in \eqref{eq:channelModel} under the assumption of perfect \ac{CSI}, using the optimal \ac{SUS} procedure in \cite{YoGo06} for benchmarking purposes. We consider different numbers of specular paths $S_k$ and power ratios $\kappa$. Experiments show that increasing the number of paths $S_k$ leads to larger channel gains and, accordingly, improves the achievable \ac{SE}. A similar effect arises when increasing the value of $\kappa$, i.e., the power received from the specular paths. We see that the worst performance is obtained for the \ac{LoS} case. In other words, richer channels facilitate obtaining better throughputs, as the improvements in channel gains dominate over the potential increase in interference to other users.
\begin{figure}[t]
\centering
\includegraphics[width=.9\columnwidth]{Diffuse_achievRate_date27-Jul-2022}
\caption{Achievable \ac{SE} vs. SNR (dB) for a different number of paths $S_k$ and power ratios $\kappa$. The remaining parameter values are given in Table \ref{tab:Sim1}. }
\label{fig:diffuse}
\vspace{-2mm}
\end{figure}
In Fig. \ref{fig:imperfect} we assess the performance losses due to imperfect \ac{CSI} according to the channel model in \eqref{eq:himperfect} and $\tau_s=\{2000,10000\}$ samples. For the estimation error, we assume that the SNR remains constant during both training and data transmission. We compare with the perfect \ac{CSI} scenario, and also with the hypothetical situation where the scheduling procedure is performed with the imperfect \ac{CSI}, but the \ac{ZF}-precoders at the data transmission stage use the \textit{true} channels. This way, the performance losses caused by imperfect \ac{CSI} are separated into \textit{i}) scheduling and \textit{ii}) \ac{ZF} precoding. This latter curve is labeled as {ISP-P} for reference purposes. Remarkably, when disregarding the losses due to imperfect \ac{ZF} precoding, the ISP procedure operating with outdated \ac{CSI} achieves a performance similar to the perfect \ac{CSI} case.
\begin{figure}[t]
\centering
\includegraphics[width=.9\columnwidth]{Imperfect_achievRate_date28-Jul-2022}
\caption{Achievable \ac{SE} vs. SNR (dB) for $S_k=4$, $\kappa=2$ and different \ac{CSI} delays $\tau_s$. Parameter values are given in Table \ref{tab:Sim1}. }
\label{fig:imperfect}
\vspace{-2mm}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.9\columnwidth]{Overhead_achievRate_date28-Jul-2022}
\caption{Achievable \ac{SE} vs. SNR (dB) for $S_k=4$, $\kappa=2$, $\tau_s=10^4$, and several training overheads $\dot\tau$. Parameter values are given in Table \ref{tab:Sim1}. }
\label{fig:overhead}
\vspace{-2mm}
\end{figure}
Fig. \ref{fig:overhead} studies the effect of the training overhead by modifying the amount of time $\dot\tau$ necessary to estimate an individual user's channel. For completeness, we include the cases where \textit{all} users have their channels estimated, \textit{SUS-$K$}, and a genie-aided scheme where the training stage just updates the channels for the set of scheduled users, \textit{SUS-$|\mathcal{S}|$}. The training lengths considered are $\dot\tau\in\{30,50,70\}$ samples. Note that $\dot\tau=30$ is a very optimistic value for $M=200$ \cite{Demir22}. The curves for ISP assume that, in addition to the scheduled users in $\mathcal{S}$, the non-scheduled candidate users with large gains in $\mathcal{G}$ have their channels estimated. We observed in our experiments that the number of users in $\mathcal{G}$ (i.e., those considered for scheduling but finally not served) is, on average, less than $10$ for $\dot\tau=30$, and smaller values are obtained for larger $\dot\tau$. Therefore, we consider $|\mathcal{G}|=15$ for the ISP configuration.
We observe in Fig. \ref{fig:overhead} a significant reduction in the achievable \ac{SE} caused by the training overhead and imperfect \ac{CSI}. Observe also that, if the channels for all $K$ users are estimated, the performance drops dramatically even in the optimistic scenario with $\dot\tau=30$. In turn, when the genie-aided approach is considered, the best result is achieved for $\dot\tau=30$. Nevertheless, as typical scheduling schemes ignore the performance impact of training overhead, when $\dot\tau=50$ the performances of the genie-aided approach and ISP are similar, and when $\dot\tau=70$ ISP provides better results for SNRs above $15$\,dB. In general, ISP selects users exhibiting large expected channel gains in \eqref{eq:gain}, which are reduced when \ac{CSI} accuracy is low, thus promoting a conservative user selection compared to conventional schemes. Finally, as $\dot\tau$ and the training overhead increase with both $M$ and $K$, we expect the benefits of employing ISP to grow for larger values of $M$ and/or $K$.
\section{Conclusion}
The design of user selection schemes for \ac{XL}-\ac{MIMO} systems under practical constraints is fundamental for their successful deployment. Assuming a realistic channel model beyond the conventional \ac{LoS} assumption, together with \ac{CSI} uncertainties and training overheads, we addressed this problem for the first time in the literature under \ac{SW} propagation. The proposed algorithm incorporates this knowledge into the design, showing that the performance of the scheduler is similar to that achieved with solutions designed for perfect \ac{CSI}.
\begin{appendix}
\label{sec:correlation}
Starting from \eqref{eqR}, we consider a local scattering model where we have a nominal angle $\vartheta$ plus a random component $\delta$, i.e., $\theta=\vartheta+\delta$ \cite{Demir22}. We consider small values for $\delta$, i.e., $\delta<\frac{\pi}{12}$, since values of $\delta$ in this range are practical for urban environments, and smaller angles even apply to rural areas. To determine \eqref{eqR} we resort to the Fresnel approximation \cite{Sh62,CuDa22}
\begin{equation}
r_m\approx r-md\sin(\vartheta+\delta)+\frac{m^2d^2\cos^2(\vartheta+\delta)}{2r}.
\end{equation}
Using trigonometric identities, the integral in \eqref{eqR} is
\begin{align}
[\B{R}]_{m,n}&=\beta\int {\text{e}}^{{{\text{j}}\frac{2\pi}{\lambda}}(m-n)d[\sin(\vartheta)\cos(\delta)+\cos(\vartheta)\sin(\delta)]}\notag\\
&\times{\text{e}}^{{{\text{j}}\frac{2\pi}{\lambda}}\frac{(n^2-m^2)d^2}{2r}[\cos(\vartheta)\cos(\delta)-\sin(\vartheta)\sin(\delta)]^2}f(\delta)d\delta\notag\\
&=\beta\int{\text{e}}^{{\text{j}}(\tilde{a}+\tilde{b}+\tilde{c})}f(\delta)d\delta, \label{eq:R}
\end{align}
where we have rearranged the terms as
\begin{align*}
\tilde{a}&=\tfrac{2\pi}{\lambda}\big[(m-n)d\sin(\vartheta)\cos(\delta)+\tfrac{(n^2-m^2)d^2}{2r}\cos^2(\vartheta)\cos^2(\delta)\big]\notag\\
\tilde{b}&=\tfrac{2\pi}{\lambda}\cos(\vartheta)\sin(\delta)\big[(m-n)d-\tfrac{(n^2-m^2)d^2}{r}\cos(\delta)\sin(\vartheta)\big]\notag\\
\tilde{c}&=\tfrac{2\pi}{\lambda}\tfrac{(n^2-m^2)d^2}{2r}\sin^2(\vartheta)\sin^2(\delta).
\end{align*}
When $d/r\approx 0$, the results reduce to the far-field scenario.
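The closed form in \eqref{eq:covUniform} can be checked numerically against the integral in \eqref{eq:R}; the sketch below (our own, with $\delta$ assumed uniform on $[-\varphi,\varphi)$) evaluates one matrix entry:
\begin{verbatim}
import numpy as np

def corr_entry(m, n, beta, lam, d, r, nominal, half_width, grid=4001):
    # Numerical version of Eq. (11) with the Fresnel approximation
    # and theta = nominal + delta, delta ~ U[-half_width, half_width)
    delta = np.linspace(-half_width, half_width, grid)
    th = nominal + delta
    def r_elem(i):
        return r - i * d * np.sin(th) \
               + (i * d) ** 2 * np.cos(th) ** 2 / (2 * r)
    phase = np.exp(-1j * 2 * np.pi / lam * (r_elem(m) - r_elem(n)))
    return beta * np.mean(phase)   # uniform pdf -> plain average
\end{verbatim}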
\end{appendix}
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13179",
"language": "en",
"timestamp": "2023-02-28T02:11:59",
"url": "https://arxiv.org/abs/2302.13179",
"yymm": "2302"
} | \begin{abstract}
Solar coronal jets are frequently occurring collimated ejections of solar plasma, originating from magnetically mixed polarity
locations on the Sun of size scale comparable to that of a supergranule. Many, if not most, coronal jets are produced by eruptions
of small-scale filaments, or minifilaments, whose magnetic field reconnects both with itself and also with surrounding coronal
field. There is evidence that minifilament eruptions are a scaled-down version of typical filament eruptions that produce solar flares
and coronal mass ejections (CMEs). Moreover, the magnetic processes building up to and triggering minifilament
eruptions, which is often flux cancelation, might similarly build up and trigger the larger filaments to erupt. Thus, detailed study of coronal jets
will inform us of the physics leading to, triggering, and driving the larger
eruptions. Additionally, such studies potentially can inform us of smaller-scale coronal-jet-like features, such as jetlets and perhaps some
spicules, that might work the same way as coronal jets. We propose a high-resolution ($\sim$0$''\kern-0.5em.\,1$ pixels),
high-cadence
($\sim$5\,seconds) EUV-solar-imaging mission for the upcoming decades, that would be dedicated to
observations of features of the coronal-jet size scale, and smaller-scale solar features produced by similar physics. Such a mission could provide invaluable insight into the operation
of larger features such as CMEs that produce significant Space Weather disturbances, and also smaller-scale features that could
be important for coronal heating, solar wind acceleration, and heliospheric features such as the magnetic switchbacks that are
frequently observed in the solar wind.
\tiny
\fontsize{8}{11}\helveticabold { \section{Keywords:} Solar filament eruptions, solar corona, solar x-ray emission, solar extreme ultraviolet emission, solar coronal jets, solar magnetic activity}
\end{abstract}
\section{Introduction}
\label{sec-introduction}
Solar coronal jets are transient phenomena that originate near the solar surface and extend
into the corona in the form of long and narrow spires. They are visible at soft X-ray (SXR) and EUV wavelengths,
and occur in coronal holes,
quiet Sun, and the periphery of active regions. While there were some scattered earlier observations, coronal jets were
first observed in detail in SXRs with the {\sl Yohkoh}\ satellite \citep{shibata.et92}, and they have since been observed
with several instruments in different wavelengths. (Surges, which traditionally have been observed in the chromosphere, share some properties with coronal jets,
and in some cases accompany coronal jets; e.g., \citeauthor{canfield.et96}~\citeyear{canfield.et96}; \citeauthor{moore.et10}~\citeyear{moore.et10},~\citeyear{moore.et13}; \citeauthor{sterling.et16b}~\citeyear{sterling.et16b}.)
In coronal images, coronal jets consist of a spire that emanates from a bright base region and extends into the
corona. A study of SXR coronal jets in polar coronal holes \citep{savcheva.et07} found that coronal jets
live for tens of minutes, and have an
occurrence rate of about 60/day in the two polar coronal holes, which translates to about one per hour in a
given polar coronal hole. That same study reported that the coronal-jet spires reach $\sim$50,000\,km, and have
widths $\sim$10,000\,km. Coronal jets seem to have two speeds, or two components of different speeds; a slower speed
of $\sim$200\,km~s$^{-1}$, which is close to the local sound speed, and a faster speed of $\sim$1000\,km~s$^{-1}$, which is near
the Alfv\'{e}n\ speed \citep{cirtain.et07}. It had been pointed out much earlier \citep{shibata.et92} that the base region of the coronal jet
often contains a particularly bright spot, and that this brightening is offset to one side of the base.
Several general summaries and reviews of coronal jets have come out during different
eras of coronal-jet studies \citep[][]{shimojo.et00,shibata.et11,raouafi.et16,hinode.et19,shen21,sterling21,schmieder22}.
It has been argued by \citet{sterling.et15} that most or all coronal jets result from the eruption of small-scale
filaments, or {\it minifilaments} (being
several times to orders-of-magnitude smaller than ``typical" large-scale filaments, that erupt to make typical solar flares and
coronal mass ejections), based on their study of 20 polar coronal hole jets, and building on several earlier studies of
coronal jets \citep{nistico.et09,moore.et10,moore.et13,hong.et11,huang.et12,shen.et12,adams.et14}. \citet{sterling.et15}
proposed that the minifilament eruption and resulting coronal jet are scaled-down versions of typical filament eruptions
that produce solar flares and coronal mass ejections (CMEs).
Figure~\ref{fig:s22_jet14} shows an example of a coronal jet, occurring in the south polar coronal hole.
Panel~\ref{fig:s22_jet14} (a) shows the coronal jet
when it is well developed, in a SXR image from the {\sl Hinode}\ spacecraft. The blue arrow points to the spire, and the
green arrow points to the base brightening. We call the latter a JBP for the ``jet bright point'' in the coronal-jet's base
\citep[this terminology is from][]{sterling.et15}. Panels~\ref{fig:s22_jet14} (b)---\ref{fig:s22_jet14} (d) show the
same location in EUV images
from the {\sl SDO}/AIA~instrument's 193\,\AA\ channel. Panel~\ref{fig:s22_jet14}(b) shows the situation prior to the start of the coronal jet.
Panel~\ref{fig:s22_jet14} (c) is from about the same time as the SXR image in~\ref{fig:s22_jet14} (a). It shows the spire less prominently than in
SXRs at that moment, but it also shows a minifilament (in absorption) in the process of erupting outward (yellow arrow). These erupting
minifilaments usually are, at best, only hinted at in SXR images, but are often clearly visible in at least one and often in
several AIA EUV channels \citep{sterling.et22}.
In~\ref{fig:s22_jet14} (d), this minifilament has continued to erupt, with portions of it leaking out into the bright spire.
Figure~\ref{fig:s15_zu} shows the scenario proposed by \citet{sterling.et15} to explain coronal jets. This
shows a coronal-hole region: most of the photospheric magnetic flux has the same polarity (negative in this case), and the
ambient coronal field is open.
While this drawing is tailored for a coronal hole region, the same description holds where
the ambient coronal field is a long loop (compared to the size of the base region) instead, which would be common in
quiet Sun and active regions. A positive-polarity patch is present in the region, and this forms an anemone-type structure \citep{shibata.et07}, with flux emanating out of the positive patch and closing down in negative flux surrounding the
positive patch in 3-D, where Figure~\ref{fig:s15_zu}(a) shows a 2-D cross-section of this structure. One
lobe of the anemone -
the smaller lobe on the right-hand side (i.e., the closed-field region between B and C in Fig.~\ref{fig:s15_zu}) in
this depiction - contains non-potential (sheared and twisted) field, and holds a
minifilament. The adjacent lobe on the left
side is larger and contains more-nearly potential field. Figure~\ref{fig:s15_zu}(b) shows the minifilament, and its enveloping
flux-rope field, erupting. This results in two magnetic reconnections. One reconnection occurs where
the erupting-minifilament's field encounters the ambient coronal field above the larger lobe. This adds new, heated loops
to the large lobe, and creates new open field, along which heated material can flow out and create the coronal-jet spire. The
second reconnection occurs among the leg field of the erupting flux-rope-enveloping field. This causes a miniature flare to occur,
in a fashion analogous to how typical solar flares are formed \citep[e.g.,][]{hirayama74,shibata.et95,moore.et01}. This miniature flare
is what appears as the aforementioned JBP.\@ In Figure~\ref{fig:s15_zu}(c), the erupting field has reconnected far enough into the open-field region for the
cool minifilament material in the flux-rope core of the erupting field to leak out onto the open field, where it flows away as part of the spire.
Studies of coronal jets occurring on the solar disk have provided insight into the magnetic origins of the coronal-jet-producing minifilament
eruptions. \citet{panesar.et16a} found that magnetic cancelation occurred at the coronal-jet location before and during the coronal jet in 10
quiet Sun jets, and in a similar study \citet{panesar.et18a} found magnetic cancelation occurring near the start of 13 coronal
hole jets. Additional multiple-coronal-jet studies support these findings, including \citet{mcglasson.et19} and
\citet{muglach21}. A study by
\citet{kumar.et19} argues that ``shearing and/or rotational photospheric motions'' are more important than cancelation
in producing coronal-hole jets that they studied; they do find, however, evidence that minifilament eruptions produce
about two-thirds of the coronal jets, and that small-scale eruptions without cool-minifilament material cause the remaining ones.
Magnetic cancelation has also been found to accompany many active region jets (\citeauthor{sterling.et16b}~\citeyear{sterling.et16b}, \citeauthor{sterling.et17}~\citeyear{sterling.et17}; \citeauthor{mulay.et16}~\citeyear{mulay.et16} state that cancelation, flux emergence, or
cancelation-plus-emergence produce coronal jets they observe). Single-event studies have also found cancelation to
accompany coronal jets in many cases. There are several other observational examples of magnetic flux cancelation leading
to minifilament eruptions that produce coronal jets \citep[e.g.,][]{solanki.et19,yang.et19,mazumder19}. See the
above-mentioned reviews for additional citations.
Observations support that flux cancelation at the neutral line labeled ``B'' in Figure~\ref{fig:s15_zu}(a) often triggers the minifilament
to erupt. Similarly, magnetic flux cancelation also likely often builds the minifilament prior
to eruption, in some cases hours to $\sim$two days prior to its eruption \citep{panesar.et17}. If the minifilament field
carries twist, perhaps acquired when the canceling magnetic elements themselves contain shear derived from photospheric
motions, that twist can be transferred to the coronal-jet's spire upon eruption of that minifilament and its reconnection with the
coronal field, explaining why many coronal-jet spires display a spinning motion during their evolution \citep[e.g.][]{moore.et15}.
\section{Some Coronal-jet-observing Instruments}
\label{sec-instruments}
We briefly introduce instruments often used for coronal-jet studies over recent decades. Here we describe those instruments that are
referred to most in this paper. See the reviews listed in \S\ref{sec-introduction} for discussions of other instruments used in coronal jet
observations.
In SXRs, coronal jets were first extensively observed with the {\sl Yohkoh}/Soft X-ray Telescope \citep[SXT;][]{tsuneta.et91}, which operated from 1991---2001. It had a detector with square pixels of width $2''\kern-0.5em.\,455$, and variable
cadence with the fastest being about 2\,seconds, although it often ran with much coarser cadence. Its followup was the X-ray Telescope \citep[XRT;][]{golub.et07} on the {\sl Hinode}\ satellite, launched in 2006, and still operating as of this writing. It has pixels of width
$1''\kern-0.5em.\,02$, and --- for observations most appropriate for coronal jets --- operated with a cadence of $\sim$30\,seconds.
Both SXT and XRT imaged with variable field of view (FOV), although for cadences sufficient to observe coronal jets of
$\sim$10-minute lifetime both instruments used a FOV smaller than that of the full solar disk.
With {\sl Yohkoh}/SXT, a large percentage of observed coronal jets occurred in active regions, with very few seen in polar regions
\citep{shimojo.et96}. \citet{koutchmy.et97} did see some polar coronal hole jets with SXT, but only with relatively long
exposures of 15 and 30\,seconds. In contrast, coronal jets are very prominent and common in polar regions in {\sl Hinode}/XRT
observations \citep{cirtain.et07}. As discussed in \citet{hinode.et19} (in the subsection on coronal jets), this difference in
visibility between the two SXR-imaging instruments can be understood because the filters that
see the coolest SXR plasma with {\sl Yohkoh}/SXT had sensitivity that dropped off sharply below about 2\,MK, while
the coolest filters of {\sl Hinode}/XRT have good sensitivity to plasmas down to just under 1\,MK\@. From filter-ratio
temperature studies, \citet{nistico.et11}, \citet{pucci.et13}, and \citet{paraschiv.et15} determined that polar-coronal-jet
spires have temperatures of $\sim$1---2\,MK\@. Therefore this could explain why they are easily visible in
images from the cooler-temperature-detecting {\sl Hinode}/XRT, but much less visible or invisible in images from the hotter-temperature-detecting
{\sl Yohkoh}/SXT\@.
For EUV observations, although there were some earlier useful observations with the EUV Imager (EUVI) telescope on the
{\sl STEREO}\ spacecraft \citep{nistico.et09,nistico.et10}, the results discussed in the present paper largely derive from the
Atmospheric Imaging Assembly \citep[AIA;][]{lemen.et12} on the Solar Dynamics Observatory ({\sl SDO}) satellite,
which has been operational from 2010, and is still operational as of this writing. It has a detector of $0''\kern-0.5em.\,6$
pixels, and regularly observes the entire solar disk with 12\,second cadence in seven EUV channels centered at
304, 171, 193, 211, 131, 335, 94\,\AA, roughly in order from detectability of the coolest to the hottest plasmas for non-flaring
situations (the details of the ordering depend on the distribution of temperatures in the emitting plasmas, and also
some channels have good response in multiple temperature ranges; \citeauthor{lemen.et12}~\citeyear{lemen.et12}
gives the AIA response curves and principle ions contributing to each wavelength band). \citet{sterling.et15} found
that polar coronal hole jets are best visible in the first four of these channels, and that the hotter channels of 131, 335,
and 94\,\AA\ added little new information. Active region jets, which tend to be hotter than the polar coronal hole
jets, generally are well seen over a broader range of AIA channels. \citet{shimojo.et00} found active region jets to
have temperatures 3---8\,MK, based on SXT filter-ratio methods,
and later studies have also found active region jet temperatures in this range \citep{paraschiv.et22}.
Several papers by
\citet{mulay.et16,mulay.et17a,mulay.et17b} use
the EUV Imaging Spectrometer (EIS) on {\sl Hinode}\ to undertake spectroscopic studies of coronal jets, and survey a broader
temperature range than that of the SXR
filter-ratio methods of the just-mentioned studies. They report the bulk of the emission of active-region-jet spires to
be of temperatures $\raisebox{-1.0ex}{$\stackrel{\textstyle<}{\sim}$}$1---2\,MK\@. This is substantially lower than the active-region-jet temperatures from
the filter ratios mentioned above, e.g., the 3---8\,MK of \citet{shimojo.et00}. But this difference is likely due
to the nature of the plasmas that the respective instruments can detect. EIS, being an EUV spectrometer,
has spectral coverage of substantially cooler spectral lines than those contributing to the SXR emission.
Thus it is likely that there is a wide distribution of plasma temperatures in coronal jets, and -- not surprisingly
-- the SXR telescopes preferentially detect the hotter plasmas in those coronal jet spires, and therefore yield higher
temperatures for coronal jets than the bulk of the coronal-jet plasmas detected by EIS\@.
Spectroscopy in the UV from {\sl IRIS}\ has provided valuable insight into coronal jets. This includes studies finding rotational
(spinning) motion in coronal jets \citep[e.g.,][]{cheung.et15,liu.et18,schmieder.et22,ruan.et19}, confirming indications of such rotation
from earlier observations in EUV \citep{pike.et98}. These spectra also provide information on densities in coronal
jets \citep[e.g.,][]{cheung.et15,mulay.et17a,panesar.et22}. Moreover, the high resolution of the {\sl IRIS}\ slitjaw images can
complement the EUV and SXR coronal-jet observations in, for example, zeroing in on the fine-scale structure and
dynamics at the coronal-jet magnetic-source location in the photosphere \citep{sterling.et17}.
Magnetograms from the {\sl SDO}\ Helioseismic and Magnetic Imager \citep[HMI;][]{scherrer.et12} are frequently used to study
the photospheric magnetic flux values and changes around the base of coronal jets. HMI has pixels of $0''\kern-0.5em.\,5$,
and takes a line-of-sight magnetogram of the full solar disk once every 45\,seconds.
\section{Coronal Jets and Jet-like Activity on Different Size Scales}
\label{sec-scales}
\subsection{Coronal-jet Physics on Large Scales}
\label{subsec-large_scales}
Coronal jets appear to be small-scale versions of larger eruptions, with the eruptive process that results in a minifilament eruption
that produces a coronal-jet spire and a JBP corresponding to large-scale eruptions that result in filament eruptions and typical
solar flares \citep{sterling.et15}. That is, just as ``typical'' solar filaments erupt (in what we are here calling ``large-scale
eruptions'') to make long-observed ``typical'' solar flares and sometimes expel coronal mass ejections into the heliosphere,
coronal jets appear to be made by a minifilament eruption (a scaled-down version of a large-scale filament eruption) that leaves
in its wake a JBP (a scaled-down version of a typical solar flare), and to result in material and magnetic disturbances that flow
out along a spire and that sometimes flow into the heliosphere.
If coronal jets are indeed a scaled-down version of larger ``standard flare model" solar eruptions,
then we would expect other aspects of the smaller-scale eruptions that cause coronal jets and JBPs to have counterparts in the
larger-scale eruptions
that cause CMEs. Here we discuss examining large-scale eruptions based on what has been
found in coronal jets.
A characteristic of coronal jets is the anemone magnetic setup, similar to that shown in Figure~\ref{fig:s15_zu}(a). There are many examples
of flares occurring from anemone active regions \citep{asai.et09,lugaz.et11,kumar.et13,devi.et20}. \citet{joshi.et17} showed
that the setup for a large-scale eruption matched that of the coronal-jet minifilament-eruption picture, and that the dynamic motions
of the eruption matched closely that of an erupting minifilament producing a coronal jet. A similar schematic was in fact
drawn to explain a series of recurring solar eruptions much earlier (\citeauthor{sterling.et01}~\citeyear{sterling.et01},
\citeyear{sterling.et01a}, \citeyear{sterling.et01b}; these schematics in fact helped inspire the Fig.~\ref{fig:s15_zu} schematic
of \citeauthor{sterling.et15}~\citeyear{sterling.et15}). These setups show that the same type of magnetic setup
appears to be capable of generating similar solar expulsions both on the coronal-jet size scale, and on the size scale of typical
solar eruptions. Whether a coronal jet results or a CME results depends on
how much of the erupting minifilament/flux-rope lobe remains after the external reconnection in Figure~\ref{fig:s15_zu}(b)
and~\ref{fig:s15_zu}(c). If the flux rope is robust enough to survive that reconnection (that is, if only the outer portion
of the flux-rope lobe is eroded away by external reconnection), then the remaining lobe and flux rope can escape to form a
CME that carries a magnetic flux rope in its core region. If on the other hand the external reconnection totally reconnects the flux rope, so that the
field lines that were previously closed in a flux rope all become open, then the feature becomes a coronal jet instead of
a CME\@.
An anemone setup appears to be necessary for coronal-jet formation, and formation of coronal jets in such a setup is supported
by numerical simulations \citep{wyper.et17,wyper.et18a,wyper.et18b,wyper.et19,doyle.et19}. But large-scale eruptions
also occur outside of an anemone setup, and so we might ask whether the eruptions of minifilaments that cause coronal jets might
also have similarities to larger-scale eruptions, independent of whether those larger-scale eruptions occur in an
anemone configuration. One possible such similarity is in the manner in which the minifilament eruptions and the large-scale
eruptions are triggered to erupt. We have seen above that coronal-jet-producing erupting minifilaments
are apparently often built-up and triggered to erupt by magnetic-flux cancelation. What about larger-scale eruptions?
To investigate whether large-scale eruptions are built up and triggered to erupt in a manner similar to coronal-jet-producing
minifilament eruptions, \citet{sterling.et18}
studied how large-scale eruptions evolve toward eruption.
In the case of coronal jets, the magnetic
elements taking part in the cancelation typically converge toward each other over the hours prior to the
eruption onset, as discussed in several papers \citep{panesar.et16a,panesar.et17,panesar.et18a,sterling.et17}, and
as exemplified by Figure~\ref{fig:pan16_zu}(c). This time period is short enough for those elements
to have relatively little interaction with surrounding flux elements. In contrast, large-scale eruptions often occur in active regions that develop
for many days, or even weeks, prior to expelling an eruption (complex regions, such as delta regions, can evolve faster than
this, but the objective of \citeauthor{sterling.et18}~\citeyear{sterling.et18} was to compare with more standard
eruptions). Thus, in order to follow the region from the time of
flux emergence through to the time of the eruption, it was necessary to look at regions that were small enough for this
evolution to occur during a single disk passage of the region. \citet{sterling.et18} presented two examples of this class.
In both cases the active regions were comparatively small bipolar active regions (total flux in each $\raisebox{-1.0ex}{$\stackrel{\textstyle<}{\sim}$}$10$^{21}$\,Mx). Also in
both cases the eruptions occurred about five days after emergence, and those eruptions produced CMEs observed in
coronagraphs. One of the regions remained almost completely isolated from any surrounding substantial flux over this period.
And the second (shown in Fig.~\ref{fig:s_et18_event1_b_zu}) was largely isolated from surrounding flux, although one of its
polarities did have some interaction with nearby pre-existing opposite-polarity flux.
Both regions displayed similar evolution. Figure~\ref{fig:s_et18_event1_b_zu} shows the evolution of one of these regions.
The boxed region in (a) shows a bipolar active region that is still emerging in this frame. In panel~(b) the emergence
is continuing, with centroids of the main positive-polarity (white) and negative-polarity (black) patches further separated from
their central neutral line than they were in~(a). By the time of~(c) however, they are no longer separating, and some of the
opposite-polarity portions of the region have converged toward the central neutral line. Panel~(d) shows a time-distance
map of this region, analogous to that in Figure~\ref{fig:pan16_zu}(c). This shows that the polarities initially separate for
about one day following their initial emergence. Their mutual directions then reverse, and the polarities start to converge.
The orange line shows on this plot the time of the CME-producing eruption; this did not occur until after the polarities had
converged on each other, and were undergoing flux cancelation along their central neutral line. There were no CME-producing
eruptions
from this region prior to this time. {\sl SDO}/AIA observations show that the bright centroid of the resulting {\sl GOES}\ C-class flare was on
that neutral line. Thus, similar to the situation with coronal jets, a flux rope eruption occurs along a cancelation neutral line. In this
case, the region evolved for about four days with essentially no activity, and then had an eruption only after
that cancelation started taking place.
A second region examined in \citet{sterling.et18}, which also was substantially isolated magnetically from surrounding structures,
began with flux emergence, underwent flux-polarity separation, and then had flux convergence and apparent cancelation along
its central neutral line among some portion of its two opposite-polarity patches. This resulted in an eruption that produced a
{\sl GOES}\ B-class flare on the region's central neutral line (although in this case a second, weaker, eruption also occurred on a
neutral line formed from one of the emerging polarities and a pre-existing opposite polarity patch), and in the expulsion
of a CME\@.
\citet{chintzoglou.et19}, studying more complex magnetic situations involving multiple active regions, also found that eruptions
occurred at flux-cancelation neutral lines.
Returning to the discussion of the size scale of the erupting minifilaments that can cause coronal jets, \citet{moore.et22} examined
the evolution of 10 bipolar ephemeral active regions (BEARs) in a manner similar to \citet{sterling.et18}, but where they
tracked their regions from emergence to disappearance. These 10 regions produced 43 small-scale eruptions in total,
and all of these eruptions occurred at a neutral line in which apparent flux cancelation was taking place. This again supports
that the the physics that causes eruptions on the coronal-jet size scale is essentially the same as that which causes
large-scale eruptions that produce typical solar flares and CMEs.
These observations strongly support that flux cancelation is often essential in the
magnetic build-up and triggering of both smaller-scale eruptions that cause coronal jets, and larger-scale eruptions
that cause flares and CMEs. This is fully consistent with the
mechanism for the build-up of the non-potential energy required for eruption via flux cancelation, as suggested by
\citet{vanball.et89}. Observations of these processes occurring on faster time scales in the smaller-size-scale jets, however,
help to elucidate strategies for investigating the processes in the more slowly developing larger active regions.
\subsection{Possible Extensions to Smaller Scales}
\label{subsec-smaller_scales}
From the preceding discussions, we have presented evidence that the same basic process leading to eruptions
occurs on two size scales -- that of large-scale eruptions and that of coronal jets. \citet{sterling.et16a} considered a possible extended
relationship to smaller size scales, by plotting the size of a typical filament or filament-like structure that erupts
on one axis, and, on the other axis, a measure of the number of the respective eruptive events occurring at any given time
on the Sun. Their
motivation was to see whether a substantial number of similar features might occur on a spicule
size scale, assuming that the coronal-jet mechanism continues to scale downward to sufficiently small sizes. If so, then it might
be that at least some spicules (and perhaps many or most spicules) result from erupting filament-like features
(erupting flux ropes) of that size scale. (See \S\ref{subsec-spicules} for
a summary of spicule properties.)
The larger of the two size scales of eruptive events that have observed filament eruptions are the
``typical" filament eruptions that have solar flares occurring beneath them, and -- in the case of ejective
eruptions -- expulsion of a CME\@. \citet{sterling.et16a} took a typical eruptive filament size to be 70,000\,km,
with an appropriate scatter based on observed values
for a large number of filaments by \citet{bernasconi.et05}. For the number of large-scale eruptions, they took
observed CME rates of from less than one to a few per day \citep{yashiro.et04,chen11}. For coronal jets, they
estimated corresponding numbers for the size of erupting minifilaments based on measurements in \citet{sterling.et15},
which on average was just over 5000\,km for the erupting-minifilament lengths. For the frequency of eruptions they
extrapolated rates given in \citet{savcheva.et07}, which yields about a few hundred per day over the entire Sun.
In order to compare with spicule occurrence rates, \citet{sterling.et16a} converted the occurrence rates for the large-scale
eruptions and coronal-jet-size-scale
minifilament eruptions to the expected number of events occurring on the Sun at any random given time. The motivation for
these units was to utilize the historical studies of spicule counts, which were sometimes expressed as the total number of spicules
on the Sun seen at a given time. Those resulting values vary substantially \citep{athay59,lynch.et73}, but overall they are roughly in the
neighborhood of $10^6$ spicules on the Sun at a given time. (\citeauthor{judge.et10}~\citeyear{judge.et10} estimate about
a factor of ten higher; see \citeauthor{sterling.et20c}~\citeyear{sterling.et20c}.)
In order to complete the comparison with the larger erupting features, a value for the typical size of the erupting
filament-like flux rope that would produce a spicule is required. No such cool-material eruptions have been convincingly
observed to date, and therefore spicules being formed by the coronal-jet-producing minifilament-eruption-type
mechanism is wholly speculative at this
point. But because coronal-jet-spire widths are similar to the measured lengths of the erupting minifilaments that produce them,
by analogy, \citet{sterling.et16a} hypothesized that there might be erupting {\it micro}filaments (erupting micro flux ropes)
of lengths comparable to the width
of spicules (a few hundred km), that produce some spicules.
Figure~\ref{fig:s_m16_zu2} shows the resulting plot. When a linear extension is made from the large-scale eruptions through
the coronal-jet-sized eruptions, and then extended down to the size scale of the putative erupting microfilaments, the ordinate's value
for their occurrence rate falls near the lower end of the estimated spicule-number counts. This implies that at least some
portion of spicules might be scaled-down versions of coronal jets, formed by eruptions of microfilaments, or it could be that multiple
spicules are produced by a single such eruption.
We next consider from an observational standpoint the suggestion that the coronal-jet-producing mechanism operates on size scales smaller
than that of coronal jets.
\subsection{Jetlets}
\label{subsec-jetlets}
\citet{raouafi.et14} studied long and narrow transient features of a scale smaller than coronal jets, using {\sl SDO}/AIA images and HMI magnetograms.
They called these features {\it jetlets}, due to their similarity to coronal jets, except having a smaller size (both width and length). Jetlets are
smaller than coronal jets, but larger than chromospheric spicules. \citet{raouafi.et14} found
the jetlets to occur at the base of coronal plumes, and they suggest that they (along with transient base brightenings) are the result of
``quasi-random cancellations of fragmented and diffuse minority magnetic polarity." Therefore, these features seem to be smaller
versions of coronal jets. (``Plumes" are long and narrow features -- first noticed during total eclipses -- extending out to several solar radii in polar regions. Compared to jet-like features, plumes are long-lasting, persisting for $\sim$a day. See, e.g.,
\citeauthor{poletto15}~\citeyear{poletto15}.)
\citet{panesar.et18b} examined jetlets with {\sl IRIS}\ UV and AIA EUV images, and HMI magnetograms. They studied ten jetlets, and
found them to have lengths $\sim$27,000\,km, spire widths $\sim$3000\,km, base size of $\sim$4000\,km and speed $\sim$70\,km~s$^{-1}$.
They argued that jetlets are a more general solar feature than presented in \citet{raouafi.et14}, occurring at the edges of chromospheric
network both inside and outside of
plumes. In agreement with \citet{raouafi.et14}, they found that magnetic flux cancelation was the likely cause of the jetlets, just as
it often leads to coronal jets. Furthermore, their jetlets were accompanied by brightenings analogous to the JBP seen at the base
of coronal jets. Based on these and other characteristics, \citet{panesar.et18b} concluded that the jetlets are likely scaled-down
versions of coronal
jets, and that they are consistent with the erupting-minifilament scenario for their production. They did not, however, observe a
clear indication of the existence of an actual cool-material erupting minifilament at the base of their jetlets.
\citet{panesar.et19} extended these studies to even smaller-sized jetlets, using EUV 172\,\AA\ (Fe~{\sc ix}/Fe~{\sc x}) images from the
Hi-C\,2.1 rocket flight. This instrument had pixels of width $0''\kern-0.5em.\,129$, and
cadence of 4.4\,seconds; see \citet{rachmeler.et19} for details. Six events were identified from the data from Hi-C\,2.1's five-minute flight.
As with the \citet{panesar.et18b} jetlet study, these events also occurred at the edges of network cells. On average they
had spire lengths of $\sim$9000\,km, widths of 600\,km, and speeds of $\sim$60\,km~s$^{-1}$. At least four of these events
seemed consistent with being small-scale coronal jets following the erupting-minifilament mechanism, although once again
there were no direct observations of erupting cool-material minifilaments.
\subsection{Spicule-sized Features}
\label{subsec-spicules}
Spicules are chromospheric features that are extremely common, have lengths $\sim$5000---10,000\,km, widths of a few hundred
km, lifetimes of a few minutes, and cluster around the magnetic network \citep[e.g.][]{beckers68,beckers72}. Although many
ideas exist, their generation mechanism is still unknown; see discussions in various review works \citep[e.g.,][]{beckers68,beckers72,sterling00,tsiropoula.et12,hinode.et19,sterling21}.
As mentioned in \S\ref{subsec-smaller_scales}, erupting microfilament flux ropes have not been observed in spicules. They may, however, be
present, but hard to observe for a variety of reasons. Similarly, a bright point that corresponds to a JBP has not been convincingly
observed at the base of spicules. These points are not consistent with an erupting-microfilament mechanism for spicules. Nonetheless,
these absences are not definitive evidence that these features do not exist in spicules; they may exist but be hard to detect, as
discussed in \citet{sterling.et20c}.
Moreover, there are several observations that are consistent with a microfilament-eruption mechanism for spicules. One of these
is the observation of mixed polarity elements at the base of many spicules. Fresh evidence for this is presented in
\citet{samanta.et19}, obtained using state-of-the-art ground-based observations in the pre-DKIST era. This work presents
evidence that spicules result from dynamic magnetic activity at their bases,
which could be due to flux cancelation and/or emergence. As discussed above, there is extensive evidence that many coronal
jets result from
flux-cancelation episodes. Ideas for coronal-jet production from flux emergence have also been presented \citep{yokoyama.et95}.
Spicules also display characteristics of spinning motions \citep{pasachoff.et68,depontieu.et12,sterling.et20b}. We have noted that
the minifilament-eruption idea offers an explanation for the spinning of coronal jets, when an erupting twisted minifilament
transfers its twist to the coronal-jet-spire's coronal field via external reconnection. Thus the same mechanism acting on speculative erupting microfilaments might explain this spicule spinning as magnetic untwisting.
There remains, however, the possibility that spicules are driven by any of a number of other suggested mechanisms (see the above-cited
reviews), and many of these ideas cannot yet be ruled out. Spicule-sized features that work via the coronal-jet-production
mechanism may instead
drive other features of that size, such as the UV network jets described by \citet{tian.et14} \citep[some of which are observable
in EUV, at least in AIA 171\,\AA\ images;][]{tian.et14},
or the ``chromospheric anemone jets" observed in active regions, or similar jet-like features observed in plages \citep{depontieu.et04,sterling.et20c}.
\section{The Importance of Coronal Jets}
\label{sec-importance}
Coronal jets are important for solar physics in a number of ways. One of these is to gain insight into the buildup and onset of
large-scale eruptions. Studies of these eruptions have importance in a variety of areas, ranging from the understanding
of key inputs to Space Weather to gaining insight into fundamental astrophysical processes.
One of the key unknowns of solar physics concerns the details of the mechanism leading up to and causing large-scale
solar eruptions that produce typical-sized-filament eruptions, solar flares, and CMEs. The revelation that many coronal
jets are scaled-down versions of large eruptions has important implications for resolving these questions. Not only are
coronal jets of a smaller size scale than those large eruptions, but also the pre-coronal-jet evolution time scale of coronal
jets is substantially shorter than
that for large eruptions: Large eruptions from bipolar active regions usually require many days to build up the magnetic circumstances
that result in the eruption \citep[e.g.,][]{sterling.et18}, while in contrast, for a sample of quiet-region jets the corresponding time
scale was found to be of the order of hours or a couple of days \citep{panesar.et17}. Additionally, it is difficult to find examples
of magnetically isolated large-scale regions that produce large eruptions for many days while they remain on the Earth-facing side of
the Sun, and this exacerbates the difficulty in unraveling the fundamental processes that lead to eruptions. On the other hand,
it is often easy to find and follow coronal-jet-producing regions on the Sun, by working backwards from the time of coronal-jet occurrence.
As discussed in \S\ref{subsec-large_scales}, \citet{sterling.et18} used these points to learn about regions leading to flares and CMEs in
two small active regions, largely based on lessons learned from coronal-jet studies. These studies and comparisons are,
however, still vastly incomplete. Careful study of the details of eruptions happening on the coronal-jet-sized scale is necessary to
understand fully the coronal-jet-production process. And then the findings can be used as starting input for improved studies of
large-scale events.
And as demonstrated in \S\ref{subsec-smaller_scales}, coronal jets almost certainly can provide insight into the operation of some
smaller-scale events. This is in particular true for those smaller
objects with the most apparent coronal-jet-like qualities; this includes jetlets, at least some of which must be scaled-down versions
of coronal jets. It will be important to see how far the obvious similarities continue down in size scale by using long-term high-resolution
and high-cadence observations at appropriate wavelengths. This will help determine whether the similarities continue down to the
size of the UV network jets, which are nearing spicule size. Because the solar wind appears to originate from most if not all locations
on the Sun (in particular, the fast solar wind originates from open-field areas, while the slow wind comes from
closed-field regions), the solar-surface events that drive it are likely distributed
almost uniformly over the surface. An understanding of jetlets and UV network jets will provide insight into whether these events plausibly
power the solar wind. Careful additional coronal-jet studies are required to guide and assist observational investigations of these yet
smaller-scale features.
Coronal-jet physics may have even broader implications beyond the most obvious-looking coronal-jet-like or solar-eruption counterparts. For example,
\citet{katsukawa.et07} have examined ``penumbral jets," rooted along the filament-like low-lying magnetic loops (the penumbral fibrils)
that spread radially from the sunspot
umbra and form the penumbra \citep{tiwari.et13}. \citet{tiwari.et16,tiwari.et18} also studied penumbral jets, and found that large ones
originated from the end of the penumbral filament
rooted farthest away from the umbra, and that these regions had mixed magnetic polarities that underwent magnetic cancelation
near the time of
penumbral-jet generation. Moreover, using {\sl IRIS}\ spectra, they found Doppler evidence that the penumbral jets are undergoing spinning motion.
Although these penumbral jets are $\sim$100 times smaller than typical SXR coronal jets, their properties of magnetic cancelation
at their bases, long (compared to their width) spires, and spin, are similar to what has been found in numerous coronal
jets. This suggests that the basic physical mechanism creating these penumbral jets might be essentially the same as that which produces
coronal jets.
``Campfires" are features recently discovered in high-resolution EUV images from the Extreme Ultraviolet Imager (EUI)
on the {\sl Solar Orbiter}\ spacecraft, appearing as small localized brightenings of size scales of a few 100---few 1000\,km and lasting 10---200\,seconds \citep{berghmans.et21}. By comparing a selection of campfires with {\sl SDO}/HMI magnetograms, \citet{panesar.et21} present evidence
that they occur on canceling magnetic neutral lines, similar to how coronal-jet-producing erupting minifilaments and accompanying base
brightenings also frequently occur on canceling neutral lines. This suggests that the same process that makes coronal jets (small-scale-filament/flux
rope eruptions) might also make campfires.
If the same processes indeed make features as varied as large-scale solar eruptions, coronal jets, jetlets, some spicules, UV network jets,
campfires, penumbral jets, and perhaps other features, then it is important for heliophysics to clarify which magnetic and physical
circumstances produce which feature in what situations.
Coronal jets also have influence far out into the heliosphere. White-light coronagraph images show that some CMEs are relatively narrow,
with angular extents $\raisebox{-1.0ex}{$\stackrel{\textstyle<}{\sim}$}$5$^\circ$; these features have been called both ``narrow CMEs" and
``white-light jets" \citep[See discussion and references in][]{sterling18}. \citet{wang.et98}
have shown that these features often originate from jetting activity at the solar surface, and a mechanism for producing these
features has been presented, based on the minifilament-eruption picture, by \citet{panesar.et16b}. \citet{sterling.et20a} present
observations of material expelled from coronal jets extending out to tens of solar radii in images from the {\sl STEREO}\ ``Hi1" Heliospheric Imager.
There is also evidence of {\it in situ} detection of coronal-jet material in the solar wind \citep{yu.et14,yu.et16}.
Both \citet{sterling.et20a} and a followup work \citep{neugebauer.et21} suggest that coronal jets, and/or smaller jet-like
features that work via the minifilament-eruption mechanism, might propagate out to the heliospheric locations of the {\sl Parker Solar Probe}\ satellite,
and be detected as magnetic Alfv\'{e}nic kinks in the field that are known as ``switchbacks" \citep{bale.et19,schwadron.et21}. The idea is that the erupting minifilaments might carry
twist, and external reconnection with the ambient coronal field could transfer that twist to the field, as described by \citet{shibata.et86}.
That twist could convert to swaying of inner coronal field \citep{moore.et15}, and then steepen into an Alfv\'{e}nic kink -- forming the switchback
-- due to variations of the Alfv\'{e}n\ speed in the solar wind \citep{sterling.et20a}. Switchbacks are extremely common in the solar
wind at distances of a few tens of solar radii \citep{bale.et19}. They also appear to carry the imprint of size scales at the Sun
corresponding to supergranules \citep{bale.et21,fargette.et21}, and even of granules \citep{fargette.et21}. The minifilaments that
erupt to make coronal jets \citep[$\sim$10,000---few $\times$10,000\,km; e.g.,][]{sterling.et15,panesar.et16a} are not so
different from supergranule scales ($\sim$40,000\,km), while the width of spicules and similar features (few 100\,km or so) is
not too different from the size scale of granules ($\sim$1000\,km). Better observations of coronal jets can help determine whether this
switchback-production idea matches detailed observations, or whether a different mechanism might be responsible for the
switchbacks, such as ``interchange reconnection" ideas \citep[e.g.,][]{zank.et20,owens.et20,drake.et21,schwadron.et21}, or any
of several ideas for generating the switchbacks in the solar wind \citep[see citations in][]{fargette.et22}, while keeping in mind that
the mechanism should explain the observations of a supergranule- and/or granule-size-scale dependence of switchback size scales.
These points illustrate the value to solar physics -- and beyond -- of understanding the nature of the processes that lead up to
and produce coronal jets. To this end, we suggest a new instrument focused on observing coronal-jet-sized features in the low
corona. We first note, however, that another reason for further observations of coronal jets is to learn more about the nature of
coronal jets themselves. Even though there is strong evidence that the minifilament eruption model explains many coronal jets,
it is still to be confirmed that the model holds up to close scrutiny under improved observations. An alternative mechanism,
which was originally proposed along with the earliest detailed coronal-jet observations \citep{shibata.et92}, is that they result
when emerging magnetic flux reconnects (external reconnection) with surrounding coronal field.
As mentioned in \S\ref{sec-introduction}, many observations fit the minifilament-eruption model and find that coronal jets frequently
occur as the result of minifilament (or flux rope) eruptions on canceling magnetic neutral lines. Other observations
of coronal jets -- such as the direction of motion of coronal-jet spires horizontal to the solar surface -- are also consistent with the minifilament-eruption
mechanism and not with the emerging-flux mechanism \citep{baikie.et22}. So the questions are: do coronal jets ever form via
the emerging-flux mechanism, and if so, are there any special characteristics of those coronal jets compared to those that we
have described in \S\ref{sec-introduction}? Another question is, if coronal jets do not occur frequently via the emerging-flux
mechanism, then why not? After all, numerical simulations indicate that flux emerging into open surrounding coronal
field should produce a coronal jet \citep[e.g.,][]{yokoyama.et95,nishizuka.et08}. So if that process does not produce coronal jets on the
Sun in reality, then it is important to understand why the reconnection resulting from flux emergence (which inevitably
must happen) does not produce actual coronal jets. These questions emphasize the importance of understanding physics
on the size scale of coronal jets.
\section{The ``SEIM" Instrument to Observe Coronal Jets and Jet-like Structures}
\label{sec-seim}
We propose a new instrument for the next generation, under the provisional name of the ``Solar Explosions IMager" (SEIM).\@
This instrument would be tuned to observe features of size scales of coronal jets at EUV wavelengths, with
a spatial resolution and cadence similar to those of the Hi-C2.1 instrument. The Hi-C flights, however, were on sounding rockets, and
so of short duration ($\sim$5\,min). The {\sl Solar Orbiter}/EUI instrument can also achieve high resolution comparable to that of Hi-C, but such high
resolution is only available for a few days around {\sl Solar Orbiter}'s perihelion. Our idea is for SEIM to be on a satellite, allowing for
long-term high-resolution observations. We now outline the instrument's desired characteristics.
\subsection{Wavelength Coverage}
\label{subsec-wavelength}
As pointed out in \S\ref{sec-instruments}, the spires of coronal jets in polar coronal holes are generally seen in the {\sl SDO}/AIA
channels of 304, 171, 193, and/or 211\,\AA.\@ Among these, 171, 193, and 211 are all coronal lines. It would be best to
include all of these channels, as sometimes the spire tends to be better seen in one than the other. Nonetheless, if it is
visible in one of these three channels, it is usually at least detectable in the other two, based on observations of 41 coronal jets in
\citet{sterling.et15} and \citet{sterling.et22}. Therefore, at least one of these three channels should be included in a
minimal mission. The 304\,\AA\ channel shows a mixture of what might be called ``upper"-chromospheric
and transition-region plasmas. It sometimes shows features of coronal jets detected in SXRs that are not apparent in the other
three cool-coronal (171, 193, 211\,\AA) channels \citep{sterling.et22}, and therefore that channel would be
essential to include in a SEIM mission.
Brighter coronal jets, such as those occurring at the periphery of active regions, are often visible in all AIA EUV channels. Therefore,
an instrument designed to see coronal-jet-like features in coronal holes (and quiet Sun) well would also be able to see coronal jets and
similar features in active regions.
A channel that shows photospheric emissions, such as AIA's 1600\,\AA\ channel, would be essential to facilitate comparisons with
other instruments such as DKIST\@. This channel also shows ribbon-like flare emission at the base of some active-region jets
\citep{sterling.et16b}.
Given these considerations, a minimal wavelength-coverage package for a SEIM instrument could be 304, 171 or 193, 94, and 1600\,\AA\@.
\subsection{Resolution, Cadence, and Field of View}
\label{subsec-other_parameters}
In active regions, some strands of erupting minifilaments are substantially thinner than the width of erupting minifilaments in
coronal holes or quiet regions, having widths of $\raisebox{-1.0ex}{$\stackrel{\textstyle<}{\sim}$}$2$''$ \citep{sterling.et16b}. Hi-C, either in its original incarnation \citep{kobayashi.et14}
or the Hi-C2.1 version discussed above (\S\ref{subsec-jetlets}), would be able to resolve many of these, and in general sees
features at the limit of or beyond what is readily detectable in AIA \citep[e.g.,][]{brooks.et13,tiwari.et16,panesar.et19,tiwari.et19,sterling.et20c}.
Based on this, we are confident that a resolution comparable to that of Hi-C ($0''\kern-0.5em.\,1$ pixels) will be adequate for revolutionary
breakthroughs in the study of coronal jets and similar-sized phenomena.
For coronal-jet studies, the 12-second cadence of AIA has been adequate. Smaller-scale jet-like features can have lifetimes shorter than the
$\sim$tens of minutes of coronal jets, including $\sim$one minute for small jetlets \citep{panesar.et18b} and UV network jets.
Therefore a faster-than-AIA time cadence comparable to that of Hi-C, about 5\,seconds, would be preferred in order to sample
these objects well.
A FOV of about $6' \times 6'$ would be acceptable for an initial mission. This would be slightly larger than the
$4'\kern-0.5em.\,4 \times 4'\kern-0.5em.\,4$ Hi-C2.1 FOV \citep{rachmeler.et19}. {\sl SDO}/AIA's detector is a circle of
diameter $41'$ \citep{lemen.et12}, and so our proposed FOV would be about one-sixth that of AIA's in linear extent. Thus
we could obtain our goal of $0''\kern-0.5em.\,1$ pixels over this FOV by using an AIA-sized
detector ($4096 \times 4096$ pixels$^2$). Advances in technology might make it feasible to improve upon this, allowing for
increased FOV and/or higher
resolution, but these minimal criteria would allow for substantial advancements in our understanding of coronal jets and jet-like features.
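The detector arithmetic above can be checked quickly; the following minimal sketch (in Python) uses only the values quoted in this subsection:
\begin{verbatim}
# Back-of-the-envelope check of the proposed SEIM detector sizing,
# using only the values quoted above.
fov_arcsec = 6 * 60            # 6' FOV on a side = 360 arcsec
n_pixels = 4096                # AIA-sized detector (4096 x 4096)
print(fov_arcsec / n_pixels)   # ~0.088 arcsec/pixel, i.e., ~0.1" pixels

aia_fov_arcmin = 41            # SDO/AIA circular FOV diameter
print(6 / aia_fov_arcmin)      # ~0.15, roughly one-sixth of AIA's FOV
\end{verbatim}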
Figure~\ref{fig:s22_xrt4} shows a sample image from XRT with a FOV similar to that being discussed (that image's FOV is only
slightly larger, at $6'\kern-0.5em.\,67 \times 6'\kern-0.5em.\,67$), with much of the northern polar coronal hole and several
X-ray coronal jets visible.
\subsection{Orbit, Accompanying Instrumentation, and Operations Planning}
\label{subsec-orbit}
Ideally, a mission carrying a SEIM instrument would have extended, uninterrupted views of the Sun. Accordingly, a Sun-synchronous
orbit, such as that of {\sl Hinode}\ \citep{kosugi.et07} or {\sl IRIS}\ \citep{depontieu.et14b} would be appropriate, allowing for $\sim$nine months
of uninterrupted viewing, and $\sim$three months with orbits that include spacecraft nights while still allowing $\sim$one hour of solar
observing per orbit. Longer periods of uninterrupted viewing
would be possible from L1 or a similar location, but that might be more appropriate for a more extensive followup mission.
An imaging-only mission plan would be of limited value for advancing the science of jet-like features. As a minimum, systematic
corresponding line-of-sight magnetograms would be essential to complement these observations. This could be included on
the same spacecraft, in which case a magnetogram FOV comparable to that of the EUV instrument would be acceptable. Alternatively,
it would be possible to use synoptic full-disk magnetograms from elsewhere if appropriate ones are available; for example, it would
be fully acceptable to rely on magnetograms from {\sl SDO}/HMI or a similar instrument on a different satellite that is operational at the
time of a SEIM mission. In either case, the time cadence should be
about $\sim$1\,minute, comparable to that of {\sl SDO}/HMI (45\,seconds). Spatial resolution of HMI's level would be the minimum desired,
but would be adequate for an initial mission. For special programs, coordinated observations with DKIST or other ground-based
instruments \citep[such as BBSO; e.g.,][]{samanta.et19} would allow for much higher-resolution magnetograms. It will also be extremely
valuable to have spectroscopic observations at UV and/or EUV, or even SXR, wavelengths, to obtain diagnostic information on
the observed objects. These spectra should have sufficient spatial and spectral resolution and high-enough cadence to address questions
such as whether jetlet, UV network jet, and even spicule-sized objects routinely display characteristics of jets, such as spinning
motion of their spires.
We know that coronal jets are common in polar coronal holes, and they are observed in on-disk coronal holes also. Therefore,
a basic minimal-maintenance plan would be to observe (with tracking) an on-disk coronal hole when one is available. This would
allow for coordination with line-of-sight magnetograms. A second low-latitude target would be active regions. The frequency of
typical coronal jets from active regions is not yet known, but they are not uncommon. Even in the absence of such coronal jets, there
are smaller-scale penumbral jets that are ubiquitous in active regions, and it would be desirable to have high-resolution,
high-cadence observations of other active-region activity, and of course it would be highly desirable for the instrument to observe
large-scale eruptions from active regions. In the absence of on-disk coronal holes and active regions (or if there are only
active regions showing essentially no substantial activity), one of the two polar regions (preferably one with a prominent coronal hole)
would be the standard default target.
\subsection{Extensions}
\label{subsec-extensions}
The instrument proposed here could act as a proving ground for a more elaborate mission that features a full-disk FOV and
wider wavelength coverage. This would be analogous to how the {\sl TRACE}\ mission \citep{handy.et99} preceded AIA on {\sl SDO}\@.
It would be fully appropriate for such a more-extensive mission to operate from L1 or similar location, with uninterrupted solar
viewing. Such an instrument would ideally be accompanied by a complementary magnetograph, and perhaps other instruments,
on the same spacecraft.
The Appendix provides a summary of properties of coronal jets and jet-like features that will be either observed directly with SEIM, or
to which SEIM will provide valuable supplementary observations, for refining our understanding of all of these features.
Even in its simplest form, however, the SEIM instrument suggested above would be far more than just a ``solar-jets telescope.''
As we have argued above, solar jets can be viewed as a proxy for one of the many types of possibly similar solar features, on
both larger and smaller size scales, that could be observed and studied in detail with such an instrument. Therefore, SEIM would
provide a new, high-resolution, high-time-cadence window into an understanding of fundamental explosive phenomena that occur
on multiple size scales in the lower solar atmosphere, and that possibly power the heliosphere as well.
\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section*{Author Contributions}
\noindent
ACS collected all materials and authored the bulk of the manuscript. RLM consulted on all aspects of the content. NKP
contributed consultations and input into coronal jet and jetlet details. TS contributed consultations and input into spicule observation and analysis details. SKT contributed consultations and input into sunspot and penumbral jet details. SS critiqued the manuscript and
provided valuable feedback.
\section*{Acknowledgments and Funding}
This paper expands upon material presented previously in a white paper submitted to the National Academies of
Sciences, Engineering, and Medicine for the U.S. Solar and Space Physics (Heliophysics) Decadal Survey, for
Heliophysics 2050 \citep{sterling.et22a}.
ACS, RLM, and NKP were supported with funding from the Heliophysics Division of NASA's Science
Mission Directorate through the Heliophysics Supporting Research (HSR, grant No.~20-HSR20\_2-0124) Program,
and the Heliophysics Guest Investigators program.
ACS and RLM were supported through the Heliophysics System Observatory Connect (HSOC, grant No.~80NSSC20K1285)
Program. ACS received additional support through the MSFC {\sl Hinode}\ Project, and NKP received additional
support through a NASA SDO/AIA grant. SKT gratefully acknowledges support by NASA HGI
(80NSSC21K0520) and HSR (80NSSC23K0093) grants, and NASA contract NNM07AA01C ({\sl Hinode}). {\sl Hinode}\ is a Japanese
mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and UKSA as international
partners. It is operated by these agencies in co-operation with ESA and NSC (Norway).
We acknowledge the use of data from AIA and HMI, both of which are instruments onboard {\sl SDO}, a mission of
NASA's Living With a Star program.
\section*{Data Availability Statement}
No new data were analyzed for this summary-type presentation. All background material presented in the figures is available in
the references given in the corresponding figure captions.
\bibliographystyle{Frontiers-Harvard}
|
{
"arxiv_id": "2302.13236",
"language": "en",
"timestamp": "2023-02-28T02:13:45",
"url": "https://arxiv.org/abs/2302.13236",
"yymm": "2302"
} | \section{Preliminary}
\subsection{Bayesian Network \cite{pearl1988probabilistic}}
\label{sec:bayesian net}
A Bayesian Network is a directed, acyclic graph (DAG) $G= \langle \mathcal{X} ,E\rangle $ representing a multivariate probability distribution. Each node $X_i \in \mathcal{X}$ in the graph represents a random variable. Each edge in the graph represents a conditional dependence. An edge $X_i\rightarrow X_j$ indicates that $X_j$ is conditionally dependent on $X_i$, and that $X_i$ is a parent node of $X_j$. Each node is attached with a conditional probability distribution $P(X_i \mid \operatorname{Parents}(X_i))$, where $\operatorname{Parents}(X_i)$ is the set of nodes that are $X_i$'s parents in the graph. The joint probability distribution $P(X_1, X_2, \ldots, X_n)$ is represented in factorized form as follows:
\begin{equation}
P(X_1, X_2, \ldots, X_n) = \prod_{i=1}^{n}P(X_i \mid \operatorname{Parents}(X_i))
\end{equation}
The Bayesian Network can be used for probabilistic inference. Given exact values for some evidence variables $X_2, X_3, \ldots, X_k$, the posterior probability distribution for a query variable $X_1$ can be represented as:
\begin{equation}
\begin{split}
&P(X_1\mid X_2, \ldots, X_k) = \alpha P(X_1, X_2, \ldots, X_k) \\
= &\alpha \sum_{X_{k+1}}\ldots \sum_{X_{n}} P(X_1, \ldots, X_k, X_{k+1}, \ldots, X_n)
\end{split}
\end{equation}
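As an illustration of the factorization and posterior-inference equations above, the following minimal Python sketch performs inference by enumeration on a toy two-node network; the network structure and all conditional-probability values are hypothetical, chosen only for illustration:
\begin{verbatim}
# Toy Bayesian network X1 -> X2 with binary variables. The CPT values
# below are hypothetical, not taken from any cited system.
p_x1 = {True: 0.3, False: 0.7}                   # P(X1)
p_x2_given_x1 = {True: {True: 0.8, False: 0.2},  # P(X2 | X1)
                 False: {True: 0.1, False: 0.9}}

def joint(x1, x2):
    # Factorized joint: P(X1, X2) = P(X1) * P(X2 | Parents(X2))
    return p_x1[x1] * p_x2_given_x1[x1][x2]

# Posterior P(X1 | X2 = True): enumerate, then normalize by alpha.
unnorm = {x1: joint(x1, True) for x1 in (True, False)}
alpha = 1.0 / sum(unnorm.values())
posterior = {x1: alpha * p for x1, p in unnorm.items()}
print(posterior)   # {True: 0.774..., False: 0.225...}
\end{verbatim}
With more variables, the unobserved ones are summed out in the same way before normalizing.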
\subsection{Problem Statement}
We are motivated by the real-world problem of searching an unknown indoor environment for some target object in search and rescue scenarios. More specifically, a robot agent is initialized at a random starting position and orientation in an unseen environment and asked to find a target object instance $o_T$ of category $c_T$ (`find a towel'). No environment map is available, and the agent can only use its sensory input (an RGB-D camera) to navigate. At the end of the task, the agent should either report that it has identified the target object instance with a confidence level greater than $1-\epsilon$ or that it has failed to find the target object instance.
\section{INTRODUCTION}
This paper is motivated by the problem of searching an unknown environment for some target object, which is a fundamental problem in many application scenarios from search and rescue to reconnaissance to elderly care.
\subsection{Related Work}
There is a significant amount of literature on simultaneous localization and mapping (SLAM) for robot mapping and navigation in an unknown environment based on perception, such as visual and odometry sensing. SLAM methods model and reduce sensing uncertainties in mapping the unknown environment and localizing the robot in it at the same time. Semantic SLAM and active SLAM are particularly relevant.
Semantic SLAM methods are focused on representing, mapping, and localizing 3D objects and use different representations of objects such as meshes \cite{galvez2016real}, quadrics \cite{nicholson2018quadricslam, qian2020semantic}, cuboids \cite{yang2019cubeslam}, and OctoMap \cite{zhang2018semantic}.
Active SLAM aims to choose the optimal trajectory for a robot to improve map and localization accuracy and maximize the information gain. The localization accuracy is typically measured by metrics such as A-opt (sum of the covariance matrix eigenvalues) \cite{leung2006active, kollar2008trajectory}, D-opt (product of covariance matrix eigenvalues) \cite{kim2013perception}, and E-opt (largest covariance matrix eigenvalue) \cite{ehrenfeld1955efficiency}. Information gain is measured in metrics such as joint entropy \cite{stachniss2005information} and expected map information \cite{blanco2008novel}.
However, neither semantic nor active SLAM
considers performing tasks other than mapping an unknown environment. The planning aspect is not addressed for semantic SLAM and is downplayed in active SLAM with simple methods such as A*\cite{kim2013perception}.
Robot path and motion planning is one of the most studied areas in robotics. The basic objective is to find an optimal and collision-free path for a robot to navigate to some goals in an environment. Many traditional path-planning approaches assume a more or less known environment, i.e., the robot already has a map and models of objects \cite{lavalle2006planning}. On the other hand, real-time, sensing-based planning in an unknown environment still largely remains a challenge \cite{alterovitz2016robot}.
Earlier work includes grid-based planning approaches such as D* \cite{stentz1997optimal} and D* Lite \cite{koenig2005fast}, sampling-based approaches such as ERRT\cite{bruce2002real} and DRRT \cite{ferguson2006replanning}, and adaptive approaches such as \cite{RAMP}. These approaches consider the environment dynamic and partially known, but assume the goal position is known, disregard the uncertainties in sensing, the robot pose, and dynamics, and do not consider semantic information.
Recently, various techniques based on partially observable Markov decision processes (POMDPs) have been developed \cite{wang2022hybrid, veiga2019hierarchical, burks2019optimal} to incorporate sensing and robot motion uncertainties into planning in partially observable environments. However, POMDPs suffer from the curse of dimensionality and are computationally expensive, particularly when the state space is large. For a POMDP to scale, high-level abstraction must be made for the state space: for example, treating objects \cite{veiga2019hierarchical} or rooms \cite{wang2022hybrid} as state variables. The downside is that highly-abstracted models can lose touch with reality. To bypass this problem, some researchers turn to deep learning to learn semantic priors and make predictions on the unobserved region \cite{chaplot2020object, georgakis2021learning}. These methods tend to suffer from poor generalization.
Next-best view planning is another highly related topic, designed for efficient visual exploration of unknown space. Unlike active SLAM, approaches for next-best view planning typically do not consider robot localization uncertainty. A next-best view planner starts by sampling a set of views in the environment, evaluates the estimated information gain for each view, and selects the view with the maximum information gain as the next view \cite{zeng2020view}. Different methods differ in the sampling methods (uniform sampler, frontier-based coverage sampler \cite{meng2017intelligent}), information gain (path costs are incorporated in \cite{selin2019efficient, meng2017intelligent}), and the selection of the next view (receding horizon scheme in \cite{bircher2016receding}, Fixed Start Open Traveling Salesman Problem (FSOTSP) solver in \cite{meng2017intelligent}).
However, existing planning methods in unknown environments usually do not consider real-time results from SLAM with embedded and changing uncertainties, such as the robot's pose, the metric map, and the semantic map (generated by semantic SLAM). Only the metric map was used by next-best view planning approaches \cite{zeng2020view, meng2017intelligent, selin2019efficient}.
\subsection{Approach and Contributions}
\label{sec: task}
The problem we focus on in this paper, i.e., searching a target object in an unknown and GPS-denied environment, requires real-time planning and execution of the robot's motion to perform the following two necessary tasks concurrently, which are intertwined and mutually facilitating:\\
\noindent {--} Simultaneous localization and mapping at both semantic and geometric levels to facilitate the search of the target object.\\
\noindent {--} Real-time planning and execution of search motion towards the target object based on semantic SLAM results or to expand and improve semantic SLAM results to find and identify the target object. \\
This paper addresses such novel challenges by leveraging the probabilistic representation of semantic SLAM outcomes (constantly improved), Bayesian network \cite{russell2010artificial} representation of semantic knowledge relating the target object to surrounding objects and Markov decision process (MDP) formulation. It introduces a novel, adaptive planner that synergizes: \\
1) active semantic SLAM with improved SLAM results, \\
2) real-time motion planning under sensing and motion uncertainty in the partially observed world represented by ever-updated semantic SLAM results, \\
3) real-time determination of (intermediate) goals of motion on the fly and based on ever-improved semantic knowledge.
\section{System Overview}
Fig. \ref{fig:sys} provides an overview of our system. The robot starts by scanning the environment with its onboard RGB-D sensor. The color and depth images from the RGB-D sensor are fed into geometric and semantic SLAM modules. The SLAM modules then update the robot's location, the (observed) free and occupied space, and objects detected in the environment; the updated localization and map information are fused in a single map $E_t$, where the subscript $t$ stands for time instance.
Next, the robot determines a goal of motion and plans a path based on $E_t$, to either explore new regions or inspect the candidate target object. It first checks whether it has reached the current goal. If not, the robot executes the planned path. Otherwise, either the task is complete, or the robot forms a new goal.
In the goal-forming process, based on $E_t$ and additional semantic information about the target object $o_T$,
the robot decides if it should search for $o_T$ in the explored space or further observe a detected object that is likely $o_T$. It then updates the optimal policies for reaching the newly formed goal. During the entire operation, the robot continuously checks if it has found the target object instance $o_T$ with high confidence. If not, the robot will repeat the above processes until it has reached the time limit or explored all regions.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/SystemOverview.pdf}
\caption{System Overview}
\label{fig:sys}
\end{figure}
\section{Mapping and localization}
In this section, we describe how geometric and semantic SLAM is achieved, and the information from different levels is fused into a single map $E_t$.
\subsection{Geometric SLAM}
Geometric SLAM and semantic SLAM modules run in parallel in our system.
We employ the RTAB-MAP \cite{labbe2019rtab} algorithm for geometric SLAM. It generates a grid map $\mathbf{G}_t \in \{0, 1, -1\}^{W\times H}$, where $W$ and $H$ are the width and height of the grid map. $0$, $1$, and $-1$ in the grid map represent free, occupied, and unknown space respectively, as shown in Fig. \ref{fig:grid map}. The geometric SLAM module also estimates the robot pose $(\mathbf{\mu}_{p,t}, \mathbf{\Sigma}_{p,t})$, where $\mathbf{\mu}_{p, t}$ and $\mathbf{\Sigma}_{p, t}$ are the mean and covariance of the robot pose at time instance $t$.
\begin{figure}
\centering
\includegraphics[width =0.6\linewidth]{Figures/GridMapExample.png}
\caption{Grid map at time $t$. The grey, white, and black areas represent unknown, free, and occupied regions.}
\label{fig:grid map}
\end{figure}
We use off-the-shelf tools \cite{bormann2018indoor} to segment the grid map into different geometric rooms: a room is defined as any space enclosed within a number of walls to which entry is possible only by a door or other dividing structure that connects it either to a hallway or to another room. Every grid cell on the grid map $\mathbf{G}_t$ is assigned a corresponding room ID: $\mathbf{R}_t \in \mathbb{N}^{W\times H}$. An example is provided in Fig. \ref{fig:segmented room}.
\begin{figure}
\centering
\includegraphics[width =0.6\linewidth]{Figures/ROOM.png}
\caption{Segmented geometric rooms at time $t$. The two segmented rooms are encoded in different colors.}
\label{fig:segmented room}
\end{figure}
\subsection{Semantic SLAM}
We adapt the system introduced in \cite{qian2021semantic} for semantic SLAM. At time instance $t-1$, the position estimation $\mathbf{m}_{i, t-1} \in \mathbb{R}^2$ for the semantic object $o_i$ is:
\begin{equation}
bel(\mathbf{m}_{t-1}) \sim \mathcal{N}(\mu_{t-1}, \mathbf{\Sigma}_{t-1}),
\label{eq: map pos dis}
\end{equation}
where $bel(\cdot)$ stands for the belief over a variable. Note that for simplicity, the subscript $i$ is dropped in \eqref{eq: map pos dis}, as in \eqref{eq: range bearing}--\eqref{eq: Bayes theorem object class}.
At time instance $t$, the robot pose $\mathbf{x}_{t} \in \mathbb{R}^2$ estimated by geometric SLAM is $bel(\mathbf{x}_{t}) \sim \mathcal{N}(\mathbf{\mu}_{p,t}, \mathbf{\Sigma}_{p,t})$.
If the semantic object $o_i$ is detected on the color image $\mathbf{I}_t$, range-bearing measurement $\mathbf{z}_t$ will be generated based on the depth information of $o_i$ from the depth image. The range-bearing measurement noise $\mathbf{\delta}_t$ is:
\begin{equation}
\mathbf{\delta}_t \sim \mathcal{N}(0, \mathbf{\Sigma}_{\delta}).
\label{eq: range bearing}
\end{equation}
The covariance of the range-bearing measurement $\Sigma_\delta$ is assumed to be independent of time. Then the posterior belief $bel(\mathbf{m}_t)$ at time $t$ can be updated using Bayes' theorem:
\begin{equation}
\begin{split}
bel(\mathbf{m}_t) & = p(\mathbf{m} \vert \mathbf{z}_{1:t}) = \frac{p(\mathbf{z}_{t} \vert \mathbf{m}, \mathbf{z}_{1:t-1}) \cdot p(\mathbf{m} \vert \mathbf{z}_{1:t-1})}{p(\mathbf{z}_t \vert \mathbf{z}_{1:t-1})} \\
& = \eta \int p(\mathbf{z}_{t} \vert \mathbf{m}, \mathbf{x}_t)\cdot bel(\mathbf{x}_t)\cdot bel(\mathbf{m}_{t-1}) d\mathbf{x}_t,
\end{split}
\label{eq:filter}
\end{equation}
where $\eta$ is a normalizing term.
Substituting the probability density functions of $p(\mathbf{z}_{t} \vert \mathbf{m}, \mathbf{x}_t)$, $ bel(\mathbf{x}_t)$, and $bel(\mathbf{m}_{t-1})$ into \eqref{eq:filter}, the final result after simplification suggests that the updated posterior belief $bel(\mathbf{m}_{t})$ can be approximated by a multivariate Gaussian distribution $bel(\mathbf{m}_{t}) \sim \mathcal{N}(\mu_{t}, \mathbf{\Sigma}_{t})$,
where
\begin{align*}
\mathbf{\Sigma}_{t} & = \Big(\mathbf{K}_1^T\mathbf{\Sigma} _{\delta}^{-1}\mathbf{K}_1 + \mathbf{\Sigma} _{t-1}^{-1} - \mathbf{K}_1^T\mathbf{\Sigma} _{\delta}^{-1}\mathbf{K}_2 \mathbf{\Psi} \mathbf{K}_2^T\mathbf{\Sigma} _{\delta}^{-1}\mathbf{K}_1\Big)^{-1},\\
\mu_{t} & = \mu_{t-1} + \mathbf{\Sigma}_{t} \mathbf{K}_1^T(\mathbf{\Sigma} _\delta^{-1} - \mathbf{\Sigma} _\delta^{-1}\mathbf{K}_2\mathbf{\Psi}\mathbf{K}_2^T\mathbf{\Sigma} _\delta^{-1})\Delta\mathbf{z}_t.
\end{align*}
$\Delta\mathbf{z}_t$ is the error between expected and actual range-bearing measurement. The complete derivation is omitted here.
The object class probability distribution $p_{t}(\cdot)$ is updated using Bayes' theorem:
\begin{equation}
\begin{split}
p_{t}(c) & = p(c \vert \mathbf{L}_{1:t}) = \frac{p(\mathbf{L}_{t} \vert c, \mathbf{L}_{1:t-1}) \cdot p(c \vert \mathbf{L}_{1:t-1})}{p(\mathbf{L}_t \vert \mathbf{L}_{1:t-1})} \\
&= \eta p(\mathbf{L_t}\lvert c) \cdot p_{t-1}(c) = \frac{p(\mathbf{L_t}\lvert c) \cdot p_{t-1}(c)}{\sum_{c' \in \mathbb{C}} p(\mathbf{L}_t\lvert c') p_{t-1}(c')},
\end{split}
\label{eq: Bayes theorem object class}
\end{equation}
where $\eta = 1/p(\mathbf{L}_t \vert \mathbf{L}_{1:t-1})$ is a normalization constant, $\mathbf{L}_t \in \mathbb{R}^{\lvert \mathbb{C} \rvert}$ is the confidence level distribution of an object in different classes, returned by an object detector, such as YOLOv3 \cite{redmon2018yolov3} at time $t$. $c \in \mathbb{C}$ is one of the possible object classes.
$p(\mathbf{L}_t\vert c)$ is the object detector uncertainty model, representing the probability of object detector outputs $\mathbf{L}_t$ when the object class is $c$. We use the Dirichlet distribution ${\displaystyle \operatorname {Dir} ({\boldsymbol {\alpha }}_c)}$ to model this uncertainty, with a different parameter $\boldsymbol{\alpha}_c \in \mathbb{R}^{\lvert \mathbb{C} \rvert}$ for each object class $c$.
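A minimal sketch of this recursive class-probability update is given below; the class set, the Dirichlet parameters $\boldsymbol{\alpha}_c$, and the detector output are hypothetical values used only for illustration:
\begin{verbatim}
import numpy as np
from scipy.stats import dirichlet

# Hypothetical per-class Dirichlet detector-uncertainty models Dir(alpha_c).
classes = ["towel", "chair", "sink"]
alphas = {"towel": np.array([8.0, 1.0, 2.0]),
          "chair": np.array([1.0, 9.0, 1.0]),
          "sink":  np.array([2.0, 1.0, 7.0])}

def update(prior, L):
    # One Bayes step: p_t(c) proportional to p(L_t | c) * p_{t-1}(c).
    post = np.array([dirichlet.pdf(L, alphas[c]) * prior[i]
                     for i, c in enumerate(classes)])
    return post / post.sum()

p = np.ones(len(classes)) / len(classes)   # uniform prior p_0
L_t = np.array([0.7, 0.1, 0.2])            # detector confidence vector
print(dict(zip(classes, np.round(update(p, L_t), 3))))
\end{verbatim}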
Finally, based on the locations of map objects, the corresponding geometric room IDs are assigned to the objects. Formally, the map object $o_{i}$ is represented as a 4-tuple $o_{i} = \langle \mu_{i}, \Sigma_{i}, p_{i}, r_{i} \rangle$ with $\mu_i$ and $\Sigma_i$ the mean and covariance of the object $o_i$ pose, $p_i$ the object class distribution, and $r_{i}$ the room ID of $o_{i}$. The object map is the set of observed map objects $\mathbb{O}_t = \{o_1, o_2, \ldots, o_n\}$. The fused map $E_t = \langle\mathbf{G}_t, \mathbb{O}_t, \mathbf{R}_t \rangle$ collects the grid map $\mathbf{G}_t$, object map $\mathbb{O}_t$, as well as the room information $\mathbf{R}_t$.
\section{Information for Goal Forming}
As the robot's mission is to find a target object in an unknown environment, its goal of motion will be determined on the fly depending on the information provided by the fused map $E_t$ and the robot's location. The mission is accomplished if the target object is visible and identified as such. Otherwise, there are several types of intermediate goals for the robot motion: \\
\noindent{--} if the target object is not included in the current map $E_t$, the robot chooses to explore more. This intermediate goal requires frontier detection; \\
\noindent{--} if an object in the map is possibly the target object, but only with low confidence, the robot chooses to observe more of the object from within its visibility region; \\
\noindent{--} if an object in the map is related to the target object based on the semantic information that they are likely in the same geometric room, the robot chooses to move to that room in the hope of being able to see the target object once it is there.
\subsection{Frontier Detection}
The frontier region is the set of cells between free and unknown space in the grid map $\mathbf{G}_t$. Formally, a grid cell $(i, j)$ belongs to the frontier region if and only if $\mathbf{G}_t[i, j] = 0$ and $
\exists k \in \{0, 1, -1\}, \exists l \in \{0, 1, -1\}\colon \mathbf{G}_t[i+k, j+l] = -1$.
We use the Canny edge detector \cite{canny1986computational} to detect the grid cells between free and unknown space. The detected cells are grouped into edges using 8-connectivity, i.e., each cell with coordinates $(i\pm1,j\pm1)$ is connected to the cell at $(i,j)$. Similar to map objects, a frontier edge $e_j$ is also assigned a room ID $r_j$ based on its position. The frontier region is defined as $\mathbb{F}_t = \{\langle e_1, r_1 \rangle, \langle e_2, r_2\rangle, \ldots, \langle e_m, r_m \rangle \}$, where $m$ is the number of frontier edges. Edges with area $\lvert e_j \rvert$ smaller than 15 cells are deemed to be noise and excluded from $\mathbb{F}_t$. The frontier region at time $t$ is drawn in green in Fig. \ref{fig:frontier region}.
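The frontier computation can be sketched as follows; this sketch groups frontier cells with 8-connected component labeling in place of the Canny-based grouping described above (the resulting cell set is the same), and keeps the 15-cell noise threshold from the text:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def frontier_edges(G, min_cells=15):
    # Frontier cells: free cells (0) with at least one unknown (-1)
    # neighbor, per the definition above.
    unknown = (G == -1)
    near_unknown = ndimage.binary_dilation(unknown,
                                           structure=np.ones((3, 3)))
    frontier = (G == 0) & near_unknown
    # Group frontier cells into edges using 8-connectivity.
    labels, n = ndimage.label(frontier, structure=np.ones((3, 3)))
    edges = [np.argwhere(labels == k) for k in range(1, n + 1)]
    return [e for e in edges if len(e) >= min_cells]  # drop noisy edges
\end{verbatim}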
\begin{figure}
\centering
\includegraphics[width =0.6\linewidth]{Figures/frontier.png}
\caption{The frontier region computed at time $t$, marked in green.}
\label{fig:frontier region}
\end{figure}
\subsection{Visibility Region Computation}
At time $t$, the visibility region $\mathbb{V}_t$ for an object $o_i$ in the grid map $\mathbf{G}_t$ with obstacles is the region of all cells on the grid map from which object $o_i$ is visible. That is, if a line connecting the position of $o_i$ and a cell $q$ does not intersect with any obstacle cell and is within the sensing range, then $q \in \mathbb{V}_t$. We apply a uniform ray-casting algorithm to compute the visibility region. Rays originating from the object's position are cast in many directions. Regions illuminated by a ray (reached by it) are considered part of the visibility region $\mathbb{V}_t$. The visibility region for one object is drawn in blue in Fig. \ref{fig:visibility region}.
For efficient planning, we only compute the visibility region for the object most likely in the target object category $c_T$. We refer to this object as the object of interest $o_{I}$, $I = \argmax_i p_i(c_T)$.
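A minimal sketch of the uniform ray-casting computation is given below; the number of rays and the maximum sensing range (in cells) are illustrative parameters, not values from this work:
\begin{verbatim}
import numpy as np

def visibility_region(G, obj_rc, n_rays=720, max_range=80):
    # March each ray outward from the object cell obj_rc = (row, col)
    # until it leaves the map/sensing range or hits an occupied cell
    # (G == 1); every cell reached along the way is visible.
    H, W = G.shape
    visible = set()
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dr, dc = np.sin(theta), np.cos(theta)
        for step in range(1, max_range):
            r = int(round(obj_rc[0] + step * dr))
            c = int(round(obj_rc[1] + step * dc))
            if not (0 <= r < H and 0 <= c < W) or G[r, c] == 1:
                break                # blocked by an obstacle or map edge
            visible.add((r, c))
    return visible
\end{verbatim}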
\begin{figure}
\centering
\includegraphics[width =0.6\linewidth]{Figures/sight.png}
\caption{The visibility region computed for one object instance at time $t$, marked in blue.}
\label{fig:visibility region}
\end{figure}
\subsection{Semantic Prior Knowledge}
\label{sec: semantic prior}
We leverage prior semantic knowledge to facilitate efficient exploration. The key idea is that objects in the target category may have a closer affinity to some categories of objects than others. The co-occurrence relationship between objects of two categories is estimated based on Lidstone's law of succession \cite{schutze2008introduction}:
\begin{equation}
p(c_i \mid c_j) = \frac{N(c_i, c_j) + \alpha}{N(c_j) + \alpha \lvert \mathbb{C} \rvert},
\label{eq: lidstone}
\end{equation}
where $p(c_i \mid c_j)$ is the conditional probability of an object of class $c_i$ being in a geometric room given that an object of class $c_j$ is already observed in the same room. $N(c_i, c_j)$ is the number of times objects of classes $c_i$ and $c_j$ are observed in the same room. $N(c_j)$ is the number of times objects of class $c_j$ are observed in a room. $\alpha \in [0, \infty)$ is a smoothing factor, and finally $\lvert \mathbb{C} \rvert$ is the number of classes.
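A minimal sketch of this estimator follows; the corpus format (each room given as the set of object classes observed in it) and the example rooms are assumptions made for illustration:
\begin{verbatim}
from collections import Counter

def lidstone_cooccurrence(room_class_sets, alpha=1.0):
    # Estimate p(c_i | c_j) from rooms, each a set of observed classes.
    classes = set().union(*room_class_sets)
    n_classes = len(classes)
    N_j, N_ij = Counter(), Counter()
    for room in room_class_sets:
        for cj in room:
            N_j[cj] += 1
            for ci in room:
                if ci != cj:
                    N_ij[(ci, cj)] += 1
    return {(ci, cj): (N_ij[(ci, cj)] + alpha) / (N_j[cj] + alpha * n_classes)
            for ci in classes for cj in classes if ci != cj}

rooms = [{"sink", "towel", "toilet"}, {"sink", "oven"}, {"towel", "toilet"}]
p = lidstone_cooccurrence(rooms)
print(round(p[("towel", "sink")], 3))  # P(towel in room | sink observed)
\end{verbatim}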
The probabilistic co-occurrence relationships of multiple pairs of objects are captured using Eq. \eqref{eq: lidstone} and further assembled into multiple Bayesian networks. We construct a set of Bayesian networks $\mathcal{B} = \{B_1, B_2, \ldots \}$, with one for each semantic space $\mathcal{S} = \{S_1, S_2, \ldots \}$. Each semantic space corresponds to one room category, such as kitchen, bedroom, bathroom, etc. An example of a Bayesian network is illustrated in Fig. \ref{fig:bayesian net}, demonstrating common object classes found in a kitchen and their conditional dependencies. For each geometric room $r_i$ in the environment, we collect the set of object classes $\mathbb{E}_i = \{c_1, c_2, \ldots \}$ that are observed in the room $r_i$. Recall that we keep a class probability distribution for each map object. Thus we cannot draw a deterministic conclusion regarding the presence of a certain object class in the room $r_i$. However, to keep the problem tractable, we assume the presence of object class $c_k$ if, for some object $o_j$ in the room, the probability of the object $o_j$ being in class $c_k$ exceeds some threshold $\lambda$: $c_k \in \mathbb{E}_i \iff \exists j,\, p_j(c_k) > \lambda$.
Given the evidence set $\mathbb{E}_i$, we only consider the Bayesian networks in $\mathcal{B}$ that contain the target object category $c_T$ and share nodes with $\mathbb{E}_i$; name this new set $\mathcal{B}_i$. By doing so, we narrow down the possible semantic space categories for the geometric room $r_i$ to a subset $\mathcal{S}_i$, which corresponds to $\mathcal{B}_i$. For each Bayesian network $B_j \in \mathcal{B}_i$, we can compute the probability of finding the target object instance $o_T$ in the room $r_i$ based on evidence $\mathbb{E}_i$, denoted as $P(c_T \mid \mathbb{E}_i , r_i, B_j)$.
We can then infer the probability of finding the target object instance $o_T$ in the same room $r_i$ by feeding $\mathbb{E}_i$ into the Bayesian network set $\mathcal{B}_i$:
\begin{equation*}
P(c_T \mid \mathbb{E}_i = \{c_1, c_2, \ldots\}, r_i) = \max_{B_j \in \mathcal{B}_i} P(c_T \mid \mathbb{E}_i , r_i, B_j).
\end{equation*}
This probability is computed for all geometric rooms.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Figures/Kitchenexample.pdf}
\caption{An example Bayesian network, encoding conditional dependencies among object classes commonly found in a kitchen.}
\label{fig:bayesian net}
\end{figure}
\section{Planning Approach}
We now describe how the intermediate goal is determined for the robot on the fly and how the optimal policy for reaching the intermediate goal is computed for the robot.
\subsection{Robot Model}
The robot is a stochastic dynamic system and can be represented by a Markov decision process (MDP) $\mathbf{M}_t = \langle S, A, P, R, F\rangle$ with the following components:\\
\noindent {---} $S$ is the discrete state space, representing the mean of the Gaussian distribution of the robot position. The mean of the robot's position is discretized and clipped to the closest grid cell in the grid map $\mathbf{G}_t$ to avoid an infinite state space.
\noindent {---} $A$ is a set of actions. We consider eight actions that allow the robot to move horizontally, vertically, and diagonally to reach its eight neighboring grid cells. A low-level controller is used to map the actions into the robot command.
\noindent {---} $P\colon S \times A \times S \to [0, 1]$ is the transition probability function, where $P(\cdot \mid s, a)$ represents the probability distribution over next states given an action $a$ taken at the current state $s$.
For example, for the move-up action, the robot has a high probability of moving up one cell, but it also has a small probability of moving to the upper-left or upper-right cell.
\noindent {---} $R \colon S \times A \times S \to \mathbb{R}$ is the reward function,
where $R(s, a, s')$ is the reward for executing action $a \in A$ at state $s \in S$ and reaching next state $s' \in S$.
\noindent {---} $F \subset S$ is the set of (intermediate) goal states, which are determined on the fly, as described in Section V-C.
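The following minimal Python container mirrors this tuple; the concrete field types are illustrative assumptions, not the data structures of our implementation.
\begin{verbatim}
# Minimal container mirroring the MDP tuple M_t = <S, A, P, R, F>;
# field types are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List, Set, Tuple

State = Tuple[int, int]   # grid cell holding the mean robot position
Action = int              # 8 moves: horizontal, vertical, diagonal

@dataclass
class MDP:
    states: Set[State]
    actions: List[Action]
    # P[(s, a)] -> {s': prob}; e.g. most mass on "up" and a little
    # on "upper-left"/"upper-right" for the move-up action
    P: Dict[Tuple[State, Action], Dict[State, float]]
    R: Callable[[State, Action, State], float]
    goals: Set[State]     # intermediate goal states F, set on the fly
\end{verbatim}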
\subsection{Reward Shaping}
To compute policies that can drive the robot to the frontier region $\mathbb{F}_t$ or visibility region $\mathbb{V}_t$, for exploration or re-observation, we define two reward functions accordingly.
\subsubsection{Reward function for reaching $\mathbb{F}_t$}
The reward function $R(s, a, s')$ is designed as:
\begin{equation}
R(:, :, s') = P(\mathbf{x} \in e_j \mid s') \cdot P(c_T \mid \mathbb{E}, r_i) \cdot \lvert e_j \rvert,
\label{eq: F reward}
\end{equation}
where $P(\mathbf{x} \in e_j \mid s')$ is the probability of the robot being at frontier edge $e_j$ if its mean position is $s'$, $P(c_T \mid \mathbb{E}, r_i)$ is the probability of finding the target object class $c_T$ in the geometric room $r_i$ where edge $e_j$ lies, given the current evidence $\mathbb{E}$, and $\lvert e_j \rvert$ is the size of the frontier edge, representing the possible information gain from exploring $e_j$. $P(\mathbf{x} \in e_j \mid s')$ can be calculated by first discretizing the robot's Gaussian position distribution (with mean at $s'$) based on $\mathbf{G}_t$ and then
summing up the probability of the robot being at each cell that belongs to $e_j$. $P(c_T \mid \mathbb{E}, r_i)$ is calculated using the Bayesian network, as discussed in Section \ref{sec: semantic prior}.
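A sketch of evaluating this reward is given below; the per-cell Gaussian mass and the Bayesian-network probability are assumed to be computed elsewhere, and all names are illustrative.
\begin{verbatim}
# Sketch of the frontier-reward evaluation above; `cell_probs` is the
# discretized Gaussian position mass (mean s') per grid cell, and
# `room_prob` is P(c_T | E, r_i) from the Bayesian network.
def frontier_reward(edge_cells, cell_probs, room_prob):
    # P(x in e_j | s'): robot position mass summed over the edge cells
    p_on_edge = sum(cell_probs.get(c, 0.0) for c in edge_cells)
    # |e_j| approximated by the number of cells on the frontier edge
    return p_on_edge * room_prob * len(edge_cells)
\end{verbatim}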
\subsubsection{Reward function for reaching $\mathbb{V}_t$}
The reward function $R(s, a, s')$ is designed as:
\begin{equation}
R(:, :, s') = P(\mathbf{x} \in \mathbb{V}_t \mid s'),
\label{eq: v reward}
\end{equation}
which is the probability of the robot being in visibility region $\mathbb{V}_t$ if its mean position is $s'$.
\subsection{Goal Determination}
\label{sec: goal}
We use an optimistic approach in determining the intermediate goal. If the probability of the object of interest $o_I$ being in the target object category $c_T$ exceeds a threshold $\tau$, i.e., $p_I(c_T) > \tau$, then the intermediate goal is to re-observe the object of interest $o_I$, and the reward function is as defined in \eqref{eq: v reward}. Otherwise, the intermediate goal is to explore the frontier region, and the reward function is defined as \eqref{eq: F reward}.
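The optimistic rule amounts to a simple branch, sketched below with illustrative names; $p_I$ is the class distribution of the object of interest.
\begin{verbatim}
# Sketch of the optimistic goal-selection rule (names illustrative).
def select_goal(p_I, c_T, tau):
    if p_I[c_T] > tau:
        return "reobserve"   # goal: visibility region, reward (v)
    return "explore"         # goal: frontier region, reward (F)
\end{verbatim}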
\subsection{Planner}
The MDP $\text{M}_t$ and the selected reward function $R$ are fed into a planner based on the Real-Time Dynamic Programming (RTDP) algorithm \cite{smith2006focused} to compute an optimal policy $\pi^*$ that maximizes the expected sum of rewards, i.e., the value function $\mathcal{V}$. The value function $\mathcal{V}^\pi$ of a state $s \in S$ under policy $\pi$ is defined as follows:
\begin{align*}
\mathcal{V}^\pi(s) = \operatorname{E}_{\pi}\left[\sum_{t = 0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1})\right],
\end{align*}
where $\pi \colon S \to A $ is a deterministic policy over $\mathbf{M}_t$ mapping the state into an action, and $\gamma \in [0, 1)$ is a discounting factor. The optimal policy $\pi^\ast$ is computed as follows: for all~$s \in S$,
\begin{align*}
\pi^\ast(s) = \argmax_{\pi \in \Pi} \mathcal{V}^\pi(s).
\label{eq:optimal_policy_maxprob}
\end{align*}
The RTDP algorithm allows us to compute a semi-optimal policy in a short time\footnote{Unlike more traditional approaches such as value iteration \cite{bellman1957markovian}.}. As the robot carries out the semi-optimal policy, the policy is continuously improved by the RTDP algorithm, with the current robot mean position as the initial state $s_0$, and converges to the optimal policy.
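For illustration, a bare-bones RTDP trial over the MDP sketch above looks as follows; this is a simplification of the focused RTDP variant cited above, with illustrative hyper-parameters.
\begin{verbatim}
# Bare-bones RTDP trial: back up the value at the visited state,
# act greedily, and sample a successor until a goal is reached.
import random

def rtdp_trial(mdp, V, s0, gamma=0.95, max_steps=500):
    s = s0
    for _ in range(max_steps):
        if s in mdp.goals:
            break
        q = {a: sum(p * (mdp.R(s, a, s2) + gamma * V.get(s2, 0.0))
                    for s2, p in mdp.P[(s, a)].items())
             for a in mdp.actions}
        a_star = max(q, key=q.get)
        V[s] = q[a_star]                      # Bellman backup
        succ, probs = zip(*mdp.P[(s, a_star)].items())
        s = random.choices(succ, weights=probs)[0]
    return V
\end{verbatim}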
\subsection{Adaptation}
The fused map $E_t$, frontier region $\mathbb{F}_t$, and visibility region $\mathbb{V}_t$ are updated at every time instant $t$ based on the ever-improving semantic SLAM results. Consequently, once the robot reaches an intermediate goal state, the MDP model $\text{M}_t$ needs to be updated based on the new fused map $E_t$. We call this process the {\em adaptation} of the MDP model. The corresponding policy $\pi$ then also needs to be updated.
Specifically, the following components are adapted: (a) the discrete state space $S$ to match the changing grid map $\mathbf{G}_t$, (b) the transition probability function $P$, (c) the reward function $R$ based on Eqs. \eqref{eq: F reward} and \eqref{eq: v reward}, and (d) the set of intermediate goal states $F$ as $\mathbb{F}_t$ and $\mathbb{V}_t$ change. The RTDP planner takes the updated MDP model $\text{M}_t$ to generate a new policy.
\section{EXPERIMENTS}
We have performed experiments on the Matterport3D (MP3D) \cite{chang2017matterport3d} dataset using the Habitat \cite{savva2019habitat} simulator. The MP3D dataset contains 3D scenes of common indoor environments, and the Habitat simulator allows the robot to navigate the virtual 3D scenes. Given the towel as the target object category, the robot's objective is to find any instance of the target with a confidence level greater than $1 - \epsilon = 0.99$. The action space of the robot is continuous, consisting of its angular and linear velocities. An episode is successful only if the agent stops its motion once it has identified the target, within a specific time budget (1K seconds). Five episodes are run for each method, with the robot placed at a random position at the beginning of each episode. Two snapshots of the MP3D scene are given in Fig. \ref{fig: snap shots}. This particular scene is $6.4m\times 8.4m$ in size and has one level, ten rooms, and 187 objects. The robot start positions and the target object instances are visualized in Fig. \ref{fig:start and goal}, represented as blue boxes and yellow diamonds, respectively.
The accompanying video shows the robot's operations to find target objects with different starting locations.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{Figures/view1.png}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{Figures/view3.png}
\end{subfigure}
\caption{Two snapshots of the MP3D scene.}
\label{fig: snap shots}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/GoalAndStart.png}
\caption{Target objects (yellow diamonds) and robot start positions (blue boxes).}
\label{fig:start and goal}
\end{figure}
\subsection{Semantic SLAM results}
We present the semantic SLAM results obtained during one episode. Our evaluation focuses on the accuracy and uncertainty of the collected results.
\subsubsection{Accuracy}
We calculate the mean and the median of the error between the predicted objects' position and the ground-truth objects' position:
\begin{equation*}
\text{Mean} =\frac {\sum _{i=1}^{n}\lVert{\hat {p}}_{i}-p_{i}\rVert}{n}, \hspace{2mm} \text{Median} = \operatorname {median}(\lVert{\hat {p}}_{i}-p_{i}\rVert),
\end{equation*}
\noindent where $n$ is the current number of objects, $\hat {p}_i$ is the estimated object position, and $p_i$ is the ground-truth object position. Their variation with respect to time is plotted in Fig. \ref{fig: position error}. For reference, the number of identified objects at each time instant is also plotted in Fig. \ref{fig: obj num}. We can see that the error increases over the first few seconds as new objects are identified. Nonetheless, as more observations come in, the error decreases and converges. We can also see that the mean error is larger than the median. This is because some objects receive only one observation; as a result, their positions are never updated, contributing a large error. For this reason, the median error, which stays at a reasonable level, is the more sensible metric in our case.
In the same spirit, we also calculate the cross entropy between the predicted and ground-truth object classes: $-\frac{1}{n}\sum_{i=1}^{n}\sum _{c\in {\mathbb {C}}}p^{gt}_i(c)\,\log p_i(c)$, where $p_i(\cdot)$ is the predicted object class distribution and $p_i^{gt}(\cdot)$ is the ground-truth class distribution, which takes value one at the correct object class and zero elsewhere. The results are plotted in Fig. \ref{fig: cross entropy}. The cross entropy gradually decreases with time, indicating that the predicted object classes converge to the correct ones.
\subsubsection{Uncertainty}
Though we do not claim our method to be an active SLAM method, we observe a decrease in semantic map uncertainty as the robot progresses. The average A-opt (sum of the covariance matrix eigenvalues), D-opt (product of the covariance matrix eigenvalues), and E-opt (largest covariance matrix eigenvalue) of the map objects' position covariances are calculated. Their evolution over time is plotted in Fig. \ref{fig: obj covariance}. The spikes in the graph indicate the identification of new objects and hence increased position uncertainty. However, as time goes by and more observations come in, all three metrics are kept at a low level. This shows that the robot can estimate the objects' positions fairly confidently.
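These three summaries can be computed directly from the per-object covariance eigenvalues, as in the sketch below (the input layout is an assumption).
\begin{verbatim}
# Average A-opt (sum), D-opt (product), and E-opt (max) of the
# eigenvalues of each object's position covariance matrix.
import numpy as np

def optimality_metrics(covariances):
    eigs = [np.linalg.eigvalsh(c) for c in covariances]  # symmetric PSD
    a_opt = np.mean([e.sum() for e in eigs])
    d_opt = np.mean([e.prod() for e in eigs])
    e_opt = np.mean([e.max() for e in eigs])
    return a_opt, d_opt, e_opt
\end{verbatim}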
Fig. \ref{fig: covariance map} gives a more intuitive representation. In Fig. \ref{fig: covariance map}, we plot Gaussian functions whose means and covariances are set to the estimated object position means and covariances; each ``bell'' in the plot therefore represents one object. Comparing the results obtained at time instants $t=8s$ and $t=70s$, we can see that at $t=70s$ the peaks of the bells increase and their bases shrink, indicating a more certain estimation of the objects' positions.
The entropy of the predicted object class is also calculated: $-\frac{1}{n}\sum_{i=1}^{n}\sum _{c\in {\mathbb {C}}}p_i(c)\,\log p_i(c)$
and visualized in Fig. \ref{fig: entropy}. The result suggests that as time progresses, the robot is more and more certain about the object class that it predicted.
\begin{figure}
\centering
\includegraphics[width=0.79\linewidth]{Figures/error.pdf}
\caption{Mean and median of the error between the predicted objects' position and the ground-truth objects' position.}
\label{fig: position error}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.79\linewidth]{Figures/obj_num.pdf}
\caption{Number of identified objects with respect to time.}
\label{fig: obj num}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.79\linewidth]{Figures/cross_entropy.pdf}
\caption{Cross entropy between the predicted objects' classes and the ground-truth objects' classes.}
\label{fig: cross entropy}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.79\linewidth]{Figures/opt.pdf}
\caption{The evolution of the object position covariance with respect to time.}
\label{fig: obj covariance}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{1\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/old_map.pdf}
\caption{$t=8s$.}
\end{subfigure}
\begin{subfigure}[b]{1\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/new_map.pdf}
\caption{$t=70s$.}
\end{subfigure}
\caption{Object position covariance at time instants $t=8s$ and $t=70s$.}
\label{fig: covariance map}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.79\linewidth]{Figures/entropy.pdf}
\caption{The evolution of the predicted object class entropy with respect to time.}
\label{fig: entropy}
\end{figure}
\subsection{Planning results}
We evaluate the performance of our planning method against two other navigation strategies: \textbf{FE-SS}, a frontier exploration method \cite{613851} with rewards defined by \eqref{eq: F reward} and equipped with our custom semantic SLAM; and
\textbf{Ours-NS}, an ablation of our method without semantic prior knowledge, using a uniform reward. To evaluate all methods, we report the following metrics: \\
\noindent {\bf Success}: percentage of successful episodes\\
\noindent {\bf Average path length}: average length of the path taken by the agent in an episode.\\
\noindent {\bf Success weighted by path length (SPL)} \cite{anderson2018evaluation}:
$\frac{1}{N}\sum_{i=1}^{N}S_i \frac{l_i}{\max(p_i, l_i)}$,
\noindent where $l_i$ is the length of the shortest path from the start position to the visibility region of the target instance in episode $i$, $p_i$ is the length of the path taken by the agent in episode $i$, and $S_i$ is a binary indicator of success in episode $i$ (a computational sketch of SPL follows this list).\\
\noindent {\bf Average planning time}: average time spent on planning (excluding the time in action).
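As referenced in the SPL item above, the metric can be computed per episode as in this sketch (the list-based input layout is an assumption):
\begin{verbatim}
# SPL of Anderson et al.: success weighted by (shortest / taken)
# path length, averaged over episodes.
def spl(successes, shortest_lengths, path_lengths):
    n = len(successes)
    return sum(s * l / max(p, l) for s, l, p in
               zip(successes, shortest_lengths, path_lengths)) / n
\end{verbatim}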
The testing results are summarized in Table \ref{tab:average}. Our method outperforms the other methods in success rate, SPL, and path length by a large margin. Although our method is not guaranteed to dominate in planning time, it still shows an advantage, with the shortest average planning time over all episodes.
The ablation study is conducted by denying our method access to the semantic prior knowledge. A significant performance drop is observed on all metrics, which demonstrates the efficacy of using semantic prior knowledge to guide the search for the target object.
\begin{table}[htbp]
\caption{Comparison study}
\centering
\begin{tabular}{c c c c c}
\toprule
Method & Success & Path length (m) & SPL & Planning time (s)\\
\midrule
Ours & \textbf{0.96} & \textbf{4.984} & \textbf{0.46} & \textbf{80.338} \\
FE-SS & 0.80 & 6.796 & 0.39 & 105.514 \\
Ours-NS & 0.68 & 8.726 & 0.41 & 97.716 \\
\bottomrule
\end{tabular}
\label{tab:average}
\end{table}
\section{Conclusions}
We presented a novel approach to tackle the open problem of enabling
a robot to search for a semantic object in an unknown
and GPS-denied environment. Our approach combines
semantic SLAM, Bayesian Networks, Markov Decision Process,
and real-time dynamic planning. The testing results on the Matterport3D dataset demonstrate both the
effectiveness and the efficiency of our approach. Moreover, while our approach is unique in incorporating semantic object information to search for semantic targets, to evaluate its motion-planning performance we compared it to a non-semantic baseline planning method and conducted an ablation study. The results show that our approach achieves a higher success rate, shorter path length, and shorter planning time. In future work, we will consider extending our approach to more complex, compound semantic tasks and to tasks that require the robot to interact with objects.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13276",
"language": "en",
"timestamp": "2023-02-28T02:14:53",
"url": "https://arxiv.org/abs/2302.13276",
"yymm": "2302"
} | \section{Introduction}
Let $G=(V,E)$ be a graph. We say that $G$ is an \emph{intersection graph} of convex sets in $\mathbb{R}^d$ if there is a family $\mathcal{F}$ of convex sets in $\mathbb{R}^d$ and a bijection $V\rightarrow \mathcal{F}$ mapping each vertex $v_i$ to a set $s_i$ with the property that the sets $s_i$ and $s_j$ intersect if and only if the corresponding vertices $v_i$ and $v_j$ are connected in $G$, that is, $\{v_i,v_j\}\in E$. Such graphs are instances of \emph{geometric intersection graphs}, whose study is a core theme of discrete and computational geometry. Historically, intersection graphs have mainly been considered for convex sets in~$\mathbb{R}^1$, in which case they are called \emph{interval graphs}, or for convex sets or segments in the plane.
A fundamental computational question for geometric intersection graphs is the \emph{recognition problem} defined as follows: given a graph $G$, and some (infinite) collection of geometric objects $C$, decide whether $G$ is an intersection graph of objects of $C$. While the recognition problem for interval graphs can be solved in linear time \cite{Interval}, the recognition of segment intersection graphs in the plane is significantly harder. In fact, Matou\v{s}ek and Kratochv\'{i}l have shown that this problem is complete for the complexity class $\exists\mathbb{R}$ \cite{Kratochvil}. Their proof was later simplified by Schaefer \cite{Schaefer}, see also the streamlined presentation by Matou\v{s}ek \cite{Matousek}.
The complexity class $\exists\mathbb{R}$ was introduced by Schaefer and \v{S}tefankovi\v{c} \cite{ETR}. It can be thought of as an analogue of NP over the reals. More formally, the class is defined via a canonical problem called ETR, short for \emph{Existential Theory of the Reals}. The problem ETR is a decision problem whose input consists of an integer $n$ and a sentence of the form
\[\exists X_1,\ldots,X_n\in\mathbb{R}:\varphi(X_1,\ldots,X_n),\]
where $\varphi$ is a quantifier-free formula consisting only of polynomial equations and inequalities connected by logical connectives. The decision problem is to decide whether there exists an assignment of real values to the variables $X_1,\ldots,X_n$ such that the formula $\varphi$ is true.
It is known that $\mathsf{NP}\subseteq\exists\mathbb{R}\subseteq \mathsf{PSPACE}$, where both inclusions are conjectured to be strict. Many problems in computational geometry have been shown to be $\exists\mathbb{R}$-complete, such as the realizability of abstract order types \cite{Mnev}, the art gallery problem \cite{ArtGallery}, the computation of rectilinear crossing numbers \cite{Bienstock}, geometric embeddings of simplicial complexes \cite{Embeddings}, and the recognition of several types of geometric intersection graphs \cite{Cardinal,Evans,Kang,McDiarmid}.
In this work, we extend the recognition problem of intersection graphs of convex sets to the recognition problem of skeletons of nerves of convex sets. Let us introduce the relevant notions. An (abstract) \emph{simplicial complex} on a finite ground set $V$ is a family of subsets of~$V$, called \emph{faces}, that is closed under taking subsets. The \emph{dimension} of a face is the number of its elements minus one. The dimension of a simplicial complex is the maximum dimension of any of its faces. In particular, a 1-dimensional simplicial complex is just a graph.
The \emph{$k$-skeleton} of a simplicial complex $K$ is the subcomplex of all faces of dimension at most $k$.
Let $\mathcal{F}$ be a family of convex sets in $\mathbb{R}^d$. The \emph{nerve} of $\mathcal{F}$, denoted by $N(\mathcal{F})$, is the simplicial complex with ground set $\mathcal{F}$ where $\{F_1,\ldots,F_m\}\subset\mathcal{F}$ is a face whenever $F_1\cap\ldots\cap F_m\neq\emptyset$.
In other words, the intersection graph of a family $\mathcal{F}$ of convex sets is the $1$-skeleton of the nerve~$N(\mathcal{F})$. Consider now the following decision problem, which we denote by $R(k,j,d)$: given a simplicial complex $K$ by its maximal faces,
decide whether there exists a family $\mathcal{F}$ of $j$-dimensional convex sets in $\mathbb{R}^d$ such that $K$ is the $k$-skeleton of $N(\mathcal{F})$.
In some cases, the $k$-skeleton of a nerve of convex sets uniquely determines the entire nerve: recall \emph{Helly's theorem}~\cite{Helly} which states that for a finite family $\mathcal{F}$ of convex sets, if every $d+1$ of its members have a common intersection, then all sets in $\mathcal{F}$ have a common intersection. Phrased in the language of nerves, this says that if the $d$-skeleton of the nerve $N(\mathcal{F})$ is complete, then $N(\mathcal{F})$ is an $|\mathcal{F}|$-simplex. In other words, we can retrieve the nerve of a family of convex sets in $\mathbb{R}^d$ from its $d$-skeleton by filling in higher-dimensional faces whenever all of their $d$-dimensional faces are present.
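To make this retrieval procedure concrete, the following Python sketch fills in higher-dimensional faces from a given $d$-skeleton; the brute-force enumeration is purely illustrative.
\begin{verbatim}
# Reconstruct a nerve from its d-skeleton via Helly's theorem: a
# vertex set spans a face iff every (d+1)-subset of it is a face.
# Brute-force enumeration, for illustration only; vertices are
# assumed hashable and sortable.
from itertools import combinations

def complete_nerve(vertices, skeleton_faces, d):
    """skeleton_faces: set of frozensets, the faces of dim <= d."""
    faces = set(skeleton_faces)
    for m in range(d + 2, len(vertices) + 1):  # candidate face sizes
        for cand in combinations(sorted(vertices), m):
            if all(frozenset(sub) in faces
                   for sub in combinations(cand, d + 1)):
                faces.add(frozenset(cand))
    return faces
\end{verbatim}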
\begin{remark}
The following Helly-type theorem
implies the analogous statement that a nerve of $j$-dimensional convex sets can be retrieved from its $(j+1)$-skeleton.
\end{remark}
\begin{theorem}\label{thm:Helly-type}
Let $\mathcal{F}$ be a finite family of $j$-dimensional convex sets in $\mathbb{R}^d$. Assume that any $j+2$ or fewer members of $\mathcal{F}$ have a common intersection. Then all sets in $\mathcal{F}$ have a common intersection.
\end{theorem}
This result is likely known; however, we could not find a reference for it, so we include a short proof. The proof requires some algebraic topology, in particular the notion of \emph{homology}. For background on this, we refer to the many textbooks on algebraic topology, for instance the excellent work by Hatcher \cite{Hatcher}. For readers not familiar with this concept, the idea of the proof can still be followed via the intuitive notion that $H_k(X)=0$ means that the space $X$ has no holes of dimension $k$.
\begin{proof}
We want to show that the nerve $N(\mathcal{F})$ is an $|\mathcal{F}|$-simplex. Consider a subfamily $\mathcal{F}'\subseteq\mathcal{F}$ and its induced sub-nerve $N(\mathcal{F}')$. By the nerve theorem (see e.g.\ \cite{Hatcher}, Corollary 4G.3), the sub-nerve $N(\mathcal{F}')$ is homotopy-equivalent to the union $\bigcup\mathcal{F}'$ of the sets in $\mathcal{F}'$, implying that the two objects have isomorphic homology groups. As $\bigcup\mathcal{F}'$ has dimension at most $j$, and $\mathcal{F}$ (and thus also $\mathcal{F}'$) is finite, we have that $H_k(\bigcup\mathcal{F}')=0$ for all $k\geq j+1$. Thus $H_k(N(\mathcal{F}'))=0$ for all $k\geq j+1$ and all $\mathcal{F}'\subseteq\mathcal{F}$. On the other hand, the assumption that any $j+2$ or fewer sets have a common intersection implies that the $(j+1)$-skeleton of $N(\mathcal{F})$ is complete and thus $H_k(N(\mathcal{F}'))=0$ for all $1\leq k\leq j$ and \emph{all} subfamilies $\mathcal{F}'\subseteq\mathcal{F}$. Thus, $N(\mathcal{F})$ must be a simplex.
\end{proof}
\section{Containment results}
We start by showing that all considered problems are in the complexity class $\exists\mathbb{R}$.
\begin{theorem}
For all $k,j$ and $d$, we have $R(k,j,d)\in\exists\mathbb{R}$.
\end{theorem}
\begin{proof}
Similarly to $\mathsf{NP}$, containment in \ensuremath{\exists\mathbb{R}}\xspace can be proven by providing a certificate consisting of a polynomial number of real values, and a verification algorithm running on the real RAM computation model which verifies these certificates~\cite{Erickson}. As a certificate, we use the coordinates of some point in $\mathbb{R}^d$ for each maximal face of the input complex $K$. These points then describe a family $\mathcal{F}$ of convex sets: Each set $F$ is the convex hull of all points representing maximal faces $S$ of $K$ such that $F\in S$.
Note that if $K$ is the $k$-skeleton of $N(\mathcal{F})$ for some family $\mathcal{F}$ of $j$-dimensional convex sets in $\mathbb{R}^d$, such a certificate must exist: The points can be placed in the maximal intersections of $\mathcal{F}$, and shrinking each set to the convex hull of these points cannot change $N(\mathcal{F})$.
Such a certificate can be verified by testing that each set $F$ is $j$-dimensional (e.g., using linear programming), and by testing that the $k$-skeleton of $N(\mathcal{F})$ is indeed $K$. The latter can be achieved in polynomial time by computing the intersection of each subfamily $\mathcal{F}'\subseteq \mathcal{F}$ of at most $\min(k+1,d+1)$ sets. If $k\leq d$, this determines the $k$-skeleton of $N(\mathcal{F})$. If $k>d$, the $k$-skeleton of $N(\mathcal{F})$ is determined by the $d$-skeleton of $N(\mathcal{F})$ by Helly's theorem~\cite{Helly}.
\end{proof}
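For intuition, the core primitive of this verification, testing whether the convex hulls of finitely many point sets share a common point, can be phrased as a linear-programming feasibility problem, as in the sketch below (scipy is an assumed dependency; this is only the building block, not the verification algorithm itself).
\begin{verbatim}
# LP feasibility test: the hulls of point sets P_1,...,P_m meet iff
# there are convex-combination weights with P_1 l^1 = P_i l^i for all i.
import numpy as np
from scipy.optimize import linprog

def hulls_intersect(point_sets):
    """point_sets: list of (n_i x d) arrays; True iff hulls meet."""
    d = point_sets[0].shape[1]
    sizes = [P.shape[0] for P in point_sets]
    total, m = sum(sizes), len(point_sets)
    off = np.cumsum([0] + sizes)
    rows, rhs = [], []
    for i, n in enumerate(sizes):        # each weight block sums to 1
        r = np.zeros(total)
        r[off[i]:off[i] + n] = 1.0
        rows.append(r); rhs.append(1.0)
    for i in range(1, m):                # P_1 l^1 - P_i l^i = 0
        for k in range(d):
            r = np.zeros(total)
            r[off[0]:off[1]] = point_sets[0][:, k]
            r[off[i]:off[i + 1]] = -point_sets[i][:, k]
            rows.append(r); rhs.append(0.0)
    res = linprog(np.zeros(total), A_eq=np.array(rows),
                  b_eq=np.array(rhs), bounds=(0, None))
    return res.status == 0               # 0 = feasible solution found
\end{verbatim}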
\begin{lemma}
$R(k,1,1)$ is in $\mathsf{P}$ for any $k\geq 1$.
\end{lemma}
\begin{proof}
$R(1,1,1)$ is equivalent to recognizing interval graphs, and can thus be solved in polynomial time (see~\cite{Interval}). Since we are considering a family $\mathcal{F}$ of intervals in $\mathbb{R}^1$, the $1$-skeleton of $N(\mathcal{F})$ uniquely determines $N(\mathcal{F})$: by Helly's theorem, $N(\mathcal{F})$ must be the clique complex of its $1$-skeleton. Thus, $R(k,1,1)$ can be solved as follows: Build the graph $G$ given by the $1$-skeleton of the input complex $K$. Test the following four properties: (i) $G$ is an interval graph, (ii) $K$ is at most $k$-dimensional, (iii) every maximal face of $K$ is a clique of~$G$, and (iv) every clique of $G$ with at most $k+1$ vertices is contained in some maximal face of $K$. Return yes if the answer to all these tests is yes, otherwise return no. All tests can be performed in polynomial time, thus $R(k,1,1)\in\mathsf{P}$.
\end{proof}
For some constellations of $k,j,d$, any simplicial complex of dimension at most $k$ can be realized as the $k$-skeleton of the nerve of $j$-dimensional convex sets in $\mathbb{R}^d$. In this case we say that the problem $R(k,j,d)$ is \emph{trivial}. Evans et al. prove triviality for $R(1,2,3)$:
\begin{lemma}[\cite{Evans}]
$R(1,2,3)$ is trivial.
\end{lemma}
Furthermore, we can show that if the dimensions $j$ and $d$ get large enough compared to~$k$, the problem also becomes trivial.
\begin{lemma}
$R(k,2k+1,2k+1)$ is trivial.
\end{lemma}
\begin{proof}
Wegner has shown that every $k$-dimensional simplicial complex is the nerve of convex sets in $\mathbb{R}^{2k+1}$ \cite{Wegner}. In particular, it is also the $k$-skeleton of a nerve.
\end{proof}
Finally, we prove the following lifting result.
\begin{lemma}
If $R(k,j,d)$ is trivial, $R(k,j',d')$ is trivial for all $d'\geq d$ and $j\leq j'\leq d'$.
\end{lemma}
\begin{proof}
We prove that both $j$ and $d$ can be increased by one without destroying triviality, from which the lemma follows.
Any simplicial complex that can be realized in dimension $d$ can also be realized in a $d$-dimensional subspace of $\mathbb{R}^{d+1}$, thus increasing $d$ by one preserves triviality.
To see that $j$ can be increased, consider a realization of a simplicial complex as the $k$-skeleton of the nerve of a family $\mathcal{F}$ of $j$-dimensional convex sets in $\mathbb{R}^d$. Now, consider any two subfamilies $\mathcal{F}_1,\mathcal{F}_2$ of $\mathcal{F}$ such that $\left(\bigcap_{F\in \mathcal{F}_1}F\right) \cap \left(\bigcap_{F\in \mathcal{F}_2}F\right)=\emptyset$. The two intersections $\bigcap_{F\in \mathcal{F}_1}F$ and $\bigcap_{F\in \mathcal{F}_2}F$ must be at some distance $\epsilon$ from each other. Let $\epsilon_{min}$ be the minimum of all such $\epsilon$ over all pairs of subfamilies $\mathcal{F}_1,\mathcal{F}_2$. We extrude every object in $\mathcal{F}$ in some direction not yet spanned by the object, by some $\epsilon'$ small enough that no intersection $\bigcap_{F\in \mathcal{F}'}F$ for $\mathcal{F}'\subseteq \mathcal{F}$ grows by more than $\epsilon_{min}/3$. This process cannot introduce any additional intersections, and thus the nerve of this family of $(j+1)$-dimensional sets is the same as the nerve of $\mathcal{F}$. We conclude that triviality of $R(k,j,d)$ for $j<d$ implies triviality of $R(k,j+1,d)$.
\end{proof}
\section{Existing \texorpdfstring{$\exists\mathbb{R}$}{ER}-Hardness Results}
\begin{lemma}\label[lemma]{lem:k1d}
$R(k,1,d)$ is \ensuremath{\exists\mathbb{R}}\xspace-hard for $k\geq 1$ and $d\geq 2$.
\end{lemma}
\begin{proof}
For $k=1$ and $d=2$, this is equivalent to recognizing segment intersection graphs in the plane, which Schaefer \cite{Schaefer} proved to be \ensuremath{\exists\mathbb{R}}\xspace-hard by reduction from stretchability. Evans et al. \cite{Evans} generalize Schaefer's proof for intersection graphs of segments in $\mathbb{R}^3$ ($k=1$ and $d=3$). Their proof works by arguing that all segments of their constructed graph must be coplanar.
Since the argument implies coplanarity no matter the dimension of the ambient space, the proof also implies \ensuremath{\exists\mathbb{R}}\xspace-hardness for $k=1$ and $d>3$. Furthermore, for any ``yes''-instance of stretchability, the constructed graph can be drawn using segments with no triple intersections. Thus, the proof implies \ensuremath{\exists\mathbb{R}}\xspace-hardness for $R(k,1,d)$ for $k>1$, as well.
\end{proof}
Schaefer~\cite{Schaefer} furthermore proved that $R(1,2,2)$ is \ensuremath{\exists\mathbb{R}}\xspace-hard. In the proof of this result, again no triple intersections occur in the representations of ``yes''-instances. Thus the same proof applies to the following lemma.
\begin{lemma}\label[lemma]{lem:k22}
$R(k,2,2)$ is \ensuremath{\exists\mathbb{R}}\xspace-hard for any $k\geq 1$.
\end{lemma}
This solves the complexity status of $R(1,j,d)$ for all $j$ and $d$. We summarize these results in the following corollary.
\begin{corollary}
For $k=1$, $R(k,j,d)$ is \begin{itemize}
\item in $\mathsf{P}$, if $j=d=1$.
\item \ensuremath{\exists\mathbb{R}}\xspace-complete, if $j=1$ and $d>2$, or if $j=d=2$.
\item trivial in all other cases.
\end{itemize}
\end{corollary}
\section{Lifting to Higher Dimensions}
We can extend a lifting result due to Tancer \cite{Tancer} to our setting. For this, the \emph{suspension} of a simplicial complex $K$ with ground set $V$ and face family $F$ is the simplicial complex $S(K)$ with ground set $V\cup\{a,b\}$ and faces $F\cup\{f\cup\{a\}\mid f\in F\}\cup\{f\cup\{b\}\mid f\in F\}$.
\begin{lemma}
Let $K$ be a simplicial complex and let $j\geq d-1$. Then $K$ is a nerve of $j$-dimensional convex sets in $\mathbb{R}^d$ if and only if $S(K)$ is a nerve of $(j+1)$-dimensional convex sets in $\mathbb{R}^{d+1}$.
\end{lemma}
\begin{proof}
We first show that if $K$ is a nerve of convex sets in $\mathbb{R}^d$ then $S(K)$ is a nerve of convex sets in $\mathbb{R}^{d+1}$. For this, let $\mathcal{F}$ be a family of sets in $\mathbb{R}^d$ whose nerve is $K$ and embed them on the hyperplane $x_{d+1}=0$ in $\mathbb{R}^{d+1}$. For each set $F\in\mathcal{F}$ define $F'$ as the cartesian product of $F$ and the segment defined by $-2\leq x_{d+1}\leq 2$. Adding the hyperplanes $x_{d+1}=-1$ and $x_{d+1}=1$, it is easy to see that the nerve of the resulting set family is $S(K)$.
In the other direction, consider a family $\mathcal{F}'$ of $(j+1)$-dimensional convex sets in $\mathbb{R}^{d+1}$ whose nerve is $S(K)$. Let $A$ and $B$ be the convex sets that correspond to the vertices $a$ and $b$, respectively. As $a$ and $b$ are not connected in $S(K)$, the sets $A$ and $B$ must be disjoint. In particular, they can be separated by a hyperplane $h$. For each other set $F'\in\mathcal{F}'$, consider $F:=F'\cap h$ and let $\mathcal{F}$ be the family of these intersections. Note that $\mathcal{F}$ is a family of $j$-dimensional convex sets in $\mathbb{R}^d$. We claim that the nerve of $\mathcal{F}$ is $K$. Indeed, as $K$ is a subcomplex of $S(K)$, every face of $N(\mathcal{F})$ must be a face of $K$. On the other hand, for every face $f$ of $K$, there are points $p_a$ and $p_b$ in $A$ and $B$, respectively, which lie in the intersection corresponding to faces $f\cup\{a\}$ and $f\cup\{b\}$ of $S(K)$, respectively. The intersection of the segment $p_a p_b$ with $h$ lies in the intersection of the sets corresponding to $f$, showing that every face of $K$ must be a face of $N(\mathcal{F})$.
\end{proof}
Combined with the fact that the $d$-skeleton determines the entire nerve, we get the following reduction.
\begin{corollary}
Let $j\in\{d-1,d\}$. If $R(d,j,d)$ is $\exists\mathbb{R}$-hard, then so is $R(d+1,j+1,d+1)$.
\end{corollary}
Using the \ensuremath{\exists\mathbb{R}}\xspace-hardness of $R(2,1,2)$ and $R(2,2,2)$ implied by \Cref{lem:k1d,lem:k22}, we thus deduce the following theorem.
\begin{theorem}
For any $d\geq 2$ and $k\geq d$, the problems $R(k,d-1,d)$ and $R(k,d,d)$ are $\exists\mathbb{R}$-complete.
\end{theorem}
This strengthens a result of Tancer who has shown that $R(d,d,d)$ is NP-hard \cite{Tancer}.
\section{Conclusion}
We have introduced a generalization of the recognition problem of intersection graphs of convex sets and have seen that several existing results in the intersection graph literature imply stronger statements in this setting. In particular, the computational complexity of recognizing intersection graphs of convex sets is completely settled. For small $k,j,d$, the current state of knowledge is summarized in the tables in \Cref{fig:tables}. As can be seen, for many decision problems $R(k,j,d)$, the computational complexity is still open. We conjecture that these cases are either \ensuremath{\exists\mathbb{R}}\xspace-complete or trivial; determining which of the two remains an interesting open problem. Of course, analogous problems can be defined for objects other than convex sets, giving rise to many interesting open problems.
\begin{figure}[htbp]
\centering
\includegraphics[]{figs/tables.pdf}
\caption{The complexity status of $R(k,j,d)$ for $k\leq 4$ and $d,j\leq 8$. P denotes containment in~$\mathsf{P}$, \ensuremath{\exists\mathbb{R}}\xspace denotes \ensuremath{\exists\mathbb{R}}\xspace-completeness, T denotes triviality, and ? indicates open cases.}
\label{fig:tables}
\end{figure}
\FloatBarrier
|
{
"arxiv_id": "2302.13275",
"language": "en",
"timestamp": "2023-02-28T02:14:53",
"url": "https://arxiv.org/abs/2302.13275",
"yymm": "2302"
} | \section{Introduction}
With the popularization of digital cameras and storage devices, millions of images are taken every day and billions of images are hosted on photo-sharing websites and image search engines.
A natural problem with such gigantic image collections is how to retrieve relevant images for everyday users, which is well known as the image retrieval problem.
Though image retrieval shares a similar user-interaction mode with document retrieval (users provide a few keywords as a query, and the machine returns a list of relevant documents), image retrieval is more challenging, as the machine cannot directly use string matching to check whether the textual query matches the candidate images.
Current image search engines mainly rely on the surrounding texts of an image to represent the textual information conveyed in the image, thereby converting image retrieval into document retrieval.
However, surrounding texts are not always available or relevant to the image, which leaves a large number of images irretrievable or irrelevant.
In order to make all images retrievable and improve the relevance of retrieved images, the machine needs the ability to directly measure the image-query similarity by extracting information from the image itself. Though this sounds intuitive, it is a difficult task and far from being solved, for the following two reasons:
\begin{itemize}
\item Extracting semantic information from images is hard even with state-of-the-art hand-crafted image features (e.g., super-vector coding~\cite{SVcode}, Fisher vectors~\cite{fisherKernel}, spatial pyramid matching~\cite{SPM}, etc.).
\item The number of possible queries is huge if not infinite, so it is impractical to build classifiers query by query as in image classification tasks.
\end{itemize}
Recent significant progress in DNNs has shown the possibility and superiority of automatically learning representations from raw inputs such as images and texts. Inspired by the success of DNNs in image classification and word embedding tasks, we propose a unified DNN to model the image-query similarity. The proposed DNN unifies a Convolutional Neural Network (CNN) and a Word Embedding Network (WEN) to generate representations from images and queries respectively, where the final outputs of the CNN and the WEN reside in the same vector space and their inner product is defined as the image-query similarity. CNNs have shown their superiority over hand-crafted image features in extracting semantic information from images via automatically learned features~\cite{krizhevsky2012imagenet,ZF}. WENs have been successfully used in natural language processing tasks by learning low-dimensional vector representations of words~\cite{WordEmbed}; the query representation is modeled by a weighted linear combination of word vectors.
With the unified DNN, both image and query are mapped into the same feature vector space as illustrated in Figure~\ref{CSMframe}.
DNNs require a large amount of training data to learn their parameters.
Here, we utilize a large-scale clickthrough dataset collected from Bing image search as the training dataset, which consists of 23 million clicked image-query pairs over 11.7 million queries and 1 million images~\cite{Clickage}. The large number of queries, images, and their associations provides good coverage of the sample space. With this many training examples, there is no observable overfitting problem even without using dropout~\cite{krizhevsky2012imagenet}.
Qualitative results show that our learned CSM model constructs a meaningful common vector space for both images and queries. We further evaluate the learned DNN on an image retrieval task with 1000 queries. The quantitative results on image retrieval against several competing methods demonstrate the effectiveness of the proposed method.
The rest of the paper is organized as follows. Related work is presented in Section~\ref{s:relatedWork}, the unified DNN for joint image-query modeling and learning is introduced in Section~\ref{CSM}, and experimental results on a large-scale clickthrough dataset are presented in Section~\ref{experiment}. Finally, we conclude this work in Section~\ref{Conclusion}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth,page=1]{Fig/fig1}
\caption{The cross space mapping model. Both image space and query space are mapped to a common space, where images are represented as visual feature vectors and queries are represented as textual weight vectors.}
\label{CSMframe}
\end{figure}
\section{RELATED WORK}
\label{s:relatedWork}
As an important domain of information retrieval, image retrieval has been intensively studied for decades in both the academic and industrial communities~\cite{ImageSearch20}. However, current image retrieval systems still mainly rely on the surrounding texts of images to perform the retrieval task. Owing to the missing and noisy nature of surrounding texts, many research works have been proposed to use the image content to measure image-query similarity.
With continuously developing image content understanding techniques, especially the rebirth of the convolutional neural network, image content is gradually playing a more important role.
\subsection{Image annotation as intermediate step}
Automatic image annotation is the process by which a machine automatically assigns keywords to an image, and image retrieval is then performed over the annotated keywords. A typical image annotation pipeline first represents images with visual features and then predicts the keywords of images with machine learning algorithms. According to the algorithms used, image annotation approaches can be roughly divided into two categories, i.e., model-based approaches~\cite{vogel2004natural,tieu2004boosting} and data-driven approaches~\cite{annoSearch}.
In model-based approaches, image annotation is performed as a multi-class or multi-label classification problem, where a manually labeled dataset is used to learn models such as SVMs and boosting~\cite{vogel2004natural,tieu2004boosting}. Model-based approaches often work with thousands of categories, and are impractical to scale up to millions or more queries.
Compared with model-based approaches, data-driven approaches are not limited by the number of queries.
In a data-driven approach, the annotation of an image is assigned by propagating the annotations of its similar images~\cite{annoSearch}. Due to the limitations of low-level image features, data-driven approaches only work well for images with enough duplicates in the training set. It is worth mentioning that image annotation is an intermediate step of image retrieval, and queries still need to be compared with the annotations to accomplish the retrieval.
\subsection{Joint image and query modeling}
To avoid the intermediate step of image annotation, many works have studied how to jointly model images and queries, where the image-query similarity is directly estimated. There are two main directions in this area: one uses generative models and the other uses discriminative models.
Generative models are widely applied in joint image and query modeling, as they easily take different modalities into account. Different kinds of generative models have been proposed for joint image and query modeling, including latent Dirichlet allocation~\cite{barnard2003matching}, probabilistic latent semantic analysis~\cite{monay2004plsa}, hierarchical Dirichlet processes~\cite{yakhnenko2008annotating}, machine translation methods~\cite{duygulu2006object}, and the deep Boltzmann machine~\cite{Multimodal_DBM}.
As it is still difficult to learn probability distributions directly on raw images, hand-crafted image features are used in the modeling.
Discriminative models generally achieve better performance. In discriminative models, joint kernels over images and queries are defined and learned for ranking images~\cite{grangier2008discriminative,wsabie}. Though different image features and diverse kernel functions are considered in these works, their modeling ability is still limited by the visual features and the shallow structures. In~\cite{DeViSE}, a deep visual-semantic embedding model is proposed to measure image-query similarity with an automatically learned convolutional neural network. Unlike our method, it still requires ImageNet for supervised pretraining.
\section{CROSS SPACE MAPPING MODEL}
\label{CSM}
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth,page=1]{Fig/fig2}
\caption{Architecture of the unified DNN. The upper part is WEN modeling textual query mapping while the lower part is CNN modeling image mapping.}
\label{DNNframe}
\end{figure*}
In this section, the unified DNN for image-query similarity modeling is described, which accomplishes the cross-space mapping. We first introduce the CNN and the WEN separately for image and query modeling, and then unify these two networks into one DNN to define the image-query similarity. Finally, the training procedure for learning the DNN model parameters is introduced.
\subsection{CNN for Image Modeling}
Images are stored as raw pixels in the machine; we use a standard CNN~\cite{krizhevsky2012imagenet,lecun1989backpropagation} without softmax outputs for image modeling. The CNN contains seven layers with weights, including five convolutional layers and two fully-connected layers; three max-pooling layers follow the first, second, and fifth convolutional layers, and two local contrast normalization layers follow the first and second max-pooling layers. More details of these operations can be found in~\cite{krizhevsky2012imagenet}. The lower part of Figure~\ref{DNNframe} illustrates the architecture of the image part. Via the CNN, an image $I$ is mapped into a $d$-dimensional vector space, denoted as $F(I)\in\mathbb{R}^d$.
\subsection{WEN for Query Modeling}
Queries are stored as sets of words in the machine; word embedding~\cite{WordEmbed} is leveraged for query modeling.
To this end, we build a vocabulary $\mathcal{V}$ formed by the 50K words with the highest frequency in the training set, i.e., $|\mathcal{V}|$=50K.
With word embedding, a word $t\in\mathcal{V}$ is mapped into the $d$-dimensional space as $\bold{w}(t)\in\mathbb{R}^d$ using a lookup table, whose entries are learned in the training procedure. A query $Q$ is then mapped to the same space as $W(Q)\in\mathbb{R}^d$ by a weighted linear combination of its words' vectors, i.e.,
\begin{equation}
W(Q)=\frac{1}{|Q|}\sum_{t\in Q} \omega (t)\bold{w}(t)
\end{equation}
where $\omega(t)$ is the weight for word $t$, defined as the \emph{normalized idf weighting}:
\begin{equation}
\omega(t)=\frac{idf_t}{\sqrt{\sum_{t\in Q} idf_t^2}}
\end{equation}
where $idf_t=-\log(r_t)$ and $r_t$ is the fraction of corpus queries containing the word $t$.
The upper part of Figure~\ref{DNNframe} illustrates the architecture of the query part, which is a network with two layers: the first layer takes the bag-of-words representation of the query as input, and the second layer outputs the query embedding vector. The word embedding parameters are represented as the weights between the two fully-connected layers.
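The query mapping reduces to a few lines of code; the following sketch uses an illustrative embedding-matrix layout and assumes the idf statistics are precomputed.
\begin{verbatim}
# Sketch of the WEN query mapping: W(Q) is the normalized-idf-weighted
# average of the word vectors of Q. `vocab` (word -> index), `embed`
# (|V| x d), and `idf` are assumed inputs.
import numpy as np

def embed_query(query_words, vocab, embed, idf):
    words = [t for t in query_words if t in vocab]
    if not words:                     # out-of-vocabulary query
        return np.zeros(embed.shape[1])
    w = np.array([idf[t] for t in words])
    w = w / np.sqrt((w ** 2).sum())   # normalized idf weighting
    vecs = np.stack([embed[vocab[t]] for t in words])
    return (w[:, None] * vecs).sum(axis=0) / len(words)
\end{verbatim}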
\subsection{Image-Query Similarity}
With the image mapping $F(I)$ and the query mapping $W(Q)$, images and queries are both mapped into a common feature space, and the image-query similarity can be defined as their inner product, i.e.,
\begin{equation}
S(I,Q)=\langle F(I),W(Q)\rangle,\label{IQ}
\end{equation}
where $S(I,Q)$ is the image-query similarity.
As the output of the unified DNN model, $S(I,Q)$ can be used to determine whether image $I$ and query $Q$ are relevant, and can naturally be used to rank candidate images for a specific query.
\subsection{Training Data Preparing}
Given a clickthrough dataset denoted by $\{\mathcal{I},\mathcal{Q},M\}$, where $\mathcal{I}$ and $\mathcal{Q}$ refer to the image set and the query set respectively, $M$ is the click matrix representing the observed clicks between images and queries in the training set.
With the image-query similarity $S(I,Q)$, we further impose the constraint that clicked image-query pairs have larger similarity than unclicked ones:
\begin{equation}
S(I^+,Q)>S(I^-,Q) \quad \text{for all } I^+\in \mathcal{I}_Q \text{ and } I^-\in \mathcal{N}_Q,
\end{equation}
where $Q\in \mathcal{Q}$, $\mathcal{I}_Q=\{I:M(I,Q)=1,I\in\mathcal{I}\}$ is the clicked images of query $Q$, $\mathcal{N}_Q=\mathcal{I}-\mathcal{I}_Q$ is unclicked images of query $Q$.
For a web-scale image set, the unclicked image set of each query is often too large for direct optimization.
Thus, in practice the negative set $\mathcal{N}_Q$ is a subset sampled from the complement of $\mathcal{I}_Q$.
Here, we propose a preprocessing stage that attempts to sample better negative examples for the negative set.
Note that the click matrix $M$ is only partially observed; that is, non-clicked image-query pairs are not necessarily irrelevant.
As illustrated in Figure~\ref{imageCorresponding}, the bottom image is marked as irrelevant to \emph{dog} by $M$, although it is actually relevant.
Yet the top and bottom images share other common queries, such as \emph{neapolitan mastiff}, which suggests that the bottom image should be removed from the negative set of the query \emph{dog}.
Based on this idea, we denote the first-order image relationship matrix as $M_{1} = M\cdot M^{T}$, and the $n^{th}$-order image relationship matrix as $M_{n}=(M_{1})^{n}$.
In this paper, we utilize $M_{2}$ to remove potentially relevant images from the negative set of a query, and the final $\mathcal{N}_{Q}$ is sampled from the set $\{I: M(I,Q)=0 \;\&\; M_{2}(I^{+},I)=0,\; \forall I^{+}\in \mathcal{I}_{Q}\}$.
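The filtering rule can be realized with sparse matrix products, as in the sketch below; representing the click data as a scipy sparse matrix is an assumption made here for illustration.
\begin{verbatim}
# Sketch of negative-set filtering with M2 = (M M^T)^2; M is assumed
# to be a scipy.sparse |I| x |Q| click matrix.
import numpy as np

def negative_pool(M, q):
    """Indices of images that are safe negatives for query column q."""
    M1 = M @ M.T                      # first-order image relations
    M2 = M1 @ M1                      # second-order image relations
    clicked = M[:, q].toarray().ravel() > 0
    related = np.asarray(M2[clicked].sum(axis=0)).ravel() > 0
    return np.where(~clicked & ~related)[0]
\end{verbatim}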
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth,page=1]{Fig/fig3}
\caption{The images sharing same queries.}
\label{imageCorresponding}
\end{figure}
\subsection{Training Objective}
In order to measure the discrimination between $\mathcal{I}_Q$ and $\mathcal{N}_Q$, we define the inter-class scatter of query $Q$ as:
\begin{equation}
\begin{split}
m(Q;\theta_i,\theta_q) &= \min_{I^+\in \mathcal{I}_Q,\; I^-\in \mathcal{N}_Q} S(I^+,Q)-S(I^-,Q)\\
&= \min_{I^+\in \mathcal{I}_Q,\; I^-\in \mathcal{N}_Q} W(Q)\cdot \left(F(I^+)-F(I^-)\right),
\end{split}
\end{equation}
where $\theta_i$ and $\theta_q$ are the parameters of the image mapping and the query mapping, respectively. As the minimum score difference over all positive-negative image pairs of query $Q$, $m(Q;\theta_i,\theta_q)$ can be regarded as a margin, as in classification tasks, where a larger margin yields better discrimination.
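In vectorized form, the margin of a query is a single minimization, as sketched below with illustrative array shapes; the minimum over all pairs separates into a minimum over positives and a maximum over negatives.
\begin{verbatim}
# Margin m(Q): min over all pairs of W(Q) . (F(I+) - F(I-));
# F_pos, F_neg are (n x d) feature arrays (an assumed layout).
import numpy as np

def margin(W_Q, F_pos, F_neg):
    s_pos = F_pos @ W_Q      # S(I+, Q) for all positives
    s_neg = F_neg @ W_Q      # S(I-, Q) for all negatives
    return s_pos.min() - s_neg.max()
\end{verbatim}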
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth,page=1]{Fig/fig4}
\caption{Margins generated based on different textual query mapping.}
\label{margin}
\end{figure}
Obviously, the parameter vectors $\theta_i$ and $\theta_q$ jointly determine the margin given the data. Figure~\ref{margin} shows the different margins obtained with different query mappings under a fixed image mapping.
In fact, Figure~\ref{margin} can also be regarded as the case where the visual features are preselected for image retrieval: the goal is then to find the optimal query weight vector that maximizes the margin, while the visual feature mapping is fixed, e.g., SIFT or GIST.
However, a preselected visual feature mapping may not be able to distinguish the negative set from the positive set; the inter-class scatter is also influenced by the image mapping. Hence, our CSM learns the parameter vectors $\theta_i$ and $\theta_q$ simultaneously by enlarging the margins over all queries. The training objective of CSM is formulated as follows:
\begin{equation}
[\theta_i^*,\theta_q^*]=\mathop{\arg\max}_{\theta_i,\theta_q}\;\sum_{Q\in \mathcal{Q}} m(Q;\theta_i,\theta_q)
\end{equation}
To avoid trivial solutions, the norms of both $\theta_i$ and $\theta_q$ are constrained to be less than 1.
The DNN is trained by stochastic gradient descent with a batch size of 128 queries; for each query, the loss is defined as the negative of its margin.
In the training process, the update rule for network parameters $\theta = \{\theta_i , \theta_q\}$ is formulated as:
\begin{equation}
\begin{cases}
&v_{t+1}=\alpha \cdot v_t - \beta \cdot \varepsilon \cdot \theta_t - \varepsilon \cdot \left \langle \frac{\partial L}{\partial \theta}|_{\theta_t} \right \rangle_{D_{t}} \\
&\theta_{t+1}=\theta_t+v_{t+1}
\end{cases}
\end{equation}
where $t$ is the iteration index, $v$ is the momentum variable, $\alpha$ is the momentum coefficient set to 0.9, $\beta$ is the weight decay set to $10^{-5}$, $\varepsilon$ is the learning rate, and $\left \langle \frac{\partial L}{\partial \theta}|_{\theta_t} \right \rangle_{D_{t}}$ is the average over the $t^{th}$ batch $D_t$ of the derivative of the objective with respect to $\theta$, evaluated at $\theta_t$. The learning rate is initially set to 0.01 and decreased by a factor of 10 when the margin on a validation set stops improving.
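The update corresponds to the standard momentum/weight-decay step below; the gradient is assumed to be provided by backpropagation, and all names are illustrative.
\begin{verbatim}
# One SGD step with momentum and weight decay, mirroring the update
# rule above; `grad` is the batch-averaged dL/dtheta (assumed given).
def sgd_step(theta, v, grad, lr=0.01, momentum=0.9, weight_decay=1e-5):
    v = momentum * v - weight_decay * lr * theta - lr * grad
    return theta + v, v
\end{verbatim}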
\subsection{Image Retrieval System}
Based on the learned CSM, we can build a textual-query-based image retrieval system, as shown in Figure~\ref{system}. The candidate images in the database are translated into visual feature vectors in the mapped space, and the textual query input by the user is translated into a weight vector in the same common space. By calculating and sorting the scores of the input query and each candidate image, the system outputs a ranked image list as the retrieval result.
In particular, a few input queries cannot be matched within the training word set, because none of their words appear among the selected top-frequency training words.
In this case, the retrieval system returns a random ranking.
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth,page=1]{Fig/fig5}
\caption{Framework of CSM based image retrieval system.}\label{system}
\end{figure}
\section{EXPERIMENT SETTING}
\subsection{Dataset Description}
The user-click data was collected from the Bing search engine and is publicly accessible as the MSR-Bing Image Retrieval Challenge~\cite{Clickage}. In this dataset, images are collected from the Web, and the labels are the textual queries input by Bing's users. The dataset is collected from queries received at Bing Image Search in the EN-US market. It comprises two parts: the training dataset, and the dev dataset, whose labels are judged by annotators and which is used as the test dataset.
The training dataset includes 11,701,890 queries, 1,000,000 images, and 23,094,592 clicked $<$query, image$>$ pairs, where the clicked data is randomly sampled from one year of the Bing Image Search log. The topics of the queries are wide and diverse; some examples are shown in Figure~\ref{Train}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth,page=1]{Fig/fig6}
\caption{Examples of training dataset, words marked with red font are typos by users.}\label{Train}
\end{figure}
The test dataset comprises {1,000} queries and {79,665} images, also randomly sampled from the same one-year Bing Image Search log in the EN-US market. In order to measure relevance, a large set of plausible retrieval results is manually judged for each query. The relevance of an image with respect to a query is measured at three levels, that is, \textit{Excellent = 3, Good = 2, Bad = 0}. Judgment guidelines and procedures are established to ensure high data quality and consistency.
\subsection{Evaluation Criterion}
\label{evaCr}
In order to measure the performance of the search results, we adopt the Discounted Cumulative Gain (DCG) to quantify retrieval performance. The standard DCG is defined as:
\begin{equation}
DCG_n=\gamma\sum_{i=1}^{n}\frac{2^{rel_{i}}-1}{\log_2(i+1)}
\end{equation}
where $n$ is the number of images in the ranking list and $rel_i$ is the relevance level of the result at position $i$. In our experiments, $rel_i \in \{0, 2, 3\}$ as mentioned above, $n = 25$, and $\gamma=0.1757$ is the normalizer that makes the best $\text{DCG}_{25}$ equal to 1.
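The metric is straightforward to compute, as in this sketch (the relevance-list input layout is an assumption):
\begin{verbatim}
# DCG_25 with gamma chosen so that a perfect ranking scores 1; `rels`
# is the relevance level of each ranked result, best-first.
import math

def dcg(rels, n=25, gamma=0.1757):
    return gamma * sum((2 ** r - 1) / math.log2(i + 2)
                       for i, r in enumerate(rels[:n]))
\end{verbatim}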
\section{EXPERIMENT RESULTS}
\label{experiment}
In this section, we demonstrate CSM-based image retrieval with both qualitative and quantitative results.
\subsection{The Learned Mapping Space}
First, we qualitatively demonstrate the effectiveness of CSM by visualizing the learned mapping space.
Figure~\ref{mapping} visualizes six randomly selected dimensions of the learned feature space, showing the images with the highest responses for each dimension. The pattern captured by each dimension is both visually and semantically meaningful. Figure~\ref{Fig:WbasedKNN} demonstrates the effectiveness of the inner product in the learned common space by showing the nearest-neighbor words and images of some exemplar words, measured by inner products. Though the nearest words contain some spelling mistakes, it is easy to guess their real meanings.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth,page=1]{Fig/fig7}
\caption{Visualization of the learned common vector space along six randomly selected dimensions. The top 7 images with the highest response at each dimension are shown in each row. It can be seen that clear semantic meanings or visual patterns are captured by these dimensions.}\label{mapping}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\textwidth,page=1]{Fig/fig8}
\caption{Nearest neighbors of some exemplar words in the learned mapping space. Column 1 contains three query words, Column 2 and column 3 are their nearest words and images in feature space measured by the similarity defined by inner product.}\label{Fig:WbasedKNN}
\end{figure*}
\subsection{Overall Performance}
In order to validate the overall performance of CSM, we compare CSM with two state-of-the-art single models on the dataset, i.e., the Concept Classification model~\cite{carneiro2005formulating} and the Passive-Aggressive model~\cite{grangier2008discriminative}.
The Concept Classification model builds a binary classifier for each concept using a standard SVM.
The Passive-Aggressive model utilizes a parametric function to map images into the text space, and optimizes a learning criterion related to ranking performance.
Both models adopt HOG features as the image representation.
In addition, we compare our result with an ideal ranker and a random ranker.
The ideal ranking is the optimal ranking list generated from the relevance labels provided by annotators, and the random result is a random order of the candidate images.
As mentioned in Section~\ref{evaCr}, $\textrm{DCG}_{25}$ is utilized to quantify the performance of the ranking list.
The overall performance is shown in Table~\ref{tab_overall}.
Because of the nature of the test dataset, the average $\textrm{DCG}_{25}$ of the ideal ranking is less than 1, since some test queries have fewer than 25 excellent candidate images. CSM achieves much better results than the state-of-the-art models using sophisticated hand-crafted image features, which quantitatively demonstrates the effectiveness of CSM in measuring image-query similarity for image retrieval.
\begin{table}
\centering
\normalsize
\caption{Overall performance of CSM.}
\begin{tabular}{|c|c|}
\hline
Model & Average $\textrm{DCG}_{25}$ \\
\hline
Random Ranking & 0.468 \\
Concept Classification model~\cite{carneiro2005formulating} & 0.494 \\
Passive-Aggressive model~\cite{grangier2008discriminative} & 0.496 \\
CSM & 0.505 \\
Ideal Ranking & 0.684 \\
\hline
\end{tabular}
\label{tab_overall}
\end{table}
\subsection{Detailed Results Analysis}
Among the 1,000 test queries, 71 achieve ideal retrieval performance, and another 235 queries' DCG$_{25}$ is within 0.05 of the ideal ranking.
Figure~\ref{Fig:Example} shows six retrieval ranking results of CSM, including four queries that achieved DCG$_{25}$ above 0.9 and two failure cases with DCG$_{25}$ close to 0.
The queries \emph{chair}, \emph{fat cat}, and \emph{church} can be matched exactly within the training query set.
Though the query \emph{beer stein from Germany} cannot be matched exactly, the training query \emph{beer stein} helps map an effective textual weight vector through the WEN.
The first failure case, \emph{vanese mcneill}, is caused by the ideal DCG$_{25}$ being nearly zero, since there are hardly any relevant images for this query.
For the last failure case, \emph{american caravansary of the 1920}, the key word \emph{caravansary} is missing from the training word set, while the remaining words do not help map an effective textual weight vector.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth,page=1]{Fig/fig9}
\caption{Search results of several queries.}\label{Fig:Example}
\end{figure*}
In addition, we further discuss the effect of query length on retrieval performance.
On one hand, more words refine the search intention but limit the number of available candidate images, as demonstrated by Figure~\ref{Length}, where longer queries obtain lower $\text{DCG}_{25}$ scores relative to the ideal ranker.
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth,page=1]{Fig/fig10}
\caption{Retrieval performance with different query lengths.}\label{Length}
\end{figure}
On the other hand, different query lengths correspond to different query-matching types.
Statistically, 392 test queries have exact matches in the training query set, while 19 test queries have no match in the training set.
The remaining 589 test queries are partially matched, meaning they contain one or more words from the training query set.
Different matching types lead to different ranking performance: no match degenerates to the random ranker as previously mentioned, while an exact match produces better ranking results.
A partial match is likely to introduce semantic ambiguity, since partially matched queries are usually matched to several training queries.
As shown in Figure~\ref{ratioImg}, longer queries include a higher proportion of partial matches.
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth,page=1]{Fig/fig11}
\caption{The distribution of matching types across different query lengths, together with the statistics of queries of each length.}\label{ratioImg}
\end{figure}
\section{Conclusions}
\label{Conclusion}
In this paper, we proposed a novel approach for image retrieval, which reformulates the image retrieval problem as mapping images and textual queries to one common space with a unified deep neural network. With sufficient training images provided by user clicks, the trained DNN significantly improved image retrieval performance compared with state-of-the-art methods based on predefined image features. In addition, the CSM model not only measures the similarity between queries and images, but also measures the similarity between textual queries and the similarity between images.
As the query embedding part is still affected by the out-of-vocabulary problem, in future work we will incorporate word embeddings from natural language processing tasks to enhance the query embedding.
\bibliographystyle{IEEEtranS}
|
{
"arxiv_id": "2302.13266",
"language": "en",
"timestamp": "2023-02-28T02:14:38",
"url": "https://arxiv.org/abs/2302.13266",
"yymm": "2302"
} |
\section{Introduction}
Let $\Gamma$ be a finitely generated residually finite group. We say that $\Gamma$ is profinitely rigid if whenever $\widehat{\Lambda}\cong \widehat{\Gamma}$ for some finitely generated residually finite group $\Lambda$, then $\Lambda \cong \Gamma$. Here $\widehat{\Gamma}$ (resp. $\widehat{\Lambda}$) denotes the profinite completion of $\Gamma$ (resp. $\Lambda$). \newline
Until recently, the only known profinitely rigid groups were ``small'' (i.e. without non-abelian free subgroups). In a groundbreaking work, Bridson, McReynolds, Reid and Spitler gave the first examples of ``big'' groups which are profinitely rigid, among them some fundamental groups of hyperbolic 3-manifolds \cite{BMRS20} and some triangle groups \cite{BMRS21}. \newline
Arithmetic subgroups of semisimple Lie groups need not be profinitely rigid (\cite{Ak12},\cite{KK20}). A well-known open problem asks:
\[
\text{ For $n\geq 2$, is $SL_{n}(\mathbb{Z})$ profinitely rigid?}
\]
While we will not answer this question, we will show that there are finite index subgroups of these groups (at least when $n\geq 3$) which are not profinitely rigid. In fact, we will show a much more general result:
\begin{Theorem}[Main Theorem] \label{Main~The}
Let $k$ be a number field and $\mathbf{G}$ be a connected, simply connected, absolutely almost simple $k$-linear algebraic group of high $\infty$-rank such that $\mathbf{G}(k)$ satisfies the congruence subgroup property and $\Gamma \subseteq \mathbf{G}(k)$ an arithmetic subgroup.
\begin{enumerate}
\item Unless $\mathbf{G}$ is of type $G_2,F_4$ or $E_8$ and $k=\mathbb{Q}$, $\Gamma$ has infinitely many pairs of finite index subgroups $\Gamma_{1}$ and $\Gamma_2$ which are not isomorphic but their profinite completions are.
\item The exceptional cases are truly exceptional and in these cases, there are no such pairs at all. In fact, if $\Gamma_{1},\Gamma_{2}\subseteq \mathbf{G}(\mathbb{Q})$ are arithmetic subgroups with isomorphic profinite completions then $\Gamma_{1}$ and $\Gamma_{2}$ are isomorphic.
\end{enumerate}
\end{Theorem}
Note that we show that each such $\Gamma$ has a finite index subgroup $\Gamma_{1}$ which is not profinitely rigid by showing that $\widehat{\Gamma_{1}}\cong \widehat{\Gamma_{2}}$ for some $\Gamma_{2}$ commensurable to it. This complements results of \cite{KK20},\cite{Ak12} and \cite{KS23} which give examples of non-commensurable arithmetic groups which are profinitely isomorphic. \newline
To illustrate our methods, let us now present them only for $\Gamma=SL_{4}(\mathbb{Z})$ (methods A \& B) and $\Gamma=SL_{2}(\mathbb{Z}[\sqrt{2}])$ (method C). \newline
\begin{description}
\descitem{Method A} Using the centre of the simply connected form: Let $2 \neq p,q$ be two different primes, and let $\Lambda$ be the principal congruence subgroup corresponding to $pq$. Let $\rho_{p}$ be the element of $\widehat{SL_{4}(\mathbb{Z})}\cong \prod_{r}SL_{4}(\mathbb{Z}_{r})$ which is $1$ at the places $r\neq p$ and $-1$ at the place $p$; similarly define $\rho_{q}$. Define $\Gamma_{1} := \langle \widehat{\Lambda},\rho_{p}\rangle \cap SL_{4}(\mathbb{Z})$ and $\Gamma_{2} := \langle \widehat{\Lambda},\rho_{q}\rangle \cap SL_{4}(\mathbb{Z})$. Then $\widehat{\Gamma_{1}} \cong \widehat{\Lambda} \times \mathbb{Z}/2\mathbb{Z} \cong \widehat{\Gamma_{2}}$, but $\Gamma_{1}$ and $\Gamma_{2}$ cannot be isomorphic. For details see \ref{Proof~A}. \newline
\descitem{Method B} Using a non-trivial Dynkin automorphism: Let $2,3\neq p,q$ be two different primes. Consider the following maximal parabolic subgroups of $SL_{4}(\mathbb{Z}/p\mathbb{Z})$.
\[
P_{1,p}:=
\left\{ \begin{pmatrix}
\ast & \ast & \ast & \ast \\
& \ast & \ast & \ast \\
& \ast & \ast & \ast \\
& \ast & \ast & \ast \\
\end{pmatrix} \right\},
\quad
P_{2,p}:=
\left\{ \begin{pmatrix}
\ast & \ast & \ast & \ast \\
\ast & \ast & \ast & \ast \\
\ast & \ast & \ast & \ast \\
& & & \ast \\
\end{pmatrix} \right\}
\]
Similarly define the maximal parabolic subgroups $P_{1,q}$ and $P_{2,q}$ of $SL_{4}(\mathbb{Z}/q\mathbb{Z})$. Now, let $\Gamma_{1}$ be the congruence subgroup corresponding to $P_{1,p} \mod p$, $P_{1,q} \mod q$ and trivial $\mod 3$, and $\Gamma_{2}$ the congruence subgroup corresponding to $P_{1,p} \mod p$, $P_{2,q} \mod q$ and trivial $\mod 3$. Then the profinite completions of $\Gamma_{1}$ and $\Gamma_{2}$ are isomorphic via the automorphism of $\widehat{SL_{4}(\mathbb{Z})}$ which is the non-trivial Dynkin automorphism at the place $q$ and the identity elsewhere, but they themselves cannot be isomorphic. For details see \ref{Proof~B}. \newline
\descitem{Method C} Using the number field: Let $2\neq p,q$ be two different primes such that $2$ is a square in $\mathbb{Q}_{p}$ and $\mathbb{Q}_{q}$; hence $p$ and $q$ split completely in $\mathbb{Q}[\sqrt{2}]$ (a small numerical illustration of this splitting condition is given after this list). Set $\mathcal{O}=\mathbb{Z}[\sqrt{2}]$ and $\mathfrak{p}_{1},\mathfrak{p}_{2}|p$, $\mathfrak{q}_{1},\mathfrak{q}_{2}|q$ to be the primes lying over $p$ and $q$ respectively. Let $\Gamma_{1}$ be the principal congruence subgroup corresponding to $\mathfrak{p}_{1}\mathfrak{q}_{1}$, and $\Gamma_{2}$ be the principal congruence subgroup corresponding to $\mathfrak{p}_{2}\mathfrak{q}_{1}$. As $SL_{2}(\mathcal{O})$ has trivial congruence kernel, $\widehat{SL_{2}(\mathcal{O})} \cong \left( \prod_{l\neq p,q} SL_{2}(\mathcal{O}_{l}) \right) \times \prod_{i=1}^{2} \left( SL_{2}(\mathcal{O}_{\mathfrak{p}_{i}})\times SL_{2}(\mathcal{O}_{\mathfrak{q}_{i}}) \right)$, where for a prime $l\neq p,q$, $\mathcal{O}_{l}$ is the completion of $\mathcal{O}$ with respect to the primes lying over $l$. Then the profinite completions $\widehat{\Gamma_{1}}$ and $\widehat{\Gamma_{2}}$ are isomorphic via the automorphism of $\widehat{SL_{2}(\mathbb{Z}[\sqrt{2}])}$ which is the transposition of the places $\mathfrak{p}_{1}$ and $\mathfrak{p}_{2}$. But $\Gamma_{1}$ and $\Gamma_{2}$ cannot be isomorphic. For details see \ref{Proof~C}. \newline
\end{description}
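As a small numerical illustration of the splitting condition invoked in \descref{Method C} (not part of the argument itself), the following Python/SymPy snippet lists small primes $p$ for which $2$ is a square modulo $p$; for such $p$, $x^{2}-2$ factors modulo $p$, and $p$ splits completely in $\mathbb{Q}[\sqrt{2}]$ with $\mathfrak{p}_{1},\mathfrak{p}_{2} = (p,\sqrt{2}\mp r)$ for $r$ a square root of $2$ modulo $p$.
\begin{verbatim}
from sympy import isprime
from sympy.ntheory import sqrt_mod

# Odd primes p with 2 a quadratic residue mod p (equivalently
# p = +-1 mod 8): for these, pO = p1 p2 splits in O = Z[sqrt(2)].
for p in filter(isprime, range(3, 60)):
    r = sqrt_mod(2, p)
    if r is not None:
        print(f"p = {p:2d}: 2 = {r}^2 = {p-r}^2 (mod {p})")
\end{verbatim}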
The main theorem is proved by generalizing the above methods to more general arithmetic lattices (in fact, only methods A and C are really needed). \newline
The paper is organized as follows: after some preliminaries in \S2, we will generalize methods A and C in \S3, deducing the first part of the main theorem. In \S4, we will elaborate on method B, and finally in \S5 we will prove that the exceptional cases are true exceptions, concluding the main theorem. In \S6, we will give a stronger and more general version of the main theorem, stating that it holds for $S$-arithmetic groups and not merely for arithmetic groups. Moreover, one can get any (finite) number of non-isomorphic subgroups with isomorphic profinite completions (not just pairs). \newline
\textbf{Acknowledgments.} This work is a part of the author's PhD thesis at the Hebrew University. For suggesting the above topic and for providing helpful guidance, suggestions and ideas I am deeply grateful to my advisors Alexander Lubotzky and Shahar Mozes. During the period of work on this paper I was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 882751) and by the ISF-Moked grant 2019/19.
\section{Preliminaries}
Throughout we assume that $k$ is a number field. The set of places of $k$ is denoted by $V(k)$; it is the union of the set of archimedean places $V_{\infty}(k)$ and the set of finite places $V_{f}(k)$. The completion of $k$ at $v\in V(k)$ is denoted by $k_{v}$. Let $\mathcal{O}_{k}$ denote the ring of integers of $k$, and for a finite place $v\in V_{f}(k)$, denote by $\mathcal{O}_{k,v}$ the ring of integers of $k_{v}$. The ring of finite adeles $\mathbb{A}_{k}^{f}= \prod^{*}_{v\in V_{f}(k)}k_{v} := \{ (x_{v})_{v}\in \prod_{v\in V_{f}(k)}k_{v}: \, x_{v}\in \mathcal{O}_{k,v} \text{ for all but finitely many places } \}$ is the restricted product over all the finite completions of $k$. If $k$ is clear from the context, we will omit the letter $k$ from all the above.\newline
Let $\tilde{\mathbf{G}}$ be a connected, simply connected, absolutely almost simple $k$-linear algebraic group, with a fixed faithful $k$-representation $\rho:\tilde{\mathbf{G}} \to GL(n_{\rho})$. A subgroup $\Gamma \subseteq \tilde{\mathbf{G}}(k)$ is called arithmetic if it is commensurable with $\tilde{\mathbf{G}}(\mathcal{O})$ (see \cite{Mo01} and \cite{PR93} for more details about arithmetic groups). We will usually write $\mathbf{G}$ for the adjoint form of $\tilde{\mathbf{G}}$ (the latter, being simply connected, is the universal form), and $\pi:\tilde{\mathbf{G}} \to \mathbf{G}$ for the universal covering map; it is a central isogeny, and $\mathbf{C}:= \ker \pi = \mathcal{Z}(\tilde{\mathbf{G}})$ is a finite group. The $V_{\infty}(k)$-rank of $\tilde{\mathbf{G}}$ is $\text{rank}_{V_{\infty}(k)}\tilde{\mathbf{G}} := \sum_{v\in V_{\infty}(k)} \text{rank}_{k_{v}}\tilde{\mathbf{G}}$, where $\text{rank}_{k_{v}}\tilde{\mathbf{G}}$ is the dimension of a maximal $k_{v}$-split torus; $\tilde{\mathbf{G}}$ is said to have high $\infty$-rank if its $V_{\infty}(k)$-rank is $\geq 2$. \newline
We will use Margulis' superrigidity in a rather delicate manner. The particular version we use is the following:
\begin{Theorem}[Margulis' superrigidity]\label{Mar~Sup~Rig}
Assume $\tilde{\mathbf{G}}$ is of high $\infty$-rank, and let $\Gamma_{1},\Gamma_{2} \subseteq \tilde{\mathbf{G}}(k)$ be arithmetic subgroups. Assume further that $\Gamma_{i} \cap \mathbf{C}(k)=\{1\}$. If $\varphi:\Gamma_{1}\to \Gamma_{2}$ is an isomorphism, then there exists a unique $k$-automorphism $\Phi$ of $\tilde{\mathbf{G}}$ and a unique automorphism $\sigma$ of $k$ such that $\varphi(\gamma)=\Phi(\sigma^{0}(\gamma))$ for every $\gamma\in \Gamma_{1}$, where $\sigma^{0}$ is the automorphism of $\tilde{\mathbf{G}}(k)$ induced by $\sigma$.
\end{Theorem}
\begin{proof}
Identifying $\Gamma_{1}$ and $\Gamma_{2}$ via the universal covering map $\pi:\tilde{\mathbf{G}}\to \mathbf{G}$ as arithmetic subgroups of the adjoint group $\mathbf{G}(k)$, Margulis' superrigidity \cite[Theorem VIII.3.6.(ii)]{Ma91} implies that the isomorphism $\varphi$ can be extended to an automorphism of $Res_{k/\mathbb{Q}}\mathbf{G}$. By the properties of the restriction of scalars functor, such an automorphism must be of the form $\Phi \circ \sigma^{0}$ for a $k$-automorphism $\Phi$ of $\mathbf{G}$ and an automorphism $\sigma$ of the field $k$ \cite[Proposition A.5.14]{CGP15}. Moreover, the $k$-automorphism $\Phi$ of the adjoint form $\mathbf{G}$ can be interpreted as a $k$-automorphism of the universal form $\tilde{\mathbf{G}}$, hence the assertion of the theorem.
\end{proof}
If $\sigma$ is an automorphism of $k$, it induces a permutation of the (finite) places of $k$, and thus an automorphism of the adelic group $\tilde{\mathbf{G}}(\mathbb{A}_{k}^{f})$ by permuting its factors according to $\sigma$, call this automorphism $\sigma_{\mathbb{A}}^{0}$. If $\Phi$ is a $k$-automorphism of $\tilde{\mathbf{G}}$, it induces a unique $k_{v}$-automorphism $\Phi_{v}:\tilde{\mathbf{G}}(k_{v})\to \tilde{\mathbf{G}}(k_{v})$ for every finite place $v$ of $k$ and the product $(\Phi_{v})_{v}:\prod_{v}\tilde{\mathbf{G}}(k_{v})\to \prod_{v}\tilde{\mathbf{G}}(k_{v})$ restricts to an automorphism $\Phi_{\mathbb{A}}:\tilde{\mathbf{G}}(\mathbb{A}_{k}^{f})\to \tilde{\mathbf{G}}(\mathbb{A}_{k}^{f})$ \cite[\S5]{PR93}. Clearly $\Phi(\sigma^{0}(\gamma))=\Phi_{\mathbb{A}}(\sigma_{\mathbb{A}}^{0}(\gamma))$ for every $\gamma \in \tilde{\mathbf{G}}(k)$ (We identify the group of rational points $\tilde{\mathbf{G}}(k)$ with its diagonal embedding in the group of adelic points $\tilde{\mathbf{G}}(\mathbb{A}_{k}^{f})$) and is unique with this property. We thus get the following corollary:
\begin{Corollary}\label{cor~mar~sup}
Under the assumptions of the previous theorem, if $\varphi:\Gamma_{1}\to \Gamma_{2}$ is an isomorphism, then there exist unique automorphisms $\Phi_{\mathbb{A}}$ and $\sigma_{\mathbb{A}}^{0}$ of the adelic group $\tilde{\mathbf{G}}(\mathbb{A}_{k}^{f})$, such that $\sigma_{\mathbb{A}}^{0}$ is induced from an automorphism of $k$, $\Phi_{\mathbb{A}}$ is induced from a $k$-automorphism of $\tilde{\mathbf{G}}$, and $\varphi(\gamma)=\Phi_{\mathbb{A}}(\sigma_{\mathbb{A}}^{0}(\gamma))$ for every $\gamma\in \Gamma_{1}$.
\end{Corollary}
We will also need an adelic version of Margulis' superrigidity stated and proven by Kammeyer and Kionke \cite[Theorem 3.2]{KK21}:
\begin{Theorem}\label{Ade~Sup~Rig}
Let $\mathbf{G}$ be a connected, absolutely almost simple $\mathbb{Q}$-linear algebraic group of high $\infty$-rank and $\Gamma\subseteq \mathbf{G}(\mathbb{Q})$ an arithmetic subgroup. If $\varphi:\Gamma \to \mathbf{G}(\mathbb{A}_{\mathbb{Q}}^{f})$ is a homomorphism such that $\overline{\varphi(\Gamma)}$ has non-empty interior, then there exist a homomorphism of adelic groups $\eta:\mathbf{G}(\mathbb{A}_{\mathbb{Q}}^{f}) \to \mathbf{G}(\mathbb{A}_{\mathbb{Q}}^{f})$, and a group homomorphism $\nu:\Gamma \to \mathcal{Z}(\mathbf{G})(\mathbb{A}_{\mathbb{Q}}^{f})$ with finite image such that $\varphi(\gamma)=\nu(\gamma)\eta(\gamma)$ for all $\gamma\in \Gamma$. Moreover, $\eta$ and $\nu$ are uniquely determined by this condition.
\end{Theorem}
\subsection{Profinite groups and the congruence subgroup property}\label{Pro~Grps}
A family $(\{G_{i}\}_{i\in I},\{\phi_{i,j}\}_{i\geq j \in I})$ is an inverse system of finite groups over the directed set $I$ if the $G_{i}$'s are finite groups and $\phi_{i,j}:G_{i}\to G_{j}$ are group homomorphisms, defined whenever $i\geq j$, such that $\phi_{i,k}=\phi_{j,k}\phi_{i,j}$ for every $i\geq j \geq k$ and $\phi_{ii}=id_{G_{i}}$ for all $i$. A group $G$ is called profinite if it is the inverse limit of an inverse system of finite groups over some directed set. A profinite group is a compact, Hausdorff, totally disconnected topological group; a map of profinite groups is a continuous group homomorphism.
\begin{Example}[Profinite completion]
Let $\Gamma$ be a finitely generated group and let $\mathcal{N}$ be the set of finite index normal subgroups of $\Gamma$; for $M,N\in \mathcal{N}$, declare that $M\leq N$ whenever $N\subseteq M$, making $\mathcal{N}$ a directed set. Consider the natural quotient homomorphisms $\phi_{N,M}:\Gamma/N \to \Gamma/M$; then the profinite group $\widehat{\Gamma}=\varprojlim_{N\in \mathcal{N}} \Gamma/N$ is called the \textbf{profinite completion} of $\Gamma$.
\end{Example}
The profinite completion $\widehat{\Gamma}$ and the set $\mathcal{C}(\Gamma)$ of isomorphism classes of the finite quotients of $\Gamma$ hold the same information in the following manner:
\begin{Theorem}\cite[Theorems 3.2.2 \& 3.2.7]{RZ00}
If $\Gamma$ and $\Lambda$ are two finitely generated residually finite groups, then $\mathcal{C}(\Gamma)=\mathcal{C}(\Lambda)$ if and only if $\widehat{\Gamma} \cong \widehat{\Lambda}$.
\end{Theorem}
There is a natural map $\iota:\Gamma \to \widehat{\Gamma}$ given by $\gamma\mapsto (\gamma N)_{N}$; this map is injective if and only if $\Gamma$ is residually finite, in which case we identify $\Gamma$ with its image $\iota(\Gamma)$. The pair $(\widehat{\Gamma},\iota)$ satisfies a universal property: $\iota(\Gamma)$ is dense in $\widehat{\Gamma}$, and for every profinite group $P$ and every homomorphism $\varphi:\Gamma\to P$, there exists a unique homomorphism of profinite groups $\hat{\varphi}:\widehat{\Gamma} \to P$ such that $\hat{\varphi} \circ \iota = \varphi$.
There is a strong connection between the finite index subgroups of $\widehat{\Gamma}$ and those of $\Gamma$:
\begin{Proposition}\cite[Proposition 3.2.2]{RZ00}\label{Pro~Corre}
Let $\Gamma$ be a finitely generated residually finite group. Then there is a one-to-one correspondence between the set $\mathcal{X}$ of all finite index subgroups of $\Gamma$ and the set $\mathcal{Y}$ of all finite index subgroups of $\widehat{\Gamma}$, given by
\begin{gather*}
X \mapsto \overline{X}, \quad X \in \mathcal{X} \\
Y \mapsto Y\cap \Gamma, \quad Y\in \mathcal{Y}
\end{gather*}
where $\overline{X}$ denotes the closure of $X$ in $\widehat{\Gamma}$. Moreover, this bijection preserves normality, index and quotients.
\end{Proposition}
\begin{Example}[Congruence completion]
Let $\Gamma$ be an arithmetic subgroup of $\tilde{\mathbf{G}}(k)$. Consider the set $\mathcal{C}$ of all congruence subgroups, i.e. subgroups that contain $\Gamma[\mathcal{I}] := \Gamma \cap \ker(\phi_{\mathcal{I}}:\tilde{\mathbf{G}}(\mathcal{O}_{k})\to \tilde{\mathbf{G}}(\mathcal{O}_{k}/\mathcal{I}))$ for some ideal $\mathcal{I}\triangleleft \mathcal{O}_{k}$, where $\phi_{\mathcal{I}}$ is the reduction map $\mod \mathcal{I}$. As with the profinite completion, $\mathcal{C}$ is a directed set ordered by reverse inclusion, and one can form the \textbf{congruence completion} $\overline{\Gamma}$ of $\Gamma$ with respect to this inverse system.
\end{Example}
Thus, there is a surjective map $\widehat{\Gamma} \to \overline{\Gamma}$ between the profinite completion and the congruence completion. Call $C(\Gamma)$, the kernel of this map, the \textbf{congruence kernel}. The group $\Gamma$ is said to have the \textbf{congruence subgroup property} if the congruence kernel $C(\Gamma)$ is a finite group. It is not difficult to see that the congruence subgroup property is actually a property of the ambient group $\tilde{\mathbf{G}}$ and the field $k$. \newline
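To fix ideas, recall the classical theorem of Bass--Milnor--Serre: for $n\geq 3$ the congruence kernel of $SL_{n}(\mathbb{Z})$ is trivial, so that
\[
\widehat{SL_{n}(\mathbb{Z})} \cong \overline{SL_{n}(\mathbb{Z})} \cong SL_{n}(\widehat{\mathbb{Z}}) \cong \prod_{p} SL_{n}(\mathbb{Z}_{p}), \qquad n\geq 3,
\]
which is precisely the identification used in \descref{Method A} and \descref{Method B} above. \newline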
It was conjectured by Serre \cite{Se70} that if $\text{rank}_{V_{\infty}(k)}(\tilde{\mathbf{G}})\geq 2$ and $\Gamma\subseteq \tilde{\mathbf{G}}(k)$ is an arithmetic subgroup then $C(\Gamma)$ is trivial or isomorphic to a subgroup of the roots of unity of $k$. The conjecture has been proven in many instances, including for example, all the isotropic cases \cite{Ra86} and all anisotropic groups of type $B_{n}$,$C_{n}$,$D_{n}$ (except for some triality forms of $D_{4}$), $E_{7}$,$E_{8}$,$F_{4}$ and $G_{2}$ (\cite[Ch.9]{PR93}, \cite{PR10}).
\subsection{A number theoretic lemma}
\begin{Lemma} \label{num~the~lem}
Let $\tilde{\mathbf{G}}$ be a connected, simply connected, absolutely almost simple $k$-linear algebraic group. There exist infinitely many finite places $v\in V_f(k)$ such that $\tilde{\mathbf{G}}$ splits over $k_{v}$. Moreover, one can assume that for these places, $\mathbf{C}(\mathcal{O}_{v})=\mathbf{C}(\mathbb{C})$.
\end{Lemma}
\begin{proof}
There exists a finite Galois field extension $k'/k$ such that $\tilde{\mathbf{G}}$ splits over $k'$ and $\mathbf{C}(k')=\mathbf{C}(\mathbb{C})$. By Chebotarev's density theorem \cite[Corollary 13.6]{Ne13}, there exist infinitely many primes $\mathfrak{p}\subseteq \mathcal{O}_{k}$ that split completely in $k'$. In particular, if such a prime lies under a prime $\mathfrak{p'}\subseteq \mathcal{O}_{k'}$ then $k_{v}\cong k'_{v'}$, where $v$ and $v'$ are the places corresponding to the primes $\mathfrak{p}$ and $\mathfrak{p'}$ respectively. Thus, for such a place $v$, $\tilde{\mathbf{G}}$ splits over $k_{v}$. \newline
Moreover, as the centre $\mathbf{C}(k')$ is finite, for all but finitely many places, $\mathbf{C}(k')=\mathbf{C}({\mathcal{O}_{k',v'}})$. So there exist infinitely many places $v\in V_{f}(k)$ with $\mathbf{C}(\mathcal{O}_{k,v})=\mathbf{C}(\mathbb{C})$ and $\tilde{\mathbf{G}}$ splits over $k_{v}$.
\end{proof}
\section{ First part of the main theorem - Existence}
In this section we will prove the first part of Theorem \ref{Main~The}, the existence part; it will follow from the two theorems below.
\begin{Theorem}\label{Proof~A}
Let $\Gamma \subseteq \tilde{\mathbf{G}}(k)$ be an arithmetic subgroup. Assume that $\tilde{\mathbf{G}}$ has the congruence subgroup property and type different from $E_{8},F_{4}$ or $G_{2}$. Then there exist two non-isomorphic finite index subgroups $\Gamma_{1},\Gamma_{2}\subseteq \Gamma$ with isomorphic profinite completions.
\end{Theorem}
\begin{proof}
(Following \descref{Method A}) Moving to a finite index subgroup, one can assume that there exists a finite set of primes $S$ and a compact open subgroup $\Lambda \subseteq \prod_{v\in S} \tilde{\mathbf{G}}(k_{v})$ commensurable with $\prod_{v\in S}\tilde{\mathbf{G}}(\mathcal{O}_{v})$ such that
\begin{gather}
\widehat{\Gamma} = \overline{\Gamma} \cong \Lambda \times \prod_{v\notin S} \tilde{\mathbf{G}}(\mathcal{O}_{v}).
\end{gather}
Indeed, the congruence kernel $C(\Gamma) \subseteq \widehat{\Gamma}$ is finite, so one can find a finite index subgroup $\Gamma'^{P}\subseteq \widehat{\Gamma}$ of the above form; by Proposition \ref{Pro~Corre}, there exists a finite index subgroup $\Gamma'\subseteq \Gamma$ such that $\widehat{\Gamma'} = \Gamma'^{P}$. Let $\pi:\tilde{\mathbf{G}} \to \mathbf{G}$ be the canonical central isogeny to the adjoint form $\mathbf{G}$; replacing $\Gamma$ by $\Gamma'$ and moving again to a finite index subgroup, one can assume that $\Gamma \cap \mathbf{C}(k) = \{1\}$ while $\widehat{\Gamma}$ is still of the same form as $(1)$. \newline
Let $S_{Q}$ be the set of all rational primes lying under some valuation in $S$, and $S_{\text{full}}=\{ v : \, v|p \text{ for some } p \in S_{Q} \}$.
By lemma \ref{num~the~lem} one can find two valuations $\mathfrak{p},\mathfrak{q}\notin S_{\text{full}}$ lying over different rational primes $p$ and $q$ respectively and such that $\mathbf{C}(\mathcal{O}_{\mathfrak{p}})=\mathbf{C}(\mathcal{O}_{\mathfrak{q}})=\mathbf{C}(\mathbb{C})$, and thus also elements $1\neq \rho_{\mathfrak{p}}\in \mathbf{C}(\mathcal{O}_{\mathfrak{p}})$ and $1\neq \rho_{\mathfrak{q}}\in \mathbf{C}(\mathcal{O}_{\mathfrak{q}})$ of the same order. For every $v|p,q$, let $\Delta_{v}$ be a finite index subgroup of $\tilde{\mathbf{G}}(\mathcal{O}_{v})$ with $\mathbf{C}(\mathcal{O}_{v}) \cap \Delta_{v} = \{1\}$.
Define $\epsilon_{\mathfrak{p}} = (\epsilon_{\mathfrak{p},w})_{w},\ \epsilon_{\mathfrak{q}} = (\epsilon_{\mathfrak{q},w})_{w} \in \widehat{\Gamma}$, where
\[
\epsilon_{\mathfrak{p},w} := \begin{cases} 1 & \quad \text{if } w \neq \mathfrak{p} \\ \rho_{\mathfrak{p}} & \quad \text{if } w=\mathfrak{p} \end{cases}, \quad
\epsilon_{\mathfrak{q},w} := \begin{cases} 1 & \quad \text{if } w \neq \mathfrak{q} \\ \rho_{\mathfrak{q}} & \quad \text{if } w=\mathfrak{q} \end{cases}
\]
Now we define the following subgroups of $\widehat{\Gamma}$:
\begin{gather*}
\Delta = \Lambda \times \prod_{v \notin S, \, v \nmid p, \, v\nmid q} \tilde{\mathbf{G}}(\mathcal{O}_{v}) \times \prod_{v|p \text{ or } v|q} \Delta_{v} \\
\Delta_{1} = \langle \Delta,\epsilon_{\mathfrak{p}} \rangle \cong \Delta \times \langle \epsilon_{\mathfrak{p}} \rangle \\
\Delta_{2} = \langle \Delta, \epsilon_{\mathfrak{q}} \rangle \cong \Delta \times \langle \epsilon_{\mathfrak{q}} \rangle
\end{gather*}
Clearly $\Delta_{1}$ and $\Delta_{2}$ are isomorphic finite index subgroups of $\widehat{\Gamma}$. By proposition \ref{Pro~Corre}, there exist finite index subgroups $\Gamma_{i} \subseteq \Gamma$ with $\widehat{\Gamma_{i}} = \Delta_{i}$ for $i=1,2$. We will finish the proof by showing that $\Gamma_{1}$ and $\Gamma_{2}$ cannot be isomorphic. \newline
Assume to the contrary that there exists an isomorphism $\varphi:\Gamma_{1} \to \Gamma_{2}$. By Corollary \ref{cor~mar~sup} there exist unique adelic automorphisms $\Phi_{\mathbb{A}}$ and $\sigma_{\mathbb{A}}^{0}$ of $\tilde{\mathbf{G}}(\mathbb{A}_{k}^{f})$, with $\sigma_{\mathbb{A}}^{0}$ induced from an automorphism of $k$ and $\Phi_{\mathbb{A}}$ induced from a $k$-automorphism of $\tilde{\mathbf{G}}$, such that $\varphi(\gamma)=\Phi_{\mathbb{A}}(\sigma_{\mathbb{A}}^{0}(\gamma))$ for every $\gamma\in \Gamma_{1}$; taking closures one deduces that $(\Phi_{\mathbb{A}}\circ \sigma_{\mathbb{A}}^{0})(\widehat{\Gamma_{1}}) = \widehat{\Gamma_{2}}$. On the other hand, the induced map between the profinite completions $\widehat{\varphi}:\widehat{\Gamma_{1}} \to \widehat{\Gamma_{2}}$ is the unique one with $\widehat{\varphi}(\gamma)=\varphi(\gamma)$ for every $\gamma \in \Gamma_{1}$. Thus, it must be that $\widehat{\varphi} \equiv (\Phi_{\mathbb{A}}\circ \sigma_{\mathbb{A}}^{0})|_{\widehat{\Gamma_{1}}}$. In particular, the $\mathfrak{p}$'th place of $\widehat{\Gamma_{1}}$ is mapped isomorphically onto the $\sigma(\mathfrak{p})$'th place of $\widehat{\Gamma_{2}}$. This is a contradiction: since $\sigma(\mathfrak{p})|p$, the $\sigma(\mathfrak{p})$'th place of $\widehat{\Gamma_{2}}$ is centerless, whereas the $\mathfrak{p}$'th place of $\widehat{\Gamma_{1}}$ has a non-trivial centre.
\end{proof}
\begin{Theorem}\label{Proof~C}
Let $\Gamma \subseteq \tilde{\mathbf{G}}(k)$ be an arithmetic subgroup. Assume further that $\tilde{\mathbf{G}}(k)$ has the congruence subgroup property and that $k$ is a number field of degree $d\geq 2$ over $\mathbb{Q}$. Then there exist two non-isomorphic finite index subgroups $\Gamma_{1},\Gamma_{2}\subseteq \Gamma$ with isomorphic profinite completions.
\end{Theorem}
\begin{proof}
(Following \descref{Method C}) As before, by moving to a finite index subgroup we can assume that $\Gamma \cap \mathbf{C}(k) = \{1\}$ and that $\widehat{\Gamma} = \overline{\Gamma} = \Lambda \times \prod_{v\notin S} \tilde{\mathbf{G}}(\mathcal{O}_{v})$, for some finite set of places $S$ and $\Lambda \subseteq \prod_{v\in S}\tilde{\mathbf{G}}(k_{v})$ commensurable with $\prod_{v\in S}\tilde{\mathbf{G}}(\mathcal{O}_{v})$. Let $S_{Q}$ and $S_{\text{full}}$ be as before. By Chebotarev's density theorem there exist infinitely many rational primes that split completely in $k$ \cite[Corollary 13.6]{Ne13}; pick two such different primes $p,q\notin S_{Q}$, say $\mathfrak{p}_{1},...,\mathfrak{p}_{d}|p$ and $\mathfrak{q}_{1},...,\mathfrak{q}_{d}|q$. For $i=1,2$, let $\Gamma_{i}:=\Gamma(\mathfrak{p}_{i}\mathfrak{q}_{1})$ be the principal congruence subgroup modulo $\mathfrak{p}_{i}\mathfrak{q}_{1}$; then obviously $\widehat{\Gamma_{1}}\cong \widehat{\Gamma_{2}}$. \newline
By Corollary \ref{cor~mar~sup}, if $\varphi:\Gamma_{1}\to\Gamma_{2}$ is an isomorphism, then there exist unique adelic automorphisms $\Phi_{\mathbb{A}}$ and $\sigma_{\mathbb{A}}^{0}$ of $\tilde{\mathbf{G}}(\mathbb{A}_{k}^{f})$, with $\sigma_{\mathbb{A}}^{0}$ induced from an automorphism $\sigma$ of $k$ and $\Phi_{\mathbb{A}}$ induced from a $k$-automorphism of $\tilde{\mathbf{G}}$, such that $\varphi(\gamma)=\Phi_{\mathbb{A}}(\sigma_{\mathbb{A}}^{0}(\gamma))$ for every $\gamma\in \Gamma_{1}$; taking closures one deduces that $(\Phi_{\mathbb{A}}\circ \sigma_{\mathbb{A}}^{0})(\widehat{\Gamma_{1}}) = \widehat{\Gamma_{2}}$. As $\sigma_{\mathbb{A}}^{0}$ acts by permuting the places and $\Phi_{\mathbb{A}}$ acts place-wise, it must be that $\sigma$ maps $\mathfrak{p}_{1}$ to $\mathfrak{p}_{2}$ while fixing $\mathfrak{q}_{1}$. We claim that an automorphism of $k$ fixing one of these primes must be trivial, which will finish the proof, since the trivial automorphism cannot map $\mathfrak{p}_{1}$ to $\mathfrak{p}_{2}$. \newline
Indeed, let $N$ be the Galois closure of $k$, $G:=Gal(N/\mathbb{Q})$, $H:=Gal(N/k)$, let $\mathfrak{r}$ be a prime lying over $\mathfrak{p}_{1}$ and $G_{\mathfrak{r}}$ be its decomposition group. The correspondence $H \backslash G / G_{\mathfrak{r}} \to \{ \mathfrak{p}_{1},...,\mathfrak{p}_{d} \}$ given by $H\sigma G_{\mathfrak{r}} \mapsto \sigma \mathfrak{p}_{1}$ is a one-to-one correspondence (of $G$-sets), hence $G_{\mathfrak{r}}$ must be trivial. Thus, $Aut(k)$ acts freely on the set of primes $\{ \mathfrak{p}_{1},...,\mathfrak{p}_{d} \}$, and by the same argument on $\{ \mathfrak{q}_{1},...,\mathfrak{q}_{d} \}$, as needed.
\end{proof}
\section{ Another set of examples }
The following theorem is not needed for the proof of the main theorem, but following method B, it gives many more examples of non-profinitely rigid arithmetic groups.
\begin{Theorem}\label{Proof~B}
Let $\Gamma \subseteq \tilde{\mathbf{G}}(k)$ be an arithmetic subgroup. Assume further that: \textbf{1)} $\tilde{\mathbf{G}}$ has the congruence subgroup property; \textbf{2)} $\tilde{\mathbf{G}}$ has type $A_{n},D_{n}$ or $E_{6}$; and \textbf{3)} there exists an archimedean place $v$ of $k$ such that $\text{rank}_{k_{v}}\tilde{\mathbf{G}} \geq 2$. Then there exist two non-isomorphic finite index subgroups $\Gamma_{1},\Gamma_{2}\subseteq \Gamma$ with isomorphic profinite completions.
\end{Theorem}
Note that $A_{n},D_{n},E_{6}$ are exactly the types of Dynkin diagrams with a non-trivial symmetry.
\begin{proof}
(Following \descref{Method B}) As before, by moving to a finite index subgroup we can assume that $\Gamma\cap \mathbf{C}(k) = \{1\}$ and that $\widehat{\Gamma} = \overline{\Gamma} = \Lambda \times \prod_{v\notin S} \tilde{\mathbf{G}}(\mathcal{O}_{v})$, for some finite set of places $S$ and $\Lambda \subseteq \prod_{v\in S} \tilde{\mathbf{G}}(k_{v})$ commensurable with $\prod_{v\in S} \tilde{\mathbf{G}}(\mathcal{O}_{v})$. \newline
By lemma \ref{num~the~lem}, one can find two different primes $\mathfrak{p},\mathfrak{q} \notin S_{\text{full}}$ (where $S_{\text{full}}\supseteq S$ is as in the previous section) lying over different rational primes, such that $\tilde{\mathbf{G}}$ splits over both $k_{\mathfrak{p}}$ and $k_{\mathfrak{q}}$. Fix root systems for $\tilde{\mathbf{G}}(k_{\mathfrak{p}})$ and $\tilde{\mathbf{G}}(k_{\mathfrak{q}})$, and let $\Sigma$ be the set of simple roots, identified for both groups. For $v=\mathfrak{p},\mathfrak{q}$, the reduction maps $\pi_{v}:\tilde{\mathbf{G}}(\mathcal{O}_{v})\to \tilde{\mathbf{G}}(\mathcal{O}_{v}/v)$ are onto, and $G_{v}:=\tilde{\mathbf{G}}(\mathcal{O}_{v}/v)$ is the split universal Chevalley group of the same type as $\tilde{\mathbf{G}}$ over the residue field. The root systems we fixed define corresponding root systems for $G_{v}$.\newline
For each of the types involved, there exists a non-trivial symmetry $s$ of the Dynkin diagram; pick a subset $\theta \subseteq \Sigma$ which is non-invariant under $s$, e.g.
\begin{gather*}
A_{n} : \quad
\begin{tikzpicture}
\draw (0,0) -- (1,0);
\filldraw (0,0) circle (0.05 cm) node[anchor=south] {$1$};
\filldraw (1,0) circle (0.05 cm) node[anchor=south] {$2$};
\draw[dashed] (1,0) -- (3,0);
\draw (3,0) -- (4,0);
\filldraw (3,0) circle (0.05 cm) node[anchor=south] {$n-1$};
\filldraw (4,0) circle (0.05 cm) node[anchor=south] {$n$};
\draw[red] (1.5,0) ellipse (2cm and 0.5 cm) node[anchor=north] {$\theta$};
\draw[<->] (0,1) .. controls (1,2) and (3,2) ..(4,1);
\draw (2,2) node {$s$};
\end{tikzpicture}
\end{gather*}
\begin{gather*}
D_{n}: \quad
\begin{tikzpicture}
\draw (0,0) -- (1,0);
\filldraw (0,0) circle (0.05 cm) node[anchor=south] {$1$};
\filldraw (1,0) circle (0.05 cm) node[anchor=south] {$2$};
\draw[dashed] (1,0) -- (3,0);
\draw (3,0) -- (4,1);
\draw (3,0) -- (4,-1);
\filldraw (3,0) circle (0.05 cm) node[anchor=south] {$n-2$};
\filldraw (4,1) circle (0.05 cm) node[anchor=south] {$n-1$};
\filldraw (4,-1) circle (0.05 cm) node[anchor=north] {$n$};
\draw[red, rotate=12] (2,0) ellipse (3cm and 1 cm) node[anchor=north, outer sep = 28pt] {$\theta$};
\draw[<->] (4.5,1) .. controls (5,0) .. (4.5, -1);
\draw (5,0) node {$s$};
\end{tikzpicture}
\end{gather*}
\begin{gather*}
E_{6}: \quad
\begin{tikzpicture}
\draw (0,0) -- (1,0);
\filldraw (0,0) circle (0.05 cm) node[anchor=south] {$1$};
\filldraw (1,0) circle (0.05 cm) node[anchor=south] {$2$};
\draw (1,0) -- (2,1);
\draw (1,0) -- (2,-1);
\draw (2,1) -- (3,1);
\draw (2,-1) -- (3,-1);
\filldraw (2,1) circle (0.05 cm) node[anchor=south] {$3$};
\filldraw (3,1) circle (0.05 cm) node[anchor=south] {$4$};
\filldraw (2,-1) circle (0.05 cm) node[anchor=north] {$5$};
\filldraw (3,-1) circle (0.05 cm) node[anchor=north] {$6$};
\draw[red, rotate=12] (1.5,0) ellipse (2.5cm and 1 cm) node[anchor=north west, outer sep = 8pt] {$\theta$};
\draw[<->] (3.5,1) .. controls (4,0) .. (3.5, -1);
\draw (4,0) node {$s$};
\end{tikzpicture}
\end{gather*}
The symmetry $s$ of the Dynkin diagram induces an automorphism $\varphi_{v}$ of $G_{v}$ \cite[Corollary to theorem 29]{St67}. For a subset $R\subseteq \Sigma$ of simple roots, let $P_{v,R}$ be the parabolic subgroup of $G_{v}$ corresponding to $R$; then $P_{v,\theta}$ and $P_{v,s\theta}$ are non-conjugate in $G_{v}$, but isomorphic via $\varphi_{v}$. \newline
Now, let $\pi:\Gamma \to G_{\mathfrak{p}} \times G_{\mathfrak{q}}$ be the canonical reduction map modulo $\mathfrak{p} \mathfrak{q}$. Consider the following two congruence subgroups
\begin{gather*}
\Gamma_{1} := \pi^{-1}(P_{\mathfrak{p},\theta} \times P_{\mathfrak{q}, \theta}); \\
\Gamma_{2} := \pi^{-1}(P_{\mathfrak{p}, s\theta} \times P_{\mathfrak{q}, \theta}).
\end{gather*}
Then $\widehat{\Gamma_{1}}$ and $\widehat{\Gamma_{2}}$ are isomorphic via $\Phi:= (\Phi_{v})_{v}$ where $\Phi_{v} \equiv id$ for $v \neq \mathfrak{p}$ and $\Phi_{\mathfrak{p}}$ is the isomorphism of $\tilde{\mathbf{G}}(k_{\mathfrak{p}})$ induced by the non-trivial symmetry $s$ of the Dynkin diagram. It remains to show that $\Gamma_{1}$ and $\Gamma_{2}$ cannot be isomorphic. \newline
Assume to the contrary that there exists an isomorphism $\varphi:\Gamma_{1} \to \Gamma_{2}$. By corollary \ref{cor~mar~sup}, there exist unique adelic automorphisms $\Phi_{\mathbb{A}}$ and $\sigma_{\mathbb{A}}^{0}$ of $\tilde{\mathbf{G}}(\mathbb{A}_{k}^{f})$ such that $\sigma_{\mathbb{A}}^{0}$ is induced from an automorphism of $k$ and $\Phi_{\mathbb{A}}$ is induced from a $k$-automorphism of $\tilde{\mathbf{G}}$ such that $\varphi(\gamma)=\Phi_{\mathbb{A}}(\sigma_{\mathbb{A}}^{0}(\gamma))$ for every $\gamma\in \Gamma_{1}$. Moreover $\Phi_{\mathbb{A}}$ is of the form $C_{g}\cdot \omega_{\mathbb{A}}$ where $C_{g}$ is conjugation by some $g\in \tilde{\mathbf{G}}(\overline{k})$, $\omega$ is an outer automorphism which comes from a symmetry of the Dynkin diagram and $\omega_{\mathbb{A}}$ just acts as $\omega$ at each place. In particular $\pi_{\mathfrak{p}}^{-1}(P_{\mathfrak{p},\theta})$ is mapped onto $g\pi_{\mathfrak{p}}^{-1}(P_{\sigma(\mathfrak{p}),\omega\theta})g^{-1}$ and $\pi_{\mathfrak{q}}^{-1}(P_{\mathfrak{q},\theta})$ is mapped onto $g\pi_{\mathfrak{q}}^{-1}(P_{\sigma(\mathfrak{q}),\omega\theta})g^{-1}$. By our choice of $\mathfrak{p}$ and $\mathfrak{q}$ it must be that $\sigma(\mathfrak{p})=\mathfrak{p}$ and $\sigma(\mathfrak{q})=\mathfrak{q}$.\newline
We have that $g^{-1}\pi_{\mathfrak{p}}^{-1}(P_{\mathfrak{p},\theta})g=\pi_{\mathfrak{p}}^{-1}(P_{\mathfrak{p},\omega\theta})$; multiplying $g$ from both sides by elements of $\tilde{\mathbf{G}}(k)$, we can assume that conjugation by $g$ preserves the root system that was fixed at the beginning of the proof. If $\theta \neq \omega\theta$, let $\alpha$ be a simple root in $\omega\theta \backslash \theta$; then the action of $g$ on the one-parameter unipotent subgroup $U_{\alpha}$ must be scalar multiplication by some $f_{\alpha}$ with $\text{val}_{\mathfrak{p}}(f_{\alpha})=1$. Thus, the action on the opposite one-parameter unipotent subgroup $U_{-\alpha}$ is scalar multiplication by $1/f_{\alpha}$, but then $g^{-1}\pi_{\mathfrak{p}}^{-1}(P_{\mathfrak{p},\theta})g\nsubseteq \pi_{\mathfrak{p}}^{-1}(P_{\mathfrak{p},\omega\theta})$. Hence $\theta$ must be equal to $\omega\theta$. The same argument implies that $\omega\theta=s\theta$, and so $\theta = \omega\theta = s\theta$. But $\theta$ was chosen to be non-invariant under $s$, a contradiction.
\end{proof}
\section{ Second part of the main theorem - The exceptional cases}
\begin{Lemma}
Let $\mathbf{G}$ be a connected, simply connected and absolutely almost simple $\mathbb{Q}$-linear algebraic group of type $E_{8},F_{4}$ or $G_{2}$. Then $\mathbf{G}$ splits over $\mathbb{Q}_{p}$ for every $p$.
\end{Lemma}
\begin{proof}
The $\mathbb{Q}_{p}$-forms of the group $\mathbf{G}$ are classified by the first Galois cohomology set $H^{1}(\mathbb{Q}_{p},Aut(\mathbf{G}))$. For the groups considered, the universal and the adjoint forms coincide; moreover, their Dynkin diagrams admit no non-trivial symmetries, hence $Aut(\mathbf{G}) \cong \mathbf{G}$. As the field $\mathbb{Q}_{p}$ is local and non-archimedean, by \cite[Theorem 6.4]{PR93}, the Galois cohomology group $H^{1}(\mathbb{Q}_{p},\mathbf{G})$ is trivial. Thus, there is only one $\mathbb{Q}_{p}$-form of $\mathbf{G}$ up to isomorphism; in particular, this form must be the split form.
\end{proof}
\begin{Remark}\label{aut~spl~grp}
If $\mathbf{G}$ is a $k$-split simple $k$-linear algebraic group, then the automorphism group, $Aut(\mathbf{G}(k))$, of $\mathbf{G}(k)$ is completely known. Precisely, following the notations of \cite[Theorem 30]{St67}, each automorphism can be written as the product of an inner, a diagonal, a graph and a field automorphism. We have used graph automorphisms for \descref{Method B} and field automorphisms for \descref{Method C}. The group of diagonal automorphisms (modulo the inner ones) is connected with the centre of the universal form \cite[Exercise following theorem 30]{St67}, which was used for \descref{Method A}. Thus, restricting ourselves to the exceptional cases, where $\mathbf{G}$ has type $E_{8},F_{4}$ or $G_{2}$ and $k=\mathbb{Q}$ (and also for $\mathbb{Q}_{p}$), all automorphisms are inner.
\end{Remark}
\begin{Theorem}
Let $\mathbf{G}$ be a connected, simply connected and absolutely almost simple high $\infty$-rank $\mathbb{Q}$-linear algebraic group of type $E_{8},F_{4}$ or $G_{2}$. If $\Gamma_{1},\Gamma_{2}\subseteq \mathbf{G}(\mathbb{Q})$ are two arithmetic subgroups with isomorphic profinite completions, then $\Gamma_{1}$ and $\Gamma_{2}$ are isomorphic.
\end{Theorem}
\begin{proof}
Let $\Gamma_1,\Gamma_2 \subseteq \mathbf{G}(\mathbb{Q})$ be two arithmetic subgroups. As noted in the preliminaries, the congruence kernel is trivial for these groups, so one can write $\overline{\Gamma_{i}} = \widehat{\Gamma_{i}} = \Lambda_{i} \times \prod_{p\notin S}\mathbf{G}(\mathbb{Z}_{p})$ where $S$ is a finite set and $\Lambda_{i}$ are commensurable with $\prod_{p\in S}\mathbf{G}(\mathbb{Z}_{p})$. Assume that $\Phi:\overline{\Gamma_{1}} \cong \widehat{\Gamma_{1}} \xrightarrow{\sim} \widehat{\Gamma_{2}} \cong \overline{\Gamma_{2}} \subseteq \mathbf{G}(\mathbb{A})$ is an isomorphism between the profinite completions of the two.
By adelic superrigidity (Theorem \ref{Ade~Sup~Rig}), there exists a unique homomorphism of adelic groups
\begin{gather*}
\tilde{\Phi}:\mathbf{G}(\mathbb{A}^{f}) \to \mathbf{G}(\mathbb{A}^{f})
\end{gather*}
such that $\tilde{\Phi}|_{\Gamma_{1}} \equiv \Phi\circ \iota|_{\Gamma_{1}}$, as $\mathbf{G}(\mathbb{A}^{f})$ is centerless by our assumption on the type of $\mathbf{G}$. Moreover, using the uniqueness of the map, $\tilde{\Phi}$ must be an isomorphism, and $\tilde{\Phi}|_{\overline{\Gamma_{1}}} \equiv \Phi$. \newline
Consider the homomorphisms $\tilde{\Phi}_{p,q}:\mathbf{G}(\mathbb{Q}_{p}) \to \mathbf{G}(\mathbb{Q}_{q})$ which are the composite
\begin{gather*}
\mathbf{G}(\mathbb{Q}_{p}) \xrightarrow{\iota_{p}} \mathbf{G}(\mathbb{A}^{f}) \xrightarrow{\tilde{\Phi}} \mathbf{G}(\mathbb{A}^{f}) \xrightarrow{\pi_{q}} \mathbf{G}(\mathbb{Q}_{q})
\end{gather*}
of the inclusion in the $p$'th place, $\tilde{\Phi}$, and the projection onto the $q$'th place. This is a continuous homomorphism between a $p$-adic group and a $q$-adic group, so if $p\neq q$ it must be a locally constant map. So its image is a normal countable subgroup of $\mathbf{G}(\mathbb{Q}_{q})$, in particular it is not of finite index, and hence must be trivial \cite[Proposition 3.17]{PR93}. Thus, $\tilde{\Phi} = (\tilde{\Phi}_{p,p})_{p}$ is given by an isomorphism at each place, and $\tilde{\Phi}_{p,p}$ must be conjugation by some $y'_{p}\in \mathbf{G}(\mathbb{Q}_{p})$ (see the remark above). \newline
We truncate $\tilde{\Phi}$ in the following manner,
\[
\text{write } y = (y_{p})_{p}, \text{ where } y_{p} = \begin{cases} y'_{p} & \quad p\in S \\ 1 & \quad p\notin S \end{cases}.
\]
By our choice of $S$, conjugation by $y$ is again an isomorphism between $\overline{\Gamma_{1}}$ and $\overline{\Gamma_{2}}$. By the Strong Approximation theorem \cite[Theorem 7.12]{PR93}, there exists some $g\in \mathbf{G}(\mathbb{Q})$ with $g\in y\overline{\Gamma_{1}}$. Thus, conjugation by $g\in \mathbf{G}(\mathbb{Q})$ is an isomorphism between $\overline{\Gamma_{1}}$ and $\overline{\Gamma_{2}}$. We have that $\Gamma_{i} = \overline{\Gamma_{i}}\cap \mathbf{G}(\mathbb{Q})$, which implies that $\Gamma_{1}$ and $\Gamma_{2}$ are conjugate by $g$, as needed.
\end{proof}
\section{Final Remarks}
It is possible to generalize our methods even further. For example, using \descref{Method A}, we can find finite index subgroups of $\Gamma:= SL_{2}(\mathbb{Z}[1/p])$ which are not profinitely rigid. Explicitly (for $p\neq 2,3,5$), the following finite index subgroups of $SL_{2}(\mathbb{Z}[1/p])$ are non-isomorphic, but their profinite completions are:
\begin{gather*}
\Gamma_{1} := \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix}\in SL_{2}(\mathbb{Z}[1/p]) :\, \begin{matrix} b,c \equiv 0 \mod 3,5 \\ a,d \equiv \pm 1 \mod 3 \\ a,d \equiv 1 \mod 5 \end{matrix} \right\}; \\
\Gamma_{2} := \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix}\in SL_{2}(\mathbb{Z}[1/p]) :\, \begin{matrix} b,c \equiv 0 \mod 3,5 \\ a,d \equiv 1 \mod 3 \\ a,d \equiv \pm 1 \mod 5 \end{matrix} \right\}.
\end{gather*}
Indeed, just as in \S3, $\widehat{\Gamma_{1}}\cong \widehat{\Gamma(15)}\times \mathbb{Z}/2\mathbb{Z} \cong \widehat{\Gamma_{2}}$, where $\Gamma(15)$ is the principal congruence subgroup of $\Gamma$ of level 15. \newline
Let us stress that it is still unknown whether or not $SL_{2}(\mathbb{Z}[1/p])$ itself is profinitely rigid, and in fact, there are some reasons to believe it is profinitely rigid (see for example \cite[\S 4]{CTLR22}). On the other hand, slightly increasing the dimension, it has been shown that $SL_{4}(\mathbb{Z}[1/p])$ is not profinitely rigid \cite{CTLR22}. \newline
We would like to state a stronger version of Theorem \ref{Main~The} which includes the above example. First, we need some further notations. Let $S\subseteq V(k)$ be a finite set of places containing all the archimedean places. The ring of $S$-integers of the number field $k$ is
\[
\mathcal{O}_{k,S}:=\{ x\in k:\, v(x)\geq 0 \, \forall v\notin S\}
\]
Let $\tilde{\mathbf{G}}$ be a connected, simply connected, absolutely almost simple $k$-linear algebraic group with a fixed faithful $k$-representation $\rho:\tilde{\mathbf{G}}\to GL(n_{\rho})$. A subgroup $\Gamma \subseteq \tilde{\mathbf{G}}(k)$ is called an $S$-arithmetic subgroup if it is commensurable with $\tilde{\mathbf{G}}(\mathcal{O}_{S})$. As in \S2, there is a map from the profinite completion to the congruence completion; denote its kernel by $C(\Gamma,S)$. The group $\Gamma$ is said to have the congruence subgroup property (with respect to $S$) if $C(\Gamma,S)$ is a finite group. Again, this is actually a property of the ambient group $\tilde{\mathbf{G}}$, the field $k$ and the set $S$. The proofs given throughout the paper carry over to establish:
\begin{Theorem}
Let $n$ be a positive integer, $k$ a number field, $S$ a finite set of places of $k$ containing all the archimedean places, $\tilde{\mathbf{G}}$ a connected, simply connected and absolutely almost simple $k$-linear algebraic group such that $\sum_{v\in S} \text{rank}_{k_{v}}\tilde{\mathbf{G}} \geq 2$ and such that $\tilde{\mathbf{G}}(k)$ satisfies the congruence subgroup property (with respect to $S$). Let $\Gamma \subseteq \tilde{\mathbf{G}}(k)$ be an $S$-arithmetic subgroup. Then, unless $\tilde{\mathbf{G}}$ has type $G_{2},F_{4}$ or $E_{8}$ and $k=\mathbb{Q}$, $\Gamma$ has infinitely many sequences of pairwise non-isomorphic finite index subgroups $\Gamma_{1},...,\Gamma_{n}$ with isomorphic profinite completions.
\end{Theorem}
As in the main theorem, the exceptional cases are indeed exceptional. Moreover, in these cases, if $\Gamma_{1}$ is an $S_{1}$-arithmetic subgroup and $\Gamma_{2}$ is an $S_{2}$-arithmetic subgroup with $\widehat{\Gamma_{1}}\cong \widehat{\Gamma_{2}}$ then $S_{1}=S_{2}$ and $\Gamma_{1}\cong\Gamma_{2}$.
\bibliographystyle{acm}
|
{
"arxiv_id": "2302.13208",
"language": "en",
"timestamp": "2023-03-01T02:22:12",
"url": "https://arxiv.org/abs/2302.13208",
"yymm": "2302"
} |
\section{Introduction}
When describing physical dynamics mathematically, there exist a number of equivalent representations that one can choose between. This plethora of potential representations is particularly pronounced in quantum dynamics. Besides the Schr{\"o}dinger and Liouville equations, there also exist more esoteric formulations such as the Wigner-Weyl phase space representation \cite{doi:10.1119/1.2957889,PhysRev.40.749, Baker1958,Curtright2014,Groenewold1946}, or the Feynman path integral \cite{Schulman2005-av}. Each of these carries its own strengths and weaknesses. For example, the phase space representation is commonly used in quantum chemistry and optics \cite{wignerbook,bondar_wigner_2013}, while path integrals find a natural home in the description of open system dynamics via the influence functional \cite{Feynman-Vernon-1963,PhysRevB.95.125124,PhysRevB.97.224310,McCaul2021,Caldeira1983,Ford-Kac-JST-1987,Gardiner-1988,Sebastian1981,Leggett1987,van_Kampen-1997}. On a more fundamental level, the choice of representation can change the degree to which the correspondence principle is manifestly present. To draw again on the example of the path integral and Wigner function, the $\hbar \to 0$ limit makes clear that for the former the only path of finite weight is that corresponding to the classical action \cite{McCaul2021}, while the equation of motion for the Wigner function reduces to the classical Poisson bracket \cite{bondar_wigner_2013,PhysRevE.99.062121}.
In the realm of quantum dynamics, one's choice of representation can often lead to issues of interpretation. For instance, the measure of a path integral is only finitely additive and therefore not guaranteed to converge \cite{Klauder2003}, while the Wigner function exhibits negativity. This is particularly problematic, as this potential negativity means it is uninterpretable as a density, despite being derived from one. In the case of pure states, this difficulty was resolved with the demonstration that the Wigner function should be interpreted as a phase space probability amplitude. This is in direct analogy with the Koopman-von Neumann (KvN) representation of classical dynamics \cite{McCaul_2022,PhysRevE.99.062121, Bondar2019, Koopman315, Wilkie1997a,Wilkie1997b,Sudarshan1976,Viennot_2018,PhysRevLett.120.070402,SEN2020168302,DHASMANA2021168623,SEN2023104732, BermdezManjarres2021,BermdezManjarres2022}, which explicitly admits a wavefunction on phase space, and which the Wigner function of a pure state corresponds to in the classical limit. The extension of this interpretation to mixed states has to date been lacking, however, given that such states must be described by densities and therefore lack a direct correspondence to wavefunctions.
Here we address this issue by employing the little-known \textit{wave operator} formalism. Such a representation of dynamics has been deployed in a number of contexts, including open systems \cite{ilya}, quantum holonomy in density matrices \cite{uhlmann_parallel_1986}, the development of phase-space wavefunctions \cite{wlodarz_quantum_1994}, as well as nonlinear dynamical models \cite{gheorghiu-svirschevski_nonlinear_2001, beretta_nonlinear_2006, beretta_nonlinear_2005}. In fact, the motivation for a ``square root'' of the density and the advantages it provides can be found even when not explicitly referenced. For example, the recently developed Ensemble Rank Truncation method (ERT) has at its heart a method for representing a Lindbladian evolution of a density in terms of a weighted sum of wavefunctions \cite{McCaulensemble}. The wave operator has also been used for foundational research~\cite{reznik_unitary_1996, yahalom_square-root_2006, tronci_momentum_2019}, but here we extend this to demonstrate that when combined with purification techniques from quantum information, it provides a natural bridge between the Hilbert space representation of quantum dynamics, the phase space Wigner representation, as well as KvN dynamics \cite{mezic2005spectral}. Through this, we are able to derive not only a consistent interpretation of mixed states in the Wigner representation, but also establish a connection between the commonly utilised phase space methods of quantum chemistry and quantum information. We also find that this representation of quantum dynamics leads to novel semiclassical approximations of both real and imaginary time dynamics, as well as a transparent correspondence to the classical limit.
The remainder of this paper is outlined as follows: Sec.~\ref{Sec:purification} borrows from the field of quantum information to express the wave operator in a purified form. This is then exploited in Sec.~\ref{SecBoppOp} to introduce Bopp operators into the wave operator description. Equipped with this formulation, in Sec.~\ref{Sec:phasespace} it is possible to identify the phase-space representation of the wave operator with the Wigner function, while in Sec.~\ref{SecClassicalLimit} we use it to demonstrate that the classical limit of the wave operator description reduces exactly to the KvN representation of classical dynamics. Sec.~\ref{Sec:Imaginarytime} then applies the same technique to the imaginary time Bloch equation, where we are able to derive a semi-classical correction to the equilibrium state of a system, and illustrate its effect using the examples of a quadratic and quartic oscillator. The paper then concludes with a summary of key findings, as well as outlining both open questions and future research directions.
\section{ The wave operator \label{Sec:waveoperator}}
We begin our treatment by making explicit a freedom present in the Liouville equation, which describes the dynamics of the density matrix $\hat{\rho}$ of a quantum system:
\begin{equation}\label{EqQuantumLiouvilleEq}
i\hbar \partial_t \hat{\rho} = [\hat{H}, \hat{\rho}]
\equiv \hat{H} \hat{\rho} - \hat{\rho} \hat{H},
\end{equation}
where $\hat{H}$ is a self-adjoint Hamiltonian. The expectation value of an observable $\hat{O}$ is obtained as
\begin{equation}\label{EqRhoAverage}
\braket{O} = \mathrm{Tr}(\hat{\rho}\hat{O}).
\end{equation}
Let us first assume that the density matrix can be decomposed into the form:
\begin{equation}\label{EqRhoThroughOmega}
\hat{\rho} = \hat{\Omega} \hat{\Omega}^\dagger,
\end{equation}
where in what follows we shall refer to $\hat{\Omega}(t)$ as the \textit{wave operator}. Following this assignation, we might ask what form the dynamics of $\hat{\Omega}(t)$ can take while remaining consistent with both Eq.~\eqref{EqQuantumLiouvilleEq} and Eq.~\eqref{EqRhoThroughOmega}. We find that the most general form of evolution permitted is
\begin{equation}\label{EqWaveOperatorDynamics}
i\hbar \partial_t \hat{\Omega} = [\hat{H}, \hat{\Omega}] - \hat{\Omega}\hat{F},
\end{equation}
where $\hat{F}$ is an arbitrary self-adjoint operator. It is easy to show from this that
\begin{equation}\label{EqDensityMatrixDynamics}
i\hbar \partial_t ( \hat{\Omega}\hat{\Omega}^{\dagger} ) = [\hat{H}, \hat{\Omega}\hat{\Omega}^{\dagger} ]
\end{equation}
is satisfied at all times if it is satisfied at a single moment (e.g., $t=0$). Consequently, the Liouville dynamics described by Eq.~\eqref{EqQuantumLiouvilleEq} may instead be described via the wave operator using Eq.~\eqref{EqWaveOperatorDynamics}, together with a prescription for expectations:
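Explicitly, taking the adjoint of Eq.~(\ref{EqWaveOperatorDynamics}) and using the self-adjointness of $\hat{H}$ and $\hat{F}$ gives $i\hbar \partial_t \hat{\Omega}^{\dagger} = [\hat{H}, \hat{\Omega}^{\dagger}] + \hat{F}\hat{\Omega}^{\dagger}$, so that
\[
i\hbar \partial_t \left( \hat{\Omega}\hat{\Omega}^{\dagger} \right)
= \left( [\hat{H},\hat{\Omega}] - \hat{\Omega}\hat{F} \right)\hat{\Omega}^{\dagger}
+ \hat{\Omega}\left( [\hat{H},\hat{\Omega}^{\dagger}] + \hat{F}\hat{\Omega}^{\dagger} \right)
= [\hat{H}, \hat{\Omega}\hat{\Omega}^{\dagger}],
\]
where the two $\hat{\Omega}\hat{F}\hat{\Omega}^{\dagger}$ terms cancel for any choice of $\hat{F}$.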
\begin{equation}\label{EqOAverageWaveOperator}
\braket{O} = \mathrm{Tr}( \hat{\Omega}^\dagger \hat{O} \hat{\Omega}).
\end{equation}
The principal advantages of expressing a quantum system's dynamics in terms of $\hat{\Omega}(t)$ rather than $\hat{\rho}(t)$ are two-fold. First, any dynamics using $\hat{\Omega}(t)$ are guaranteed to preserve positivity on the level of the density. Such a property means that we are free to choose $\hat{F}$ in such a way that $\hat{\Omega}(t)$ may be highly non-Hermitian. The special case $\hat{F} = 0$ of the wave operator description has been studied in \cite{uhlmann_parallel_1986, tronci_momentum_2019}, but the ability to arbitrarily bias an evolution lends itself to numerical development, i.e. $\hat{F}$ can be chosen such that the dynamics of $\hat{\Omega}(t)$ are either Schr{\"o}dinger- or Liouville-like. A concrete example taking advantage of this freedom may be found in \cite{ilya}, where $\hat{F}$ is chosen so as to maintain a lower triangular shape for $\hat{\Omega}$, and thus minimise the number of coefficients that must be propagated.
To understand the physical meaning of $\hat{F}$, we can rewrite equation~(\ref{EqWaveOperatorDynamics}) for small $\delta t$ as
\begin{equation}\label{EqPolarReprWaveOp}
\hat{\Omega}(t + \delta t) = e^{-i\delta t\hat{H}/\hbar} \hat{\Omega}(t) e^{i\delta t\hat{H}/\hbar} e^{i\delta t \hat{F} / \hbar} + O(\delta t^2).
\end{equation}
Assuming that $\hat{F} \neq 0$ and that $\hat{\Omega}(t)$ is a non-negative operator, equation~(\ref{EqPolarReprWaveOp}) is a polar decomposition of $\hat{\Omega}(t + \delta t)$. $\hat{F}$ may therefore be interpreted as the generator of the ``phase'' of the non-self-adjoint wave operator.
The second advantage of the wave operator formalism is conceptual. Specifically, we shall see that when employed in concert with the technique of canonical purification, we obtain both a direct correspondence to the Wigner phase function, as well as a generally applicable procedure for taking the classical limit of a quantum system. It is hoped that ultimately the combination of these two properties will allow for a physically consistent model of a quantum-classical hybrid, but in the present work we restrict ourselves to the context of a closed system, where we are able to demonstrate the aforementioned classical limit.
\section{Canonical purification of the wave operator \label{Sec:purification}}
In this section we will establish a close link between the proposed wave operator description of quantum mechanics and the notion of purification in quantum information theory (see chapter 5 in \cite{wilde_quantum_2017}). Expressing the wave operator in a purified form will later allow for the introduction of Bopp operators, and the establishment of a classical limit for the formalism. To perform the purification, we first choose an arbitrary orthonormal time-independent basis $\{ \ket{k} \} \subset \cal{H}$ in a Hilbert space $\cal{H}$. This allows us to define a mapping from an operator $\hat{\Omega}$ acting on $\cal{H}$ to a vector $\ket{\hat{\Omega}} \in \cal{H} \otimes \cal{H}$ as
\begin{equation}\label{EqVectorizationDefinition}
\ket{\hat{\Omega}} \equiv \sum_k \hat{\Omega} \ket{k} \otimes \ket{k} = (\hat{\Omega} \otimes \hat{1}) \ket{\omega},
\end{equation}
where
\begin{equation}
\ket{\omega}=\sum_k \ket{k}\otimes\ket{k}.
\end{equation}
The transformation given by Eq.~(\ref{EqVectorizationDefinition}) is closely related to the concept of canonical purification (see page 166 of \cite{wilde_quantum_2017}), while in linear algebra, the mapping is also known as row-major vectorization. Since Eq.~(\ref{EqVectorizationDefinition}) is a purification of the density matrix $\hat{\rho}$, the latter can be recovered as a partial trace,
\begin{equation}
\hat{\rho} = \mathrm{Tr}' \ket{\hat{\Omega}}\bra{\hat{\Omega}}
\equiv \sum_{k} (\hat{1} \otimes \bra{k}) \ket{\hat{\Omega}}\bra{\hat{\Omega}}
(\hat{1} \otimes \ket{k}).
\end{equation}
A number of important identities can be derived from the definition of Eq.~(\ref{EqVectorizationDefinition})
\begin{subequations}\label{EqVectorizationIdentities}
\begin{eqnarray}
\ket{\hat{A}\hat{\Omega}} = (\hat{A} \otimes \hat{1}) \ket{\hat{\Omega}}, \label{EqVectorization1} \\
\braket{\hat{A}|\hat{B}} = \mathrm{Tr} (\hat{A}^{\dagger} \hat{B}), \label{EqVectorization3} \\
\ket{\hat{\Omega}\hat{A}} = (\hat{1} \otimes \hat{A}^T) \ket{\hat{\Omega}}, \label{EqVectorization2}
\end{eqnarray}
\end{subequations}
where $\hat{A}^T$ denotes the transpose of $\hat{A}$.
The latter identity Eq.~\eqref{EqVectorization2} is a consequence of the following ``ricochet" property:
\begin{eqnarray}
\hat{A}\otimes\hat{1} \ket{\omega}&=\sum_{ijk}a_{ij}\ket{i}\braket{j|k}\otimes\ket{k} &=\sum_{ijk}a_{ij}\delta_{jk}\ket{i}\otimes\ket{k} \notag\\
& =\sum_{ik}a_{ik}\ket{i}\otimes\ket{k}&=\sum_{ik} \ket{k}\otimes a_{ki}\ket{i} \notag\\
& =\sum_{ijk}\ket{k}\otimes a_{ji}\delta_{jk}\ket{i}&= \sum_{ijk}\ket{k}\otimes a_{ji}\ket{i}\braket{j|k} \notag\\
&=\hat{1}\otimes\hat{A}^T\ket{\omega}.
\end{eqnarray}
When this is combined with the fact that any operators of the form $\hat{1}\otimes\hat{A}$ and $\hat{B}\otimes\hat{1}$ will commute, we obtain Eq.~\eqref{EqVectorization2}.
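These identities are straightforward to check numerically. The following minimal sketch (illustrative only; NumPy with random data, all variable names chosen arbitrarily) verifies Eqs.~\eqref{EqVectorizationIdentities} and the partial-trace recovery of $\hat{\rho}$ for row-major vectorization:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 4
A  = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
Om = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
I  = np.eye(N)
vec = lambda M: M.flatten()   # row-major vectorization

# |A Omega> = (A x 1)|Omega>
assert np.allclose(vec(A @ Om), np.kron(A, I) @ vec(Om))
# |Omega A> = (1 x A^T)|Omega>  ("ricochet")
assert np.allclose(vec(Om @ A), np.kron(I, A.T) @ vec(Om))
# <A|B> = Tr(A^+ B)
assert np.isclose(np.vdot(vec(A), vec(Om)),
                  np.trace(A.conj().T @ Om))
# partial trace over the ancilla gives rho = Omega Omega^+
V = np.outer(vec(Om), vec(Om).conj()).reshape(N, N, N, N)
assert np.allclose(np.einsum('ikjk->ij', V), Om @ Om.conj().T)
\end{verbatim}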
By combining Eq.~\eqref{EqWaveOperatorDynamics} with Eq.~\eqref{EqVectorizationIdentities}, it is possible to express the evolution of the wave operator state in a Schr\"odinger-like form
\begin{eqnarray}
i\hbar\partial_t \ket{\Omega} = \left( \hat{H}\otimes \hat{1} - \hat{1} \otimes(\hat{H} + \hat{F})^T \right) \ket{\Omega}, \label{EqWaveOperSchrodinger}\\
\braket{O} = \braket{\Omega| \hat{O} \otimes \hat{1}|\Omega}. \label{EqOAverageOmega}
\end{eqnarray}
The free choice of $\hat{F}$ also means that this evolution can correspond either to a Liouville-type commutator evolution when $\hat{F}=0$, or a Schr\"odinger equation with an ancillary space when $\hat{F}=-\hat{H}$.
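As a consistency check of Eq.~\eqref{EqWaveOperSchrodinger}, one can also verify numerically that the purified evolution with $\hat{F}=0$ reproduces the Liouville dynamics of $\hat{\rho}=\hat{\Omega}\hat{\Omega}^{\dagger}$. A minimal sketch (illustrative only; $\hbar=1$ and random real data):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N, t = 3, 0.7
H = rng.standard_normal((N, N)); H = H + H.T   # random Hamiltonian
Om0 = rng.standard_normal((N, N))              # initial wave operator

# purified evolution, Eq. (EqWaveOperSchrodinger) with F = 0
G = np.kron(H, np.eye(N)) - np.kron(np.eye(N), H.T)
v = expm(-1j*t*G) @ Om0.flatten()
rho_pur = np.einsum('ikjk->ij',
                    np.outer(v, v.conj()).reshape(N, N, N, N))

# direct Liouville evolution of rho = Omega Omega^+
U = expm(-1j*t*H)
rho_dir = U @ (Om0 @ Om0.T) @ U.conj().T       # Om0 is real
assert np.allclose(rho_pur, rho_dir)
\end{verbatim}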
The dynamics denoted by Eq.~(\ref{EqWaveOperSchrodinger}) can also be arrived at from a Dirac--Frenkel variational principle \cite{RAAB2000674},
\begin{equation}
\delta \Re \int_{t_i}^{t_f} \braket{\Omega(t)| i\hbar\partial_t - \left( \hat{H}\otimes \hat{1} - \hat{1} \otimes(\hat{H} + \hat{F})^T \right) |\Omega(t)}dt = 0,
\end{equation}
where the choice of the ``phase generator'' $\hat{F}$ in (\ref{EqWaveOperSchrodinger}) does not affect the values of the observables since
\begin{eqnarray}\label{EqEhrenfestTheoremsWaveOpt}
i\hbar \partial_t \braket{O} = \bra{\Omega} [ \hat{O}, \hat{H} ] \otimes \hat{1} \ket{\Omega}.
\end{eqnarray}
The choice of an orthonormal basis in Eq.~(\ref{EqVectorizationDefinition}) to construct the purification of the wave operator is equivalent to fixing the ``phase generator'' $\hat{F}$, and hence bears no observational consequences. If $\ket{\Omega}$ and $\ket{\Omega'}$ denote two purifications of $\hat{\Omega}$ corresponding to the different bases $\{ \ket{k} \}$ and $\{ \ket{k'} \}$, then there exists a unitary $\hat{U}$ such that $\ket{\Omega} = (\hat{1} \otimes \hat{U}) \ket{\Omega'}$ \cite{wilde_quantum_2017}. Then, Eq.~(\ref{EqWaveOperSchrodinger}) is invariant under the ``gauge'' transformation
\begin{equation}
\ket{\Omega} \to \ket{\Omega'}, \qquad
\hat{F} \to \left( \hat{U}^{\dagger} (\hat{H} + \hat{F})^T\hat{U} + \hat{G} \right)^T - \hat{H},
\end{equation}
where the self-adjoint $\hat{G}$ is defined as $i\hbar\partial_t \hat{U} = \hat{U} \hat{G}$ (i.e. Stone's theorem) \cite{Stonetheorem}.
\section{Bopp operators for purified wave operators \label{SecBoppOp}}
Having defined the wave operator and its dynamics when represented as a purified state, we now show that this formalism provides a transparent method for the introduction of Bopp operators \cite{Zueco_2007}. These not only allow one to transit between Hilbert and phase space representations of a quantum system, but also enable a classical limit to be taken transparently, as we shall find in a later section. For simplicity, hereafter we will consider a system with one degree of freedom, but the extension to the multidimensional case is trivial.
In anticipation of later developments, we shall refer to quantum coordinate and momentum variables as $\hat{\bi{x}}$ and $\hat{\bi{p}}$, where the bold font is used to indicate their status as non-commuting quantum operators, rather than vectorial notation. These will obey the Heisenberg canonical commutation relation
\begin{equation}
[\hat{\bi{x}}, \hat{\bi{p}}] = i\hbar.
\end{equation}
We will also assume that the operator functions $H(\hat{\bi{x}}, \hat{\bi{p}})$ and $F(\hat{\bi{x}}, \hat{\bi{p}})$ are represented in a Weyl-symmetrized form. We then introduce \emph{the Bopp operators} as
\begin{eqnarray}
\hat{x} = \frac{1}{2} \left( \hat{1} \otimes \hat{\bi{x}}^T + \hat{\bi{x}} \otimes \hat{1} \right), \qquad
\hat{p} = \frac{1}{2} \left( \hat{\bi{p}} \otimes \hat{1} + \hat{1} \otimes \hat{\bi{p}}^T \right), \nonumber\\
\hat{\theta} = \frac{1}{\hbar}\left( \hat{1} \otimes \hat{\bi{x}}^T - \hat{\bi{x}} \otimes \hat{1} \right),
\qquad
\hat{\lambda} = \frac{1}{\hbar} \left( \hat{\bi{p}} \otimes \hat{1} - \hat{1} \otimes \hat{\bi{p}}^T \right). \label{EqBoppOpDef}
\end{eqnarray}
The inverse transformations read
\begin{eqnarray}
\hat{\bi{x}} \otimes \hat{1} = \hat{x} - \frac{\hbar}{2} \hat{\theta}, \qquad
&\hat{\bi{p}} \otimes \hat{1} = \hat{p} + \frac{\hbar}{2} \hat{\lambda}, \nonumber\\
\hat{1} \otimes \hat{\bi{x}}^T = \hat{x} + \frac{\hbar}{2} \hat{\theta}, \qquad
&\hat{1} \otimes \hat{\bi{p}}^T = \hat{p} - \frac{\hbar}{2} \hat{\lambda}.
\label{EqBoppOpInverse}
\end{eqnarray}
The commutation relations of these Bopp operators can be calculated as (for example):
\begin{align}
[\hat{x}, \hat{p}]= &\frac{1}{4}\left([\hat{\bi{x}},\hat{\bi{p}}]\otimes\hat{1}+\hat{1}\otimes[\hat{\bi{x}}^T,\hat{\bi{p}}^T]\right), \\
[\hat{\theta}, \hat{\lambda}]= &-\frac{1}{\hbar^2}\left([\hat{\bi{x}},\hat{\bi{p}}]\otimes\hat{1}+\hat{1}\otimes[\hat{\bi{x}}^T,\hat{\bi{p}}^T]\right).
\end{align}
Transposing the fundamental commutation relation yields the identity $[\hat{\bi{x}}^T,\hat{\bi{p}}^T]=-i\hbar$, which means the Bopp operators obey the following algebra:
\begin{subequations}\label{EqBoppOpAlgebra}
\begin{gather}\label{EqBoppOpAlgebraXPCommutator}
[\hat{x}, \hat{p}] = [\hat{\theta}, \hat{\lambda}] = 0,\\
[\hat{p}, \hat{\theta}] = [\hat{x}, \hat{\lambda}] = i.
\end{gather}
\end{subequations}
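For completeness, the mixed commutators quoted in Eq.~\eqref{EqBoppOpAlgebra} follow from the same bookkeeping, e.g.
\begin{equation}
[\hat{p}, \hat{\theta}] = \frac{1}{2\hbar}\left( \hat{1}\otimes[\hat{\bi{p}}^T,\hat{\bi{x}}^T] - [\hat{\bi{p}},\hat{\bi{x}}]\otimes\hat{1} \right) = \frac{1}{2\hbar}\left( i\hbar + i\hbar \right) = i,
\end{equation}
where $[\hat{\bi{p}}^T,\hat{\bi{x}}^T] = \left([\hat{\bi{x}},\hat{\bi{p}}]\right)^T = i\hbar$ has been used.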
With the help of the identities $\hat{1} \otimes H(\hat{\bi{x}}^T, \hat{\bi{p}}^T) = H(\hat{1} \otimes \hat{\bi{x}}^T, \hat{1} \otimes \hat{\bi{p}}^T)$ and $H(\hat{\bi{x}}, \hat{\bi{p}}) \otimes \hat{1} = H(\hat{\bi{x}} \otimes \hat{1}, \hat{\bi{p}} \otimes \hat{1})$ (valid for any Weyl-symmetrized $\hat H$), the equations for the state dynamics and expectations read:
\begin{gather}
\label{EqBoppevolution}i\hbar\partial_t \ket{\Omega} = \hat G \ket{\Omega}, \\
\label{EqForG} \hat G = H(\hat{x} - \frac{\hbar}{2} \hat{\theta}, \hat{p} + \frac{\hbar}{2} \hat{\lambda})
- H(\hat{x} + \frac{\hbar}{2} \hat{\theta}, \hat{p} - \frac{\hbar}{2} \hat{\lambda}) - F(\hat{x} + \frac{\hbar}{2} \hat{\theta}, \hat{p} - \frac{\hbar}{2} \hat{\lambda}),\\
\label{EqBraKetOBopp}
\braket{O} = \braket{\Omega| O(\hat{x} - \frac{\hbar}{2} \hat{\theta}, \hat{p} + \frac{\hbar}{2} \hat{\lambda}) |\Omega}.
\end{gather}
We note that Eqs.~(\ref{EqBoppevolution})-(\ref{EqBraKetOBopp}) have been derived in Ref.\cite{bondar_operational_2012}, but from an entirely different perspective.
Since $\hat{x}$ and $\hat{p}$ commute, they share a common eigenbasis
\begin{equation}\label{EqXPRepresentation}
\hat{x}\ket{xp} = x\ket{xp}, \qquad \hat{p}\ket{xp} = p\ket{xp},
\qquad \hat{1} \otimes \hat{1} = \int dxdp \ket{xp}\bra{xp}.
\end{equation}
It follows from the commutator relationship~(\ref{EqBoppOpAlgebra}) that
\begin{eqnarray}
\braket{xp|\hat{x}|\Omega} = x \braket{xp|\Omega}, \qquad
\braket{xp|\hat{\lambda}|\Omega} = -i\partial_x \braket{xp|\Omega}, \\
\braket{xp|\hat{p}|\Omega} = p \braket{xp|\Omega}, \qquad
\braket{xp|\hat{\theta}|\Omega} = -i\partial_p \braket{xp|\Omega}.
\end{eqnarray}
Hence,
\begin{eqnarray}
i\hbar\partial_t \braket{xp|\Omega} = \Big( H(x + i\frac{\hbar}{2} \partial_p, p - i \frac{\hbar}{2} \partial_x )
- H(x - i\frac{\hbar}{2} \partial_p, p + i \frac{\hbar}{2} \partial_x ) \nonumber\\
\qquad\qquad - F(x - i\frac{\hbar}{2} \partial_p, p + i \frac{\hbar}{2} \partial_x )
\Big) \braket{xp|\Omega}, \label{EqKetOBoppXPRepr} \\
\braket{O} = \int dxdp \braket{\Omega|xp} O(x + i\frac{\hbar}{2} \partial_p, p - i \frac{\hbar}{2} \partial_x ) \braket{xp|\Omega}. \label{EqAverageOXPRepr}
\end{eqnarray}
When $F=0$, Eq.~(\ref{EqKetOBoppXPRepr}) coincides with the equation of motion for the Wigner function (see, e.g., Eq.~(2.77) in \cite{hillery_distribution_1984}). In this case, however, the original wave operator is not restricted to representing a pure state, meaning that Eq.~\eqref{EqBraKetOBopp} in conjunction with Eq.~(\ref{EqBoppevolution}) extends the interpretation of the Wigner function as a wave function \cite{bondar_wigner_2013} to include the general case of mixed states.
\section{The phase-space representation of the wave operator \label{Sec:phasespace}}
In this section, we will provide an alternative derivation of Eq.~(\ref{EqKetOBoppXPRepr}) and Eq.~(\ref{EqAverageOXPRepr}). The Wigner--Weyl transforms of equations (\ref{EqWaveOperatorDynamics}) and (\ref{EqOAverageWaveOperator}) read
\begin{eqnarray}
i\hbar\partial_t \Omega(x,p) = H(x,p) \star \Omega(x, p) - \Omega(x,p) \star H(x, p) - \Omega(x,p) \star F(x, p), \\
\braket{O} = \int dxdp \Omega(x,p)^* \star O(x, p) \star \Omega(x, p),
\end{eqnarray}
where $\star$ denotes the Moyal product, $H(x,p)$, $\Omega(x,p)$, $F(x,p)$, and $O(x,p)$ are the Weyl symbols for the operators $\hat{H}$, $\hat{\Omega}$, $\hat{F}$, and $\hat{O}$, respectively.
Utilizing the ``lone star'' identity $\int f(x,p) \star g(x,p) dxdp = \int f(x,p)g(x,p) dxdp$ (see equation~(16) in \cite{curtright_concise_2014}) and
\begin{eqnarray}
f(x,p) \star g(x, p) = f\left(x + i\frac{\hbar}{2} \partial_p, p - i \frac{\hbar}{2} \partial_x \right) g(x,p), \\
g(x,p) \star f(x, p) = f\left(x - i\frac{\hbar}{2} \partial_p, p + i \frac{\hbar}{2} \partial_x \right) g(x,p),
\end{eqnarray}
(see, e.g., equations (12) and (13) in \cite{curtright_concise_2014, cabrera_efficient_2015}),
we obtain
\begin{eqnarray}
i\hbar\partial_t \Omega(x,p) = \Big( H(x + i\frac{\hbar}{2} \partial_p, p - i \frac{\hbar}{2} \partial_x )
- H(x - i\frac{\hbar}{2} \partial_p, p + i \frac{\hbar}{2} \partial_x ) \nonumber\\
\qquad\qquad - F(x - i\frac{\hbar}{2} \partial_p, p + i \frac{\hbar}{2} \partial_x )
\Big) \Omega(x,p), \label{MasterWignerWeylEqForWaveOperator}\\
\braket{O} = \int dxdp \, \Omega(x,p)^* O\left(x + i\frac{\hbar}{2} \partial_p, p - i \frac{\hbar}{2} \partial_x \right) \Omega(x, p). \label{EqAverageOWignerOmega}
\end{eqnarray}
Comparing these two equations with Eq.~(\ref{EqKetOBoppXPRepr}) and Eq.~(\ref{EqAverageOXPRepr}), we conclude that $\braket{xp|\Omega} \equiv \Omega(x,p)$, i.e., $\braket{xp|\Omega}$ is the Wigner-Weyl transform of $\hat{\Omega}$.
We can also recover a more direct interpretation of $\Omega(x,p)$ in the case that $W(x,p)$ is the Wigner function of a pure quantum state $\hat{\rho}$. Recalling that purity implies $W(x,p) \star W(x,p) = \frac{1}{2\pi\hbar} W(x,p)$ (see, e.g., equation (25) in \cite{curtright_concise_2014}), one can show
\begin{eqnarray}
\braket{O} &= \int dxdp\, O(x,p) W(x,p) = \int dxdp\, O(x,p) \star W(x,p) \nonumber\\
&= 2\pi\hbar \int dxdp\, O(x,p) \star W(x,p) \star W(x,p) \nonumber\\
&= 2\pi\hbar \int dxdp\, W(x,p) O(x,p) \star W(x,p) \nonumber\\
&= 2\pi\hbar \int dxdp\, W(x,p) O\left(x + i\frac{\hbar}{2} \partial_p, p - i \frac{\hbar}{2} \partial_x \right) W(x,p). \label{EqWignerAsAWaveFuncAver}
\end{eqnarray}
Since the Wigner function is real by construction, Eq.~(\ref{EqWignerAsAWaveFuncAver}) is recovered from Eq.~(\ref{EqAverageOWignerOmega}) and Eq.~\eqref{MasterWignerWeylEqForWaveOperator} in the case $F{=}0$ if
\begin{equation}\label{EqWignerAsAWaveFunc}
\Omega(x,p) = \sqrt{2\pi\hbar} W(x,p).
\end{equation}
Eq.~(\ref{EqWignerAsAWaveFuncAver}) and Eq.~\eqref{EqWignerAsAWaveFunc} therefore provide an alternative and much simpler derivation of the interpretation, put forth in \cite{bondar_wigner_2013}, of the Wigner function for a pure quantum system as a Koopman–von Neumann wave function. In particular Eq.~(\ref{EqWignerAsAWaveFuncAver}) and Eq.~(\ref{EqWignerAsAWaveFunc}) coincide with Eq.~(19) and Eq.~(8) in \cite{bondar_wigner_2013}. In the general mixed case, we are still able to identify the wave operator with the Wigner function thanks to Eq.~(\ref{EqKetOBoppXPRepr}) and Eq.~(\ref{EqAverageOXPRepr}).
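The identification Eq.~\eqref{EqWignerAsAWaveFunc} is easy to test numerically. The sketch below (illustrative only; harmonic-oscillator ground state with $m=\omega=\hbar=1$ and finite-difference derivatives) checks the wave-function normalisation of $\Omega$ and evaluates $\braket{H}$ through the Bopp-shifted symbol of Eq.~(\ref{EqWignerAsAWaveFuncAver}):
\begin{verbatim}
import numpy as np

hbar = 1.0
x = np.linspace(-6, 6, 241); p = np.linspace(-6, 6, 241)
X, P = np.meshgrid(x, p, indexing='ij')
dxdp = (x[1] - x[0])*(p[1] - p[0])

W = np.exp(-(X**2 + P**2)/hbar)/(np.pi*hbar)   # ground-state Wigner
Omega = np.sqrt(2*np.pi*hbar)*W                # Omega = sqrt(2 pi hbar) W
print(np.sum(Omega**2)*dxdp)                   # ~ 1.0 for a pure state

# H(x + i hbar/2 d_p, p - i hbar/2 d_x) W  for H = (x^2 + p^2)/2
Wx = np.gradient(W, x, axis=0); Wxx = np.gradient(Wx, x, axis=0)
Wp = np.gradient(W, p, axis=1); Wpp = np.gradient(Wp, p, axis=1)
HW = 0.5*((X**2 + P**2)*W - hbar**2/4*(Wxx + Wpp)
          + 1j*hbar*(X*Wp - P*Wx))
print((2*np.pi*hbar*np.sum(W*HW)*dxdp).real)   # ~ 0.5 = hbar*omega/2
\end{verbatim}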
\section{The classical limit of the wave operator description \label{SecClassicalLimit}}
The proposed formalism also offers a direct route to the classical limit of quantum dynamics, where the Koopman-von Neumann representation of classical dynamics is naturally recovered. Beginning from Eq.~\eqref{EqBoppevolution}, we first scale our arbitrary phase $F\to \hbar F$, purely as a matter of convenience when taking the classical limit. Having done so, we then Taylor expand the Hamiltonian around the Bopp operators:
\begin{equation}
\label{EqHamexpansion}
H(\hat{x} \mp \frac{\hbar}{2} \hat{\theta}, \hat{p} \pm \frac{\hbar}{2} \hat{\lambda})=H(\hat{x},\hat{p}) \pm \frac{\hbar}{2} \partial_pH(\hat{x},\hat{p}) \hat{\lambda} \mp \frac{\hbar}{2} \partial_xH(\hat{x},\hat{p}) \hat{\theta} +O(\hbar^2).
\end{equation}
Inserting this into Eq.~\eqref{EqBoppevolution} and Eq.~\eqref{EqForG} (recalling the rescaling $F\to\hbar F$, and absorbing the sign of the arbitrary $F$ into its definition) we obtain
\begin{equation}
\hat{G} = \hbar\left[\partial_pH(\hat{x},\hat{p}) \hat{\lambda} - \partial_xH(\hat{x},\hat{p}) \hat{\theta} + F(\hat{x} + \frac{\hbar}{2} \hat{\theta}, \hat{p} - \frac{\hbar}{2} \hat{\lambda})\right] + O(\hbar^2).
\end{equation}
Taking $\hat{K}=\lim_{\hbar\to0}\hat{G}/\hbar$ recovers the well-known KvN propagator, describing classical dynamics:
\begin{equation}
\label{Eq:ClassicalLimit}
i\partial_t \ket{\Omega}=\hat{K}\ket{\Omega} =\left[\partial_pH(\hat{x},\hat{p}) \hat{\lambda} - \partial_xH(\hat{x},\hat{p}) \hat{\theta} + F(\hat{x}, \hat{p})\right]\ket{\Omega}.
\end{equation}
We see that the phase generator in the classical limit corresponds to the arbitrary phase-space function one obtains in standard derivations of KvN \cite{PhysRevE.99.062121,McCaul_2022,mezic1,mezic2,mezic3}, which itself relates KvN to alternative dynamical equations such as Koopman-van Hove (KvH) \cite{Bondar2019,Kirillov2001,doi:10.1142/4721}.
The connection between Eq.~\eqref{Eq:ClassicalLimit} and the standard Liouville equation for the density can be made explicit by expressing this equation of motion in phase space:
\begin{equation}
\partial_t \Omega(x,p)= \left\{H(x,p), \Omega(x,p)\right\} -iF(x,p)\Omega(x,p),
\end{equation}
where $\left\{\cdot,\cdot\right\}$ indicates the Poisson bracket. Using $\rho(x,p)=\left|\Omega(x,p)\right|^2$, we immediately recover the Liouville equation for the density.
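Spelling out this last step: for real $F$,
\begin{equation}
\partial_t \rho = \Omega^*\partial_t\Omega + \Omega\,\partial_t\Omega^* = \Omega^*\left(\left\{H,\Omega\right\} - iF\Omega\right) + \Omega\left(\left\{H,\Omega^*\right\} + iF\Omega^*\right) = \left\{H,\left|\Omega\right|^2\right\},
\end{equation}
where the $F$ terms cancel and the Leibniz rule for the Poisson bracket has been used.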
It is also interesting to note that when expanding the right hand sides of Eqs.~\eqref{EqBoppevolution} and \eqref{EqHamexpansion} in a series in $\hbar$, all the terms corresponding to even powers of $\hbar$ cancel out. An immediate consequence is that for quadratic Hamiltonians, the $\hbar\to 0$ limit may only affect the arbitrary phase term $F$. In other words, the wave operator and Wigner function formalisms share the property that the quantum equations of motion for quadratic systems remain unchanged in the classical limit $\hbar\to0$.
\section{Wave operator representation of thermal states \label{Sec:Imaginarytime}}
The wave operator formalism is also instructive when considering the quantum correction to equilibrium states. Recall that the density matrix for the Gibbs state at temperature $T{=}\tfrac1{k\beta}$ can be found (up to normalisation) via imaginary time propagation starting from $\beta{=}0$:
\begin{equation}
\label{EqBlochevolution}
\partial_\beta\hat{\rho}=-\frac{1}{2}\left(\hat{H}\hat{\rho}+\hat{\rho}\hat{H}\right), \ \ \hat{\rho}(0)=\hat{1}
\end{equation}
The solution to this equation is clearly $\hat{\rho}(\beta)={\rm e}^{-\beta\hat{H}}$, which selects the ground state as $\beta \to \infty$. The matching equation for the wave operator is:
\begin{equation}
\partial_\beta \hat{\Omega}=-\frac{1}{4}\left[\left(\hat{H}+i\hat{F}\right)\hat{\Omega}+\hat{\Omega}\left(\hat{H}+i\hat{F}\right)\right], \ \ \hat{\Omega}(0)=\hat{1} \label{eq:waveoperatorBloch}
\end{equation}
Eq.~\eqref{eq:waveoperatorBloch} can be proved directly by showing that the density matrix $\hat{\rho}=\hat{\Omega}\hat{\Omega}^\dagger$ is the solution to Eq.~\eqref{EqBlochevolution} when $\hat{\Omega}$ solves Eq.~\eqref{eq:waveoperatorBloch}%
\footnote{One needs to take into account the fact that $\hat{\Omega}$ commutes with $\hat{H}+i\hat{F}$ at all points in the evolution.}%
. Just as in the real time case, the free term $\hat{F}$ merely adds a phase to the state, and in what follows we shall take $\hat{F}=0$.
By vectorizing the thermal state wave operator $\hat\Omega$ according to Eq.~\eqref{EqVectorizationDefinition}, Eq.~\eqref{eq:waveoperatorBloch} can be restated in terms of Bopp operators as
\begin{equation}\label{eq:waveoperatorBlochVectorized}
\partial_\beta \ket{\Omega} =-\frac{1}{4}\left[H(\hat{x} - \frac{\hbar}{2} \hat{\theta}, \hat{p} + \frac{\hbar}{2} \hat{\lambda})
+ H(\hat{x} + \frac{\hbar}{2} \hat{\theta}, \hat{p} - \frac{\hbar}{2} \hat{\lambda})\right]\ket{\Omega}.
\end{equation}
Series expansion of the right hand side of Eq.~\eqref{eq:waveoperatorBlochVectorized} in $\hbar$ gives (displaying, for clarity, the case of a separable Hamiltonian $H=T(\hat{p})+V(\hat{x})$, for which no mixed $\partial_x\partial_p$ terms arise)
\begin{equation}
\label{EqImaginaryWaveOperatorEvolution}
\partial_\beta \ket{\Omega} =-\frac{1}{2}\left[H(\hat{x},\hat{p})+\frac{\hbar^2}{8}\left(\partial^2_x H(\hat{x},\hat{p})\hat{\theta}^2 +\partial^2_p H(\hat{x},\hat{p})\hat{\lambda}^2\right)
+O(\hbar^4)\right]\ket{\Omega}.
\end{equation}
Thus, the lowest order quantum correction to the ground or thermal state is of order $\hbar^2$, and only the terms corresponding to even powers of $\hbar$ survive. This means that, unlike in real time, Eq.~\eqref{eq:waveoperatorBlochVectorized} retains its form in the classical limit $\hbar\to0$ only for \textit{linear} Hamiltonians. It is also interesting to note that the semiclassical correction has the form of a Fokker--Planck-like diffusive term when expressed in phase space.
In order to showcase the distinction between classical and quantum worlds within the wave operator formalism, let us compare the thermal states for benchmark one-dimensional quadratic and quartic systems. These will be described by the Hamiltonians
\begin{equation}
\hat{H}^{(n)}=\frac{1}{2} \bi{\hat{p}}^2 +\frac{1}{2}\bi{\hat{x}}^n,
\end{equation}
where $n{=}2$ and $n{=}4$, respectively. Let us consider three levels of approximation to Eq.~\eqref{eq:waveoperatorBlochVectorized}: we denote by $\ket{\Omega^{(n)}_q}$ the state obtained when evolving using the fully quantum Eq.~\eqref{eq:waveoperatorBloch}. A semiclassical $\ket{\Omega^{(n)}_s}$ is derived from Eq.~\eqref{EqImaginaryWaveOperatorEvolution} by dropping the $O(\hbar^4)$ terms, and finally $\ket{\Omega^{(n)}_c}$ is the evolution using the $\hbar \to 0$ limit of Eq.~\eqref{EqImaginaryWaveOperatorEvolution}, which additionally wipes out the $O(\hbar^2)$ terms. Fig.~\ref{fig:quadratic} illustrates these three types of evolution. As expected, the quantum and semiclassical evolutions are identical for the $n=2$ quadratic Hamiltonian, but, surprisingly, produce only slightly different results for the quartic $n=4$ Hamiltonian. In all cases however, the distinction between classical and quantum evolutions is clear: in the former case the absence of a fundamental commutation relation (and therefore of a zero-point energy) is reflected in both the ground state energy and $\Delta x\Delta p$, as shown in Fig.~\ref{fig:quadratic}.
\begin{figure*}[h!]
\captionsetup{width=\textwidth}
\centering
\begin{center}
\includegraphics[width=\textwidth]{quadratic.pdf}
\end{center}
\begin{center}
\caption{Comparison of the imaginary time dynamics for quantum, semiclassical and classical systems, in the cases of quadratic and quartic Hamiltonians. In both cases, the expected energy of the system is distinguished by the zero-point energy present in the quantum and semiclassical evolutions. Finally, the uncertainty relation in position and momentum is clearly respected in the quantum and semiclassical system, while in the classical system it approaches zero, reflecting the zero energy classical ground state.}
\label{fig:quadratic}
\end{center}
\end{figure*}
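A compact numerical illustration of this comparison (a sketch only, not the code behind Fig.~\ref{fig:quadratic}; the truncation size, grid, and $\beta$ below are chosen arbitrarily) evaluates the thermal energies implied by $\hat{\Omega}(\beta)$ in the quantum and classical cases:
\begin{verbatim}
import numpy as np

hbar, M, beta = 1.0, 60, 8.0
# truncated oscillator basis:
#   x = sqrt(hbar/2)(a + a+),  p = i sqrt(hbar/2)(a+ - a)
a = np.diag(np.sqrt(np.arange(1, M)), 1)
Xq = np.sqrt(hbar/2)*(a + a.T)
Pq = 1j*np.sqrt(hbar/2)*(a.T - a)
xg, pg = np.meshgrid(np.linspace(-5, 5, 301),
                     np.linspace(-5, 5, 301), indexing='ij')

for n in (2, 4):
    Hq = 0.5*(Pq @ Pq).real + 0.5*np.linalg.matrix_power(Xq, n)
    w = np.linalg.eigvalsh(Hq)
    # Omega(beta) = exp(-beta H/2) => <H> = Tr(H e^{-bH})/Tr(e^{-bH})
    Eq = np.sum(w*np.exp(-beta*w))/np.sum(np.exp(-beta*w))
    Hc = 0.5*pg**2 + 0.5*xg**n   # classical: |Omega_c|^2 = e^{-beta H}
    Ec = np.sum(Hc*np.exp(-beta*Hc))/np.sum(np.exp(-beta*Hc))
    print(n, Eq, Ec)  # quantum keeps the zero-point energy; classical -> 0
\end{verbatim}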
\section{Discussion \label{Sec:Discussion}}
Here we have presented a novel representation of Hilbert space dynamics, where positivity is automatically preserved by performing dynamics on the square root of the density operator.
One advantage of the present formalism is that it is set in Hilbert space with a Schr\"{o}dinger-like equation of motion. Consequently, it is possible to use all of the highly efficient tools developed for these dynamics (e.g. tensor network algorithms) when performing wave operator calculations. Furthermore, by introducing the concept of purification it has been possible to introduce Bopp operators to this formalism. This has resulted in a phase space representation of the wave operator, which in turn allows us to identify the Wigner function as the projection of the wave operator onto phase space.
Taking the classical limit of the wave operator formalism, we find that it corresponds to the KvN representation of classical dynamics. For quadratic Hamiltonians, this correspondence to classical dynamics is exact even before taking any $\hbar\to0$ limit. This mirrors similar results to be found in the path integral and Wigner representations of dynamics for quadratic systems. In the former case, a saddle point approximation ensures only paths corresponding to the classical action contribute to the propagator, while the Moyal star operation evolving the quasiprobability $W(x,p)$ reduces to the Poisson bracket in the phase space representation.
When performing an analogous procedure in imaginary time, an $O(\hbar^2)$ correction distinguishes quantum and classical quadratic systems, suggesting the most significant difference between the quantum and classical regimes is in the ground state they inhabit, rather than in their real time dynamics. The fact that the semiclassical expansion of the imaginary time evolution exhibits a quadratic correction to the classical Hamiltonian is strikingly similar to another context in which the equilibrium state of a system is determined by a correction to its bare Hamiltonian. Specifically, when considering a system strongly coupled to an environment, its thermal state is described by a ``Hamiltonian of mean force'' that accounts for the environmental interaction. In those cases where this effective Hamiltonian is known \cite{PhysRevA.106.012204,Cerisola,PhysRevB.95.125124,doi:10.1116/5.0073853,Miller2018}, the correction to the bare Hamiltonian is \textit{also} quadratic rather than linear. It is tempting to speculate that these two phenomena may be related to each other.
There are a number of potential extensions to this formalism. For instance, the introduction of commuting Bopp operators in previous sections relies on the canonical commutation relation in an infinite dimensional space of operators \cite{doi:10.1063/1.1665849}. Finite dimensional Hilbert spaces are more restrictive and generally do not allow introducing the analogs of Bopp operators sharing the similar commutation properties. Nevertheless, one might employ (for example) a Jordan-Schwinger map \cite{Sakurai2010-mw} to represent such a finite dimensional system. This would introduce an oscillator basis obeying canonical commutation relation in the continuum limit, and open a route to calculations analogous to those presented here. One also might extend the wave operator machinery to dissipative dynamics. For example, it has been shown in Ref.~\cite{reznik_unitary_1996} that a positive trace preserving map representing wave-operator evolution has the form:
\begin{gather}\label{LindbladEqnForWaveOperator}
i\partial_t\hat\Omega = \sum_k \hat{A}_k \hat\Omega \hat{B}_k.
\end{gather}
The purification representation of Eq.~\eqref{LindbladEqnForWaveOperator} describes an arbitrary unitary transformation of the state vector $\ket{\Omega}$ and hence is capable of describing an arbitrary transformation of the density matrix $\hat\rho$, similar to the standard Lindblad equation. Nevertheless, the precise connection to the latter is highly non-trivial: a conventional linear Lindblad equation will in general correspond to a highly nonlinear Eq.~\eqref{LindbladEqnForWaveOperator}, where the operators $\hat A_k$ and $\hat B_k$ are functions of $\hat\Omega$ \cite{ilya}.
More generally, the hunt for novel efficient representations for interacting systems is one of the chief motivations for the development of the wave operator formalism. Specifically, the fact that positivity is automatically preserved is of vital importance when attempting to construct a hybrid formalism, where a partial classical limit is taken on one part of an interacting system. The importance of developing such formalisms cannot be overstated, given that all our interactions with the quantum world must be mediated through essentially classical devices, which will themselves have a quantum backreaction. The growing sophistication of quantum technology demands we be able to accurately describe such phenomena via a hybrid approach. Traditionally, however, such hybrid descriptions are built on the quantum density operator $\hat{\rho}$, and the hybrid equations of motion derived (e.g. the AGK equation) \cite{doi:10.1146/annurev.physchem.57.032905.104702,PhysRevD.37.3522,doi:10.1063/1.478811,doi:10.1063/1.481225} do not necessarily preserve the positivity of the state \cite{Bondar2019}, calling into question the physicality of the dynamics. It is our hope that the wave operator formalism developed here will provide a path to the resolution of these issues.
|
{
"arxiv_id": "2302.13278",
"language": "en",
"timestamp": "2023-02-28T02:14:58",
"url": "https://arxiv.org/abs/2302.13278",
"yymm": "2302"
} | \section{Introduction}\label{sec:intro}
\subsection{Background and Motivation}
\IEEEPARstart{M}{odern} electric power systems are structured and managed hierarchically, stretching from the inter-regional network, to the regional transmission system, high/medium/low-voltage distribution system, and user-side microgrid. With the rapid development of ultra-high-voltage transmissions, the surge of distributed flexible resources, and the integration of multiple energy sectors, systems in different regions and levels are coupled more closely. The coordinated optimal dispatch (COD) among different subsystems is becoming increasingly important to facilitate the economic and secure operation of the entire system.
Owing to technical restrictions (such as limited computation capacity) and regulatory reasons (such as management independence and information privacy), it is impractical to dispatch the hierarchical system by the centralized optimization that requires the collection of global information \cite{lit:dual_6}. In this regard, researchers introduce coordinated optimization methods to realize the COD in a decoupled fashion. The coordinated optimization decomposes the joint optimization problem into an upper-level problem and a series of lower-level problems. These problems are solved by the corresponding subsystem operators independently, and the overall optimum is achieved via iterative information exchange among the upper- and lower-level systems. A broad spectrum of coordinated optimization methods has been developed with applications to various scenarios, e.g., multi-area coordinated dispatch, transmission-distribution coordinated dispatch, and the coordinated control of distributed resources. However, conventional coordinated optimization methods rely on iterative information interaction, which has drawbacks including: 1) convergence issue: the iteration process may converge slowly, oscillate, or even diverge under some system operation states, and the convergence performance of some methods also depends on parameter tuning; 2) communication burden: the iterative information exchange will occupy the communication channel for a long time until the iteration converges; 3) scalability issue: the iteration number increases sharply with the number of system levels and subsystems \cite{ref:nested}, which impedes the coordination of hierarchical systems with multiple levels and multiple subsystems; and 4) compatibility issue: subsystems in different regions and levels are coordinated in a single-round serial manner in real-world dispatch, which precludes iterative coordination methods that require repeated information exchange \cite{ref:iteration}. These drawbacks restrict the practical application of iterative coordination methods. Realizing the coordinated optimization in a non-iterative fashion will radically overcome the drawbacks above, which is a longstanding expectation of industry and academia.
\begin{table*}[t]
\caption{Review of Coordinated Optimal Dispatch Methods}
\label{tab:review_dco}
\centering
\begin{threeparttable}
\setlength{\tabcolsep}{3.5mm}
\begin{tabular}{lllllll}
\toprule
Method & Decomposed & \makecell[l]{No \\iteration} & \makecell[l]{Privacy \\protection} & \makecell[l]{Multi-level \\ coordination} & \makecell[l]{Communication \\burden} & \makecell[l]{Computation\\ burden \tnote{1}} \\
\midrule
Centralized optimization & $\times$ & \checkmark & $\times$ & \checkmark & Low & High\\
Encrypted optimization [26], [27] & $\times$ & \checkmark & \checkmark & \checkmark & Low & High \\
Primal decomposition & & & & & & \\
- Benders decomposition [5], [6] & \checkmark & $\times$ & \checkmark & $\times$ & High & Medium \\
- Multi-parametric programming [7] & \checkmark & $\times$ & \checkmark & $\times$ & High & Medium \\
- Marginal equivalence [8], [17] & \checkmark & $\times$ & \checkmark & $\times$ & High & Medium\\
- \textbf{Equivalent projection (this paper)} & \checkmark & \checkmark & \checkmark & \checkmark & Low & Medium\\
Dual decomposition [1], [9-13], [19] & \checkmark & $\times$ & \checkmark & $\times$ & High & Medium \\
Consensus algorithm [14-16] & \checkmark & $\times$ & \checkmark & $\times$ & High & Low \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[1] Maximum computation resource requirement of upper-level and lower-level subsystems.
\end{tablenotes}
\end{threeparttable}
\end{table*}
To this end, this work aims at a non-iterative coordinated optimization method, i.e., coordinating the dispatch of multiple subsystems without iterative information exchange. The reason conventional coordinated optimization requires iterations is that the information exchanged in each round of iteration does not represent the complete technical and economic features of the subsystems, which makes the upper-level decision conflict with the feasibility or optimality of subsystems. Consequently, iterative information exchange is needed to revise the upper-level decision. To address this issue, this study proposes a novel system reduction theory termed the equivalent projection (EP), which constructs an external equivalent of the subsystem using far less data. The EP model depicts the complete technical and economic information of the subsystem using a group of inequalities regarding coordination variables, and can be used as a substitute for the original subsystem model in the COD. The EP model ensures that the upper-level decision is consistent with the feasibility and optimality of the lower-level system, which avoids iterative information exchange among subsystems and realizes the COD in an iteration-free fashion.
The EP theory and the non-iterative COD framework are introduced in Part I of this paper. To calculate the EP, a novel polyhedral projection algorithm is developed in Part II, along with detailed applications of the proposed COD method.
\subsection{Literature Review}
Existing studies on the coordinated optimization and system reduction are reviewed, as summarized in TABLE \ref{tab:review_dco} and TABLE \ref{tab:review_reduction}, respectively.
\subsubsection{Coordinated dispatch}
Applications of the coordinated optimization in power systems date back to the 1980s \cite{ref:dco_first}. Existing coordinated optimization methods can be generally classified into two categories, i.e., the primal decomposition and the dual decomposition. The former decomposes the joint optimization problem by splitting the coupling variables and coordinates the solution of the upper and lower problems by updating coupling variables. Typical primal decomposition methods include: the Benders decomposition \cite{ref:primal_1}, generalized Benders decomposition \cite{ref:primal_2}, multi-parametric programming method \cite{ref:primal_3}, marginal equivalence method \cite{ref:primal_5}, etc. The latter decomposes the joint optimization by relaxing the coupling constraints to the objective function, and coordinates the solution of upper and lower problems by updating dual multipliers of coupling constraints. Typical relaxation methods include: the Lagrangian relaxation \cite{lit:dual_1}, augmented Lagrangian relaxation \cite{lit:dual_2}, optimality condition decomposition \cite{lit:dual_3}, etc. These relaxation methods can be combined with different multiplier updating algorithms, e.g., the sub-gradient method, alternating direction method of multipliers (ADMM) \cite{lit:dual_4}, cutting plane method \cite{lit:dual_5}, and dynamic multiplier updating method \cite{lit:dual_6}. In addition to the coordinated optimization, fully distributed dispatch methods are also developed based on the consensus algorithm \cite{lit:consensus, lit:review3}. These methods only require algebraic calculations for each subsystem and thus have very low computation expense. However, the communication burden increases as many more iterations are needed by these methods.
The coordinated optimization realizes the decomposed optimal dispatch of interconnected subsystems by merely exchanging some boundary information of each subsystem, which protects information privacy and alleviates the computation burden of the centralized optimization. In the literature, applications of the coordinated optimization are studied for different problems, e.g., multi-regional economic dispatch \cite{ref:app_1}, market integration of multiple regions \cite{ref:app_2}, transmission-distribution system coordination \cite{ref:app_3}, and coordinated dispatch of multi-energy systems \cite{ref:app_4}. In addition to the optimal dispatch, coordinated optimization methods are also applied in areas of optimal planning \cite{ref:app_6}, coordinated state estimation \cite{ref:app_5}, and optimal voltage/Var control \cite{ref:app_7}. Reviews of coordinated dispatch methods in power systems can be found in \cite{ref:iteration, lit:dco_review2}.
However, the requirement of iterative information exchange brings computational and practical drawbacks to existing coordinated optimization methods, as analyzed in the former section. Some recent references have recognized the significance of the non-iterative solution for coordinated optimization. In reference \cite{ref:dco_non_iterative}, the non-iterative transmission-distribution system coordination is realized based on the aggregated cost function of the distribution system. In this study, however, the dimension of the coordination variable is restricted to 1, which greatly simplifies the problem setup. In \cite{ref:tan4}, a variable and constraint elimination method is proposed to realize the non-iterative COD of the multi-area power system, in which high-dimensional coordination variables are incorporated. Nevertheless, the applicability to multi-level systems is not addressed in this work. References \cite{ref:privacy1} and \cite{ref:privacy2} solve the coordinated optimization in a centralized manner and protect the information privacy of subsystems through data encryption. Iterations are avoided in these studies, but computational barriers exist since the coordinator has to solve a large-scale joint optimization problem. Furthermore, the coordinated optimization results can only be decoded by each subsystem, which may hinder the transparency of dispatch and pricing. A basic theory and method for non-iterative, privacy-protected, and computationally efficient COD remains a challenging issue, which motivates the present work.
\subsubsection{System reduction}
\begin{table}
\caption{Review of System Reduction Methods}
\label{tab:review_reduction}
\centering
\begin{tabular}{llll}
\toprule
\multirow{2}{*}{Method} & \multicolumn{3}{c}{Considered elements}\\
\cline{2-4}
& \makecell[l]{Equality \\constraints} & \makecell[l]{Inequality \\constraints} & \makecell[l]{Objective \\function} \\
\midrule
Network reduction [28], [29] & \checkmark & $\times$ & $\times$\\
Equivalent line limit [30] & \checkmark & \checkmark & $\times$ \\
\makecell[l]{Feasible region projection\\and aggregation [31-36]} & \checkmark & \checkmark & $\times$ \\
\textbf{\makecell[l]{Equivalent projection \\(this paper)}} & \checkmark & \checkmark & \checkmark \\
\bottomrule
\end{tabular}
\end{table}
To simplify the analysis and calculation of large-scale interconnected power systems, it is desired to reduce the scale (number of variables and/or constraints) of the system model \cite{ref:net_eq}. The most common reduction technique is the network equivalence, e.g., the Ward equivalence \cite{ref:ward}, which eliminates internal variables from the network equation of an electric grid and yields the equivalent network model at boundary nodes. The network reduction is widely applied in power flow calculation, security analysis, state estimation, and other static analysis problems. However, the network reduction cannot be applied to coordinated optimization problems, since operation limits and cost functions of resources are not incorporated in the reduction.
To address this issue, an equivalent method preserving transmission limits is proposed in \cite{ref:net_reduce_cst}, which captures the thermal limits of equivalent lines. However, the network is assumed unloaded in this work, and resource operation limits are omitted in the equivalent model. In the authors' previous work \cite{ref:tan1}, projection methods are proposed to reduce the operation constraints of the regional transmission system to the tie-line boundary, which can be used to enforce internal constraints of each area in the cross-area power trading. In \cite{ref:minkowski1}, the Minkowski addition is used to estimate and aggregate the flexibility of distributed resources. In \cite{ref:tan2} and \cite{ref:tan3}, the projection-based reduction method is studied to capture the allowable range of active and reactive power output of virtual power plants. These studies focus on enforcing technical constraints in the reduced model, but do not consider the operation cost function of the system. Reference \cite{ref:minkowski} proposes a Zonotope-based method for characterizing the flexibility region of distributed resources with explicit pricing of each Zonotope parameter. Reference \cite{ref:dist_pq} characterizes flexibility regions of the distribution network given different operation cost levels. Since cost functions are modeled separately from the projection of operation constraints, the accuracy of these methods is limited. Additionally, existing methods are developed for specific application scenarios. There is a research gap in general system reduction theory incorporating both operation constraints and objective function of the system, which will be addressed in this work.
\subsection{Contributions and Paper Organization}
The contribution of Part I is twofold,
1) A novel system reduction theory, namely the EP, is proposed, which constructs an external equivalent of the entire optimization model with far fewer variables and constraints. The EP model is proven to capture identical technical and economic operation characteristics of a subsystem without revealing private information.
2) A novel coordinated optimization framework is developed based on the EP, which supports the COD of multi-level systems in a decomposed, iteration-free, and privacy-protected fashion. This coordination framework also has the computational advantage by reducing the scale of the subsystem model.
The paper is organized as follows. Section II introduces the joint optimal dispatch problem and its primal decomposition. Section III introduces the EP theory. Section IV develops the EP-based coordinated optimization method and discusses its properties. Section V illustrates the proposed coordination method with a small example. Section VI concludes this paper.
\section{Problem Formulation}\label{sec:formulation}
\subsection{Basic Notation}
Consider the COD of a system composed of multiple levels on a short-term basis (e.g., day-ahead, hour-ahead, and 5 minute-ahead). Assume the tree-form connection of subsystems, i.e., each lower-level system is connected to one and only one upper-level system, which can be achieved by properly splitting the entire system. For two adjacent levels, as shown in Fig. \ref{fig_structure}, variables in each subsystem can be partitioned into two groups,
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{Photo/fig_structure.pdf}
\caption{System structure of two adjacent levels.}
\label{fig_structure}
\end{figure}
\begin{itemize}
\item Coordination variable $x\in \mathbb{R}^{N_x}$, which contains decision variables at the boundary node, e.g., the power exchange between the two levels;
\item Internal variable $y\in \mathbb{R}^{N_y}$, which contains internal control variables (e.g., power output of generators) and state variables (e.g., voltage magnitudes and phase angles) of the subsystem.
\end{itemize}
Following the hierarchical management paradigm, the upper-level system determines the value of the coordination variable, but cannot directly determine the internal variable of each lower-level subsystem. The lower-level subsystem optimally determines the value of the internal variable with the coordination variable as the boundary condition.
In the following contents, we use superscript $u$ and $l$ to label variables of upper-and lower-level systems, respectively. Lower-level systems are indexed by $r \in [R]$, where $R$ is the number of subsystems and $[R]:=\{1,\cdots,R\}$. To concatenate column vectors, we use the syntax $(x_1,\cdots,x_R):=[x_1^\top,\cdots,x_R^\top]^\top$ and $x_{[R]}:=(x_1,\cdots,x_R)$.
\subsection{Joint Optimal Dispatch}
The joint optimal dispatch (JOD) seeks the most cost-effective generation schedule of the entire multi-level system subject to operation constraints. Without loss of generality, the JOD model is represented as follows,
\begin{subequations}
\label{jed}
\begin{align}
\min_{x,y} \ & C^u(x_{[R]}, y^u) + \sum_{r\in[R]} C^l_r(x_r,y^l_r) \label{jed:obj}\\
\text{s.t.} \ & h^u(x_{[R]}, y^u) \leq 0,\label{jed:cst_c}\\
& h_r^l(x_r, y_r^l) \leq 0, \forall r \in [R].\label{jed:cst_i}
\end{align}
\end{subequations}
Function $C^u(\cdot)$ and $C^l(\cdot)$ respectively denote cost functions of upper-and lower-level systems, which only rely on internal variables. Equations \eqref{jed:cst_c} and \eqref{jed:cst_i} enforce operation constraints of the upper-level system and each lower-level system, respectively.
\subsection{Primal Decomposition}
In the JOD model, the optimization of different subsystems are coupled by coordination variable $x_r$. If the value of $x_r$ is given as $\hat{x}_{r}$, the JOD model can be decomposed into an upper-level problem and the lower-level problem corresponding to each subsystem,
\begin{itemize}
\item upper-level problem (UP\_0)
\end{itemize}
\begin{equation}
\label{model:up_0}
\min_{y^u} \left\{C^u(\hat{x}_{[R]}, y^u) : h^u(\hat{x}_{[R]}, y^u) \leq 0 \right\},
\end{equation}
\begin{itemize}
\item lower-level problem (LP\_0)
\end{itemize}
\begin{equation}
\label{model:lp_0}
\min_{y^l_r} \left\{C^l(\hat{x}_{r}, y^l_r) : h^l_r(\hat{x}_{r}, y^l_r) \leq 0 \right\}, \forall r\in [R].
\end{equation}
This decomposition scheme is known as the primal decomposition in the optimization literature \cite{lit:dual_5}. The key point of this decomposition is to choose the proper value of $\hat{x}_{r}$ so that the feasibility and optimality of each subsystem are met. In engineering practice, the dispatch for different levels is decoupled and $\hat{x}_{r}$ is determined empirically or based on simplified lower-level system models. For instance, in the transmission network dispatch, distribution networks are simplified as nodal netload. In the pan-European spot market, the cross-regional trading is cleared by simplifying each price area as a virtual node \cite{ref:eu}. However, since the detailed information of lower-level systems is not completely incorporated, $\hat{x}_{r}$ determined based on the simplified model may lead to uneconomical and even infeasible dispatch commands for lower-level systems. To this end, existing coordinated optimization algorithms update $\hat{x}_{r}$ through iterative information exchange among upper-and lower-level systems. In these methods, the upper-level system determines the optimal value of $\hat{x}_{r}$ by solving problem \eqref{model:up_0}. Then each lower-level system checks the feasibility and optimality of $\hat{x}_{r}$ and returns information to the upper-level system to adjust the value of $\hat{x}_{r}$.
To overcome the drawbacks caused by iterations of conventional coordinated optimization methods, this study seeks the non-iterative solution for coordinating problem \eqref{model:up_0} and \eqref{model:lp_0}. The basic idea is to eliminate internal variables from problem \eqref{model:lp_0} and obtain an equivalent model with reduced scale to replace the original lower-level system model for the upper-level optimization. Detailed introductions of the equivalent system reduction and non-iterative coordinated optimization are included in the following two sections.
\section{Equivalent Projection Theory}\label{sec:theory}
To realize the non-iterative coordinated optimization, this study proposes the EP theory to construct an external equivalent of the lower-level system model. First, to ensure that internal variables only exist in constraints, the cost function of each lower-level system is transformed into an inequality form using the epigraph. Then, the feasible region characterized by the technical and economic constraints of the lower-level system is projected onto the subspace of the coordination variable. Through the projection, internal variables of the lower-level system are eliminated, yielding a reduced model that equivalently represents the lower-level system with far less data.
\subsection{Reformulation of Objective Function}\label{chp:epigraph}
In the original JOD model \eqref{jed}, internal variable $y^l_r$ of the lower-level system not only appears in operation constraint \eqref{jed:cst_i}, but also exists in the objective function, which impedes the decomposition of the JOD model. Hence, we convert the lower-level objective function into the inequality-form via the epigraph. With this conversion, the JOD model is reformulated as follows,
\begin{subequations}
\label{jed2}
\begin{align}
\min_{x,\pi,y} \ & C^u(x_{[R]}, y^u) + \sum_{r\in[R]} \pi_r \label{jed2:obj}\\
\text{s.t.} \ & h^u(x_{[R]}, y^u) \leq 0,\label{jed2:cst_c}\\
& h_r^l(x_r, y_r^l) \leq 0, \forall r \in [R],\label{jed2:cst_i}\\
& C^l_r(x_r, y_r^l) \leq \pi_r \leq \overline{\pi}_r, \forall r \in [R].\label{jed2:cst_e}
\end{align}
\end{subequations}
Equation \eqref{jed2:cst_e} represents the epigraph of the lower-level cost function $C^l_r(x_r, y_r^l)$. Variable $\pi_r$ denotes the operation cost of lower-level system $r$. Constant $\overline{\pi}_r$ is introduced to bound the value of $\pi_r$, which can take any value larger than the supremum of $C^l_r(x_r, y_r^l)$.
\begin{theorem}
\label{theorem:jed}
The reformulated JOD model \eqref{jed2} is equivalent to the original JOD model \eqref{jed}.
\end{theorem}
\begin{proof}
Since problem \eqref{jed2} minimizes over $\pi_r$, the first inequality in equation \eqref{jed2:cst_e} is always binding at the optimum, i.e., $C^l_r(\hat{x}_r, \hat{y}^{L}_r) = \hat{\pi}_r$ always holds for the optimal solution $(\hat{x}_r, \hat{y}^{L}_r, \hat{\pi}_r)$. This is because if the constraint were not binding, a value of $\pi_r$ smaller than $\hat{\pi}_r$ could be found that further decreases the objective function \eqref{jed2:obj}, which contradicts the optimality of $\hat{\pi}_r$. Hence, the reformulated JOD model \eqref{jed2} is equivalent to the original model \eqref{jed}.
\end{proof}
In the reformulated JOD model, regard $(x_r, \pi_r)$ as the augmented coordination variable of subsystem $r$. Then internal variable $y^l_r$ only appears in the lower-level operation constraint, and the upper-and lower-level systems are only coupled through $(x_r, \pi_r)$.
\subsection{Equivalent Projection}
The target of the EP is to eliminate internal variables from the optimization model of each lower-level system to obtain its external equivalence. First, define the operation feasible region (OFR) of each lower-level system,
\begin{definition}[OFR]\label{def:ofr}
The OFR of lower-level system $r$ is the feasible region of both coordination variables and internal variables subject to the operation constraints, i.e., $\Omega_r := \left\{(x_r, \pi_r, y^l_r)\in \mathbb{R}^{N_x+1} \times \mathbb{R}^{N_y} : \text{Eq } \eqref{jed2:cst_i}-\eqref{jed2:cst_e} \right\}$.
\end{definition}
Then define the EP of the lower-level system,
\begin{definition}[EP]\label{def:ESR}
The EP model of lower-level system $r$ is the projection of $\Omega_r$ onto the subspace of $(x_r,\pi_r)$, i.e., $\Phi_r := \left\{(x_r,\pi_r) \in \mathbb{R}^{N_x+1} : \exists y^l_r, \ \text{s.t. } (x_r, \pi_r, y^l_r) \in \Omega_r \right\}$.
\end{definition}
For any coordination variable $(x_r,\pi_r)$ that satisfies the EP model, the above definition ensures that there is at least one feasible operation state $y^l_r$ to execute the dispatch command $x_r$ with operation cost no larger than $\pi_r$. Through the projection, internal variables are eliminated from the lower-level optimization model, yielding a lower-dimensional feasible region that depicts the technical and economic features of the system. The resulting EP model contains all possible values of coordination variables that can be executed by the lower-level system securely and economically and thus, can be used as a substitute for the detailed lower-level model in the coordinated optimization.
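For linear system models, Definition \ref{def:ESR} amounts to a polyhedral projection. For toy instances it can be computed by textbook Fourier--Motzkin elimination (the scalable projection algorithm is developed in Part II); the sketch below, with all system data chosen purely for illustration, projects a two-generator lower-level system onto $(x_r,\pi_r)$:
\begin{verbatim}
import numpy as np

def fm_eliminate(A, b, j):
    """Project {z : A z <= b} onto coordinates != j (Fourier-Motzkin)."""
    pos = [i for i in range(len(b)) if A[i, j] >  1e-9]
    neg = [i for i in range(len(b)) if A[i, j] < -1e-9]
    zer = [i for i in range(len(b)) if abs(A[i, j]) <= 1e-9]
    rows = [(np.delete(A[i], j), b[i]) for i in zer]
    for i in pos:
        for k in neg:   # pairwise combinations cancel z_j
            r = A[i]/A[i, j] - A[k]/A[k, j]
            rows.append((np.delete(r, j), b[i]/A[i, j] - b[k]/A[k, j]))
    return (np.array([r for r, _ in rows]),
            np.array([c for _, c in rows]))

# Toy lower level: generators y1, y2 in [0, 2], exchange x = y1 + y2,
# cost pi >= 1*y1 + 3*y2, pi <= 10.  Variable order z = (x, pi, y1, y2).
A = np.array([[ 1, 0, -1, -1], [-1, 0, 1, 1],   # x - y1 - y2 = 0
              [ 0, 0,  1,  0], [ 0, 0, -1, 0],  # 0 <= y1 <= 2
              [ 0, 0,  0,  1], [ 0, 0, 0, -1],  # 0 <= y2 <= 2
              [ 0,-1,  1,  3],                  # y1 + 3*y2 <= pi
              [ 0, 1,  0,  0]], dtype=float)    # pi <= 10
b = np.array([0, 0, 2, 0, 2, 0, 0, 10], dtype=float)

for j in (3, 2):                 # eliminate y2, then y1
    A, b = fm_eliminate(A, b, j)
# (A, b) now describe the EP model Phi_r over (x, pi)
\end{verbatim}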
\subsection{Discussions}
\subsubsection{Privacy protection}
In the coordinated dispatch, private and sensitive information of lower-level systems should not be revealed, which typically refers to network parameters, nodal loads, and cost functions of resources \cite{ref:privacy1, ref:privacy2}. The privacy protection of the EP is guaranteed by the following theorem.
\begin{theorem}
The original model of the lower-level system cannot be inferred from the explicit representation of its EP model.
\end{theorem}
\begin{proof}
From Definition \ref{def:ESR}, $\Phi_r$ is the projection of the set $\left\{(x_r,\pi_r,0) : (x_r,\pi_r)\in\Phi_r\right\}$; it is also the projection of the set $\left\{(x_r,\pi_r,0), (x_r,\pi_r,1) : (x_r,\pi_r) \in \Phi_r \right\}$. Hence, one EP model may be the projection of different OFRs and thus, the original system model cannot be inferred from the EP model alone.
\end{proof}
The projection is a many-to-one mapping from the high-dimensional region to the low-dimensional space, which endows the EP with its privacy-protection property. One may be concerned that the lower-level system model could be recovered from a series of EP models based on historical data. This is also unachievable, since even the dimension of the internal variable $y^l_r$ cannot be inferred from EP models, let alone the detailed model of the lower-level system.
\subsubsection{Comparison with network reduction}
The EP is reminiscent of the power network reduction such as the Ward equivalence. The network reduction technique eliminates internal variables from the network equation and obtains a reduced equation for only boundary nodes. The network reduction is mathematically formulated as a matrix inversion problem and can be computationally solved by the Gaussian Elimination. The network reduction makes equivalence of the network equation at the boundary nodes. However, operation limits and costs of components in the network are neglected during the reduction. Hence, the network reduction can only be used in network analysis (e.g., power flow calculation, state estimation), but cannot be used for the coordinated optimization. In contrast, the proposed EP method eliminates internal variables from the entire optimization model containing not only the network equation, but also inequality constraints and operation cost. The resulted EP model extracts both technical and economic features of the subsystem to its boundary. Therefore, the EP model can be used to replace the optimization model of the lower-level system in the upper-level coordinated optimization. In this regard, the proposed EP method extends the equivalence theory of power systems.
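For reference, the network reduction described above amounts to a Schur complement on the nodal admittance matrix. A minimal sketch (illustrative only; zero current injection is assumed at the eliminated internal nodes):
\begin{verbatim}
import numpy as np

def kron_reduce(Y, boundary):
    """Ward-type reduction of I = Y V onto the boundary nodes."""
    n = Y.shape[0]
    internal = [i for i in range(n) if i not in boundary]
    Ybb = Y[np.ix_(boundary, boundary)]
    Ybi = Y[np.ix_(boundary, internal)]
    Yib = Y[np.ix_(internal, boundary)]
    Yii = Y[np.ix_(internal, internal)]
    # Gaussian elimination of the internal voltages (Schur complement)
    return Ybb - Ybi @ np.linalg.solve(Yii, Yib)
\end{verbatim}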
\subsubsection{Comparison with feasible region projection}
In the recent literature, the feasible region projection is employed for operation constraints reduction. For the multi-area transmission system, the projected region is used to characterize the allowable range of tie-line power that can be executed by the regional system \cite{ref:tan1}. For the active distribution network, the projected region is used to characterize the admissible range of flexibility the distribution system can provide \cite{ref:tan2, ref:tan3, ref:minkowski, ref:appr}. Though the technical feasibility of the lower-level system can be guaranteed via the feasible region projection, the economic characteristics are not finely incorporated and thus, the optimality of the coordination is not guaranteed. With the EP in the present paper, in contrast, economic and technical characteristics of the lower-level system are projected onto the coordination space simultaneously. As will be proven in the following section, using the EP model to replace the original lower-level system model in the coordinated optimization, the coordinated dispatch result will be consistent with the feasibility and optimality of the lower-level system.
\section{Coordinated Optimization Based on EP}
\subsection{EP-based Model Decomposition}
In the proposed framework, each lower-level system submits its EP model as a substitute for its original model to participate in the upper-level coordinated dispatch. Thereby, the JOD model is equivalently decomposed into the following two levels,
\begin{itemize}
\item upper-level problem (UP\_1)
\end{itemize}
\begin{subequations}
\label{model:up_1}
\begin{align}
\min_{x,\pi,y^u} \ & C^u(x_{[R]}, y^u) + \sum_{r\in[R]} \pi_r\\
\text{s.t.} \ & h^u(x_{[R]},y^u) \leq 0,\\
& (x_r, \pi_r) \in \Phi_r, \forall r\in [R].
\end{align}
\end{subequations}
\begin{itemize}
\item lower-level problem (LP\_1)
\end{itemize}
\begin{subequations}
\label{model:lp_1}
\begin{align}
\min_{y^l_r} \ & C^l_r(\hat{x}_r, y^l_r)\\
\text{s.t.} \ & h^l_r(\hat{x}_r,y^l_r) \leq 0,\\
& C^l_r(\hat{x}_r, y^l_r) \leq \hat{\pi}^l_r.\label{model:lp_1_cost}
\end{align}
\end{subequations}
In problem LP\_1, $\hat{x}_r$ and $\hat{\pi}^l_r$ are the optimal solution of the upper-level problem UP\_1. Each lower-level system dispatches its local system with the upper-level decision result as the boundary condition.
\subsection{Non-Iterative Coordination Scheme}\label{sec:scheme}
Based on the EP, the optimal dispatch of the upper- and lower-level systems can be coordinated in a non-iterative fashion. The proposed coordination scheme contains three stages: system reduction, coordinated optimization, and subsystem operation. The information exchange procedure of the proposed coordination scheme is illustrated in Fig. \ref{fig_process}. The stages are detailed as follows.
\begin{figure}[t]
\centering
\includegraphics[width = 3.5 in]{Photo/fig_process.pdf}
\caption{Procedure of the EP-based coordinated optimization.}
\label{fig_process}
\end{figure}
\begin{itemize}
\item \textbf{Stage 1: equivalent projection.} Each lower-level system calculates its EP model $\Phi_r$ according to Definition \ref{def:ESR} and submits $\Phi_r$ to the upper-level system.
\item \textbf{Stage 2: coordinated optimization.} The upper-level system solves problem \eqref{model:up_1} with the $\Phi_r$ of each lower-level system as constraints. The optimal solution for the coordination variable is $(\hat{x}_r,\hat{\pi}_r)$, which is published to the lower-level systems as the dispatch command.
\item \textbf{Stage 3: subsystem operation.} Each lower-level system fixes the coordination variable to $(\hat{x}_r,\hat{\pi}_r)$ and solves problem \eqref{model:lp_1} to dispatch the local system. Since $(\hat{x}_r,\hat{\pi}_r) \in \Phi_r$, problem \eqref{model:lp_1} is ensured to be feasible with $(\hat{x}_r,\hat{\pi}_r)$ as the boundary condition.
\end{itemize}
The above coordination scheme requires only one round of interaction between the upper- and lower-level systems, which overcomes drawbacks of conventional iterative coordination algorithms such as slow convergence, the risk of iteration oscillation, and complicated information exchange. Additionally, the proposed coordination procedure is serial, which is compatible with the existing hierarchical management paradigm of power systems.
\subsection{Discussions}
\subsubsection{Optimality}
The optimality of the EP-based coordination scheme is proven as follows.
\begin{theorem}
The coordinated solution of problem \eqref{model:up_1} and problem \eqref{model:lp_1} is equivalent to that of the JOD.
\end{theorem}
\begin{proof}
In problem \eqref{model:up_1}, the optimal solution satisfies $(\hat{x}_r,\hat{\pi}_r) \in \Phi_r$. According to Definition \ref{def:ESR}, there is $\hat{y}^l_r$ such that $(\hat{x}_r,\hat{\pi}_r, \hat{y}^l_r)$ satisfies constraints \eqref{jed2:cst_i} and \eqref{jed2:cst_e}, and thus $(\hat{x}_r,\hat{\pi}_r, \hat{y}^u,\hat{y}^l_r)$ is feasible for problem \eqref{jed2}. Let $(\breve{x}_r,\breve{\pi}_r, \breve{y}^u,\breve{y}^l_r)$ be the optimal solution of problem \eqref{jed2}. According to Definition \ref{def:ESR}, $(\breve{x}_r,\breve{\pi}_r) \in \Phi_r$, and thus $(\breve{x}_r,\breve{\pi}_r, \breve{y}^u)$ is feasible for problem \eqref{model:up_1}. Hence, problems \eqref{model:up_1} and \eqref{jed2} have the same feasible region. Since problems \eqref{model:up_1} and \eqref{jed2} also have identical objective functions, the optimal solution of problem \eqref{model:up_1} is equivalent to that of problem \eqref{jed2}. According to Theorem \ref{theorem:jed}, problem \eqref{jed2} is equivalent to the original JOD problem \eqref{jed}, and thus the solution of problem \eqref{model:up_1} is equivalent to that of the JOD. With the minimized cost $\hat{\pi}_r$, constraint \eqref{model:lp_1_cost} holds with equality at the optimum of problem \eqref{model:lp_1}; otherwise a $\pi^l_r$ smaller than $\hat{\pi}_r$ could be found, contradicting the optimality of $\hat{\pi}_r$. Hence, the optimal solution of problem \eqref{model:lp_1} is equal to that of \eqref{jed}.
\end{proof}
Note that the above theorem and proof do not restrict the form of the JOD model. Hence, the EP-based coordinated optimization framework is general and applies to both convex and nonconvex problems. In addition to the mathematical proof, the optimality of the EP-based coordination can also be understood through a physical interpretation. According to Definition \ref{def:ESR}, the EP model contains all coordination variables that are technically and economically feasible for the lower-level system. Hence, the upper-level decision constrained by the EP model can be executed by the lower-level system with minimized operation cost, which ensures the equivalence between the EP-based coordination and the original JOD solution.
\subsubsection{Complexity analysis}
Let $G^u$ and $G^l_r$ denote the model scales of the upper-level and lower-level problems, respectively. The problem scale can be measured by the product of the number of variables and the number of constraints of the problem (assuming there is no redundant variable or constraint). Given the form of the optimization problem, the computation time is an increasing function of its scale, denoted by $\mathcal{T}^o(\cdot)$. For a lower-level system, the time of system reduction is an increasing function of the problem scale and the coordination variable dimension, denoted by $\mathcal{T}^p(G^l_r,N_x)$. In the EP-based coordination scheme, the total computation time is the sum over the three stages. Note that both Stage 1 and Stage 3 are implemented simultaneously by the different lower-level systems; the computation time of each of these stages is therefore determined by the slowest lower-level system. Hence, the total computation time of the EP-based coordination scheme is
\begin{equation}
\mathcal{T}^{coor} = \max_{r\in[R]} \mathcal{T}^p(G^l_r,N_x) + \mathcal{T}^o(G^u) + \max_{r\in[R]}\mathcal{T}^o(G^l_r).
\end{equation}
The total computation time of directly solving the JOD is
\begin{equation}
\mathcal{T}^{jod} = \mathcal{T}^o(G^u + \sum_{r\in[R]} G^l_r).
\end{equation}
If the scale of each system is fixed, the computation time of solving the JOD increases with the number of lower-level systems. In contrast, the computation time of the EP-based coordination method mainly depends on the slowest lower-level system and is not significantly impacted by the number of lower-level systems. The reason is that in the EP-based coordination scheme, each lower-level system reduces its original model to the small-scale EP model to participate in the upper-level optimization, which alleviates the computation burden of the upper-level system. Although more computational effort is required by Stages 1 and 3, these are executed by the lower-level systems in parallel and do not add much to the overall computation time. In this regard, the EP-based coordination method distributes the computational complexity of the joint optimization to the subsystems and improves computational efficiency, especially for coordinated optimization problems with numerous lower-level systems. This advantage is verified on detailed application scenarios in Part II of this paper. The projection algorithm is a key factor impacting the computation time of the EP-based coordination scheme, which is also addressed in Part II.
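As a rough numerical illustration of this scaling argument, consider the following Python sketch. It compares $\mathcal{T}^{coor}$ with $\mathcal{T}^{jod}$ for $R$ identical subsystems under purely hypothetical cost models (the monomial forms of $\mathcal{T}^o$ and $\mathcal{T}^p$, their exponents, and the problem scales below are our illustrative assumptions, not measurements from this paper):
\begin{verbatim}
# Hypothetical scaling sketch: coordination vs. joint optimization.
# T_o and T_p are assumed monomial cost models, not measured data.
def T_o(G):               # assumed solve time for a problem of scale G
    return G ** 1.5

def T_p(G, N_x):          # assumed projection time (Stage 1)
    return 2.0 * N_x * G ** 1.5

G_u, G_l, N_x = 100, 500, 2        # illustrative problem scales

for R in (2, 10, 50, 250):         # number of lower-level systems
    # Stages 1 and 3 run in parallel across identical subsystems,
    # so their contribution does not grow with R.
    T_coor = T_p(G_l, N_x) + T_o(G_u) + T_o(G_l)
    T_jod = T_o(G_u + R * G_l)
    print(f"R={R:4d}  T_coor={T_coor:10.0f}  T_jod={T_jod:12.0f}")
\end{verbatim}
Under these assumptions, $\mathcal{T}^{coor}$ stays constant as $R$ grows, whereas $\mathcal{T}^{jod}$ grows superlinearly, mirroring the discussion above.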
\subsubsection{Application to multi-level system}
The EP-based framework also supports coordinated optimization of hierarchical systems with multiple levels. The coordination of a multi-level system requires three stages similar to the process in Section \ref{sec:scheme}. As illustrated in Fig. \ref{fig_hierarchical}, Stage 1 is implemented in a `bottom-up' manner. The system reduction calculation starts from the lowest level; systems at higher levels then calculate their EP models with the lower-level EP models as constraints. This process continues until it reaches the highest level, which optimizes the values of the coordination variables incorporating the EP models of its connected lower-level systems. Stage 2 is then implemented in a `top-down' manner to disaggregate the dispatch command. Systems at each level solve their local optimal dispatch problems with the dispatch command from the upper level as the boundary condition. This process continues until it reaches the lowest level. As can be seen, the EP-based coordination process does not require multiple iterations for the multi-level system.
\begin{figure}[t]
\centering
\includegraphics[width = 3.1 in]{Photo/fig_hierarchical.pdf}
\caption{Coordination scheme for the multi-level system.}
\label{fig_hierarchical}
\end{figure}
If iterative coordinated optimization methods are applied to the multi-level system, the total number of iterations among different levels is $\mathcal{O}(m^n)$, where $m$ is the number of iterations required by the coordination between two adjacent levels and $n$ is the number of levels. The iteration count of iterative coordination methods is thus exponential in the number of system levels, making them difficult to apply to multi-level systems. This drawback is naturally overcome by the EP-based method, since no iteration is needed no matter how many levels the system has.
\begin{figure*}[t]
\centering
\includegraphics[width = 7.16 in]{Photo/fig_illu.pdf}
\caption{Illustrative example results. (a) and (b) are OFRs of lower-level system 1 and system 2. (c) and (d) are EP models of lower-level systems in the coordination space. (e) exhibits the upper-level optimization results.}
\label{fig_illu}
\end{figure*}
\section{Illustrative Example}
The implementation of the EP-based coordinated optimization is demonstrated via an illustrative example. The test system is composed of an upper-level system and two lower-level systems. The JOD of the test system takes the form of \eqref{jed} and is parametrized as follows,
\begin{subequations}
\label{case_md}
\begin{align}
\min_{x,\pi,y} \ & \ \pi_1 + \pi_2 \label{case_md:obj}\\
\text{s.t.} \ & \ x_1 + x_2 = 4.5 \rightarrow \text{Upper-level Constraint} \label{case_md:up}\\
& \left. \begin{array}{l}
1 \leq x_1 \leq 3,\ 1 \leq y_1 \leq 3\\
-1 \leq -x_1+y_1 \leq 1\\
x_1 + y_1 \leq \pi_1 \leq 7\\
\end{array}
\right\} \text{Lower Sys. 1} \label{case_md:low1}\\
& \left. \begin{array}{l}
1 \leq x_2 \leq 3,\ 1 \leq y_2 \leq 3\\
-1 \leq -x_2+y_2 \leq 1\\
1.5 \times (x_2 + y_2) \leq \pi_2 \leq 10\\
\end{array}
\right\} \text{Lower Sys. 2} \label{case_md:low2}
\end{align}
\end{subequations}
In the above model, $x_1$ and $x_2$ are the coordination variables of the two lower-level systems, respectively; $y_1$ and $y_2$ are internal variables; and $\pi_1$ and $\pi_2$ are cost variables. The objective of the JOD in \eqref{case_md:obj} is to minimize the total cost of the lower-level systems. Equation \eqref{case_md:up} is the constraint of the upper-level system, which couples the optimization of the lower-level systems. Equations \eqref{case_md:low1} and \eqref{case_md:low2} are the constraints of the lower-level systems. The JOD problem \eqref{case_md} is a linear program. Solving the problem directly yields the optimal solution
\begin{equation}
\label{case:rst1}
(\hat{x}_1,\hat{y}_1,\hat{\pi}_1) = (2.5, 1.5, 4)
\end{equation}
\begin{equation}
\label{case:rst2}
(\hat{x}_2,\hat{y}_2,\hat{\pi}_2) = (2, 1, 4.5)
\end{equation}
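The JOD \eqref{case_md} can be verified directly with an off-the-shelf LP solver. The following Python sketch (the variable ordering and the use of \texttt{scipy.optimize.linprog} are our choices, not part of the paper) reproduces the solution in \eqref{case:rst1} and \eqref{case:rst2}:
\begin{verbatim}
from scipy.optimize import linprog

# Variable order: [x1, x2, y1, y2, pi1, pi2]
c = [0, 0, 0, 0, 1, 1]                  # minimize pi1 + pi2
A_eq = [[1, 1, 0, 0, 0, 0]]             # x1 + x2 = 4.5
b_eq = [4.5]
A_ub = [[-1, 0, 1, 0, 0, 0],            # -x1 + y1         <= 1
        [ 1, 0, -1, 0, 0, 0],           #  x1 - y1         <= 1
        [ 1, 0, 1, 0, -1, 0],           #  x1 + y1 - pi1   <= 0
        [ 0, -1, 0, 1, 0, 0],           # -x2 + y2         <= 1
        [ 0, 1, 0, -1, 0, 0],           #  x2 - y2         <= 1
        [ 0, 1.5, 0, 1.5, 0, -1]]       # 1.5(x2+y2) - pi2 <= 0
b_ub = [1, 1, 0, 1, 1, 0]
bounds = [(1, 3), (1, 3), (1, 3), (1, 3), (None, 7), (None, 10)]

res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds)
print(res.x)   # expected: [2.5, 2.0, 1.5, 1.0, 4.0, 4.5]
\end{verbatim}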
According to Definition \ref{def:ofr}, the operation feasible regions of the two lower-level systems are as follows
\begin{equation}
\label{case:ofr1}
\Omega_1 = \left\{(x_1, \pi_1, y_1) \in \mathbb{R}^3: \text{Eq. } \eqref{case_md:low1} \right\},
\end{equation}
\begin{equation}
\label{case:ofr2}
\Omega_2 = \left\{(x_2, \pi_2, y_2) \in \mathbb{R}^3: \text{Eq. } \eqref{case_md:low2} \right\}.
\end{equation}
Both $\Omega_1$ and $\Omega_2$ are 3-dimensional polytopes, shown as the red regions in Fig. \ref{fig_illu} (a) and (b).
The problem scale of this test case is small, so the projection can be calculated by the classic Fourier-Motzkin elimination (FME) method. Projecting $\Omega_1$ and $\Omega_2$ onto the subspace of $(x,\pi)$ yields the EP models of the lower-level systems, shown as the blue regions in Fig. \ref{fig_illu} (c) and (d). In this case, the EP models are 2-dimensional polygons, which are represented as follows,
\begin{equation}
\label{case:ESR1}
\begin{split}
\Phi_1 = \{(x_1,\pi_1) \in \mathbb{R}^2:1 \leq x_1 \leq 3, \pi_1 \leq 7,\\
x_1 - \pi_1 \leq -1, 2 x_1 - \pi_1 \leq 1\},
\end{split}
\end{equation}
\begin{equation}
\label{case:ESR2}
\begin{split}
\Phi_2 = \{(x_2,\pi_2) \in \mathbb{R}^2:1 \leq x_2 \leq 3, \pi_2 \leq 10,\\
3 x_2 - 2 \pi_2 \leq -3, 6 x_2 - 2 \pi_2 \leq 3 \}.
\end{split}
\end{equation}
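The FME step above can be reproduced mechanically. The following Python sketch is a minimal, unoptimized Fourier-Motzkin elimination for inequality systems (redundant rows are not pruned, so the output contains trivial inequalities in addition to those in \eqref{case:ESR1}); the encoding of $\Omega_1$ is ours:
\begin{verbatim}
from fractions import Fraction as F

def fm_eliminate(rows, j):
    """One FME step: eliminate variable j from the inequalities
    (a, b), each encoding a . x <= b. Redundant rows are kept."""
    pos = [r for r in rows if r[0][j] > 0]
    neg = [r for r in rows if r[0][j] < 0]
    out = [r for r in rows if r[0][j] == 0]
    for ap, bp in pos:
        for an, bn in neg:
            # scale the j-coefficients to +1 and -1, then add the rows
            a = [ap[k] / ap[j] + an[k] / -an[j] for k in range(len(ap))]
            b = bp / ap[j] + bn / -an[j]
            out.append((a, b))
    return out

# Omega_1 over (x1, pi1, y1): rows encode a . (x1, pi1, y1) <= b
omega1 = [([F(-1), F(0), F(0)], F(-1)),   # -x1            <= -1
          ([F(1),  F(0), F(0)], F(3)),    #  x1            <=  3
          ([F(0),  F(0), F(-1)], F(-1)),  #           -y1  <= -1
          ([F(0),  F(0), F(1)], F(3)),    #            y1  <=  3
          ([F(-1), F(0), F(1)], F(1)),    # -x1 +      y1  <=  1
          ([F(1),  F(0), F(-1)], F(1)),   #  x1 -      y1  <=  1
          ([F(1),  F(-1), F(1)], F(0)),   #  x1 - pi1 + y1 <=  0
          ([F(0),  F(1), F(0)], F(7))]    #       pi1      <=  7

for a, b in fm_eliminate(omega1, 2):      # project out y1
    print([str(v) for v in a[:2]], "<=", str(b))
\end{verbatim}
Among the printed rows, the non-redundant ones are exactly $1\leq x_1\leq 3$, $\pi_1\leq 7$, $x_1-\pi_1\leq -1$, and $2x_1-\pi_1\leq 1$, matching \eqref{case:ESR1}.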
The two lower-level systems submit their EP models as substitutes for their original models to form the upper-level coordinated optimization problem. In this illustrative case, the upper-level optimization problem can be solved by the graphical method. The objective in \eqref{case_md:obj} is to minimize the sum of $\pi_1$ and $\pi_2$, and the upper-level constraint limits the sum of $x_1$ and $x_2$. Taking $x_1+x_2$ and $\pi_1+\pi_2$ as decision variables, the feasible region of the upper-level problem is the intersection of the Minkowski sum of the lower-level EP models with the line $x_1+x_2=4.5$, as illustrated in Fig. \ref{fig_illu} (e). From the figure, the optimal solution of the upper-level problem is $\hat{x}_1 = 2.5$, $\hat{x}_2 = 2$, $\hat{\pi}_1 = 4$, $\hat{\pi}_2 = 4.5$. Fixing the upper-level decision results in \eqref{case_md:low1} and \eqref{case_md:low2}, and noting that $\hat{\pi}_1$ and $\hat{\pi}_2$ are minimized, it can be inferred that $\hat{y}_1 = 1.5$ and $\hat{y}_2 = 1$. The solution of the EP-based coordinated optimization is thus identical to the joint optimization solution in \eqref{case:rst1} and \eqref{case:rst2}, which validates the effectiveness of the proposed coordination method.
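The graphical solution can likewise be confirmed algebraically: solving problem UP\_1 with the EP constraints \eqref{case:ESR1} and \eqref{case:ESR2} in place of the original lower-level models returns the same coordination variables. A minimal Python sketch (again using \texttt{scipy.optimize.linprog}; the encoding is ours):
\begin{verbatim}
from scipy.optimize import linprog

# Variable order: [x1, x2, pi1, pi2]; constraints are the EP models.
c = [0, 0, 1, 1]                       # minimize pi1 + pi2
A_eq = [[1, 1, 0, 0]]                  # upper-level coupling
b_eq = [4.5]
A_ub = [[1, 0, -1, 0],                 #  x1 -  pi1 <= -1   (Phi_1)
        [2, 0, -1, 0],                 # 2x1 -  pi1 <=  1   (Phi_1)
        [0, 3, 0, -2],                 # 3x2 - 2pi2 <= -3   (Phi_2)
        [0, 6, 0, -2]]                 # 6x2 - 2pi2 <=  3   (Phi_2)
b_ub = [-1, 1, -3, 3]
bounds = [(1, 3), (1, 3), (None, 7), (None, 10)]

res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds)
print(res.x)   # expected: [2.5, 2.0, 4.0, 4.5]
\end{verbatim}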
\section{Conclusion}
To overcome the drawbacks brought by the repeated iterations of conventional coordinated optimization methods, this paper proposes a novel reduction theory, namely the EP, and develops a non-iterative COD framework for hierarchical power systems. The EP eliminates internal variables from the technical and economic constraints of the lower-level system, and provides an external equivalent of the system's entire optimization model with much less data. In the proposed COD framework, the EP model replaces the original lower-level system model in the coordinated optimization. With the EP, only a single round of exchange of boundary information of the lower-level systems is required to achieve coordinated optimality, which avoids iterations among subsystems. The EP-based COD method is proven to protect private information, to guarantee the same optimality as the joint optimization, and to handle multi-level coordinated optimization problems. A numerical example demonstrates the detailed process of the EP-based coordination framework.
In Part II of this paper, the methodology for calculating the EP model will be introduced. The performance of the EP-based non-iterative COD will also be tested based on specific applications.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13223",
"language": "en",
"timestamp": "2023-02-28T02:13:19",
"url": "https://arxiv.org/abs/2302.13223",
"yymm": "2302"
} | \section{Introduction}
The necessity to manipulate flow and transport liquids is fundamental to many biophysical processes such as embryonic growth and development~\cite{okada2005mechanism, cartwright2004fluid}, mucus transport in the bronchial tree~\cite{sleigh1988propulsion, blake1975movement, bustamante2017cilia}, the motion of food within the intestine~\cite{burns1967peristaltic,agbesi2022flow}, and animal drinking~\cite{cats,dogs}. Engineered systems also rely on efficient liquid transport, for example in heat sinks and exchangers for integrated circuits~\cite{tuckerman1981high, das2006heat}, micropumps~\cite{Laser2004, riverson2008recent}, and lab-on-a-chip devices~\cite{kirby2010micro}. Transporting liquids at small scales requires non-reciprocal motion to overcome the time reversibility of low-Reynolds-number flows. Deformable boundaries, in the form of rhythmic undulations of cilia beds and peristaltic waves, are nature's resolutions to overrule this reversibility and achieve directional liquid transport. While peristaltic pumps have become an integral component of biomedical devices, artificial ciliary metasurfaces that can actuate, pump, and mix flow have been realized only recently~\cite{shields2010biomimetic,milana2020metachronal,wang2022cilia,gu2020magnetic, hanasoge2018microfluidic}.
The design strategy of valveless micropumps relies on essentially the same working principle as cilia-lined walls: sequential actuation of a channel wall by electric or magnetic fields creates a travelling wave that drags the liquid along with it~\cite{Liu2018, ogawa2009development}. While the primary focus of micropumps has been the transport of liquids enclosed within a channel, numerous technological applications require handling liquids near fluid-fluid interfaces. In particular, processes such as self-assembly, encapsulation, and emulsification involving micron-sized particles rely critically on the liquid flow near interfaces~\cite{chatzigiannakis2021thin, langevin2000influence}. Thus the ability to maneuver interfacial flows will open up new avenues for micro-particle sensing and actuation at interfaces. Interestingly, the apple snail \textit{Pomacea canaliculata} leverages its flexible foot to create large-scale surface flows that fetch floating food particles from afar while feeding underwater, in a process called \textit{pedal surface collection}~\cite{saveanu2013pedal, saveanu2015neuston}; the physics of this process is yet to be fully understood~\cite{joo2020freshwater,huang2022collecting}.
Here we reveal how a rhythmically undulating solid boundary pumps viscous liquid at the interface, and transports floating objects from distances much larger than its size. Surprisingly, pumping does not increase proportionally to the speed of the traveling wave, and we observe non-monotonicity in the average motion of surface floaters as the wave speed is gradually increased. Detailed measurements of the velocity field in combination with an analysis of the lubrication theory unravel the interfacial hydrodynamics of the problem that emerges from a coupling between capillary, gravity, and viscous forces. We find that the non-monotonic flow is a direct consequence of whether the interface remains flat or conforms to the phase of the undulator. Through the theoretical analysis, we are able to predict the optimal wave speed that maximizes pumping, and this prediction is in excellent agreement with experiments. Finally, we show how pumping near an interface is a less dissipative strategy to transport liquid compared to pumping near a rigid boundary.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{Fig1.pdf}
\caption{\textbf{Large-scale transport of floaters by the undulating carpet}. The actuator, shown in panels a) and b), is comprised of a helix rotating inside a blue shell. Rotation of the helix causes an oscillatory motion of the shell, forming a traveling wave on the surface. It is placed at a mean depth $H$ below the liquid surface. c) Shape of the undulations over a period of oscillation. These shapes are captured by a traveling sine wave $\delta \sin [(x-V_w t)/\lambda]$. d) Trajectories of floating styrofoam particles after 30 mins of continuous oscillation in 1000 cSt silicone oil for a fixed actuation speed $V_w$. This panel is a top view image with the actuator position marked at the bottom of the frame. The color coding of dark to light indicates the arrow of time. e) Magnified trajectories of particles located straight ahead of the actuator. The filled circles represent initial positions of the styrofoam particles. f) Particle velocity as a function of distance for increasing wave speeds ($V_w$). Different wave speeds are marked by the color coding. Distances are measured from the edge of the actuator, as shown in panel e). Each of the curves is an average over 20 trajectories. The particle velocity exhibits a non-monotonic behavior with $V_w$, with maximum velocities measured at intermediate wave speeds. The inset confirms this behavior by showing the particle velocity at a fixed location, $x=50$ mm, for different $V_w$. Error bars in this plot represent the standard deviation in velocity magnitude. The gray line is the prediction from eq.~\eqref{optimalV}.}
\vspace{-3mm}
\label{fig1}
\end{figure*}
\section{Results}
\subsection{Experiments}
A 3D-printed undulator capable of generating travelling waves is attached to the bottom of an acrylic tank. The tank is filled with a viscous liquid (silicone oil or a glycerin-water mixture) such that the mean depth of liquid above the undulator ($H$) remains much smaller than the undulator wavelength ($\lambda$), i.e., $H/\lambda\ll 1$. The undulator is driven by a servo motor attached to a DC power source. Millimetric styrofoam spheres are sprinkled on the liquid surface and their motion is tracked during the experiment to estimate the large-scale flow of liquid. Additionally, we characterize the flow within the thin film of liquid directly in contact with the undulator by performing 2D particle image velocimetry (PIV) measurements. Our experimental design is essentially a mesoscale realization of Taylor's sheet~\cite{taylor1951analysis} placed near a free surface~\cite{shaik2019swimming,dias2013swimming}; the crucial difference, however, is that the sheet or undulator is held stationary here, in contrast to free swimming.
Images of the undulator are shown in fig.~\ref{fig1}a and~\ref{fig1}b. The primary component of this design is a helical spine encased by a series of hollow, rectangular links that are interconnected through a thin top surface~\cite{zarrouk2016single} (see SI and supplementary movies 1 and 2 for details). The links along with the top surface form an outer shell that transforms the helix rotation into a planar travelling wave of the form $\delta\sin[(x-V_w t)/\lambda]$. The pitch and radius of the helix determine the wavelength $\lambda$ and amplitude $\delta$ of the undulations, respectively. By modulating the angular frequency of the helix, we are able to vary the wave speed $V_w=\omega\lambda$ from 15 to 120 mm/s ($\lambda$ is fixed at 50 mm). We perform experiments with undulators of length $\lambda$ and $2\lambda$, and the results remain invariant of the undulator size. For a given $V_w$, the shapes of the undulator surface over one period of oscillation are shown in fig.~\ref{fig1}c.
\\
\begin{figure*}[t]
\centering
\includegraphics[width=.8\textwidth]{Fig2_new-01.eps}
\caption{\textbf{Thin-film flow atop the undulator}. a) A sketch of the actuator and a long exposure image of a typical flow-field measurement, showing motion of the tracer particles in the thin film. The free surface deforms in response to the flow. b) Results of PIV for two different capillary numbers, $Ca=132$ (top panel), $Ca=3$ (bottom panel). In both these panels the bottom boundary is the actuator surface, while the top boundary is the liquid interface. The color coding represents the horizontal component of the velocity field, $V_x$; red signifies flow along $V_w$ while blue signifies flow opposite to $V_w$.}
\vspace{-6mm}
\label{fig2}
\end{figure*}
\noindent\textbf{Large-scale flow} - Figure~\ref{fig1}d shows the trajectories of floating styrofoam particles generated by 30 mins of continuous oscillations in 1000 cSt silicone oil contained in an acrylic tank of dimensions 61 cm $\times$ 46 cm (supplementary movie 3 shows the motion of surface floaters for different $V_w$). Traveling waves on the actuator move in the downward direction, as shown by the direction of $V_w$ in fig.~\ref{fig1}d. The floaters are thus dragged towards the undulator by the large-scale flow. The color codes on the trajectories represent time: blue and yellow colors represent the initial and final positions, respectively. Placing the undulator near a side wall of the tank, we measure the floaters' motion over a decade in distance. However, some particles are recirculated back due to the nearby wall; we disregard these trajectories in our analysis.
Fluid motion at the interface is traced by the styrofoam floaters because of their low density ($\rho_p\simeq 50$ kg/m$^3$), which ensures that the Stokes number, $St=\rho_p R_p V_w/\eta$, remains very small ($\simeq 10^{-2}$, based on a typical wave speed of $V_w=100$ mm/s, a particle radius of $R_p=1$ mm, and a silicone oil viscosity of $\eta=0.97$ Pa$\cdot$s). We focus on the floaters that are initially located straight ahead of the actuator to analyze the variation of velocity with distance. These trajectories are shown in fig.~\ref{fig1}e with black circles representing the initial positions. For a given $V_w$, we interpolate 20 trajectories to construct a velocity-distance curve, which is shown in fig.~\ref{fig1}f (see SI for details of these measurements). Here, $|\bar{V}|=(V_x^2+V_y^2)^{1/2}$ is the magnitude of the velocity at the liquid-air interface and $x$ is the distance from the edge of the actuator. We disregard the first 20 mm of data to avoid edge effects. The color code on the curves represents the magnitude of $V_w$. Interestingly, we observe a nonmonotonic response, with the particle velocity reaching its maximum value at an intermediate $V_w$. Once $|\bar{V}|$ at a fixed location ($x=50$ mm) is plotted against the wave speed (inset of fig.~\ref{fig1}f), it becomes apparent that the maximum surface flow is achieved for $V_w\simeq$ 80 mm/s. Since the overall flow in the liquid is driven by the hydrodynamics within the thin film of liquid atop the undulator, we focus on quantifying the velocity field and flow rate in this region. \\
\noindent\textbf{Dimensionless groups} - Before we discuss the experimental results further, it is instructive to identify the relevant dimensionless groups that dictate the response of the system. The system has eight dimensional parameters: three length scales given by the film thickness $H$ and the amplitude ($\delta$) and wavelength ($\lambda$) of the undulator, the velocity scale $V_w$, the gravitational constant $g$, and three fluid properties set by surface tension ($\gamma$), density ($\rho$), and dynamic viscosity ($\eta$). These parameters lead to five dimensionless groups, namely, $\epsilon=\delta/H$, $a=H/\lambda$, the Reynolds number $Re=\rho V_w \lambda a^2/\eta$, the Capillary number $Ca=\eta V_w/(\gamma a^3)$, and the Bond number $Bo=\rho g\lambda^2/\gamma$. Here both $Re$ and $Ca$ are defined for the thin-film limit, $a\ll 1$. We choose two working liquids, silicone oil ($\eta_s=0.97$ Pa$\cdot$s) and a glycerin-water mixture, GW ($\eta_{GW}=0.133$ Pa$\cdot$s). For each of the liquids, the thickness $H$ (maintaining $a\ll 1$) and the wave speed $V_w$ are varied independently. Across all experiments, $Re$ remains lower than 1 ($0.01-0.35$). Thus inertial effects are subdominant and the problem is fully described by $\epsilon$, $Ca$, and $Bo$. We vary $Ca$, the ratio of viscous to capillary forces, over three orders of magnitude, $1.4-1140$. The value of $Bo$, representing the strength of gravitational forces relative to surface tension, is 1133 and 426 for silicone oil and the glycerin-water mixture, respectively. As we will demonstrate in the next sections, $Ca/Bo$, which represents the ratio of viscous to gravitational forces, turns out to be the key governing parameter.\\
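For concreteness, these groups can be evaluated directly from the quoted experimental parameters. In the Python sketch below, the surface tension and wave amplitude are our assumptions (a typical $\gamma\approx 21$ mN/m for silicone oil, chosen because it reproduces the quoted $Bo\approx 1133$, and an illustrative $\delta=3$ mm):
\begin{verbatim}
# Thin-film dimensionless groups (definitions from the text).
rho, eta = 970.0, 0.97   # silicone oil density [kg/m^3], viscosity [Pa s]
gamma = 0.021            # assumed surface tension [N/m]
g, lam = 9.81, 0.05      # gravity [m/s^2], wavelength [m]
H, delta, V_w = 0.010, 0.003, 0.080  # depth [m], amplitude (assumed) [m],
                                     # wave speed [m/s]

a, eps = H / lam, delta / H
Re = rho * V_w * lam * a**2 / eta
Ca = eta * V_w / (gamma * a**3)
Bo = rho * g * lam**2 / gamma
print(f"eps={eps:.2f} a={a:.2f} Re={Re:.2f} Ca={Ca:.0f} "
      f"Bo={Bo:.0f} Ca/Bo={Ca/Bo:.2f}")
\end{verbatim}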
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{Fig3.pdf}
\caption{\textbf{Non-monotonic flow rate}. a) Instantaneous flow rate in silicone oil over multiple periods of oscillation. The data sets represent increasing $V_w$, from white to black. These measurements are taken at a cross section marked by the dashed line in fig.~\ref{fig2}b i. b) Time-averaged flow rate, $\left<Q\right>$ is plotted against the flux scale $\epsilon^2 V_w H$ of the problem. The circles represent the experiment in silicone oil with bigger markers denoting larger $H$: 7.5 mm (red), 9.5 mm (green), 11.5 mm (blue), 14 mm (orange). The squares represent GW experiments with $H=11$ mm. The dashed line is the theoretical prediction, given by $\left<Q\right>=3\epsilon^2 V_w H/2$.}
\vspace{-6mm}
\label{fig3}
\end{figure*}
\noindent\textbf{Non-monotonic flow rate} - The flow field within the thin film of liquid above the undulator is characterized by performing 2D PIV at a longitudinal plane in the middle of the undulator (see the Materials \& Methods section for details). Figure~\ref{fig2}a shows a long-exposure image of illuminated tracer particles, giving a qualitative picture of the flow. The particles essentially oscillate up and down with the actuator, but exhibit a net horizontal displacement over a period due to the traveling wave. The presence of the shear-free interface is also crucial to the transport mechanism; the interfacial curvature induces a capillary pressure that modifies the local flow field. The coupling between the two deforming boundaries determines the flow within the gap. Snapshots of typical velocity fields for the two liquids are shown in fig.~\ref{fig2}b. The top panel is a silicone oil flow field with $V_w=23$ mm/s and $H=10$ mm ($Ca=132$, $Bo=1133$), while the bottom panel represents the flow field of the glycerin-water mixture with $V_w=17$ mm/s and $H=11$ mm ($Ca=3$, $Bo=426$). Higher $Ca$ leads to larger deformation of the free surface. Colors in the plot represent the magnitude of the horizontal velocity component, $V_x$; the portion of the liquid that follows the wave is shown in red, whereas a blue region represents liquid that moves in the direction opposite to the wave. In fact, the velocity vectors at a given location switch directions depending on the phase of the actuator (see supplementary movies 4 and 5). Thus, to estimate the net horizontal transport of liquid across a section, we first integrate $V_x$ across the film thickness in the middle of the undulator (marked by the black dashed line in fig.~\ref{fig2}b i), which yields an instantaneous flow rate
\begin{equation}
Q=\int_{h_a}^{h_f}V_x\,dz.
\label{}
\end{equation}
Here, $h_a$ and $h_f$ are the positions of the bottom and top boundaries from the reference point, respectively. Figure~\ref{fig3}a plots $Q$ as a function of time, measured in silicone oil for three distinct wave speeds. It shows that $Q$ oscillates with the same time period as the undulator ($\tau=\lambda/V_w$), but there is a net flow of liquid along the traveling wave. Thus a time-averaged flow rate,
\begin{equation}
\left<Q\right>={1\over\tau}\int_0^{\tau}Q\,\mathrm{d}t,
\label{}
\end{equation} gives a measure of liquid transport by the undulator.
Figure~\ref{fig3}b gives a comprehensive picture of the flow rate measured across all the experiments; $\left<Q\right>$ is plotted against the characteristic flow rate $\epsilon^2 V_wH$. The geometric prefactor of $\epsilon^2$ is a direct consequence of the thin geometry of the flow~\cite{oron1997long}. Two interesting observations are in order. Regardless of the fluid properties, the flow rates at first increase linearly with $\epsilon^2V_w H$. All the data sets other than the GW exhibit a non-monotonic behavior, with flow rates reaching maximum values at intermediate $\epsilon^2V_w H$. Thus, we find that the non-monotonic surface flow observed in fig.~\ref{fig1}f is a direct consequence of the flow within the thin film above the undulator. It is important to note that these measurements are independent of the undulator size, as shown in the SI where we compare the time-averaged flow rates measured with the single- and double-wave undulators. In the next section, we develop a theoretical model to explain how the geometric and material parameters combine to give the optimal wave speed that maximizes the flow rate.
\begin{figure*}
\centering
\includegraphics[width=.99\textwidth]{Fig4.pdf}
\caption{\textbf{Theoretical \& numerical solutions of thin-film flow}. a) The thin-film geometry with relevant quantities. We consider an infinite train of traveling undulations of amplitude $\delta$ and wavelength $\lambda$ moving at a speed of $V_w$. The coordinate frame ($X,Z$) travels with the undulations. The red curve represents the bottom boundary in motion. A liquid layer of mean thickness $H$ resides on top of the deformable bottom boundary. The shape of the free surface is given by $h_f$, while the bottom surface is given by $h_a$. b) The numerical solution of the thin-film equation is plotted in terms of the flow rate as a function of $Ca$, for different $Bo$. The two largest Bond numbers correspond to the experimental values. c) The rescaled experimental data of fig.~\ref{fig3}b are in excellent agreement with the theoretical prediction of \eqref{asympsol}, plotted as the solid black line. The small-$V_w$ ($Ca/Bo\ll 1$) limit is given by $\left<\bar{Q}\right>=3\epsilon^2/2$, while the large-$V_w$ ($Ca/Bo\gg 1$) limit is given by $\left<\bar{Q}\right>=\epsilon^2 (\mathrm{Ca}/\mathrm{Bo})^{-2}/6$.}
\vspace{-6mm}
\label{fig4}
\end{figure*}
\subsection{Theoretical framework}
\noindent\textbf{Thin-film equation} - We consider the two-dimensional geometry depicted in fig.~\ref{fig4}a for the theoretical model. An infinite train of periodic undulations of the form $h_a=\delta\sin[(x-V_w t)/\lambda]$ propagates on the actuator, located at a mean depth of $H$ from the free surface. We analyze the flow in the thin-film limit, such that $a=H/\lambda\ll 1$. A key aspect of the problem is that the shape of the interface, $h_f$, is unknown along with the flow field. The explicit time dependence in this problem is a direct manifestation of the traveling wave on the boundary. Thus, in a coordinate system $(X,\,Z)$ moving with the wave, the flow becomes steady. A simple Galilean transformation relates these coordinates to the laboratory coordinates $(x,\,z)$: $X=x-V_w t$, and $Z=z$. We therefore first solve the problem in the wave frame and then transform the solution to the lab frame. Leaving the details of the derivation to Materials \& Methods, here we present the key results.
In the thin-film limit, the separation of vertical and horizontal scales leads to a predominantly horizontal flow field, and both the mass and momentum conservation equations are integrated across the film thickness to reach an ordinary differential equation involving the free surface shape $h_f$ and the volume flow rate $q$. Introducing the dimensionless variables $\bar{X}=X/\lambda$, $\bar{h}_f=h_f/H$, $\bar{h}_a=h_a/H$, and $\bar{q}=q/V_w H$, we get
\begin{equation}
\bar{q}={1\over 3}\left({1\over Ca}\bar{h}_f'''-{Bo\over Ca}\bar{h}_f'\right)(\bar{h}_f-\bar{h}_a)^3-(\bar{h}_f-\bar{h}_a),
\label{NDthinfilm}
\end{equation} where both $\bar{q}$ and $\bar{h}_f$ are unknowns, and $\bar{h}_a=\epsilon\sin\bar{X}$ is known. We close the problem by imposing the following constraint on $\bar{h}_f$:
\begin{equation}
\int_0^{2\pi}\bar{h}_f\,\mathrm{d}\bar{X}=2\pi,
\label{NDcons}
\end{equation}
which states that the mean film thickness over one wavelength does not change due to deformation. Along with periodic boundary conditions, equations~\eqref{NDthinfilm} and \eqref{NDcons} form a set of nonlinear coupled equations whose solutions depend on the three parameters $Ca$, $Bo$, and $\epsilon$. For chosen $Bo$ and $\epsilon$, these equations are solved by a shooting method for a wide range of $Ca$. To compare the numerical results with the experimental data of fig.~\ref{fig3}, we transform the results to the lab frame using the relation $\bar{Q}=\bar{q}+\left(\bar{h}_f(\bar{x},\bar{t})-\bar{h}_a(\bar{x},\bar{t})\right)$. Owing to the periodic nature of $\bar{h}_f$ and $\bar{h}_a$, the time-averaged flow rate simplifies to $\left<\bar{Q}\right>=\bar{q}+1$. Figure~\ref{fig4}b shows the numerical solution of $\left<\bar{Q}\right>$ as a function of $Ca$ for different $Bo$. All curves exhibit the same qualitative behavior: at low $Ca$, the scaled flow rate reaches a constant value, so $\left<Q\right>\sim V_w H$, which is analogous to what we observe in fig.~\ref{fig3}b. At large $Ca$, however, we recover a decreasing flow rate, $\left<Q\right>\sim (V_w H)^{-\alpha}$ with $\alpha>0$. The transition between the two regimes scales with $Bo$. Thus the thin-film equation captures the qualitative behavior found in the experiments.
\\
\noindent\textbf{Asymptotic solution} -
For $Bo\gg 1$, the third-order term in Eq.~\eqref{NDthinfilm} can be neglected, which simplifies the governing equation to
\begin{equation}
\bar{q}=-{1\over 3}{Bo\over Ca}\bar{h}_f'(\bar{h}_f-\bar{h}_a)^3-(\bar{h}_f-\bar{h}_a).
\label{NDthinfilm1}
\end{equation} Indeed, the $Bo$ values in the experiments are large (426 and 1133), justifying the above simplification. Furthermore, we assume that the amplitude of the wave, $\delta$, is much smaller than $H$, i.e., $\epsilon\ll 1$. Interestingly, $Ca/Bo$, the single parameter dictating the solution of eq.~\eqref{NDthinfilm1}, does not contain surface tension. This ratio is the reciprocal of the Galileo number, which plays a crucial role in the stability of thin films driven by gravity~\cite{craster2009dynamics}. Here we look for asymptotic solutions of the form $\bar{h}_f=1+\epsilon\bar{h}_{f1}+\epsilon^2\bar{h}_{f2}+\mathcal{O}(\epsilon^3)$ and $\bar{q}=q_0+\epsilon\bar{q}_{1}+\epsilon^2\bar{q}_{2}+\mathcal{O}(\epsilon^3)$~\cite{lee2008crawling}. We insert these expansions into eqns.~\eqref{NDthinfilm1} and \eqref{NDcons}, and solve the equations order by order in $\epsilon$. Leaving the solution for $\bar{h}_f$ to the SI, here we present the solution for $\bar{q}$, which becomes
\begin{equation}
\bar{q}=-1+\frac{3\epsilon^2}{2\left(1+9(Ca/Bo)^2\right)}.
\end{equation} Thus the time averaged flow rate in the lab frame is given by
\begin{equation}
\frac{\left<\bar{Q}\right>}{\epsilon^2}=\frac{3}{2\left(1+9(Ca/Bo)^2\right)}.
\label{asympsol}
\end{equation}
This is the key result of the theoretical model. It demonstrates that the flow rate is quadratic in the amplitude of the traveling wave, which is why we incorporated $\epsilon^2$ in the horizontal scale of fig.~\ref{fig3}b. Importantly, eq.~\eqref{asympsol} captures the non-monotonic behavior of the experiments. Once the data in fig.~\ref{fig3}b are rescaled, all the different cases collapse onto a master curve in excellent agreement with the black solid line representing eq.~\eqref{asympsol}, as shown in fig.~\ref{fig4}c.\\
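The asymptotic formula can also be checked against a direct numerical solution of the reduced equation \eqref{NDthinfilm1}. The Python sketch below (illustrative parameter values; the specific solver choices are ours) shoots on the two unknowns $(\bar{q},\,\bar{h}_f(0))$ to enforce $2\pi$-periodicity and the mean-thickness constraint \eqref{NDcons}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

eps, r = 0.3, 0.1                   # r = Ca/Bo (illustrative values)

def ha(X):                          # undulator shape
    return eps * np.sin(X)

def dhf(X, hf, qbar):               # hf' from the Bo >> 1 thin-film eq.
    gap = hf[0] - ha(X)
    return [-3.0 * r * (qbar + gap) / gap**3]

def residuals(u):
    qbar, h0 = u
    sol = solve_ivp(dhf, (0, 2 * np.pi), [h0], args=(qbar,),
                    dense_output=True, rtol=1e-10, atol=1e-12)
    X = np.linspace(0, 2 * np.pi, 4001)
    mean = np.trapz(sol.sol(X)[0], X)              # mean thickness
    return [sol.y[0, -1] - h0, mean - 2 * np.pi]   # periodicity, mean

qbar, h0 = fsolve(residuals, [-1 + 1.5 * eps**2, 1.0])
print("numeric    <Q>/eps^2 =", (qbar + 1) / eps**2)
print("asymptotic           =", 1.5 / (1 + 9 * r**2))
\end{verbatim}
The two values agree to within the truncation error of the small-$\epsilon$ expansion.\\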
\noindent\textbf{Optimal wave speed} - The physical picture behind the nonmonotonic nature of the flow rate becomes clear once the free surface shapes are found. For a given $Bo$ with a low $Ca$, the liquid-air interface behaves as an infinitely taut membrane with minimal deformations. Thus, a liquid parcel moves primarily in the horizontal direction, and the flow rate is given purely by the kinematics ($V_w, H, \delta$). Indeed, for $Ca/Bo\ll 1$, eq.~\eqref{asympsol} simplifies to give $\left<\bar{Q}\right>/\epsilon^2=3/2$. In dimensional form, this relation explains the increase in the flow rate with the wave speed, $\left<Q\right>=3\delta^2V_w/(2H)$. Thus, the flow is independent of the liquid properties, as noted in fig.~\ref{fig3}b. As $Ca$ increases, the interface starts to deform up and down, conforming to the undulating actuator. In this limit, the translational velocity of tracer particles decreases, thereby lowering the flow rate. Indeed, in the limit of $Ca/Bo\gg 1$, we find a decreasing flow rate given by $\left<\bar{Q}\right>=\epsilon^2 (Ca/Bo)^{-2}/6$. These two asymptotic limits are shown as dashed lines in fig.~\ref{fig4}c. The flow rate attains a maximum at the intersection of these two lines, where $Ca/Bo=1/3$. In dimensional form, this particular value of $Ca/Bo$ gives the optimal wave speed at which the flow rate peaks,
\begin{equation}
V_w^\mathrm{(max)}=\left(\frac{\rho g H^3}{3\eta\lambda}\right).
\label{optimalV}
\end{equation}
The optimal wave speed emerges from a competition between the hydrostatic pressure ($\sim\rho g H$) and the lubrication pressure ($\sim \eta V_w \lambda/H^2$); surface tension drops out of the above expression. Eq.~\eqref{optimalV} gives the optimal speed at which the undulator maximizes pumping. We are now in a position to examine whether eq.~\eqref{optimalV} captures the peak surface velocities observed in fig.~\ref{fig1}f. Plugging in the density ($\rho=970$ kg/m$^3$), viscosity ($\eta_s=0.97$ Pa$\cdot$s), $H=10.8$ mm, and $\lambda=50$ mm, we find $V_w^\mathrm{(max)}=82.3$ mm/s, which matches the observation very well (shown as the gray line in the fig.~\ref{fig1}f inset).\\
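This estimate is easily reproduced (all values as quoted in the text):
\begin{verbatim}
rho, g = 970.0, 9.81              # density [kg/m^3], gravity [m/s^2]
eta, H, lam = 0.97, 0.0108, 0.05  # viscosity [Pa s], depth [m],
                                  # wavelength [m]

V_max = rho * g * H**3 / (3 * eta * lam)       # optimal wave speed
print(f"V_w^(max) = {1e3 * V_max:.1f} mm/s")   # about 82 mm/s
\end{verbatim}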
\noindent\textbf{Pumping Efficacy} - The flow rate achieved by this mechanism comes at the expense of the power needed to drive the undulator. This power expenditure equals the viscous dissipation within the flow. We therefore estimate the efficacy of the mechanism by comparing the output, $\left<\bar{Q}\right>$, to the input, the viscous dissipation $\bar{\mathcal{E}}$ (see Materials \& Methods for a derivation of $\bar{\mathcal{E}}$). To demonstrate the benefit of having an interface on the pumping capability of this mechanism, we compare with the flux and dissipation for a rigid top boundary. These results are shown in fig.~\ref{fig5}. The data points represent the dimensionless flux plotted against the dissipation for a wide range of $Ca/Bo$. The $\epsilon\ll 1$ asymptotic result, shown as the black dashed line in fig.~\ref{fig5}, captures these results perfectly, giving the following algebraic relation between the two:
\begin{equation}
\left<\bar{Q}\right>=\frac{\bar{\mathcal E}}{2\pi}.
\label{eff1}
\end{equation}
The importance of the free surface becomes apparent when the above result is compared to the scenario of a thin film bounded by a rigid, solid wall on top. As shown in the SI, for a rigid top boundary, both the flow rate and the dissipation are given purely by the ratio $\delta/H$. We find that the flow dissipates four times more energy to achieve the same amount of flow,
\begin{equation}
\left<\bar{Q}\right>\simeq\frac{\bar{\mathcal E}}{8\pi}.
\label{eff2}
\end{equation}
This is plotted as the solid black line in fig.~\ref{fig5}. Thus it is clear that the liquid-air interface facilitates pumping by promoting horizontal transport of fluid parcels at a lower power consumption.
\begin{figure}
\centering
\includegraphics[width=.48\textwidth]{Fig5_1.pdf}
\caption{\textbf{Pumping efficacy of the undulator}. The dimensionless flow rate is plotted against the dimensionless dissipation for a wide range of $Ca/Bo$ values. The data points represent numerical results, which are obtained for $\epsilon=0.3$. The dashed line is the asymptotic prediction of eq.~\eqref{eff1}. The solid line is the result for a top rigid boundary and represents eq.~\eqref{eff2}.}
\vspace{-5mm}
\label{fig5}
\end{figure}
\section{Discussion}
In summary, we have demonstrated the pumping capability of a sub-surface undulating carpet; the travelling wave triggers a large-scale flow far beyond its body size. A direct observation of the liquid motion above the undulator, in combination with a quantitative analysis of the thin-film equations, yields the optimal speed at which this device transports the maximum amount of liquid for given geometric and fluid properties. This optimal wave speed scales inversely with the wavelength of the undulations and with the cube of the film thickness. It is interesting to note that the key governing parameter, $Ca/Bo$, can be interpreted as a ratio of two velocities: the wave speed ($V_w$) and a characteristic relaxation or leveling speed ($V_r=\rho g H^3/\eta\lambda$) at which surface undulations flatten out. This leveling process is dominated by gravity since the scale of the undulations ($\sim\lambda$) is much larger than the capillary length ($\sqrt{\gamma/\rho g}$). Thus for $V_r\gg V_w$, the undulator essentially works against a relaxed, flat interface, and liquid parcels primarily exhibit horizontal displacement over a period. In the other limit, $V_r\ll V_w$, the free surface tends to beat in phase with the travelling boundary, amplifying the vertical displacement and thereby reducing the net transport.
Our study demonstrates that the large-scale surface flow is a direct manifestation of the thin-film hydrodynamics above the undulator by showing how the optimal pumping speed captures the peak velocities in surface floaters. However, a quantitative analysis connecting the above two aspects of the flow field is necessary to exploit the full potential of this mechanism. Additionally, in the unexplored inertial regime, we expect the mechanism to showcase interesting dynamics due to the coupling between surface waves and finite-size particles~\cite{punzmann2014generation}. We believe that this work opens up new pathways for self-assembly and patterning at the mesoscale~\cite{zamanian2010interface,snezhko2009self}, bio-inspired strategies for remote sensing and actuation within liquids~\cite{santhanakrishnan2012flow,ryu2016vorticella}, and control of interfacial flows using active boundaries~\cite{manna2022harnessing, laskar2018designing}.
\section{Materials \& Methods}
\small{\noindent\textbf{Modeling \& printing of the undulator} - The models are designed in Fusion 360 (Autodesk). The helix is 3D printed on a Formlabs Form 2 SLA printer by photo-crosslinking resin, whereas the outer shell comprising the top surface and rectangular links is printed on an Ultimaker S5 (Ultimaker Ltd.) using a blue TPE (thermoplastic elastomer). Due to the relative flexibility of the TPE, the outer shell conforms to the helix. The helix is connected to a mini servo motor driven by a DC power supply. All other parts (base, undulator holders, etc.) are printed using PLA (polylactic acid) filaments on an Ultimaker S5 (Ultimaker Ltd.) printer.\\
\noindent\textbf{Measurement of the flow-field} - We perform particle image velocimetry measurements on the thin liquid layer above the undulator. The viscous liquid is seeded with 10 $\mu$m glass microspheres (LaVision). A 520 nm, 1 W laser sheet (Laserland) illuminates a longitudinal plane in the middle of the undulator. Images are recorded by a Photron Fastcam SAZ camera at 500 frames per second. Image cross-correlation is performed in the open-source PIVlab~\cite{thielicke2014pivlab} to construct the velocity field. \\
\noindent\textbf{Theoretical Modeling} - The separation of scales ($H\ll\lambda$) in the thin film geometry leads to a set of reduced momentum equations and a flow field that is predominantly horizontal. Thus integration of the $X$-momentum equation with no slip boundary condition on the undulator ($Z=h_a$) and no shear stress condition at the free surface ($Z=h_f$) results in
\begin{equation}
v_{X}={1\over 2\eta}{\mathrm{d}p\over\mathrm{d}X}\left[(Z^2-h_a^2)-2h_f(Z-h_a)\right]-V_w.
\label{velofield}
\end{equation}
Similarly, we integrate the $Z$-momentum equation and apply the Young-Laplace equation at the free surface, which yields the following expression for the pressure $p$,
\begin{equation}
p=-\gamma h_f''+\rho g (h_f-Z).
\label{pres}
\end{equation}
Integration of the continuity equation gives the volume flow rate, $q=\int_{h_a}^{h_f}v_X\mathrm{d}Z$, (per unit depth in this two dimensional case). Plugging eqs.~\eqref{velofield} and~\eqref{pres} into the expression of flow rate gives the following ODE,
\begin{equation}
q={1\over 3\eta}(\gamma h_f'''-\rho g h_f')(h_f-h_a)^3-V_w(h_f-h_a).
\label{thinfilm}
\end{equation}
This equation relates the yet unknown constant $q$ and the unknown free surface shape $h_f$. We close the problem by imposing the following additional constraint on $h_f$:
\begin{equation}
\int_0^{2\pi\lambda}h_f\,\mathrm{d}X=2\pi H\lambda,
\label{cons}
\end{equation}
which states that the mean film thickness over one wavelength remains the same as that of the unperturbed interface, $H$. In dimensionless form the above set of equations take the form of eqs.~\eqref{NDthinfilm} and~\eqref{NDcons} of the main text.
For a direct comparison with experiments, we transform the flow rate $q$ back to the lab frame, where it is an explicit function of time, $Q(x,t)=q+V_w\left(h_f(x,t)-\delta\sin[(x-V_w t)/\lambda]\right)$. We seek periodic free-surface shapes, such that $h_f=H+\textup{periodic terms}$. The time-averaged flow rate thus simplifies to
\begin{equation}
\left<Q\right>=q+V_w H,
\end{equation} where the integration is performed at a fixed spatial location. In dimensionless form this equation becomes, $\left<\bar{Q}\right>=\bar{q}+1$, as mentioned in the main text.
To estimate the efficacy of the pumping mechanism, we compare the output, the flow rate, to the energy dissipation within the flow, which is given by
\begin{equation}
\mathcal{E}=\eta\int_{h_a}^{h_f}\int_0^{2\pi\lambda}\left(\frac{\partial v_X}{\partial Z}\right)^2 \mathrm{d}X \mathrm{d}Z.
\label{}
\end{equation} Using the expression for the velocity field in eq.~\eqref{velofield}, we integrate over $Z$ to find that the free surface shape $h_f$ fully determines the amount of dissipation in the flow. In dimensionless form, the dissipation becomes
\begin{equation}
\bar{\mathcal{E}}={1\over 3}\left({Bo\over Ca}\right)^2\int_0^{2\pi}\bar{h}_f^{'2}\left(\bar{h}_f-\bar{h}_a\right)^3\mathrm{d}\bar{X},
\label{}
\end{equation} where $\bar{\mathcal{E}}=\mathcal{E}H/(\eta\lambda V_w^2)$.
\section{Acknowledgements}
We thank Yohan Sequeira and Sarah MacGregor for initial contributions.\\
\textbf{Funding:} C.R., D.T., S.L., and S.J. acknowledge the support of NSF through grant no CMMI-2042740. A.P. acknowledges startup funding from Syracuse University.
\textbf{Author contributions:} A.P., S.J. conceived the idea. A.P., J.Y., Y.S., C.R., and S.J. designed and performed experiments, and analyzed data. Z.C., D.T., and S.L. developed the theoretical and numerical models. All authors wrote the paper collectively.
\textbf{Competing interests:} The authors declare that they have no competing interests.
\textbf{{Data and materials availability:}} All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. All data and Mathematica scripts are available on the Open Science Framework (DOI 10.17605/OSF.IO/ERZ79).
|
{
"arxiv_id": "2302.13175",
"language": "en",
"timestamp": "2023-02-28T02:11:52",
"url": "https://arxiv.org/abs/2302.13175",
"yymm": "2302"
} | \section{Introduction}
A minor-closed class of matroids can be characterised by its \emph{excluded minors}: the minor-minimal matroids that are not in the class.
Finding an excluded-minor characterisation for a class of matroids representable over a certain field or fields is an area of much interest to matroid theorists (see \cite{GGK2000,HMvZ2011} for recent examples).
A class of matroids representable over a set of fields can be characterised by representability over a structure known as a \emph{partial field}.
Two particular tantalising classes of matroids representable over a partial field, for which excluded-minor characterisations are not yet known, are dyadic matroids and $2$-regular matroids.
In this paper, we describe an implementation of a computer search for the ``small'' excluded minors for a class of matroids representable over a partial field.
This approach was used to enumerate, by computer, the excluded minors on at most $15$ elements for the class of dyadic matroids, and for the class of $2$-regular matroids.
Our first result from this computation is the following:
\begin{theorem}
The excluded minors for dyadic matroids on at most $15$ elements are
$U_{2,5}$, $U_{3,5}$, $F_7$, $F_7^*$,
$\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, $(\mathit{AG}(2,3)\backslash e)^{\Delta Y}$, $T_8$, $N_{1}$, $N_2$, and $N_{3}$.
\end{theorem}
With the exception of $N_3$, these matroids were previously known \cite[Problem~14.7.11]{oxley}.
However, even this list is incomplete: we also found a $16$-element excluded minor that we call $N_4$.
We describe $N_3$ and $N_4$ in \cref{secdyadic}.
Our second result is the following:
\begin{theorem}
\label{thm2}
The excluded minors for $2$-regular matroids on at most $15$ elements are
$U_{2,6}$, $U_{3,6}$, $U_{4,6}$, $P_6$,
$F_7$, $F_7^*$, $F_7^-$, $(F_7^-)^*$, $F_7^=$, $(F_7^=)^*$,
$\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, $(\mathit{AG}(2,3)\backslash e)^{\Delta Y}$, $P_8$, $P_8^-$, $P_8^=$, and $\mathit{TQ}_8$.
\end{theorem}
\noindent
The matroids $P_8^-$ and $\mathit{TQ}_8$ are described in \cref{sec2reg}, whereas the others will be well-known to readers familiar with the excluded-minor characterisations for $\mathrm{GF}(4)$-representable matroids \cite{GGK2000} and near-regular matroids \cite{HMvZ2011} (see also \cite{oxley}).
We conjecture that this is the complete list of excluded minors for this class.
In fact, in recent as-yet-unpublished work, Brettell, Oxley, Semple and Whittle~\cite{BOSW2023a,BOSW2023b} prove that an excluded minor for the class of $2$-regular matroids has at most 15 elements. Combining this result with \cref{thm2}, one obtains an excluded-minor characterisation of the class of $2$-regular matroids, which is the culmination of a long research programme~\cite{bww3,BCOSW2018,CMWvZ2015,CCCMWvZ2013,CCMvZ2015}.
The structure of this paper is as follows. In the next section, we review preliminaries.
In \cref{preconfine}, we introduce confined partial-field representations and describe how a representation over a partial field can be encoded by a representation over a finite field, with particular subdeterminants.
In \cref{impl}, we describe the implementation of the computation. Rather than presenting the code (which we intend to make freely available), we focus on describing the implementation details that enabled us to search up to matroids on 15 elements using computing resources that are (more or less) readily available.
In \cref{secdyadic,sec2reg}, we present our results for dyadic matroids and $2$-regular matroids, respectively.
\section{Preliminaries}
\label{prelims}
\subsection{Partial fields}
\label{prepf}
A \textit{partial field} is a pair $(R, G)$, where $R$ is a commutative ring with unity, and $G$ is a subgroup of the group of units of $R$ such that $-1 \in G$.
Note that $(\mathbb{F}, \mathbb{F}^*)$ is a partial field for any field $\mathbb{F}$.
If $\mathbb{P}=(R,G)$ is a partial field, then we write $p\in \mathbb{P}$ when $p\in G\cup \{0\}$, and $P\subseteq \mathbb{P}$ when $P\subseteq G\cup \{0\}$.
For disjoint sets $X$ and $Y$, we refer to a matrix with rows labelled by elements of $X$ and columns labelled by elements of $Y$ as an \emph{$X \times Y$ matrix}.
Let $\mathbb{P}$ be a partial field, and let $A$ be an $X\times Y$ matrix with entries from $\mathbb{P}$. Then $A$ is a $\mathbb{P}$-\textit{matrix} if every subdeterminant of $A$ is contained in $\mathbb{P}$. If $X'\subseteq X$ and $Y'\subseteq Y$, then we write $A[X',Y']$ to denote the submatrix of $A$ with rows labelled by $X'$ and columns labelled by $Y'$.
\begin{lemma}[{\cite[Theorem 2.8]{PvZ2010b}}]
\label{pmatroid}
Let $\mathbb{P}$ be a partial field, and let $A$ be an $X\times Y$ $\mathbb{P}$-matrix, where $X$ and $Y$ are disjoint sets. Let
\begin{equation*}
\mathcal{B}=\{X\}\cup \{X\triangle Z : |X\cap Z|=|Y\cap Z|, \det(A[X\cap Z,Y\cap Z])\neq 0\}.
\end{equation*}
Then $\mathcal{B}$ is the family of bases of a matroid on $X\cup Y$.
\end{lemma}
For an $X\times Y$ $\mathbb{P}$-matrix $A$, we let $M[A]$ denote the matroid in \cref{pmatroid}, and say that $A$ is a \emph{$\mathbb{P}$-representation} of $M[A]$.
Note that this is sometimes known as a reduced $\mathbb{P}$-representation in the literature; here, all representations will be ``reduced'', so we simply refer to them as representations.
A matroid $M$ is $\mathbb{P}$-\textit{representable} if there exists some $\mathbb{P}$-matrix $A$ such that
$M \cong M[A]$.
We refer to a matroid $M$ together with a $\mathbb{P}$-representation $A$ of $M$ as a \emph{$\mathbb{P}$-represented} matroid.
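For intuition, \cref{pmatroid} admits a direct brute-force computation when $\mathbb{P}$ is a field (over a field, every subdeterminant automatically lies in $\mathbb{P}$). The following Python sketch, which is ours and not from the cited sources, lists the bases of $M[A]$ for a rational $X\times Y$ matrix using exact arithmetic via \texttt{sympy}:
\begin{verbatim}
from itertools import combinations
from sympy import Matrix, Rational

def bases(A, X, Y):
    """Bases of M[A]: B = (X minus S) union T is a basis if and only
    if |S| = |T| and det A[S, T] != 0 (the empty determinant is 1)."""
    out = []
    for k in range(min(len(X), len(Y)) + 1):
        for S in combinations(range(len(X)), k):
            for T in combinations(range(len(Y)), k):
                sub = Matrix([[A[i][j] for j in T] for i in S])
                if k == 0 or sub.det() != 0:
                    out.append(tuple(sorted(
                        [X[i] for i in range(len(X)) if i not in S]
                        + [Y[j] for j in T])))
    return out

# Example: a rational representation of U_{2,4}, X = {1,2}, Y = {3,4}.
A = [[Rational(1), Rational(1)],
     [Rational(1), Rational(2)]]
print(bases(A, [1, 2], [3, 4]))  # all six 2-element subsets of {1,2,3,4}
\end{verbatim}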
For partial fields $\mathbb{P}_1$ and $\mathbb{P}_2$, we say that a function
$\phi : \mathbb{P}_1 \rightarrow \mathbb{P}_2$ is a \emph{homomorphism} if
\begin{enumerate}
\item $\phi(1) = 1$,
\item $\phi(pq) = \phi(p)\phi(q)$ for all $p, q \in \mathbb{P}_1$, and
\item $\phi(p) +\phi(q) = \phi(p +q)$ for all $p, q \in \mathbb{P}_1$ such that $p +q \in \mathbb{P}_1$.
\end{enumerate}
Let $\phi([a_{ij}])$ denote $[\phi(a_{ij})]$.
The existence of a
homomorphism from $\mathbb{P}_1$ to $\mathbb{P}_2$ certifies that $\mathbb{P}_1$-representability implies $\mathbb{P}_2$-representability:
\begin{lemma}[{\cite[Corollary 2.9]{PvZ2010b}}]
\label{homomorphisms}
Let $\mathbb{P}_1$ and $\mathbb{P}_2$ be partial fields and let $\phi : \mathbb{P}_1 \rightarrow \mathbb{P}_2$ be a
homomorphism.
If a matroid is $\mathbb{P}_1$-representable, then it is also $\mathbb{P}_2$-representable.
In particular,
if $A$ is a $\mathbb{P}_1$-representation of a matroid $M$, then $\phi(A)$ is a $\mathbb{P}_2$-representation of $M$.
\end{lemma}
Representability over a partial field can be used to characterise representability over each field in a set of fields. Indeed,
for any finite set of fields $\mathcal{F}$, there exists a partial field~$\mathbb{P}$ such that a matroid is
$\mathcal{F}$-representable
if and only if it is $\mathbb{P}$-representable \cite[Corollary~2.20]{PvZ2010a}.
Let $M$ be a matroid. We say that a partial field is the \emph{universal partial field of $M$}, and denote it $\mathbb{P}_M$, if for every partial field $\mathbb{P}$, the matroid $M$ is $\mathbb{P}$-representable if and only if there exists a
homomorphism $\phi : \mathbb{P}_M \rightarrow \mathbb{P}$.
Pendavingh and Van Zwam described
the canonical construction of a
partial field with this property for a given matroid~\cite[Section~4.2]{PvZ2010b}; for the purposes of this paper, the foregoing definition suffices.
Let $\mathbb{P}=(R,G)$ be a partial field.
We say that $p \in \mathbb{P}$ is \emph{fundamental} if $1-p \in \mathbb{P}$.
We denote the set of fundamentals of $\mathbb{P}$ by $\mathfrak{F}(\mathbb{P})$.
For $p \in \mathbb{P}$, the set of \emph{associates} of $p$ is
$$\Asc(p) = \begin{cases}\left\{p, 1-p, \frac{1}{p}, \frac{1}{1-p}, \frac{p}{p-1}, \frac{p-1}{p}\right\} & \textrm{if $p \notin \{0,1\}$}\\ \{0,1\} & \textrm{if $p \in \{0,1\}$.}\end{cases}$$
For $P \subseteq \mathbb{P}$, we write $\Asc(P) = \bigcup_{p \in P}\Asc(p)$.
If $p \in \mathfrak{F}(\mathbb{P})$, then $\Asc(p) \subseteq \mathfrak{F}(\mathbb{P})$.
Let $A$ and $A'$ be $\mathbb{P}$-matrices. We write $A \preceq A'$ if $A$ can be obtained from $A'$ by the following operations: multiplying a row or column by an element of $G$,
deleting a row or column,
permuting rows or columns,
and pivoting on a non-zero entry.
The \emph{cross ratios} of $A$ are $$\Cr(A) = \left\{p : \begin{bmatrix}1 & 1 \\ p & 1\end{bmatrix} \preceq A \right\}.$$
Any other undefined terminology related to partial fields follows Pendavingh and Van Zwam \cite{PvZ2010a,PvZ2010b}.
We note that although we work only at the generality of partial fields, this theory has been generalised by Baker and Lorscheid~\cite{BL20,BL21}.
\subsection{Partial fields of note}
The \emph{dyadic} partial field is
$\mathbb{D} = \left(\mathbb{Z}\left[\frac{1}{2}\right], \left<-1,2\right>\right)$.
We say a matroid is \emph{dyadic} if it is $\mathbb{D}$-representable.
A matroid is dyadic if and only if it is both $\mathrm{GF}(3)$-representable and $\mathrm{GF}(5)$-representable.
Moreover, a dyadic matroid is representable over every field of characteristic not two \cite[Lemma~2.5.5]{vanZwam2009}.
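For illustration (this is not part of our implementation), the $\mathbb{P}$-matrix condition for $\mathbb{D}$ can be tested naively in SageMath by checking that every square subdeterminant is zero or $\pm 2^k$ for some $k \in \mathbb{Z}$; the helper \texttt{is\_dyadic\_matrix} below is ours.
\begin{verbatim}
from itertools import combinations
from sage.all import QQ, Matrix

def _pow2(n):
    # True when the positive integer n is a power of two.
    n = int(n)
    return n > 0 and n & (n - 1) == 0

def is_dyadic_matrix(A):
    # Naive (exponential-time) check that every square subdeterminant
    # of the rational matrix A is zero or (plus or minus) a power of 2.
    for k in range(1, min(A.nrows(), A.ncols()) + 1):
        for rows in combinations(range(A.nrows()), k):
            for cols in combinations(range(A.ncols()), k):
                d = A[list(rows), list(cols)].determinant()
                if d != 0 and not (_pow2(abs(d.numerator()))
                                   and _pow2(d.denominator())):
                    return False
    return True

print(is_dyadic_matrix(Matrix(QQ, [[1, 1], [1, 2]])))  # True
print(is_dyadic_matrix(Matrix(QQ, [[1, 1], [1, 3]])))  # False: the entry 3
\end{verbatim}
Our actual computations never test subdeterminants directly; they work with confined representations over a finite field, as described in \cref{preconfine}.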
The \emph{2-regular} partial field is
$$\mathbb{U}_2 = (\mathbb{Q}(\alpha, \beta),\left<-1,\alpha, \beta, 1-\alpha, 1-\beta,\alpha-\beta\right>),$$
where $\alpha$ and $\beta$ are indeterminates.
We say a matroid is \emph{2-regular} if it is $\mathbb{U}_2$-representable.
Note that $\mathbb{U}_2$ is the universal partial field of $U_{2,5}$ \cite[Theorem 3.3.24]{vanZwam2009}.
If a matroid is $2$-regular, then it is $\mathbb{F}$-representable for every field $\mathbb{F}$ of size at least four \cite[Corollary 3.1.3]{Semple1998}.
However, the converse does not hold; for example, $U_{3,6}$ is representable over all fields of size at least four, but is not $2$-regular~\cite[Lemma 4.2.4]{Semple1998}.
More generally, the \emph{$k$-regular} partial field is $$\mathbb{U}_k = (\mathbb{Q}(\alpha_1,\dots,\alpha_k), \left<\{x-y : x,y \in \{0,1,\alpha_1,\dotsc,\alpha_k\}\textrm{ and }x \neq y\}\right>),$$ where $\alpha_1,\dotsc,\alpha_k$ are indeterminates.
In particular, a matroid is \emph{near-regular} if it is $\mathbb{U}_1$-representable.
We also make some use of the following partial fields \cite{PvZ2010b,vanZwam2009}.
The \emph{sixth-root-of-unity} partial field is
$\mathbb{S} = \left(\mathbb{Z}\left[\zeta\right], \left<\zeta\right> \right)$, where $\zeta$ is a solution to $x^2 - x + 1 = 0$.
A matroid is $\mathbb{S}$-representable if and only if it is $\mathrm{GF}(3)$- and $\mathrm{GF}(4)$-representable.
The \emph{2-cyclotomic} partial field is
$$\mathbb{K}_2 = (\mathbb{Q}(\alpha),\left<-1,\alpha-1, \alpha, \alpha+1\right>),$$
where $\alpha$ is an indeterminate.
If a matroid is $\mathbb{K}_2$-representable, then it is representable over every field of size at least four; but the converse does not hold \cite[Lemma~4.14 and Section~6]{PvZ2010a}.
The class of $2$-regular matroids is a proper subset of the $\mathbb{K}_2$-representable matroids.
\sloppy
Finally, Pendavingh and Van Zwam introduced, for each $i \in \{1,\dotsc,6\}$, the \emph{Hydra-$i$} partial field $\mathbb{H}_i$~\cite{PvZ2010b}.
A $3$-connected quinary matroid with a $\{U_{2,5},U_{3,5}\}$-minor is $\mathbb{H}_i$-representable if and only if it has at least $i$ inequivalent $\mathrm{GF}(5)$-representations.
\subsection{Delta-wye exchange}
\label{predy}
Let $M$ be a matroid with a coindependent triangle $T=\{a,b,c\}$.
Consider a copy of $M(K_4)$ having $T$ as a triangle with $\{a',b',c'\}$ as the complementary triad labelled such that $\{a,b',c'\}$, $\{a',b,c'\}$ and $\{a',b',c\}$ are triangles.
Let $P_{T}(M,M(K_4))$ denote the generalised parallel connection of $M$ with this copy of $M(K_4)$ along the triangle $T$.
Let $M'$ be the matroid $P_{T}(M,M(K_4))\backslash T$ where the elements $a'$, $b'$ and $c'$ are relabelled as $a$, $b$ and $c$ respectively.
The matroid~$M'$ is said to be obtained from $M$ by a \emph{$\Delta$-$Y$ exchange} on the triangle~$T$.
Dually, $M''$ is obtained from $M$ by a \emph{$Y$-$\Delta$ exchange} on the triad $T^*=\{a,b,c\}$ if $(M'')^*$ is obtained from $M^*$ by a $\Delta$-$Y$ exchange on $T^*$.
We say that matroids $M$ and $M'$ are \emph{$\Delta Y$-equivalent} if $M'$ can be obtained from $M$ by a (possibly empty) sequence of $\Delta$-$Y$ exchanges on coindependent triangles and $Y$-$\Delta$ exchanges on independent triads.
For a matroid~$M$, we use $\Delta(M)$ to denote the set of all matroids $\Delta Y$-equivalent to $M$; for a set of matroids $\mathcal{N}$, we use $\Delta(\mathcal{N})$ to denote $\bigcup_{N \in \mathcal{N}} \Delta(N)$.
We also use $\Delta^{(*)}(\mathcal{N})$ to denote $\bigcup_{N \in \mathcal{N}} \Delta(\{N,N^*\})$.
The following two results were proved by Oxley, Semple and Vertigan~\cite{OSV2000}, generalising the analogous results by Akkari and Oxley~\cite{AO1993} regarding the $\mathbb{F}$-representability of $\Delta Y$-equivalent matroids for a field $\mathbb{F}$.
\begin{lemma}[{\cite[Lemma 3.7]{OSV2000}}]
\label{deltaYrep}
Let $\mathbb{P}$ be a partial field, and let $M$ and $M'$ be $\Delta Y$-equivalent matroids.
Then $M$ is $\mathbb{P}$-representable if and only if $M'$ is $\mathbb{P}$-representable.
\end{lemma}
\begin{lemma}[{\cite[Theorem 1.1]{OSV2000}}]
\label{deltaYexc}
Let $\mathbb{P}$ be a partial field, and let $M$ be an excluded minor for the class of $\mathbb{P}$-representable matroids.
If $M'$ is $\Delta Y$-equivalent to $M$, then $M'$ is an excluded minor for the class of $\mathbb{P}$-representable matroids.
\end{lemma}
\subsection{Excluded-minor characterisations}
We now recall Geelen, Gerards and Kapoor's excluded-minor characterisation of quaternary matroids~\cite{GGK2000}.
The matroid $P_8$ is illustrated in \cref{p8fig};
observe that $\{a,b,c,d\}$ and $\{e,f,g,h\}$ are disjoint circuit-hyperplanes.
Relaxing both of these circuit-hyperplanes results in the matroid $P_8^=$.
\begin{theorem}[{\cite[Theorem~1.1]{GGK2000}}]
\label{gf4minors}
A matroid is $\mathrm{GF}(4)$-representable if and only if it has no minor isomorphic to $U_{2,6}$, $U_{4,6}$, $P_6$, $F_7^-$, $(F_7^-)^*$, $P_8$, and $P_8^=$.
\end{theorem}
Let $\mathit{AG}(2,3) \backslash e$ denote the matroid obtained from $\mathit{AG}(2,3)$ by deleting an element (this matroid is unique up to isomorphism).
Let $(\mathit{AG}(2, 3)\backslash e)^{\Delta Y}$ denote the matroid obtained from $\mathit{AG}(2,3) \backslash e$ by performing a single $\Delta$-$Y$ exchange on a triangle (again, this matroid is unique up to isomorphism).
Hall, Mayhew, and Van Zwam proved the following excluded-minor characterisation of the near-regular matroids~\cite{HMvZ2011}.
\begin{theorem}[{\cite[Theorem 1.2]{HMvZ2011}}]
\label{nr_exminors}
A matroid is near-regular if and only if it has no minor isomorphic to $U_{2,5}$, $U_{3,5}$, $F_7$, $F_7^*$, $F_7^-$, $(F_7^-)^*$, $\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, $(\mathit{AG}(2, 3)\backslash e)^{\Delta Y}$, and $P_8$.
\end{theorem}
\subsection{Splitter theorems}
Let $\mathcal{N}$ be a set of matroids.
We say that a matroid $M$ has an $\mathcal{N}$-minor if $M$ has an $N$-minor for some $N \in \mathcal{N}$.
In order to exhaustively generate the matroids in some class that are $3$-connected and have an $\mathcal{N}$-minor,
we use Seymour's Splitter Theorem extensively.
\begin{theorem}[Seymour's Splitter Theorem \cite{Seymour1980}]
\label{seysplit}
Let $M$ be a $3$-connected matroid that is not a wheel or a whirl, and let $N$ be a $3$-connected proper minor of $M$.
Then there exists an element $e \in E(M)$ such that $M/e$ or $M\backslash e$ is $3$-connected and has an $N$-minor.
\end{theorem}
We are primarily interested in matroids that are not near-regular, due to \cref{nr_exminors}.
The next corollary follows from the observation that wheels and whirls are near-regular.
\begin{corollary}
\label{seysplitcorr}
Let $M$ be a $3$-connected matroid with a proper $N$-minor, where $N$ is not near-regular.
Then, for $(M',N') \in \{(M,N),(M^*,N^*)\}$, there exists an element $e \in E(M')$ such that $M'\backslash e$ is $3$-connected and has an $N'$-minor.
\end{corollary}
To reduce the number of extensions to consider when generating potential excluded minors, we use splicing, as described in \cref{sec-splic}.
Since we only keep track of $3$-connected matroids with a particular $N$-minor, we require a guarantee of the existence of so-called $N$-detachable pairs~\cite{bww3}, in order to generate an exhaustive list of potential excluded minors.
Let $M$ be a $3$-connected matroid, and let $N$ be a $3$-connected minor of $M$.
A pair $\{a,b\} \subseteq E(M)$ is \emph{$N$-detachable} if either $M\backslash a\backslash b$ or $M/a/b$ is $3$-connected and has an $N$-minor.
To describe matroids with no $N$-detachable pairs, we require a definition.
Let $P \subseteq E(M)$ be an exactly $3$-separating set of $M$ such that $|P| \ge 6$.
Suppose $P$ has the following properties:
\begin{enumerate}[label=\rm(\alph*)]
\item there is a partition $\{L_1,\dotsc,L_t\}$ of $P$ into pairs such that for all distinct $i,j\in\{1,\dotsc,t\}$, the set $L_i\cup L_j$ is a cocircuit,
\item there is a partition $\{K_1,\dotsc,K_t\}$ of $P$ into pairs such that for all distinct $i,j\in\{1,\dotsc,t\}$, the set $K_i\cup K_j$ is a circuit,
\item $M / p$ and $M \backslash p$ are $3$-connected for each $p \in P$,
\item for all distinct $i,j\in\{1,\dotsc,t\}$, the matroid $\si(M / a / b)$ is $3$-connected for any $a \in L_i$ and $b \in L_j$, and
\item for all distinct $i,j\in\{1,\dotsc,t\}$, the matroid $\co(M \backslash a \backslash b)$ is $3$-connected for any $a \in K_i$ and $b \in K_j$.
\end{enumerate}
Then we say $P$ is a \emph{spikey $3$-separator} of $M$.
\begin{theorem}[{\cite[Theorem 1.1]{bww3}}]
\label{detachthm}
Let $M$ be a $3$-connected matroid, and let $N$ be a $3$-connected minor of $M$ such that $|E(N)| \ge 4$, and $|E(M)|-|E(N)| \ge 6$.
Then either
\begin{enumerate}
\item $M$ has an $N$-detachable pair,
\item there is a matroid $M'$ obtained by performing a single $\Delta$-$Y$ or $Y$-$\Delta$ exchange on $M$ such that $M'$ has an $N$-minor and an $N$-detachable pair, or
\item $M$ has a spikey $3$-separator\ $P$,
and if $|E(M)| \ge 13$, then at most one element of $E(M)-E(N)$ is not in $P$.
\end{enumerate}
\end{theorem}
We note that in the statement of this theorem in \cite{bww3}, the precise structure of the $3$-separators that arise in case (iii) is described.
It is clear that when $|E(M)|-|E(N)| \ge 6$, each of these $3$-separators satisfies conditions (a) and (b) in the definition of a spikey $3$-separator.
The fact that (c) holds for such a $3$-separator follows from \cite[Lemma~5.3]{bww3}, and it is easily checked that (d), and dually (e), also hold.
\subsection{Equivalence of \texorpdfstring{$\mathbb{P}$}{P}-matrices, and stabilizers}
Let $\mathbb{P} = (R,G)$ be a partial field, and let $A$ and $A'$ be $\mathbb{P}$-matrices.
We say that $A$ and $A'$ are \emph{scaling equivalent} if $A'$ can be obtained from $A$ by scaling rows and columns by elements of $G$.
If $A'$ can be obtained from $A$ by scaling, pivoting, permuting rows and columns, and also applying automorphisms of $\mathbb{P}$, then we say that $A$ and $A'$ are \emph{algebraically equivalent}.
We say that $M$ is \emph{uniquely representable over $\mathbb{P}$} if any two $\mathbb{P}$-representations of $M$ are algebraically equivalent.
Let $M$ and $N$ be $\mathbb{P}$-representable matroids, where $M$ has an $N$-minor.
Then \emph{$N$ stabilizes $M$ over $\mathbb{P}$} if for any scaling-equivalent $\mathbb{P}$-representations $A_1'$ and $A_2'$ of $N$ that extend to $\mathbb{P}$-representations $A_1$ and $A_2$ of $M$, respectively, $A_1$ and $A_2$ are scaling equivalent.
For a partial field~$\mathbb{P}$, let $\mathcal{M}(\mathbb{P})$ be the class of matroids representable over $\mathbb{P}$.
A matroid $N \in \mathcal{M}(\mathbb{P})$ is a \emph{$\mathbb{P}$-stabilizer} if, for any $3$-connected matroid $M \in \mathcal{M}(\mathbb{P})$ having an $N$-minor, the matroid $N$ stabilizes $M$ over $\mathbb{P}$.
Following Geelen et al.~\cite{GOVW1998}, we say that
a matroid $N$ \emph{strongly stabilizes $M$ over $\mathbb{P}$} if $N$ stabilizes $M$ over $\mathbb{P}$, and every
$\mathbb{P}$-representation of $N$ extends to a $\mathbb{P}$-representation of $M$.
We say that $N$ is a \emph{strong $\mathbb{P}$-stabilizer}
if $N$ is a $\mathbb{P}$-stabilizer and $N$ strongly stabilizes every matroid in $\mathcal{M}(\mathbb{P})$ with an $N$-minor.
\section{Partial-field proxies}
\label{preconfine}
In this section, we show that we can simulate a representation over a partial field by a representation over a finite field, where we have constraints on the subdeterminants appearing in the representation.
This has efficiency benefits for our computations, as we can utilise an existing implementation of finite fields, and avoid a full implementation of a partial field from scratch.
Let $\mathbb{P}$ be a partial field, let $F\subseteq \mathfrak{F}(\mathbb{P})$, let $M$ be a matroid, and
let $A$ be a $\mathbb{P}$-matrix such that $M=M[A]$.
We say that the matrix $A$ is
{\em $F$-confined} if $\Cr(A) \subseteq F \cup \{0,1\}$.
If $A$ is an $F$-confined $\mathbb{P}$-matrix and $\phi: \mathbb{P}\rightarrow \mathbb{P}'$ is a partial-field homomorphism, then $M[A]=M[\phi(A)]$ and $$\Cr(\phi(A)) \subseteq \phi(F) \cup \{0,1\},$$ so that $\phi(A)$ is a $\phi(F)$-confined representation over $\mathbb{P}'$.
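As an aside, SageMath can compute the cross ratios of a represented matroid directly (via the \texttt{cross\_ratios()} method of \texttt{LinearMatroid}), which yields a one-line test for $F$-confinement; the matrix and the set $F$ below are purely illustrative.
\begin{verbatim}
from sage.all import GF, Matrix, Matroid

A = Matrix(GF(7), [[1, 1, 0], [3, 1, 1], [0, 1, 1]])
M = Matroid(reduced_matrix=A)        # a LinearMatroid over GF(7)
print(M.cross_ratios())              # the set Cr(A)
F = {GF(7)(x) for x in (3, 5, 6)}    # an arbitrary candidate set F
print(M.cross_ratios() <= F | {GF(7)(0), GF(7)(1)})  # is A F-confined?
\end{verbatim}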
We will show
that under certain conditions on $\phi$ and $F$, any $\phi(F)$-confined representation over $\mathbb{P}'$ can be lifted to an $F$-confined representation over $\mathbb{P}$.
The following is a reformulation of \cite[Corollary~3.8]{PvZ2010a} (see also \cite[Corollary~4.1.6]{vanZwam2009}) using the notion of $F$-confined partial-field representations; to recover the original formulation, take the restriction of $h$ to $\Cr(A)$ as the lift function.
\begin{theorem}[{Lift Theorem \cite{PvZ2010a}}]
\label{lift}
Let $\mathbb{P}$ and $\mathbb{P}'$ be partial fields, let $F \subseteq \mathfrak{F}(\mathbb{P}')$, let $A$ be an $F$-confined $\mathbb{P}'$-matrix, and let $\phi : \mathbb{P} \rightarrow \mathbb{P}'$ be a partial-field homomorphism.
Suppose there exists a function $h: F \rightarrow \mathbb{P}$ such that
\begin{enumerate}
\item $\phi(h(p)) = p$ for all $p \in F$,
\item if $1+1 \in \mathbb{P}'$, then $1+1 \in \mathbb{P}$, and $1+1 = 0$ in $\mathbb{P}'$ if and only if $1+1=0$ in $\mathbb{P}$,
\item for all $p,q \in F$,
\begin{itemize}
\item if $p+q = 1$ then $h(p) + h(q) = 1$, and
\item if $pq = 1$ then $h(p)h(q) = 1$; and,
\end{itemize}
\item for all $p,q,r \in F$, we have $pqr = 1$ if and only if $h(p)h(q)h(r) = 1$.
\end{enumerate}
Then there exists a $\mathbb{P}$-matrix $A'$ such that $\phi(A')$ is scaling-equivalent to $A$.
\end{theorem}
We are interested in the case where $\mathbb{P}'$ is a finite field $\mathbb{F}=\mathrm{GF}(q)$ for some prime power $q$. In this case, we obtain the following corollary:
\begin{corollary}
\label{thm:proxy}
Let $\mathbb{P}$ be a partial field, let $\mathbb{F}$ be a finite field,
let $\phi : \mathbb{P} \rightarrow \mathbb{F}$ be a partial-field homomorphism,
let $F = \phi(\mathfrak{F}(\mathbb{P}))$,
and let $A$ be an $F$-confined $\mathbb{F}$-matrix.
Suppose that the restriction of $\phi$ to $\mathfrak{F}(\mathbb{P})$ is injective,
and
\begin{enumerate}
\item for all $p, q\in \mathfrak{F}(\mathbb{P})$, if $\phi(p)+\phi(q)=1$, then $p+q=1$; and
\item for all $p,q,r\in \mathfrak{F}(\mathbb{P})$, if $\phi(p)\phi(q)\phi(r)=1$, then $pqr=1$; and
\item if $1=-1$ in $\mathbb{F}$, then $1=-1$ in $\mathbb{P}$.
\end{enumerate}
Then there exists a $\mathbb{P}$-matrix $A'$ such that $\phi(A')$ is scaling-equivalent to $A$.
\end{corollary}
\begin{proof}
We work towards applying \cref{lift} with $\mathbb{P}' = \mathbb{F}$.
Since the restriction of $\phi$ to $\mathfrak{F}(\mathbb{P})$ is injective and $\phi(\mathfrak{F}(\mathbb{P}))=F$, there is a well-defined function $h : F \rightarrow \mathfrak{F}(\mathbb{P})$ where $h(f) = p$ when $\phi(p) = f$. Now $h$ is the inverse of $\phi|_{\mathfrak{F}(\mathbb{P})}$, and thus it is easily seen that (i)--(iv) of \cref{lift} are satisfied by the function $h$.
\end{proof}
\begin{corollary} \label{cor:dyad} A matroid $M$ is dyadic if and only if $M$ has a $\{2,6,10\}$-confined representation over $\mathrm{GF}(11)$.\end{corollary}
\proof Recall that $\mathfrak{F}(\mathbb{D}) \setminus \{0,1\} =\{-1,2, 2^{-1}\}$ \cite{vanZwam2009}.
Consider the partial-field homomorphism $d: \mathbb{D}\rightarrow \mathrm{GF}(11)$ defined by $d(2)=2$, $d(-1)=10$, and $d(2^{-1})=6$. A finite check suffices to verify that the conditions of \cref{thm:proxy} are satisfied for $(\mathbb{P}, \mathbb{F}, \phi) = (\mathbb{D}, \mathrm{GF}(11), d)$, in which case $d(\mathfrak{F}(\mathbb{D}) \setminus \{0,1\}) = \{2,6,10\}$. The corollary follows.\endproof
A finite check also reveals that no smaller finite field $\mathbb{F}$ admitting a partial-field homomorphism $\phi: \mathbb{D}\rightarrow \mathbb{F}$ can take the role of $\mathrm{GF}(11)$ in this corollary. For example, if we take $\mathbb{F}=\mathrm{GF}(7)$, then $\phi(2)\phi(2)\phi(2)=1$, but $2\cdot 2\cdot 2\neq 1$.
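Such finite checks are easily mechanised. The following pure-Python sketch (independent of our implementation) tests the conditions of \cref{thm:proxy} for $\mathbb{D}$ at a given odd prime $p$, with $\phi$ determined by $\phi(2)=2$; iterating over successive odd primes, $p=11$ is the first success, consistent with \cref{fig:conff}.
\begin{verbatim}
from fractions import Fraction
from itertools import product

# The fundamentals of the dyadic partial field: {0, 1, -1, 2, 1/2}.
FUND = [Fraction(x) for x in (0, 1, -1, 2)] + [Fraction(1, 2)]

def phi(x, p):
    # The ring homomorphism Z[1/2] -> GF(p), p an odd prime, with phi(2) = 2.
    return x.numerator * pow(x.denominator, -1, p) % p

def is_proxy_prime(p):
    if len({phi(f, p) for f in FUND}) < len(FUND):
        return False                         # phi not injective on F(D)
    for q, r in product(FUND, repeat=2):     # condition (i)
        if (phi(q, p) + phi(r, p)) % p == 1 and q + r != 1:
            return False
    for q, r, s in product(FUND, repeat=3):  # condition (ii)
        if phi(q, p) * phi(r, p) * phi(s, p) % p == 1 and q * r * s != 1:
            return False
    return True                              # condition (iii): p is odd

print(is_proxy_prime(7), is_proxy_prime(11))  # False True
print([phi(f, 11) for f in FUND])             # [0, 1, 10, 2, 6]
\end{verbatim}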
Let $\mathbb{P}$ be a partial field.
For a finite field $\mathbb{F}$ and partial-field homomorphism $\phi : \mathbb{P} \rightarrow \mathbb{F}$, we say that $(\mathbb{F},\phi)$ is a \emph{proxy} for $\mathbb{P}$ if $\phi$ can be lifted in the sense of \cref{thm:proxy}.
For example, the proof of \cref{cor:dyad} shows that $(\mathrm{GF}(11),d)$ is a proxy for $\mathbb{D}$.
\Cref{fig:conff} lists several
partial-field proxies
(see \cite[Appendix~A]{PvZ2010b} for any partial fields undefined here).
These were found by an exhaustive search (by computer), trying each prime $p$, in order, until the desired homomorphism was found.
Note that, with the exception of $\mathbb{H}_4$ and $\mathbb{H}_5$, these are the smallest finite fields for which such a homomorphism exists (for these two partial fields, the search was time-consuming, so we started it at a large prime).
\begin{table}[htb]
$\begin{array}{lll}
\text{Partial field }&\text{Finite field }&\text{Partial-field homomorphism}\\
\hline
\mathbb{S}& \mathrm{GF}(7) & \zeta\mapsto 3\\
\mathbb{D}&\mathrm{GF}(11) & 2\mapsto 2\\
\mathbb{G}&\mathrm{GF}(19) & \tau\mapsto 5\\
\mathbb{U}_1& \mathrm{GF}(23) & \alpha\mapsto 5\\
\mathbb{H}_2& \mathrm{GF}(29) & i\mapsto 12\\
\mathbb{K}_2&\mathrm{GF}(73) & \alpha\mapsto 15\\
\mathbb{H}_3& \mathrm{GF}(151) & \alpha\mapsto 4\\
\mathbb{P}_4& \mathrm{GF}(197) & \alpha\mapsto 31\\
\mathbb{U}_2&\mathrm{GF}(211) & \alpha\mapsto 4, \beta\mapsto 44\\
\mathbb{H}_4& \mathrm{GF}(947) &\alpha\mapsto 272, \beta\mapsto 928\\
\mathbb{H}_5& \mathrm{GF}(3527) &\alpha\mapsto 1249, \beta\mapsto 295, \gamma\mapsto 3517\\
\end{array}$
\caption{\label{fig:conff} Several proxies for partial fields.}
\end{table}
Each of the partial fields listed in \cref{fig:conff} has finitely many fundamentals, and for any such partial field a finite-field proxy necessarily exists. To establish this, we will need the following fact.
\begin{lemma} \label{lem:fgfield} Suppose that $J$ is a maximal ideal of the polynomial ring $\mathbb{Z}[X]$, where $X$ represents a finite collection of variables. Then $\mathbb{Z}[X]/J$ is a finite field.
\end{lemma}
\begin{theorem} Let $\mathbb{P}$ be a partial field with finitely many fundamentals. Then there exist a finite field $\mathbb{F}$ and a homomorphism $\phi : \mathbb{P} \rightarrow \mathbb{F}$ such that $(\mathbb{F},\phi)$ is a proxy for $\mathbb{P}$.
\end{theorem}
\proof Let $\mathbb{P}=(R,G)$ be a partial field such that $|\mathfrak{F}(\mathbb{P})|<\infty$. We may assume that $G$ is generated by $\mathfrak{F}(\mathbb{P})$ and that $R=\mathbb{Z}[G]$.
Note that under these simplifying assumptions there is an ideal $I$ of $\mathbb{Z}[W]$, where $W:=\{W_f: f\in \mathfrak{F}(\mathbb{P})\}$, so that $R=\mathbb{Z}[W]/I$.
Consider the ring $S:=R[X, Y, Z]$ where $X, Y, Z$ are the collections of variables
$$X:=\{X_{pq}: p, q\in \mathfrak{F}(\mathbb{P})\}\cup\{X_{11}\}, ~Y:=\{Y_{pq}: p, q\in \mathfrak{F}(\mathbb{P})\cup\{0\}, p+q\neq 1\}$$
and $Z:= \{Z_{pqr}: p, q, r\in \mathfrak{F}(\mathbb{P})\cup\{1\}, pqr\neq 1\}$.
Let $J'$ be the ideal of $S$ generated by
$$\{(p-q)X_{pq}-1: p, q\in \mathfrak{F}(\mathbb{P}), p\neq q\}$$
$$\{(p+q-1)Y_{pq}-1: p, q\in \mathfrak{F}(\mathbb{P})\cup\{0\}, p+q\neq 1\}$$
$$\{(pqr-1)Z_{pqr}-1: p, q,r\in \mathfrak{F}(\mathbb{P})\cup\{1\}, pqr\neq 1\}$$
and the generator $2X_{11}-1$ if $1\neq -1$ in $\mathbb{P}$. Since each of the polynomials generating $J'$ uses a variable unique to that generator, the ideal $J'$ is proper, i.e. $1\not\in J'$.
Let $J$ be a maximal ideal of $S$ containing $J'$. As $S$ is commutative and $J$ is maximal, $\mathbb{F}:=S/J$ is a field. Since $R=\mathbb{Z}[W]/I$, we have $S=R[X,Y,Z]=\mathbb{Z}[W,X,Y,Z]/I$ and $\mathbb{F}=S/J=\mathbb{Z}[W,X,Y,Z]/(I+J)$. Finally since $ \mathfrak{F}(\mathbb{P})$ is finite, each set of variables $W,X,Y,Z$ is finite. Then $\mathbb{F}$ is a finite field by Lemma \ref{lem:fgfield}.
Let $\phi: R\rightarrow \mathbb{F}$ be the restriction to $R$ of the natural ring homomorphism $\psi:S\rightarrow S/J=\mathbb{F}$. We verify that $(\mathbb{F},\phi)$ is a proxy for $\mathbb{P}$.
Since $\phi$ is a ring homomorphism, it is necessarily a partial-field homomorphism.
Moreover, $\phi$ is injective on $\mathfrak{F}(\mathbb{P})$, for if $\phi(p)=\phi(q)$ for some distinct $p, q\in \mathfrak{F}(\mathbb{P})$, then we get the contradiction
$$-1=(\psi(p)-\psi(q))\psi(X_{pq})-1=\psi((p-q)X_{pq}-1)\in \psi(J)=\{0\}.$$
Second, if $p+q\neq 1$ but $\phi(p)+\phi(q)= 1$ then
$$-1=(\psi(p)+\psi(q)-1)\psi(Y_{pq})-1=\psi((p+q-1)Y_{pq}-1)\in \psi(J)=\{0\},$$
a contradiction. Third, if $\phi(p)\phi(q)\phi(r)= 1$ but $pqr\neq 1$, then we get
$$-1=(\psi(p)\psi(q)\psi(r)-1)\psi(Z_{pqr})-1=\psi((pqr-1)Z_{pqr}-1)\in \psi(J)=\{0\},$$
a contradiction. Finally, if $1\neq -1$ in $\mathbb{P}$ then $1\neq -1$ in $\mathbb{F}$, for otherwise we get the contradiction $-1=(\psi(1)+\psi(1))\psi(X_{11})-1=\psi(2X_{11}-1)\in \psi(J)=\{0\}$.
\endproof
\section{Implementation details}
\label{impl}
Our implementation of these computations was written using SageMath 8.1, making extensive use of the Matroid Theory library.
Computations were run in a virtual machine on an Intel Xeon E5-2690 v4 64-bit x86 microprocessor operating at 2.6GHz, with 4 cores and 23GB of memory available.
Let $\mathbb{P} \in \{\mathbb{D}, \mathbb{U}_2\}$;
we want to find excluded minors of size at most $n$ for the class of $\mathbb{P}$-representable matroids $\mathcal{M}(\mathbb{P})$.
Let $\mathcal{N}$ be a set of strong $\mathbb{P}$-stabilizers such that each $N \in \mathcal{N}$ is not near-regular.
In what follows, we use $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$ to denote the set of all $3$-connected matroids in $\mathcal{M}(\mathbb{P})$ with an $\mathcal{N}$-minor.
We generate all matroids in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$ of size at most $n$.
To find the excluded minors of size $n$, our basic approach is as follows.
First, find all $3$-connected extensions of $(n-1)$-element matroids in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$; second, filter out those isomorphic to an $n$-element matroid in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$; finally, filter out those that contain, as a minor, an excluded minor for $\mathcal{M}(\mathbb{P})$ of size at most $n-1$.
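Schematically, one step of this search looks as follows; this is a sketch in which the callables \texttt{extensions}, \texttt{in\_class}, and \texttt{has\_minor} are hypothetical stand-ins for the components described in the following subsections, and isomorph filtering of the candidates themselves is elided.
\begin{verbatim}
def excluded_minors_of_size(n, catalogue, known_exminors,
                            extensions, in_class, has_minor):
    # catalogue[k]: the k-element matroids in our catalogue, one per
    #   isomorphism class;
    # extensions(M): the relevant 3-connected single-element extensions;
    # in_class(M): is M in catalogue[n], up to isomorphism?
    # has_minor(M, N): does M have an N-minor?
    candidates = []
    for M in catalogue[n - 1]:                 # step 1: extend
        candidates.extend(extensions(M))
    candidates = [M for M in candidates        # step 2: drop class members
                  if not in_class(M)]
    return [M for M in candidates              # step 3: keep minimal ones
            if not any(has_minor(M, N) for N in known_exminors)]
\end{verbatim}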
\subsection{Restricting to ternary or quaternary excluded minors}
\label{linrestrict}
As we are dealing with a partial field $\mathbb{P} \in \{\mathbb{D}, \mathbb{U}_2\}$, which has a partial-field homomorphism to either $\mathrm{GF}(3)$ or $\mathrm{GF}(4)$, the efficiency of the first step can be improved using the excluded-minor characterisations for ternary and quaternary matroids.
\begin{lemma}
\label{onlyquaternary}
Let $M$ be an excluded minor for the class of $2$-regular matroids.
If $|E(M)| \ge 9$, then $M$ is
quaternary.
\end{lemma}
\begin{proof}
Suppose $|E(M)| \ge 9$ and, towards a contradiction, that $M$ is not $\mathrm{GF}(4)$-representable.
Then $M$ has a minor $N$ isomorphic to one of the seven excluded minors for $\mathrm{GF}(4)$ (see \cref{gf4minors}).
Since each of these excluded minors has at most eight elements, $M$ contains $N$ as a proper minor. But $M$ is an excluded minor, so $N$ is $2$-regular; a contradiction.
\end{proof}
The following lemma follows, in a similar manner, from the excluded-minor characterisation of ternary matroids.
\begin{lemma}
\label{onlyternary}
If $M$ is an excluded minor for dyadic matroids with $|E(M)| \ge 8$, then $M$ is ternary.
\end{lemma}
By \cref{onlyquaternary,onlyternary}, at the first step of our procedure for finding excluded minors, we need only consider ternary or quaternary $3$-connected extensions of $(n-1)$-element matroids in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$.
We can further reduce the number of potential excluded minors to consider using splicing, which we explain in \cref{sec-splic}.
\subsection{Generating \texorpdfstring{$\mathbb{P}$}{P}-representable matroids}
\label{gen-iso}
To simulate generating a $\mathbb{P}$-representable matroid, we use
partial-field proxies,
as described in \cref{preconfine}.
That is, we find a prime $p$, and partial-field homomorphism $\phi : \mathbb{P} \rightarrow \mathrm{GF}(p)$, such that a matroid is $\mathbb{P}$-representable if and only if it has a $\phi(\mathfrak{F}(\mathbb{P}))$-confined representation over $\mathrm{GF}(p)$ (see \cref{thm:proxy} and \cref{fig:conff}).
Then, to find $\mathbb{P}$-representable single-element extensions of a matroid with $\mathbb{P}$-representation $A$, we can find single-element extensions of $\phi(A)$ with a $\mathrm{GF}(p)$-representation whose cross ratios are in $\phi(\mathfrak{F}(\mathbb{P}))$.
For a class $\mathcal{M}(\mathbb{P})$ with a set of strong $\mathbb{P}$-stabilizers $\mathcal{N}$,
we generate a representative $M$ of each isomorphism class in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$ consisting of matroids of size at most $n$.
Suppose we have generated all matroids in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$ of size at most $n-1$, up to isomorphism.
Initially, if $n_0$ is the size of the smallest matroid in $\mathcal{N}$, then $n = n_0 +1$.
Let $M[A]$ be a $\mathbb{P}$-represented matroid.
We say that the $\mathbb{P}$-represented matroid $M[A | e]$, for some column vector $e$ with entries in $\mathbb{P}$, is a \emph{linear extension} of $M[A]$.
For each $(n-1)$-element $\mathbb{P}$-represented matroid, we generate all simple linear extensions (where the representations have the appropriate cross ratios; this functionality is provided by the function
\texttt{LinearMatroid.linear\char`_extensions()} in SageMath).
Note that each of these simple matroids is in fact $3$-connected (by \cite[Proposition~8.2.7]{oxley}).
After closing this set under duality, and adding any $n$-element matroid in $\mathcal{N}$, the set consists of all $n$-element matroids in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$,
by \cref{seysplitcorr} and
since each matroid in $\mathcal{N}$ is a strong $\mathbb{P}$-stabilizer.
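For instance, in the dyadic case a single generation step looks as follows; this sketch starts from a $\{2,6,10\}$-confined $\mathrm{GF}(11)$-representation of $U_{2,4}$, and the \texttt{fundamentals} keyword restricts the cross ratios of the extended representations.
\begin{verbatim}
from sage.all import GF, Matrix, Matroid

M = Matroid(reduced_matrix=Matrix(GF(11), [[1, 1], [2, 1]]))  # U_{2,4}
F = {GF(11)(x) for x in (1, 2, 6, 10)}  # images of the dyadic fundamentals
print(len(list(M.linear_extensions(simple=True))))            # unrestricted
print(len(list(M.linear_extensions(simple=True,
                                   fundamentals=F))))         # confined
\end{verbatim}
The second count is zero, as expected: every simple single-element extension here represents $U_{2,5}$, which is not dyadic, so by \cref{thm:proxy} none of its $\mathrm{GF}(11)$-representations can be $\{2,6,10\}$-confined.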
\subsection{Isomorph filtering}
\label{sec-iso}
We use an isomorphism invariant, which can be efficiently computed, to distinguish matroids that can be easily identified as non-isomorphic. Two matroids with different values for the invariant are non-isomorphic; whereas two matroids with the same value for the invariant require a full isomorphism check.
The isomorphism invariant we use is provided by the function \verb|BasisMatroid._bases_invariant()| in SageMath, and is based on the incidences of groundset elements with bases.
As $n$ increases, we have to deal with more matroids than can be loaded in memory at once.
Thus, to filter isomorphic matroids, we use a batched two-pass approach.
We consider the matroids in batches of an appropriate size so that an entire batch can be kept in memory at once.
First, batch by batch, we compute a hash of the matroid invariant
for each matroid in the batch, and write the matroids to disk, stored in $g$ groups, grouped by the hash modulo~$g$.
(The value of $g$ is chosen to ensure all matroids in a group can also be loaded in memory at once.)
Call the hash of the invariant the \emph{raw hash}, and call the hash modulo~$g$ the \emph{hash~mod}.
Then, in turn, we load each of the $g$ groups; that is, for each $i \in \{0,1,\dotsc,g-1\}$, we load all matroids whose hash~mod is $i$. Within each group, isomorphs are filtered by isomorphism checking those matroids with the same raw hash.
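In outline, the two passes look as follows; this is a sketch in which the disk-backed groups are replaced by an in-memory dictionary, and each matroid object is assumed to provide the invariant and the isomorphism test named above.
\begin{verbatim}
def filter_isomorphs(batches, g):
    groups = {i: [] for i in range(g)}
    for batch in batches:                    # pass 1: bucket by hash mod g
        for M in batch:
            raw = hash(M._bases_invariant())     # the "raw hash"
            groups[raw % g].append((raw, M))     # the "hash mod"
    survivors = []
    for i in range(g):                       # pass 2: one group at a time
        by_raw = {}
        for raw, M in groups[i]:
            by_raw.setdefault(raw, []).append(M)
        for ms in by_raw.values():           # full checks only within a
            kept = []                        # class of equal raw hashes
            for M in ms:
                if not any(M.is_isomorphic(K) for K in kept):
                    kept.append(M)
            survivors.extend(kept)
    return survivors
\end{verbatim}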
\subsection{Minor checking}
\label{sec-minor}
Let $M$ and $N$ be matroids.
To check if $M$ has a minor isomorphic to $N$, we use a simple approach that avoids repetitive computations.
If $|E(N)| = |E(M)|$, then we check if $N$ is isomorphic to $M$; otherwise, for each single-element deletion and contraction of $M$, we recursively check if any of these matroids has an $N$-minor.
However, we cache the result of each minor check (keyed by the isomorphism class), and use cached results when available, to avoid repetition.
Full isomorphism checking is performed only when the isomorphism invariants match, as described in \cref{sec-iso}.
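A sketch of this procedure follows, where \texttt{key} is a hypothetical function that is constant on isomorphism classes and \texttt{cache} is a dictionary shared between calls.
\begin{verbatim}
def has_minor(M, N, key, cache):
    k = (key(M), key(N))
    if k in cache:
        return cache[k]                  # reuse an earlier computation
    if M.size() < N.size():
        result = False
    elif M.size() == N.size():
        result = M.is_isomorphic(N)
    else:                                # recurse into single-element minors
        result = any(has_minor(M.delete(e), N, key, cache) or
                     has_minor(M.contract(e), N, key, cache)
                     for e in M.groundset())
    cache[k] = result
    return result
\end{verbatim}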
\subsection{Splicing}
\label{sec-splic}
Let $M'$ be a matroid, let $M_e$ be a single-element extension of $M'$ by an element~$e$, and let $M_f$ be a single-element extension of $M'$ by an element~$f$, where $e$ and $f$ are distinct.
Note that $M_e$ and $M_f$ may be isomorphic.
We say that $M$ is a \emph{splice of $M_e$ and $M_f$} if $M \backslash e = M_f$ and $M \backslash f = M_e$.
Suppose we wish to find the excluded minors of size $n$ for the class
$\mathcal{M}(\mathbb{P})$.
In order to reduce the number of matroids to consider as potential excluded minors, rather than
generating all extensions of $(n-1)$-element matroids in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$, we can instead generate splices of each pair of $(n-1)$-element matroids in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$ that are extensions of some $(n-2)$-element matroid in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$.
Note that the two matroids in such a pair may be isomorphic.
In order for this splicing process to be exhaustive, we require a guarantee that for any excluded minor $M$, there is (up to duality) some pair $e,f \in E(M)$ such that $M \backslash e$, $M \backslash f$, and $M \backslash e \backslash f$ are in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$. \Cref{detachthm} is such a guarantee when $M$ does not contain any spikey $3$-separators. We work towards showing that spikey $3$-separators do not appear in an excluded minor $M$ when $M$ is large.
First, there is a subtlety worth noting.
Let $M_x$ and $M'$ be matroids with $E(M_x) = E(M') \cup \{x\}$, and
suppose $M' \cong M_x \backslash x$. Clearly $M'$ has a single-element extension, by an element $x$, that is isomorphic to $M_x$, but there may be several distinct extensions with this property, due to automorphisms of $M_x$.
To obtain all splices,
it is not enough to consider just one of these extensions.
For each $(n-2)$-element
matroid $M' \in \widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$, and each
$(n-1)$-element matroid $M_x \in \widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$ such that $M_x \backslash x \cong M'$ for some $x \in E(M_x)$,
we keep track of all single-element extensions of $M'$ to a matroid isomorphic to $M_x$; denote these extensions as $\mathcal{X}(M_x)$. We also maintain, for each matroid $X \in \mathcal{X}(M_x)$, the isomorphism between $M_x \backslash x$ and $X\backslash x$.
Using this information, for each matroid $M'$,
and each (possibly isomorphic) pair $\{M_e,M_f\} \subseteq \widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$ such that $M_x \backslash x \cong M'$ for $x \in \{e,f\}$, and each $X_e \in \mathcal{X}(M_e)$ and $X_f \in \mathcal{X}(M_f)$, we compute the splice of $X_e$ and $X_f$.
For simplicity, we refer to the set of all of these matroids as ``the splices of $M_e$ and $M_f$''.
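With the extensions aligned in this way, computing the splices reduces to one further extension step and a deletion check, as in the following sketch (using SageMath's \texttt{Matroid.extensions()}; here \texttt{X\_e} and \texttt{X\_f} are extensions of a common matroid by \texttt{e} and \texttt{f} respectively).
\begin{verbatim}
def splices(X_e, X_f, e, f):
    # All matroids M with M \ e = X_f and M \ f = X_e as labelled matroids.
    target = frozenset(X_f.bases())
    return [M for M in X_e.extensions(f)
            if frozenset(M.delete(e).bases()) == target]
\end{verbatim}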
The following generalises \cite[Lemma 7.2]{BCOSW2018}; as the proof is similar, we provide only a sketch.
\begin{lemma}
\label{spikeys}
Let $\mathbb{P}$ be a partial field, let $N$ be a non-binary $3$-connected strong $\mathbb{P}$-stabilizer, and let $M$ be an excluded minor for $\mathcal{M}(\mathbb{P})$, where $M$ has an $N$-minor.
If $M$ has a spikey $3$-separator\ $P$ such that at most one element of $E(M)-E(N)$ is not in $P$, then $|E(M)| \le |E(N)| + 5$.
\end{lemma}
\begin{proof}
Since at most one element of $E(M)-E(N)$ is not in $P$, we have that $|P-E(N)| \ge 5$.
By dualising, if necessary, we may assume that there are distinct elements $a,b \in P$ such that $M \backslash a \backslash b$ has an $N$-minor, with $a \in K_i$ and $b \in K_j$ for $i \neq j$, where $\{K_1,\dotsc,K_t\}$ is a partition of $P$ such that $K_{i'} \cup K_{j'}$ is a circuit for all distinct $i',j' \in \{1,\dotsc,t\}$.
Now $M \backslash a$, $M \backslash b$ and $\co(M\backslash a \backslash b)$ are $3$-connected.
By the definition of a spikey $3$-separator, the pair $\{a,b\}$ is contained in a $4$-element cocircuit~$C^* \subseteq P$.
Let $u \in C^*-\{a,b\}$.
Then $u$ is in a series pair of $M \backslash a \backslash b$, so $M \backslash a \backslash b / u$ has an $N$-minor, and $\co(M \backslash a \backslash b/u)$ is $3$-connected.
Moreover, $M / u$ is $3$-connected.
The result then follows using the same argument as in \cite[Lemma 7.2]{BCOSW2018}.
\end{proof}
\begin{lemma}
\label{splicinglemma}
Let $\mathbb{P}$ be a partial field, and let $\mathcal{N}$ be a set of non-binary strong $\mathbb{P}$-stabilizers for $\mathcal{M}(\mathbb{P})$.
Let $M$ be an excluded minor for $\mathcal{M}(\mathbb{P})$ such that $M$ has an $\mathcal{N}$-minor, $|E(M)| \ge 13$, and $|E(M)| \ge |E(N)| + 6$ for each $N \in \mathcal{N}$.
Then there is a matroid $M'$ that is $\Delta Y$-equivalent to $M$ or $M^*$, and distinct elements $e,f \in E(M')$ such that for each $M'' \in \{M' \backslash e \backslash f, \, M' \backslash e, \, M' \backslash f\}$, the matroid $M''$ is $3$-connected, has an $\mathcal{N}$-minor, and $M'' \in \mathcal{M}(\mathbb{P})$.
\end{lemma}
\begin{proof}
Let $N \in \mathcal{N}$ such that $M$ has an $N$-minor.
By \cref{detachthm}, there exists a matroid $M'$ that is $\Delta Y$-equivalent to $M$ or $M^*$ such that either there is a pair of elements $\{e,f\}$ for which $M' \backslash e \backslash f$ is $3$-connected with an $N$-minor, or $M'$ has a spikey $3$-separator~$P$.
In the latter case, as $|E(M)| \ge 13$, at most one element of $E(M)-E(N)$ is not in $P$, so, by \cref{spikeys}, $|E(M)| \le |E(N)| + 5$; a contradiction.
We deduce that there is a pair $\{e,f\}$ such that $M'\backslash e \backslash f$ is $3$-connected with an $N$-minor.
It follows that $M'\backslash e$ and $M'\backslash f$ are $3$-connected with an $N$-minor.
Moreover, since $M'$ is an excluded minor for the class $\mathcal{M}(\mathbb{P})$, by \cref{deltaYexc}, each of $M' \backslash e$, $M'\backslash f$, and $M'\backslash e\backslash f$ is in $\mathcal{M}(\mathbb{P})$.
\end{proof}
As described in \cref{linrestrict},
when $\mathbb{P} = \mathbb{D}$ or $\mathbb{P} = \mathbb{U}_2$,
we may restrict our attention to ternary or quaternary excluded minors respectively; so it suffices to find splices that are ternary or quaternary, respectively.
\subsection{Testing}
Implementations were tested before use. In particular, the excluded-minor computation routines were checked using the known characterisation for $\mathrm{GF}(4)$ \cite{GGK2000}, and using the known excluded minors for $\mathrm{GF}(5)$-representable matroids on up to $9$ elements \cite{MR2008}. The excluded minors for dyadic matroids on up to 13 elements have previously been computed by Pendavingh; our results were also consistent with those.
Regarding the generation of matroids in $\mathcal{M}(\mathbb{P})$, the matroids that we generated were consistent with known maximum-sized $\mathbb{P}$-representable matroids for $\mathbb{P} \in \{\mathbb{D}, \mathbb{U}_2\}$ \cite{Kung1990,Kung1988,Semple1998}.
Our splicing implementation was tested by independently generating all (ternary/quaternary) matroids in $\widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$ with a pair $\{x,y\}$ such that $M\backslash x \backslash y \in \widetilde{\mathcal{M}}_\mathcal{N}(\mathbb{P})$, and ensuring that these are precisely the matroids obtained by splicing.
\section{Dyadic matroids}
\label{secdyadic}
In this section we present the results of the computation of the excluded minors for dyadic matroids on at most 15 elements.
The next lemma is a consequence of \cref{nr_exminors}, and the subsequent lemma is well known and easy to verify (see \cite[Proposition~3.1]{GOVW1998}, for example).
\begin{lemma}
\label{nou25u35dy}
Let $M$ be an excluded minor for the class of dyadic matroids.
Then, either
\begin{enumerate}
\item $M$
has a $\{F_7^-, (F_7^-)^*, P_8\}$-minor, or
\item $M$
is isomorphic to one of $U_{2,5}$, $U_{3,5}$, $F_7$, $F_7^*$, $\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, and $(\mathit{AG}(2, 3)\backslash e)^{\Delta Y}$.
\end{enumerate}
\end{lemma}
\begin{lemma}
The matroids $F_7^-$, $(F_7^-)^*$, and $P_8$ are strong $\mathbb{D}$-stabilizers.
\end{lemma}
The excluded minors for dyadic matroids are known to include
the seven matroids listed in \cref{nou25u35dy}(ii),
as well as an $8$-element matroid known as $T_8$, a $10$-element matroid known as $N_{1}$, and a $12$-element matroid known as $N_2$ (see \cite[Problem~14.7.11]{oxley}).
We computed an exhaustive list of the excluded minors on at most $15$ elements, finding one more, previously unknown, excluded minor, on $14$ elements.
This matroid, which we call $N_3$, has a reduced $\mathrm{GF}(3)$-representation as follows:
$$\begin{bmatrix}
1 & 2 & 0 & 0 & 1 & 2 & 2 \\
2 & 2 & 2 & 0 & 1 & 1 & 2 \\
0 & 2 & 0 & 0 & 1 & 1 & 2 \\
0 & 0 & 0 & 0 & 2 & 1 & 2 \\
1 & 1 & 1 & 2 & 1 & 2 & 2 \\
2 & 1 & 1 & 1 & 2 & 1 & 1 \\
2 & 2 & 2 & 2 & 2 & 1 & 0
\end{bmatrix}$$
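A quick SageMath sanity check of this representation (expected outputs are shown as comments: $N_3$ has $14$ elements and rank $7$, is $3$-connected, and, as noted below, is self-dual):
\begin{verbatim}
from sage.all import GF, Matrix, Matroid

A = Matrix(GF(3), [[1, 2, 0, 0, 1, 2, 2], [2, 2, 2, 0, 1, 1, 2],
                   [0, 2, 0, 0, 1, 1, 2], [0, 0, 0, 0, 2, 1, 2],
                   [1, 1, 1, 2, 1, 2, 2], [2, 1, 1, 1, 2, 1, 1],
                   [2, 2, 2, 2, 2, 1, 0]])
N3 = Matroid(reduced_matrix=A)
print(N3.size(), N3.rank())            # 14 7
print(N3.is_3connected())              # True
print(N3.is_isomorphic(N3.dual()))     # True
\end{verbatim}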
\begin{theorem}
\label{dyexprop}
The excluded minors for dyadic matroids on at most $15$ elements are
$U_{2,5}$, $U_{3,5}$, $F_7$, $F_7^*$, $\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, $(\mathit{AG}(2,3)\backslash e)^{\Delta Y}$, $T_8$, $N_{1}$, $N_2$, and $N_{3}$.
\end{theorem}
\begin{proof}
We exhaustively generated all $n$-element dyadic matroids that are not near-regular for $n \le 15$; see \cref{dytable}.
By \cref{nou25u35dy}, the excluded minors on at most seven elements are $U_{2,5}$, $U_{3,5}$, $F_7$, and $F_7^*$.
Let $8 \le n \le 14$, and suppose all excluded minors for dyadic matroids on fewer than $n$ elements are known.
We generated all matroids that are ternary single-element extensions of some $(n-1)$-element dyadic matroid with a $\{F_7^-,(F_7^-)^*,P_8\}$-minor.
From this list of potential excluded minors, we first filtered out those in our list of $n$-element dyadic matroids, and then also filtered out any matroids that contained, as a minor, any of the excluded minors for dyadic matroids on fewer than $n$ elements. Each remaining matroid
is an excluded minor.
On the other hand, if $M$ is an $n$-element excluded minor not listed in \cref{nou25u35dy}(ii), then, by \cref{nou25u35dy,onlyternary,seysplitcorr}, this collection of generated matroids contains at least one of $M$ and $M^*$.
Now let $n = 15$, and again suppose all excluded minors on fewer than $n$ elements are known.
We generated all $3$-connected ternary
splices of a (possibly isomorphic) pair of $(n-1)$-element dyadic matroids that are each single-element extensions of an $(n-2)$-element $3$-connected dyadic matroid with a $\{F_7^-,(F_7^-)^*,P_8\}$-minor; call this collection of generated matroids $\mathcal{S}$.
Since $n \ge |E(P_8)| + 6 = 14$, \cref{splicinglemma} implies that if $M$ is an $n$-element excluded minor, then, for some $M' \in \Delta^{(*)}(M)$, there exists a pair $\{e,f\} \subseteq E(M')$ such that $M' \backslash e$, $M' \backslash f$, and $M' \backslash \{e,f\}$ are $3$-connected and have a $\{F_7^-,(F_7^-)^*,P_8\}$-minor.
Thus $M' \in \mathcal{S}$.
(For reference, $\mathcal{S}$ contained 20632781 pairwise non-isomorphic $15$-element rank-$7$ matroids, and 8840124 pairwise non-isomorphic $15$-element rank-$8$ matroids.)
As before, from this list of potential excluded minors, we filtered out those matroids that were dyadic or contained, as a minor, any of the excluded minors for dyadic matroids on fewer than $n$ elements. For $n = 15$, no matroids remained, so there are no excluded minors on exactly $15$ elements.
\end{proof}
\begin{table}[htb]
\begin{tabular}{r|r r r r r r r r r}
\hline
$r \backslash n$ & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\
\hline
3 & 1 & 1 & 1 & & & & & & \\
4 & 1 & 7 & 24 & 52 & 60 & 44 & 20 & 7 & 2 \\
5 & & 1 & 24 & 223 & 1087 & 3000 & 5065 & 5651 & 4553 \\
6 & & & 1 & 52 & 1087 & 10755 & 57169 & 185354 & 398875 \\
7 & & & & & 60 & 3000 & 57169 & 540268 & 2986648 \\
8 & & & & & & 44 & 5065 & 185354 & 2986648 \\
9 & & & & & & & 20 & 5651 & 398875 \\
10 & & & & & & & & 7 & 4553 \\
11 & & & & & & & & & 2 \\
\hline
Total & 2 & 9 & 50 & 327 & 2294 & 16843 & 124508 & 922292 & 6780156 \\
\hline
\end{tabular}
\caption{The number of $3$-connected $n$-element rank-$r$ dyadic matroids with a $\{F_7^-,(F_7^-)^*,P_8\}$-minor, for $n \le 15$.}
\label{dytable}
\end{table}
It turns out that the list of matroids in \cref{dyexprop} is not the complete list of excluded minors for dyadic matroids.
We also found an excluded minor with 16 elements; we call this matroid $N_4$.
The following is a reduced $\mathrm{GF}(3)$-representation of $N_4$:
$$\begin{bmatrix}
1 & 0 & 1 & 1 & 1 & 1 & 2 & 1 \\
0 & 2 & 0 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 2 & 1 & 0 & 1 & 2 & 1 \\
1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 \\
2 & 0 & 2 & 1 & 0 & 0 & 2 & 1 \\
1 & 1 & 1 & 0 & 0 & 1 & 1 & 0
\end{bmatrix}$$
We found this matroid by a computer search, as follows.
Observe that the matroids $T_8$, $N_1$, $N_2$, and $N_3$ are self-dual matroids on 8, 10, 12, and 14 elements respectively, and each has a pair of disjoint circuit-hyperplanes.
Starting with the 2986648 $3$-connected rank-8 dyadic non-near-regular matroids on 15 elements, 285488 of these matroids have a circuit-hyperplane whose complement is independent. Of these, 4875 have at least one $3$-connected ternary extension to a matroid with disjoint circuit-hyperplanes. There are 288076 such matroids, but 52 are dyadic and 288023 properly contain an excluded minor for dyadic matroids. The one other matroid is $N_4$.
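The first of these filters can be sketched as follows (SageMath; both function names are ours, not library methods).
\begin{verbatim}
def circuit_hyperplanes(M):
    # Circuits that are also hyperplanes: in a rank-r matroid these are
    # exactly the circuits of size r that are closed sets.
    r = M.rank()
    return [C for C in M.circuits() if len(C) == r and M.closure(C) == C]

def has_relaxable_circuit_hyperplane(M):
    # A circuit-hyperplane whose complement is independent.
    return any(M.is_independent(M.groundset() - C)
               for C in circuit_hyperplanes(M))
\end{verbatim}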
\begin{table}[htb]
\begin{tabular}{ c c c }
\hline
$M$ & $\mathbb{P}_M$ &
$\left|\Delta(M)\right|$ \\
\hline
$U_{2,5}$ & $\mathbb{U}_2$ &
2 \\
$F_7$ & $\mathrm{GF}(2)$ &
2 \\
$\mathit{AG}(2,3) \backslash e$ & $\mathbb{S}$ &
3 \\
$T_8$ & $\mathrm{GF}(3)$ &
1 \\
$N_1$ & $\mathrm{GF}(3)$ &
1 \\
$N_2$ & $\mathrm{GF}(3)$ &
1 \\
$N_3$ & $\mathrm{GF}(3)$ &
1 \\
$N_4$ & $\mathrm{GF}(3)$
&
1 \\
\hline
\end{tabular}
\caption{Excluded minors for the class of dyadic matroids, and their universal partial fields.
We list one representative~$M$ of each $\Delta Y$-equivalence class $\Delta(M)$.%
}
\label{dyupfs}
\end{table}
Finally, using \cref{onlyternary,dyexprop}, we observe that with the exception of $U_{2,5}$ and $U_{3,5}$, each excluded minor for the class of dyadic matroids is not $\mathrm{GF}(5)$-representable, so is an excluded minor for the class of $\mathrm{GF}(5)$-representable matroids.
In \cref{dyupfs}, we provide the universal partial field for each of the known excluded minors. The matroids with universal partial field $\mathrm{GF}(3)$ are representable only over fields with characteristic three.
\section{\texorpdfstring{$2$}{2}-regular matroids}
\label{sec2reg}
We now present the results of the computation of the excluded minors for $2$-regular matroids on at most 15
elements.
The next lemma is a consequence of \cite[Lemmas~5.7 and~5.25]{OSV2000}.
\begin{lemma}
The matroids $U_{2,5}$ and $U_{3,5}$ are strong $\mathbb{U}_2$-stabilizers.
\end{lemma}
\begin{lemma}
\label{nou25u35}
Let $M$ be an excluded minor for the class of
$2$-regular matroids.
Then, either
\begin{enumerate}
\item $M$ has a $\{U_{2,5}, U_{3,5}\}$-minor, or
\item $M$ is isomorphic to one of $F_7$, $F_7^*$, $F_7^-$, $(F_7^-)^*$, $\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, $(\mathit{AG}(2, 3)\backslash e)^{\Delta Y}$, and $P_8$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $M$ has no $\{U_{2,5},U_{3,5}\}$-minor.
In particular, $M$ is not near-regular, so \cref{nr_exminors} implies that
$M$ has a minor isomorphic to one of $F_7$, $F_7^*$, $F_7^-$, $(F_7^-)^*$, $\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, $(\mathit{AG}(2,3)\backslash e)^{\Delta Y}$, and $P_8$.
It is well known that $F_7$ and $F_7^*$ are representable over a field $\mathbb{F}$ if and only if $\mathbb{F}$ has characteristic two; whereas $F_7^-$, $(F_7^-)^*$, and $P_8$ are representable over a field $\mathbb{F}$ if and only if $\mathbb{F}$ does not have characteristic two.
Moreover, $\mathit{AG}(2,3) \backslash e$ is not $\mathrm{GF}(5)$-representable \cite[Proposition~7.3]{HMvZ2011}, and hence $(\mathit{AG}(2,3)\backslash e)^*$ and $(\mathit{AG}(2,3)\backslash e)^{\Delta Y}$ are also not $\mathrm{GF}(5)$-representable, the latter by \cref{deltaYrep}.
Since each of these eight matroids is not representable over
either $\mathrm{GF}(4)$ or $\mathrm{GF}(5)$, we
deduce that $M$ does not contain one of these matroids as a proper minor, so (ii) holds, as required.
\end{proof}
By \cref{nou25u35}, in our search for excluded minors for the class of $2$-regular matroids, we can restrict our focus to matroids with a $\{U_{2,5},U_{3,5}\}$-minor.
The matroids $U_{2,6}$, $U_{4,6}$, $P_6$, $P_8$, and $P_8^=$ are not $2$-regular, as they are not $\mathrm{GF}(4)$-representable,
by \cref{gf4minors}.
Let $F_7^=$ denote the matroid obtained by relaxing a circuit-hyperplane of the non-Fano matroid $F_7^-$, as illustrated in \cref{fanosfig}.
Recall that $P_8^=$ is obtained from $P_8$ by relaxing disjoint circuit-hyperplanes; let $P_8^-$ denote the matroid obtained by relaxing just one of a pair of disjoint circuit-hyperplanes of $P_8$.
It is known that
$U_{3,6}$, $F_7^=$, and $(F_7^=)^*$ are not $2$-regular \cite[Lemmas~4.2.4 and~4.2.5]{Semple1998}; and neither is
$P_8^-$ \cite[Section 4.1]{COvZ2018}.
It turns out that all these matroids are excluded minors for the class of $2$-regular matroids.
\begin{figure}
\begin{subfigure}{0.32\textwidth}
\centering
\begin{tikzpicture}[rotate=90,yscale=0.8,line width=1pt]
\tikzset{VertexStyle/.append style = {minimum height=5,minimum width=5}}
\clip (-0.5,-4.5) rectangle (2.5,-.5);
\draw (0,-1) -- (2,-2.5) -- (0,-4);
\draw (1,-1.75) -- (0,-4);
\draw (1,-3.25) -- (0,-1);
\draw (0,-1) -- (0,-4);
\draw (2,-2.5) -- (0,-2.5);
\draw (1,-3.25) .. controls (-.35,-3.25) and (-.35,-1.75) .. (1,-1.75);
\SetVertexNoLabel
\Vertex[x=2,y=-2.5]{a2}
\Vertex[x=0.68,y=-2.5]{a3}
\Vertex[x=1,y=-3.25]{a4}
\Vertex[x=1,y=-1.75]{a5}
\Vertex[x=0,y=-2.5]{d}
\Vertex[x=0,y=-1]{e}
\Vertex[x=0,y=-4]{f}
\end{tikzpicture}
\caption{$F_7$.}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\begin{tikzpicture}[rotate=90,yscale=0.8,line width=1pt]
\tikzset{VertexStyle/.append style = {minimum height=5,minimum width=5}}
\clip (-0.5,-4.5) rectangle (2.5,-.5);
\draw (0,-1) -- (2,-2.5) -- (0,-4);
\draw (1,-1.75) -- (0,-4);
\draw (1,-3.25) -- (0,-1);
\draw (0,-1) -- (0,-4);
\draw (2,-2.5) -- (0,-2.5);
\SetVertexNoLabel
\Vertex[x=2,y=-2.5]{a2}
\Vertex[x=0.68,y=-2.5]{a3}
\Vertex[x=1,y=-3.25]{a4}
\Vertex[x=1,y=-1.75]{a5}
\Vertex[x=0,y=-2.5]{d}
\Vertex[x=0,y=-1]{e}
\Vertex[x=0,y=-4]{f}
\end{tikzpicture}
\caption{$F_7^-$.}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\begin{tikzpicture}[rotate=90,yscale=0.8,line width=1pt]
\tikzset{VertexStyle/.append style = {minimum height=5,minimum width=5}}
\clip (-0.5,-4.5) rectangle (2.5,-.5);
\draw (0,-1) -- (2,-2.5) -- (0,-4);
\draw (1,-1.75) -- (0,-4);
\draw (1,-3.25) -- (0,-1);
\draw (0,-1) -- (0,-4);
\Vertex[L=$a$,Lpos=0,LabelOut=true,x=2,y=-2.5]{a2}
\Vertex[L=$g$,Lpos=90,LabelOut=true,x=0.68,y=-2.5]{a3}
\Vertex[L=$e$,Lpos=0,LabelOut=true,x=1,y=-3.25]{a4}
\Vertex[L=$d$,Lpos=180,LabelOut=true,x=1,y=-1.75]{a5}
\Vertex[L=$f$,Lpos=-90,LabelOut=true,x=0,y=-2.75]{d}
\Vertex[L=$b$,Lpos=180,LabelOut=true,x=0,y=-1]{e}
\Vertex[L=$c$,Lpos=0,LabelOut=true,x=0,y=-4]{f}
\end{tikzpicture}
\caption{$F_7^=$.}
\end{subfigure}
\caption{Three of the excluded minors for $2$-regular matroids.}
\label{fanosfig}
\end{figure}
\begin{figure}
\begin{tikzpicture}[scale=0.39,line width=1pt]
\tikzset{VertexStyle/.append style = {minimum height=5,minimum width=5}}
\draw (-8.5,0.6) -- (6.5,0.6) -- (8.93,6.5) -- (-5.97,6.5) -- (-8.5,0.6);
\draw (-8.5,0.6) -- (-8.5,-5) -- (6.5,-5) -- (6.5,0.6);
\Vertex[Lpos=135,LabelOut=true,L=$d$,x=-0.4,y=3.1]{v1}
\Vertex[Lpos=-85,LabelOut=true,L=$c$,x=1,y=2.3]{v4}
\Vertex[LabelOut=true,L=$b$,x=3.2,y=3.2]{v6}
\Vertex[LabelOut=true,L=$a$,x=2.4,y=5.7]{v7}
\Vertex[Lpos=-60,LabelOut=true,L=$f$,x=-1.2,y=-1.6]{v2}
\Vertex[Lpos= 85,LabelOut=true,L=$g$,x=-3.0,y=-1.1]{v0}
\Vertex[Lpos=180,LabelOut=true,L=$h$,x=-4.5,y=-1.9]{v3}
\Vertex[Lpos=0,LabelOut=true,L=$e$,x=-3.0,y=-4.3]{v5}
\SetVertexNoLabel
\tikzset{VertexStyle/.append style = {shape=rectangle,fill=white}}
\Vertex[x=-6,y=0.6]{u0}
\Vertex[x=-3,y=0.6]{u1}
\Vertex[x=0.3,y=0.6]{u2}
\Vertex[x=4,y=0.6]{u3}
\Edge(u0)(v5)
\Edges(v5,u1,v7,u3)
\Edges(v5,u2,v7)
\Edges(u1,v6)
\Edges(u2,v3)
\Edge(v1)(u3)
\tikzset{EdgeStyle/.append style = {bend right=18}}
\Edge(v6)(u0)
\tikzset{EdgeStyle/.append style = {bend right=10}}
\Edge(u0)(v2)
\tikzset{EdgeStyle/.append style = {bend left=15}}
\Edge(u3)(v3)
\end{tikzpicture}
\caption{$P_8$, an excluded minor for $2$-regular matroids. Relaxing $\{e,f,g,h\}$ results in the matroid $P_8^-$; relaxing both $\{a,b,c,d\}$ and $\{e,f,g,h\}$ results in the matroid $P_8^=$.}
\label{p8fig}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[rotate=0,scale=0.5,line width=1pt]
\tikzset{VertexStyle/.append style = {minimum height=5,minimum width=5}}
\draw (13.4,-1.3) -- (-2.5,-0.8);
\draw[color=green] (9.8,-5.1) -- (-.1,-3.5);
\draw[color=orange] (-1,-0.85) -- (5.4,-9.8);
\draw[color=red] (-2.5,-0.8) -- (9.3,2.5);
\draw[color=orange] (9,2.8) -- (-1,-0.85);
\draw[color=white,line width=4pt] (2.3,2.5) -- (0.2,-3.9);
\draw[color=blue] (0.2,-3.9) -- (13.4,-1.3) -- (2.3,2.5);
\draw[color=white,line width=4pt] (-.1,-3.5) -- (6.3,5.3) -- (9.8,-5.1);
\draw[color=green] (-.1+.198,-3.5-.032) -- (-.1,-3.5);
\draw[color=green] (9.8,-5.1) -- (9.8-.99,-5.1+.16);
\draw[color=blue] (2.3,2.5) -- (0.2,-3.9);
\draw[color=white,line width=4pt] (5.4,-9.8) -- (6.3,5.3);
\draw[color=white,line width=4pt] (9.3,2.5) -- (5.65,-5.4) -- (-2.5,-0.8);
\draw[color=red] (9.3-1.18,2.5-.33) -- (9.3,2.5);
\draw[color=white,line width=4pt] (5.4,-9.8) -- (9,2.8);
\draw[color=orange] (9,2.8) -- (9-1.0,2.8-.365);
\draw[color=green] (-.1,-3.5) -- (6.3,5.3) -- (9.8,-5.1);
\draw (-1.705, -0.825) -- (-2.5,-0.8);
\draw[color=red] (9.3,2.5) -- (5.65,-5.4) -- (-2.5,-0.8);
\draw[color=orange] (5.4,-9.8) -- (9,2.8);
\draw (-2.5,-0.8) -- (6.3,5.3) -- (13.4,-1.3);
\draw (5.4,-9.8) -- (6.3,5.3);
\draw (-2.5,-0.8) -- (5.4,-9.8) -- (13.4,-1.3);
\Vertex[L=$1$,Lpos=180,LabelOut=true,x=-2.5,y=-0.8]{a1}
\Vertex[L=$3$,LabelOut=true,x=6.3,y=5.3]{a3}
\Vertex[L=$5$,LabelOut=true,x=5.4,y=-9.8]{a5}
\Vertex[L=$7$,LabelOut=true,x=13.4,y=-1.3]{a7}
\Vertex[L=$2$,Lpos=-90,LabelOut=true,x=1.05,y=-3.7]{a2}
\Vertex[L=$4$,Lpos=90,LabelOut=true,x=5.4,y=1.5]{a4}
\Vertex[L=$6$,Lpos=90,LabelOut=true,x=.6,y=-2.6]{a6}
\Vertex[L=$8$,LabelOut=true,x=8.1,y=-.1]{a8}
\end{tikzpicture}
\caption{$\mathit{TQ}_8$, another excluded minor for $2$-regular matroids.}
\label{tp8fig}
\end{figure}
There is one more excluded minor for the class, which we now describe.
We denote this matroid $\mathit{TQ}_8$, and let $E(\mathit{TQ}_8) = \{0,1,\dotsc,7\}$.
The matroid $\mathit{TQ}_8$ is a rank-$4$ sparse paving matroid with eight non-spanning circuits
$\big\{\{i, i+2, i+4, i+5\} : i \in \{0,1,\dotsc,7\}\big\}$, working modulo 8.
It is illustrated in \cref{tp8fig}.
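For reference, the following sketch constructs $\mathit{TQ}_8$ in SageMath from this description; since $\mathit{TQ}_8$ is sparse paving, its bases are exactly the $4$-element subsets other than the eight non-spanning circuits.
\begin{verbatim}
from itertools import combinations
from sage.all import Matroid

E = list(range(8))
nsc = {frozenset({i, (i + 2) % 8, (i + 4) % 8, (i + 5) % 8}) for i in E}
TQ8 = Matroid(groundset=E,
              bases=[B for B in combinations(E, 4)
                     if frozenset(B) not in nsc])
print(TQ8.is_valid())                                 # True: a matroid
print(sum(1 for C in TQ8.circuits() if len(C) == 4))  # 8, as described
\end{verbatim}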
\begin{theorem}
\label{2regexminors}
The excluded minors for $2$-regular matroids on at most $15$ elements are
$U_{2,6}$, $U_{3,6}$, $U_{4,6}$, $P_6$,
$F_7$, $F_7^*$, $F_7^-$, $(F_7^-)^*$, $F_7^=$, $(F_7^=)^*$,
$\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, $(\mathit{AG}(2,3)\backslash e)^{\Delta Y}$, $P_8$, $P_8^-$, $P_8^=$, and $\mathit{TQ}_8$.
\end{theorem}
\begin{proof}
We exhaustively generated all $n$-element $2$-regular matroids with a $\{U_{2,5},U_{3,5}\}$-minor for $n \le 15$; see \cref{2regtable}.
By \cref{nou25u35}, any excluded minor has at least six elements.
Let $6 \le n \le 15$, and suppose all excluded minors for $2$-regular matroids on fewer than $n$ elements are known.
For $6 \le n \le 8$, we generated all single-element extensions of some $(n-1)$-element $2$-regular matroid with a $\{U_{2,5},U_{3,5}\}$-minor.
By \cref{nou25u35,seysplitcorr}, if $M$ is an $n$-element excluded minor not listed in \cref{nou25u35}(ii), then this collection of generated matroids contains at least one of $M$ and $M^*$.
For $8 < n \le 13$, we generated all matroids that are quaternary single-element extensions of some $(n-1)$-element $2$-regular matroid with a $\{U_{2,5},U_{3,5}\}$-minor.
For each of these potential excluded minors, we filtered out any matroids in the list of generated $2$-regular matroids, or any matroid containing, as a minor, one of the excluded minors for $2$-regular matroids on fewer than $n$ elements.
Any matroid remaining after this process is an excluded minor.
On the other hand, if $M$ is an $n$-element excluded minor not listed in \cref{nou25u35}(ii), then, by \cref{nou25u35,onlyquaternary,seysplitcorr}, the collection of generated potential excluded minors contains at least one of $M$ and $M^*$.
Finally, let $n \in \{14,15\}$.
We generated all $3$-connected quaternary
splices of a (possibly isomorphic) pair of $(n-1)$-element $2$-regular matroids that are each single-element extensions of an $(n-2)$-element $3$-connected $2$-regular matroid with a $\{U_{2,5},U_{3,5}\}$-minor; call this collection of generated matroids $\mathcal{S}$.
By \cref{splicinglemma}, if $M$ is an $n$-element excluded minor not listed in \cref{nou25u35}(ii), then, for some $M' \in \Delta^{(*)}(M)$, there exists a pair $\{e,f\} \subseteq E(M')$ such that $M' \backslash e$, $M' \backslash f$, and $M' \backslash \{e,f\}$ are $3$-connected and have a $\{U_{2,5},U_{3,5}\}$-minor.
Thus $M' \in \mathcal{S}$.
(For reference, $\mathcal{S}$ consisted of 29383778 pairwise non-isomorphic $15$-element rank-$7$ matroids, and 12949820 pairwise non-isomorphic $15$-element rank-$8$ matroids.)
As before, for each such potential excluded minor $M'$, we filtered out $M'$ if it is $2$-regular or if it contains, as a minor, any of the excluded minors for $2$-regular matroids on fewer than $n$ elements. The matroids remaining over all $n \le 15$ are precisely those listed in the statement.
\end{proof}
\Cref{2regtable} records the number of pairwise non-isomorphic $n$-element rank-$r$ matroids that are $2$-regular but not near-regular, for $n \le 15$.
Note that the two $10$-element rank-$3$ $2$-regular matroids are the maximum-sized $2$-regular matroids known as $T_3^2$ and $S_{10}$ \cite{Semple1998}.
\begin{table}[ht]
\begin{tabular}{r|r r r r r r r r r r r}
\hline
$r \backslash n$ & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\
\hline
2 & 1 & & & & & & & & & & \\
3 & 1 & 1 & 2 & 4 & 3 & 2 & & & & & \\
4 & & & 2 & 17 & 62 & 113 & 132 & 89 & 45 & 14 & 5 \\
5 & & & & 4 & 62 & 502 & 2156 & 5357 & 8337 & 8685 & 6338 \\
6 & & & & & 3 & 113 & 2156 & 18593 & 88191 & 258318 & 511593 \\
7 & & & & & & 2 & 132 & 5357 & 88191 & 732667 & 3637691 \\
8 & & & & & & & & 89 & 8337 & 258318 & 3637691 \\
9 & & & & & & & & & 45 & 8685 & 511593 \\
10 & & & & & & & & & & 14 & 6338 \\
11 & & & & & & & & & & & 5 \\
\hline
Total & 2 & 1 & 4 & 25 & 130 & 732 & 4576 & 29486 & 193146 & 1266701 & 8311254 \\
\hline
\end{tabular}
\caption{The number of $3$-connected $2$-regular $n$-element rank-$r$ matroids with a $\{U_{2,5},U_{3,5}\}$-minor, for $n \le 15$.}
\label{2regtable}
\end{table}
We conjecture that there are no excluded minors for the class of $2$-regular matroids on more than 15 elements.
\begin{conjecture}
\label{2regexminorconj}
A matroid $M$ is $2$-regular if and only if $M$ has no minor isomorphic to
$U_{2,6}$, $U_{3,6}$, $U_{4,6}$, $P_6$,
$F_7$, $F_7^*$, $F_7^-$, $(F_7^-)^*$, $F_7^=$, $(F_7^=)^*$,
$\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, $(\mathit{AG}(2,3)\backslash e)^{\Delta Y}$, $P_8$, $P_8^-$, $P_8^=$, and $\mathit{TQ}_8$.
\end{conjecture}
\begin{table}[htb]
\begin{tabular}{ c c c c }
\hline
$M$ & $\mathbb{P}_M$ & $\max\{i : M \in \mathcal{M}(\mathbb{H}_i)\}$ & $\left|\Delta(M)\right|$ \\
\hline
$U_{2,6}$ & $\mathbb{U}_3$ & 6 & 3 \\
$U_{3,6}$ & $\mathbb{P}_{U_{3,6}}$& 6 & 1 \\
$F_7$ & $\mathrm{GF}(2)$ & -- & 2 \\
$F_7^-$ & $\mathbb{D}$ & 2 & 2 \\
$F_7^=$ & $\mathbb{K}_2$ & 2 & 2 \\
$\mathit{AG}(2,3)\backslash e$ & $\mathbb{S}$ & -- & 3 \\
$P_8$ & $\mathbb{D}$ & 2 & 1 \\
$P_8^-$ & $\mathbb{K}_2$ & 2 & 1 \\
$P_8^=$ & $\mathbb{H}_4$ & 4 & 1 \\
$\mathit{TQ}_8$ & $\mathbb{K}_2$ & 2 & 1 \\
\hline
\end{tabular}
\caption{The excluded minors for $2$-regular matroids on at most 15 elements, their universal partial fields, and how many inequivalent $\mathrm{GF}(5)$-representations they have.
We list one representative~$M$ of each $\Delta Y$-equivalence class $\Delta(M)$.%
}
\label{u2upfs}
\end{table}
We also calculated the universal partial fields for each excluded minor for the class of $2$-regular matroids, as shown in \cref{u2upfs}. The only as-yet-undefined partial field is:
\begin{multline*}
\mathbb{P}_{U_{3,6}} = (\mathbb{Q}(\alpha,\beta,\gamma,\delta), \langle-1, \alpha, \beta, \gamma, \delta, \alpha-1, \beta-1, \gamma-1, \delta-1, \\ \alpha-\beta, \gamma-\delta, \beta-\delta, \alpha-\gamma, \alpha\delta-\beta\gamma, \alpha\delta-\beta\gamma-\alpha+\beta+\gamma-\delta\rangle),
\end{multline*}
where $\alpha$, $\beta$, $\gamma$, and $\delta$ are indeterminates.
Note that there are no partial-field homomorphisms from $\mathbb{U}_3$ or $\mathbb{H}_4$ to $\mathrm{GF}(4)$, from $\mathbb{D}$ to fields of characteristic two, or from $\mathbb{S}$ to $\mathrm{GF}(5)$.
Thus, of the 17 matroids appearing in \cref{2regexminors} (and \cref{u2upfs}), all but $U_{3,6}$, $F_7^=$, $(F_7^=)^*$, $P_8^-$, and $\mathit{TQ}_8$ are representable over neither $\mathrm{GF}(4)$ nor $\mathrm{GF}(5)$.
On the other hand,
we have the following:
\begin{lemma}
\label{special4guys}
The matroids $U_{3,6}$, $F_7^=$, $(F_7^=)^*$, $P_8^-$ and $\mathit{TQ}_8$ are
$\mathbb{K}_2$-representable, and
representable over all fields of size at least four.
\end{lemma}
\begin{proof}
It suffices to show that each of these matroids is $\mathbb{K}_2$-representable, and
this follows directly from the universal partial fields calculations given in \cref{u2upfs}.
Alternatively,
observe that
$$\begin{bmatrix}
1 & 1 & 1 \\
1 & \alpha & \beta \\
1 & \gamma & \delta \\
\end{bmatrix}$$
is a
$\mathbb{P}_{U_{3,6}}$-representation of $U_{3,6}$.
Let $\phi : \mathbb{P}_{U_{3,6}} \rightarrow \mathbb{K}_2$ be given by $\phi(\alpha) = -\alpha$, $\phi(\beta) = -1/\alpha$, $\phi(\gamma) = (\alpha-1)/\alpha$, and $\phi(\delta) = 1-\alpha$.
It is easily verified that $\phi$ is a partial-field homomorphism.
It is also easy to check that the following are reduced
$\mathbb{K}_2$-representations for $F_7^=$, $\mathit{TQ}_8$, and $P_8^-$, respectively (labelled as in \cref{fanosfig,tp8fig,p8fig}, where for $P_8^-$, we relax $\{e,f,g,h\}$).
\begin{multicols}{2}
$$\kbordermatrix{
& d & e & f & g \\
a & 1 & 1 & 0 & 1 \\
b & 1 & 0 & 1 & 1 \\
c & 0 & 1 & \alpha & 1
}$$
$$\kbordermatrix{
& 8 & 6 & 4 & 2\\
1 & 0 & \alpha & 1 & 1\\
7 & 1 & 0 & \alpha & \alpha - 1\\
5 & 1 & \alpha & 0 & \alpha \\
3 & 1 & \alpha - 1 & 1 & 0
}$$
\end{multicols}
$$\kbordermatrix{
& d & e & g & h\\
a & 1 & 1 & 1 & \alpha+1\\
b & 1 & 0 & \alpha+1 & \alpha+1 \\
c & 1 & -\alpha & 1 & 0\\
f & 0 & 1 & 1 & 1
}$$
\end{proof}
\begin{corollary}
Let $M$ be an excluded minor for the class of matroids representable over all fields of size at least four.
Suppose that \cref{2regexminorconj} holds, or $|E(M)| \le 15$.
Then, either
\begin{enumerate}
\item $M$ has a proper $\{U_{3,6}, F_7^=, (F_7^=)^*, P_8^-, \mathit{TQ}_8\}$-minor, or
\item $M$ is isomorphic to one of $U_{2,6}$, $U_{4,6}$, $P_6$, $F_7$, $F_7^*$, $F_7^-$, $(F_7^-)^*$, $\mathit{AG}(2,3)\backslash e$, $(\mathit{AG}(2,3)\backslash e)^*$, $(\mathit{AG}(2, 3)\backslash e)^{\Delta Y}$, $P_8$, or $P_8^=$.
\end{enumerate}
\end{corollary}
Finally, we remark on the number of inequivalent $\mathrm{GF}(5)$-representations that the excluded minors for $2$-regular matroids possess.
As there is a partial-field homomorphism from $\mathbb{U}_3$ to $\mathbb{H}_5$~\cite{vanZwam2009}, and
$\phi : \mathbb{P}_{U_{3,6}} \rightarrow \mathbb{U}_3$ given by $\phi(\alpha) = \frac{\alpha-1}{\alpha}$, $\phi(\beta) = \frac{\gamma-1}{\gamma}$, $\phi(\gamma) = \frac{1-\alpha}{\beta-\alpha}$, and $\phi(\delta) = \frac{1-\gamma}{\beta-\gamma}$
is a partial-field homomorphism, the matroids $U_{2,6}$ and $U_{3,6}$ have precisely six inequivalent $\mathrm{GF}(5)$-representations.
For $\mathbb{P} \in \{\mathbb{D}, \mathbb{K}_2\}$, there is a partial-field homomorphism from $\mathbb{P}$ to $\mathbb{H}_2$ but none from $\mathbb{P}$ to $\mathbb{H}_3$~\cite{vanZwam2009}, so $F_7^-$, $F_7^=$, $P_8$, $P_8^-$, and $\mathit{TQ}_8$ have precisely two inequivalent $\mathrm{GF}(5)$-representations.
As the universal partial field of $P_8^=$ is $\mathbb{H}_4$, the matroid $P_8^=$ has precisely four inequivalent $\mathrm{GF}(5)$-representations.
\bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro}
Stochastic linear bandit is a classical problem of online learning and decision-making with many influential applications, e.g., A/B testing \citep{claeys2021dynamic}, recommendation systems \citep{chu2011contextual}, advertisement placements \citep{wang2021hybrid}, clinical trials \citep{moradipari2020stage}, hyperparameter tuning \citep{alieva2021robust}, and new material discovery \citep{katz2020empirical}.
More formally, a stochastic bandit is a sequential game between an agent who chooses a sequence of actions $x_0,...,x_{T-1}\in\mathcal{X}$ and nature who decides on a sequence of noisy observations (rewards) $y_0,...,y_{T-1}$ according to $y_t = f_0(x_t) + \textit{noise}$ for some underlying function $f_0$. The goal of the learner is to minimize the \emph{cumulative regret} the agent experiences relative to an oracle who knows the best action ahead of time, i.e.,
$$
R_T(x_0,...,x_{T-1}) = \sum_{t=0}^{T-1} r_t = \sum_{t=0}^{T-1} \max_{x\in \mathcal{X}} f_0(x) - f_0(x_t),
$$
where $r_t$ is called \emph{instantaneous regret}.
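To make the bookkeeping concrete, the following Python sketch (ours; the reward function and action grid are hypothetical) computes $R_T$ for a played action sequence over a finite action set.
\begin{verbatim}
import numpy as np

def cumulative_regret(f0, action_set, chosen_actions):
    # R_T = sum_t ( max_x f0(x) - f0(x_t) )
    f_star = max(f0(x) for x in action_set)
    return sum(f_star - f0(x_t) for x_t in chosen_actions)

# Hypothetical example: quadratic reward on a 1-d action grid.
actions = np.linspace(-1.0, 1.0, 201)
f0 = lambda x: 1.0 - (x - 0.3) ** 2
played = [-0.5, 0.1, 0.3, 0.3]   # T = 4 rounds
print(cumulative_regret(f0, actions, played))  # ~0.68
\end{verbatim}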
Despite being highly successful in the wild, existing theory for stochastic linear bandits (or more generally learning-oracle based bandit problems \citep{foster2018practical,foster2020beyond}) relies on a \emph{realizability} assumption, i.e., the learner is given access to a function class $\mathcal{F}$ such that the true expected reward $f_0: \mathcal{X}\rightarrow \mathbb{R}$ satisfies $f_0\in\mathcal{F}$. Realizability is considered one of the strongest and most restrictive assumptions in the standard statistical learning setting, but in linear bandits,
all known attempts to deviate from the realizability assumption result in a regret that grows linearly with $T$ \citep{ghosh2017misspecified,lattimore2020learning,zanette2020learning,neu2020efficient,bogunovic2021misspecified,krishnamurthy2021tractable}.
In practical applications, it is often observed that feature-based representation of the actions with function approximations in
estimating the reward can result in very strong policies even if the estimated reward functions are far from
being correct \citep{foster2018practical}.
So what went wrong? The critical intuition we rely on is the following:
\begin{center}
\begin{minipage}{6in}
\centering
\texttt{
It should be sufficient for the estimated reward function to clearly \emph{differentiate} good actions from bad ones, rather than requiring it to perfectly estimate the rewards numerically.}
\end{minipage}
\end{center}
\textbf{Contributions.}
In this paper, we formalize this intuition by defining a new family of misspecified bandit problems based on a condition that adjusts the need for an accurate approximation pointwise at every $x\in\mathcal{X}$ according to the suboptimality gap at $x$. Unlike the existing misspecified linear bandits problems with a linear regret, our problem admits a nearly optimal $\tilde{O}(\sqrt{T})$ regret despite being heavily misspecified. Specifically:
\begin{itemize}
\item We define $\rho$-\emph{gap-adjusted misspecified} ($\rho$-GAM) function approximations and characterize how they preserve important properties of the true function that are relevant for optimization.
\item We show that the classical LinUCB algorithm \citep{abbasi2011improved} can be used \emph{as is} (up to some mild hyperparameters) to achieve an $\tilde{O}(\sqrt{T})$ regret under a moderate level of gap-adjusted misspecification ($\rho \leq O(1/\sqrt{\log T})$). In comparison, the regret bound one can obtain under the corresponding uniform misspecification setting is only $\tilde{O}(T/\sqrt{\log T})$. This represents an exponential improvement in the average regret metric $R_T/T$.
\end{itemize}
To the best of our knowledge, the suboptimality gap-adjusted misspecification problem was not studied before and we are the first to obtain $\sqrt{T}$-style regrets without a realizability assumption.
\textbf{Technical novelty.} Due to misspecification, technical challenges appear in bounding the instantaneous regret and the parameter uncertainty region. We tackle these challenges with a self-bounding trick, i.e., bounding the instantaneous regret in terms of the instantaneous regret itself, which can be of independent interest in other settings, e.g., Gaussian process bandit optimization and reinforcement learning.
\section{Related Work}\label{sec:rw}
The problem of linear bandits was first introduced in \citet{abe1999associative}. Then \citet{auer2002finite} proposed the upper confidence bound technique to study linear bandits with a finite number of actions. Based on it, \citet{dani2008stochastic} proposed an algorithm based on confidence ellipsoids, and \citet{abbasi2011improved} later simplified the proof with a novel self-normalized martingale bound. \citet{chu2011contextual} proposed a simpler and more robust linear bandit algorithm and showed that the $\tilde{O}(\sqrt{d T})$ regret cannot be improved beyond a polylog factor. \citet{li2019nearly} further improved the regret upper and lower bounds, characterizing the minimax regret up to an iterated logarithmic factor. See \citet{lattimore2020bandit} for a detailed survey of linear bandits.
In terms of misspecification, \citet{ghosh2017misspecified} first studied the misspecified linear bandit with a fixed action set. They found that LinUCB \citep{abbasi2011improved} is not robust when misspecification is large. They showed that in a favourable case, when one can test the linearity of the reward function, their RLB algorithm is able to switch between a linear bandit algorithm and a finite-armed bandit algorithm to address the misspecification issue, achieving $\tilde{O}(\min \{\sqrt{K},d\}\sqrt{T})$ regret, where $K$ is the number of arms.
The most studied setting of model misspecification is uniform misspecification where the $\ell_\infty$ distance between the best-in-class function and the true function is always upper bounded by some parameter $\epsilon$, i.e.,
\begin{definition}[$\epsilon$-uniform misspecification]
We say function class $\mathcal{F}$ is an $\epsilon$-uniform misspecified approximation of $f_0$ if there exists $f\in \mathcal{F}$ such that $\sup_{x\in\mathcal{X}}|f(x) - f_0(x)| \leq \epsilon$.
\end{definition}
Under this definition, \citet{lattimore2020learning} proposed an optimal-design-based phased elimination algorithm for misspecified linear bandits and achieved $\tilde{O}(d\sqrt{T} + \epsilon \sqrt{d} T)$ regret when the number of actions is infinite. They also found that LinUCB with a modified confidence band achieves the same regret. With the same misspecification model, \citet{foster2020beyond} studied contextual bandits with a regression oracle, \citet{neu2020efficient} studied multi-armed linear contextual bandits, and \citet{zanette2020learning} studied misspecified contextual linear bandits after a reduction of the algorithm. All of these results suffer from linear regrets. Later, \citet{bogunovic2021misspecified} studied the misspecified Gaussian process bandit optimization problem and achieved $\tilde{O}(d\sqrt{T} + \epsilon \sqrt{d} T )$ regret when a linear kernel is used in the Gaussian process. Moreover, their lower bound shows that the $\tilde{\Omega}(\epsilon T)$ term is unavoidable in this setting.
Besides uniform misspecification, some works have considered different definitions of misspecification. \citet{krishnamurthy2021tractable} define the misspecification error as the expected squared error between the true function and the best-in-class function, where the expectation is taken over the distributions of the context and action spaces. \citet{foster2020adapting} considered average misspecification, which is weaker than uniform misspecification and allows a tighter regret bound. However, these works also have linear regrets.
Our work is different from all of the related work mentioned above because we work under a newly defined misspecification condition and show that LinUCB is a no-regret algorithm in this case.
Model misspecification is naturally addressed in the related \emph{agnostic} contextual bandits setting \citep{agarwal2014taming}, but these approaches typically require the action space to be finite, thus not directly applicable to our problem. In addition, empirical evidence \citep{foster2018practical} suggests that the regression oracle approach works better in practice than the agnostic approach even if realizability cannot be verified.
\section{Preliminaries}\label{sec:pre}
\subsection{Notations}\label{sec:notation}
Let $[n]$ denote the integer set $\{1,2,...,n\}$. The algorithm runs in $T$ rounds in total. Let $f_0$ denote the true function, so the maximum function value is defined as $f^* = \max_{x \in \mathcal{X}} f_0(x)$ and the maximum point is defined as $x^* = \mathop{\mathrm{argmax}}_{x \in \mathcal{X}} f_0(x)$. Let $\mathcal{X} \subset \mathbb{R}^d$ and $\mathcal{Y} \subset \mathbb{R}$ denote the domain and range of $f_0$. We use $\mathcal{W}$ to denote the parameter class of a family of linear functions $\mathcal{F} := \{f_w: \mathcal{X} \rightarrow \mathcal{Y}|w \in \mathcal{W}\}$ where $f_w(x)=w^\top x$. Define $w_*$ as the parameter of the best linear approximation. We assume $\|w\|_2 \leq C_w$ for all $w \in \mathcal{W}$ and $\|x\|_2 \leq C_b$ for all $x \in \mathcal{X}$.
For a vector $x$, its $\ell_2$ norm is denoted by $\|x\|_2 = \sqrt{\sum_{i=1}^d x^2_i}$ and for a matrix $A$ its operator norm is denoted by $\|A\|_\mathrm{op}$. For a vector $x$ and a square matrix $A$, define $\|x\|^2_A = x^\top A x$.
\subsection{Problem Setup}\label{sec:setup}
We consider the following optimization problem:
\begin{align*}
x_* = \mathop{\mathrm{argmax}}_{x \in \mathcal{X}} f_0(x),
\end{align*}
where $f_0$ is the true function, which might not be linear in $\mathcal{X}$. We want to use a linear function $f_w=w^\top x\in\mathcal{F}$ to approximate and maximize $f_0$. At time $0\leq t \leq T-1$, after querying a data point $x_t$, we receive noisy feedback:
\begin{align}
y_t = f_0(x_t) + \eta_t, \label{eq:obs}
\end{align}
where $\eta_t$ is independent, zero-mean, and $\sigma$-sub-Gaussian noise.
The major highlight of our study is that we do not rely on the popular \emph{realizability} assumption (i.e., $f_0\in\mathcal{F}$) that is frequently made in the existing function approximation literature. Instead, we propose the following gap-adjusted misspecification condition.
\begin{definition}[$\rho$-gap-adjusted misspecification]\label{def:lm}
We say a function $f$ is a $\rho$-gap-adjusted misspecified (or $\rho$-GAM in short) approximation of $f_0$ if
for parameter $0 \leq \rho < 1$,
\begin{align*}
\sup_{x \in \mathcal{X}} \left| \frac{f(x) - f_0(x)}{f^* - f_0(x)}\right|\leq \rho.\label{eq:local}
\end{align*}
We say function class $\mathcal{F}=\{f_w | w\in\mathcal{W}\}$ satisfies $\rho$-GAM for $f_0$, if
there exists $w_*\in\mathcal{W}$ such that $f_{w_*}$ is a $\rho$-GAM approximation of $f_0$.
\end{definition}
Observe that when $\rho = 0$, this recovers the standard realizability assumption, but when $\rho>0$ it could cover many misspecified function classes.
Figure~\ref{fig:example} shows a $1$-dimensional example with $f_w(x)= 0.75x+0.5$ and a piecewise-linear function $f_0(x)$ for which $f_w$ satisfies the gap-adjusted misspecification condition. With Definition~\ref{def:lm}, we have the following proposition.
\begin{figure}[t]
\centering
\subfigure[$\rho$-gap-adjusted misspecification]{\label{fig:example}\includegraphics[width=0.45\linewidth]{example.pdf}}
\subfigure[Weak $\rho$-gap-adjusted misspecification]{\label{fig:example2}\includegraphics[width=0.45\linewidth]{example_weak.pdf}}
\caption{(a): An example of $\rho$-gap-adjusted misspecification (Definition \ref{def:lm}) in $1$-dimension where $\rho=0.7$. The blue line shows a non-linear true function and the gray region shows the gap-adjusted misspecified function class. Note that the vertical range of the gray region at a point $x$ depends on the suboptimality gap. For example, at $x=1$ the suboptimality gap is $2$ and the vertical range is $4\rho=2.8$. The red line shows a feasible linear function that is able to optimize the true function by taking $x_*=2$. (b): An example of weak $\rho$-gap-adjusted misspecification (Definition \ref{def:lm_weak}) in $1$-dimension where $\rho=0.7$. The difference to Figure \ref{fig:example} is that one can shift the qualifying approximation arbitrarily up or down, and the specified model only has to $\rho$-GAM approximate $f_0$ up to an additive constant factor.}
\label{fig:main}
\end{figure}
\begin{proposition}\label{prop:perservation}
Let $f$ be
a $\rho$-GAM approximation of $f_0$ (Definition~\ref{def:lm}). Then it holds:
\begin{itemize}
\item (Preservation of maximizers) $$\mathop{\mathrm{argmax}}_{x}f(x) =\mathop{\mathrm{argmax}}_{x}f_{0}(x).$$
\item (Preservation of max value) $$\max_{x\in\mathcal{X}}f(x)=f^*.$$
\item (Self-bounding property) $$|f(x) - f_0(x)| \leq \rho (f^* - f_0(x)) = \rho r(x).$$
\end{itemize}
\end{proposition}
This tells us that $f$ and $f_0$ share the same global maximizers and the same global maximum value whenever Definition \ref{def:lm} is satisfied, while allowing $f$ and $f_0$ to differ (potentially by a lot) at other locations. Therefore, Definition~\ref{def:lm} is a ``local'' assumption that does not require $f$ to be uniformly close to $f_0$ (e.g., ``uniform'' misspecification assumes $\sup_{x\in\mathcal{X}}|f(x)-f_0(x)|\leq \rho$). The proof of Proposition \ref{prop:perservation} is given in Appendix \ref{sec:pres}.
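To make the definition concrete, here is a small numerical sketch (ours, not from the paper) that estimates the smallest feasible $\rho$ on a finite grid; the piecewise-linear $f_0$ below is a hypothetical stand-in consistent with the gap values quoted in the caption of Figure~\ref{fig:example}.
\begin{verbatim}
import numpy as np

def gam_rho(f, f0, xs):
    # Smallest rho with |f(x) - f0(x)| <= rho*(f* - f0(x)) on grid xs.
    f0v = np.array([f0(x) for x in xs])
    fv  = np.array([f(x) for x in xs])
    gaps = f0v.max() - f0v            # suboptimality gaps
    num  = np.abs(fv - f0v)
    mask = gaps > 0
    if np.any(num[~mask] > 1e-9):     # disagreement at a maximizer
        return np.inf
    return (num[mask] / gaps[mask]).max()

# Hypothetical stand-in for the left branch of Figure 1(a):
f0 = lambda x: 2.0 - 2.0 * abs(x - 2.0)   # maximizer x* = 2, f* = 2
fw = lambda x: 0.75 * x + 0.5             # the plotted linear function
print(gam_rho(fw, f0, np.linspace(0.0, 2.0, 401)))   # 0.625 < 0.7
\end{verbatim}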
In addition, we can modify Definition~\ref{def:lm} with a slightly weaker condition that only requires $\mathop{\mathrm{argmax}}_{x}f(x) =\mathop{\mathrm{argmax}}_{x}f_{0}(x)$
but not necessarily $\max_{x\in\mathcal{X}}f(x)=f^*$.
\begin{definition}[Weak $\rho$-gap-adjusted misspecification]\label{def:lm_weak}
Denote $f_w^*=\max_{x\in\mathcal{X}} f(x)$. Then we say $f$ is a weak $\rho$-gap-adjusted misspecified approximation of $f_0$ for a parameter $0 \leq \rho < 1$ if:
\begin{align*}
\sup_{x \in \mathcal{X}} \left| \frac{f(x) - f_w^*+f^*-f_0(x)}{f^* - f_0(x)}\right|\leq \rho.
\end{align*}
\end{definition}
See Figure \ref{fig:example2} for an example satisfying Definition \ref{def:lm_weak}, in which there is a constant gap between $f^*_w$ and $f^*$.
The idea of this weaker assumption is that we can always extend the function class by adding a single offset parameter $c$ w.l.o.g. to learn the constant gap $f^* - f_w^*$. In the linear case, this amounts to homogenizing the feature vector by appending $1$. For this reason, we stick to Definition~\ref{def:lm} and linear function approximation for conciseness and clarity in the main paper. See Appendix \ref{sec:weak} for formal statements and Appendix \ref{sec:weak_regret} for proofs of regret bound of linear bandits under Definition~\ref{def:lm_weak}.
Note that both Definition~\ref{def:lm} and Definition~\ref{def:lm_weak} are defined generically which do not require any assumptions on the parametric form of $f$. While we focus on the linear bandit setting in this paper, this notion can be considered for arbitrary function approximation learning problems.
\subsection{Assumptions}\label{sec:ass}
\begin{assumption}[Boundedness]\label{ass:boundedness}
For any $x\in\mathcal{X}$, $\|x\|_2\leq C_b$. For any $w\in\mathcal{W}$, $\|w\|_2\leq C_w$. Moreover, for any $x,\tilde{x}\in\mathcal{X}$, the true expected reward function satisfies $|f_0(x) - f_0(\tilde{x})| \leq F$.
\end{assumption}
These are mild assumptions that we assume for convenience. Relaxations of these are possible but not the focus of this paper. Note that the additional assumption is not required when $f_0$ is realizable.
\begin{assumption}\label{ass:unique}
Suppose $\mathcal{X}\subset\mathbb{R}^d$ is a compact set, and all the global maximizers of $f_0$ lie on a $(d-1)$-dimensional hyperplane, i.e., $\exists a\in\mathbb{R}^d,b\in \mathbb{R}$, s.t.
\begin{align*}
\mathop{\mathrm{argmax}}_{x\in\mathcal{X}}f_{0}(x)\subset \{x\in\mathbb{R}^d: x^\top a=b\}.
\end{align*}
\end{assumption}
For instance, when $d=1$, the above reduces to $f_0$ having a unique maximizer. This is a compatibility assumption for Definition~\ref{def:lm}: the maximizers of any non-constant linear function lie on a hyperplane, so if $f_0$ violated Assumption~\ref{ass:unique}, no linear function could satisfy Definition \ref{def:lm}.
In addition, to obtain an $\tilde{O}(\sqrt{T})$ regret, for any finite sample $T$, we require the following condition.
\begin{assumption}[Low misspecification]\label{ass:rho}
The linear function class is a $\rho$-GAM approximation of $f_0$ with
\begin{align}
\rho < \frac{1}{8 d \sqrt{\log \left(1 + \frac{T C^2_b C^2_w}{d \sigma^2}\right)}} = O\left( \frac{1}{d\sqrt{\log T}}\right).
\end{align}
\end{assumption}
This condition is required for technical reasons. Relaxing it for LinUCB may require fundamental breakthroughs that knock out logarithmic factors from its regret analysis; this will be further clarified in the proof. In general, however, we conjecture that this condition is not needed and that an algorithm, yet to be designed, can achieve $\tilde{O}(\sqrt{T}/(1-\rho))$ regret for any $\rho < 1$.
While this assumption may suggest that we still require realizability in a truly asymptotic world, handling an $O(1/\sqrt{\log T})$ level of misspecification is highly non-trivial in the finite-sample setting. For instance, if $T$ is a trillion, $1/\sqrt{\log (10^{12})} \approx 0.19$. This means that for most practical cases, LinUCB is able to tolerate a constant level of misspecification under the GAM model.
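For concreteness, the quoted constant and the full tolerance in Assumption~\ref{ass:rho} can be computed directly (a short Python sketch of ours; the instance parameters are hypothetical):
\begin{verbatim}
import math

def rho_bound(d, T, Cb=1.0, Cw=1.0, sigma=1.0):
    # RHS of Assumption 3.
    return 1.0 / (8 * d * math.sqrt(
        math.log(1 + T * Cb**2 * Cw**2 / (d * sigma**2))))

print(1 / math.sqrt(math.log(1e12)))  # ~0.19, the constant in the text
print(rho_bound(d=10, T=10**6))       # tolerance for a concrete instance
\end{verbatim}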
\subsection{LinUCB Algorithm}\label{sec:alg}
We will focus on analyzing the classical Linear Upper Confidence Bound (LinUCB) algorithm due to \citet{dani2008stochastic} and \citet{abbasi2011improved}, shown below.
\begin{algorithm}[!htbp]
\caption{LinUCB \citep{abbasi2011improved}}
\label{alg:linucb}
{\bf Input:}
Predefined sequence $\beta_t$ for $t=1,2,3,...$ as in eq. \eqref{eq:beta_t};
Set $\lambda=\sigma^2/C^2_w$ and $\mathrm{Ball}_0 = \mathcal{W}$.
\begin{algorithmic}[1]
\FOR{$t = 0,1,2,... $}
\STATE Select $x_t=\mathop{\mathrm{argmax}}_{x \in \mathcal{X}} \max_{w \in \mathrm{Ball}_t} w^\top x$.
\STATE Observe $y_t = f_0(x_t) + \eta_t$.
\STATE Update
\vspace{-1em}
\begin{align}
\Sigma_{t+1} = \lambda I + \sum_{i=0}^{t} x_i x^\top_i \mathrm{where}\ \Sigma_0 = \lambda I.\label{eq:sigma_t}
\end{align}
\vspace{-1em}
\STATE Update
\vspace{-1em}
\begin{align}
\hat{w}_{t+1} = \mathop{\mathrm{argmin}}_w \lambda \|w\|^2_2+ \sum_{i=0}^{t} (w^\top x_i - y_i)^2_2.\label{eq:w_t_opt}
\end{align}
\vspace{-1em}
\STATE Update $\mathrm{Ball}_{t+1} = \{w | \|w - \hat{w}_{t+1}\|^2_{\Sigma_{t+1}} \leq \beta_{t+1}\}.$
\ENDFOR
\end{algorithmic}
\end{algorithm}
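For readers who want to experiment, here is a minimal NumPy sketch of Algorithm~\ref{alg:linucb} for a finite action set (ours, not the authors' code). The inner maximization over $\mathrm{Ball}_t$ is carried out in closed form: $\max_{w \in \mathrm{Ball}_t} w^\top x = \hat{w}_t^\top x + \sqrt{\beta_t}\, \|x\|_{\Sigma_t^{-1}}$.
\begin{verbatim}
import numpy as np

def linucb(X, f0, T, sigma=0.1, Cw=1.0, delta=0.05, rng=None):
    # X: (n, d) matrix whose rows are the candidate actions.
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    lam = sigma**2 / Cw**2
    Sigma = lam * np.eye(d)          # Sigma_0 = lambda * I
    b = np.zeros(d)                  # running sum of y_i * x_i
    Cb = np.linalg.norm(X, axis=1).max()
    chosen = []
    for t in range(1, T + 1):
        w_hat = np.linalg.solve(Sigma, b)     # ridge solution, eq. (5)
        beta = 8 * sigma**2 * (1 + d * np.log(1 + t * Cb**2 * Cw**2
               / (d * sigma**2)) + 2 * np.log(np.pi**2 * t**2 / (3 * delta)))
        Sinv = np.linalg.inv(Sigma)
        width = np.sqrt(np.einsum('ij,jk,ik->i', X, Sinv, X))
        i = int(np.argmax(X @ w_hat + np.sqrt(beta) * width))
        x = X[i]
        y = f0(x) + sigma * rng.standard_normal()  # noisy reward
        Sigma += np.outer(x, x)                    # eq. (4)
        b += y * x
        chosen.append(i)
    return chosen

# Hypothetical usage: 20 unit-norm actions, linear true reward.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 4))
X /= np.linalg.norm(X, axis=1, keepdims=True)
w_true = np.array([0.5, -0.2, 0.1, 0.3])
print(linucb(X, lambda x: w_true @ x, T=200)[-5:])
\end{verbatim}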
\section{Main Results}\label{sec:theory}
In this section, we show that the classical LinUCB algorithm \citep{abbasi2011improved} works in $\rho$-gap-adjusted misspecified linear bandits and achieves cumulative regret on the order of $\tilde{O}(\sqrt{T}/(1-\rho))$. The following theorem gives the cumulative regret bound.
\begin{theorem}\label{thm:main}
Suppose Assumptions \ref{ass:boundedness}, \ref{ass:unique}, and \ref{ass:rho} hold. Set
\begin{align}
\beta_t = 8\sigma^2 \left(1 + d\log\left(1+ \frac{t C^2_b C^2_w }{d \sigma^2} \right) + 2\log \left(\frac{\pi^2 t^2}{3\delta} \right)\right).\label{eq:beta_t}
\end{align}
Then Algorithm \ref{alg:linucb} guarantees w.p. $> 1-\delta$ simultaneously for all $T=1,2,...$
\begin{align*}
R_T &\leq F + \sqrt{\frac{8 (T-1) \beta_{T-1} d}{(1-\rho)^2} \log \left( 1 + \frac{T C^2_b C^2_w }{d \sigma^2 }\right)}.
\end{align*}
\end{theorem}
\begin{remark}
The result shows that LinUCB achieves $\tilde{O}(\sqrt{T})$ cumulative regret bound and thus it is a no-regret algorithm in $\rho$-gap-adjusted misspecified linear bandits. In contrast, LinUCB can only achieve $\tilde{O}(\sqrt{T} + \epsilon T)$ regret in uniformly misspecified linear bandits. Even if $\epsilon = \tilde{O}(1/\sqrt{\log T})$, the resulting regret $\tilde{O}(T/\sqrt{\log T})$ is still exponentially worse than ours.
\end{remark}
\begin{proof}
By the definition of cumulative regret, the bound $F$ on the range of $f_0$, and the Cauchy-Schwarz inequality,
\begin{align*}
R_T &= r_0 + \sum_{t=1}^{T-1} r_t \\
&\leq F + \sqrt{\left(\sum_{t=1}^{T-1} 1 \right) \left(\sum_{t=1}^{T-1} r^2_t \right)}\\
&= F + \sqrt{ (T-1) \sum_{t=1}^{T-1} r^2_t}.
\end{align*}
Observe that the choice of $\beta_t$ is monotonically increasing in $t$. Also by Lemma~\ref{lem:w_t}, we get that with probability $1-\delta$, $w_*\in \text{Ball}_t, \forall t= 1,2,3,...$, which verifies the condition to apply Lemma \ref{lem:sos_r_t} simultaneously for all $T=1,2,3,...$, thereby completing the proof.
\end{proof}
\subsection{Regret Analysis}\label{sec:reg_ana}
The proof follows the LinUCB analysis closely. The main innovation is a self-bounding argument that controls the regret due to misspecification by the regret itself. This appears in Lemma~\ref{lem:r_t} and then again in the proof of Lemma~\ref{lem:w_t}.
Before we proceed, let $\Delta_t$ denote the deviation term of our linear function from the true function at $x_t$, formally,
\begin{align}
\Delta_t = f_0(x_t) - w^\top_* x_t,\label{eq:delta}
\end{align}
and the observation model (eq. \eqref{eq:obs}) becomes
\begin{align}
y_t = f_0(x_t) + \eta_t = w_*^\top x_t + \Delta_t + \eta_t.\label{eq:obs2}
\end{align}
Moreover, we have the following lemma bounding the deviation term $\Delta_t$.
\begin{lemma}[Bound of deviation term]\label{lem:delta}
$\forall t \in \{0,1,\ldots,T-1\}$,
\begin{align*}
|\Delta_t | \leq \frac{\rho}{1-\rho} w^\top_*(x_* - x_t).
\end{align*}
\begin{proof}
Recall the definition of deviation term in eq. \eqref{eq:delta}:
\begin{align*}
\Delta_t = f_0(x_t) - w_*^\top x_t.
\end{align*}
By Definition \ref{def:lm}, $\forall t \in \{0,1,\ldots,T-1\}$,
\begin{align*}
-\rho(f^* - f_0(x_t))\leq \Delta_t &\leq \rho(f^* - f_0(x_t))\\
-\rho(f^* - w_*^\top x_t - \Delta_t)\leq \Delta_t &\leq \rho(f^* - w_*^\top x_t - \Delta_t)\\
-\rho(w_*^\top x_* - w_*^\top x_t - \Delta_t)\leq \Delta_t &\leq \rho(w_*^\top x_* - w_*^\top x_t - \Delta_t)\\
\frac{-\rho}{1-\rho} (w_*^\top x_* - w_*^\top x_t)\leq \Delta_t &\leq \frac{\rho}{1 + \rho}(w_*^\top x_* - w_*^\top x_t),
\end{align*}
where the third line is by Proposition \ref{prop:perservation} (which gives $f^* = w_*^\top x_*$). The claim follows by taking absolute values, since $w_*^\top(x_* - x_t) \geq 0$ and $\rho/(1+\rho) \leq \rho/(1-\rho)$.
\end{proof}
\end{lemma}
Next, we prove instantaneous regret bound and its sum of squared regret version in the following two lemmas:
\begin{lemma}[Instantaneous regret bound]\label{lem:r_t}
Define $u_t := \| x_t\|_{\Sigma_t^{-1}}$ and assume $w_*\in \mathrm{Ball}_t$.
Then for each $t\geq 1$,
\begin{align*}
r_t \leq \frac{2\sqrt{\beta_t}u_t}{1-\rho}.
\end{align*}
\end{lemma}
\begin{proof}
By definition of instantaneous regret,
\begin{align*}
r_t &= f^* - f_0(x_t)\\
&= w^\top_* x_* - (w^\top_* x_t + \Delta_t)\\
&\leq w^\top_* x_* - w^\top_* x_t + \rho (f^* - f_0(x_t))\\
&= w^\top_* x_* - w^\top_* x_t + \rho r_t,
\end{align*}
where the inequality is by Definition \ref{def:lm}. Therefore, by rearranging the inequality we have
\begin{align*}
r_t &\leq \frac{1}{1-\rho}(w^\top_* x_* - w^\top_* x_t) \leq \frac{2\sqrt{\beta_t} u_t}{1-\rho},
\end{align*}
where the last inequality is by Lemma \ref{lem:gap}.
\end{proof}
\begin{lemma}\label{lem:sos_r_t}
Assume $\beta_t$ is monotonically nondecreasing and $w_*\in \mathrm{Ball}_t$ for all $t=1,...,T-1$, then
\begin{align*}
\sum_{t=1}^{T-1} r^2_t \leq \frac{8\beta_{T-1} d}{(1-\rho)^2} \log \left( 1 + \frac{T C^2_b}{d \lambda }\right).
\end{align*}
\end{lemma}
\begin{proof}
By definition $u_t = \sqrt{x^\top_t \Sigma^{-1}_{t} x_t}$ and Lemma \ref{lem:r_t},
\begin{align*}
\sum_{t=1}^{T-1} r^2_t &\leq \sum_{t=1}^{T-1} \frac{4}{(1-\rho)^2} \beta_t u^2_t \\
&\leq \frac{4\beta_{T-1}}{(1-\rho)^2} \sum_{t=1}^{T-1} u^2_t \leq \frac{4\beta_{T-1}}{(1-\rho)^2} \sum_{t=0}^{T-1} u^2_t\\
&\leq \frac{8\beta_{T-1} d}{(1-\rho)^2} \log \left( 1 + \frac{T C^2_b}{d \lambda }\right),
\end{align*}
where the second inequality is by the monotone nondecreasing property of $\beta_t$ and the last inequality uses the elliptical potential lemma (Lemma \ref{lem:sum_pos}).
\end{proof}
The previous two lemmas rely on the following lemma, which bounds the gap between $f^* = w_*^\top x_*$ and the linear function value at $x_t$.
\begin{lemma}\label{lem:gap}
Define $u_t = \| x_t\|_{\Sigma_t^{-1}}$ and assume $\beta_t$ is chosen such that $w_*\in \mathrm{Ball}_t$.
Then
\begin{align*}
w_*^\top (x_* - x_t) \leq 2 \sqrt{\beta_t} u_t.
\end{align*}
\end{lemma}
\begin{proof}
Let $\tilde{w}$ denote the parameter that achieves $\mathop{\mathrm{argmax}}_{w \in \mathrm{Ball}_t} w^\top x_t$, by the optimality of $x_t$,
\begin{align*}
w_*^\top x_* - w^\top_* x_t &\leq \tilde{w}^\top x_t - w^\top_* x_t \\
&= (\tilde{w} - \hat{w}_t + \hat{w}_t - w_*)^\top x_t\\
&\leq \|\tilde{w} - \hat{w}_t\|_{\Sigma_t} \|x_t\|_{\Sigma^{-1}_t} + \|\hat{w}_t - w_*\|_{\Sigma_t} \|x_t\|_{\Sigma^{-1}_t}\\
&\leq 2\sqrt{\beta_t} u_t
\end{align*}
where the second inequality applies H\"older's inequality, and the last line uses the definition of $\mathrm{Ball}_t$ (note that both $w_*, \tilde{w} \in \mathrm{Ball}_t$).
\end{proof}
\subsection{Confidence Analysis}\label{sec:conf_ana}
All analysis in the previous section requires $w_* \in \mathrm{Ball}_t, \forall t\in [T]$. In this section, we show that our choice of $\beta_t$ in \eqref{eq:beta_t} is valid and $w_*$ is trapped in the uncertainty set $\mathrm{Ball}_t$ with high probability.
\begin{lemma}[Feasibility of $\mathrm{Ball}_t$]\label{lem:w_t}
Suppose Assumptions \ref{ass:boundedness}, \ref{ass:unique}, and \ref{ass:rho} hold. Set $\beta_t$ as in eq. \eqref{eq:beta_t}. Then, w.p. $> 1- \delta$,
\begin{align*}
\|w_* - \hat{w}_t\|^2_{\Sigma_t} \leq \beta_t, \forall t=1,2,...
\end{align*}
\end{lemma}
\begin{proof}
By setting the gradient of the objective function in eq. \eqref{eq:w_t_opt} to $0$, we obtain the closed-form solution of eq. \eqref{eq:w_t_opt}:
\begin{align*}
\hat{w}_t = \Sigma_t^{-1} \sum_{i=0}^{t-1} y_i x_i.
\end{align*}
Therefore,
\begin{align}
\hat{w}_t - w_* &= - w_* + \Sigma_t^{-1} \sum_{i=0}^{t-1} x_i y_i \nonumber\\
&= - w_* + \Sigma_t^{-1} \sum_{i=0}^{t-1} x_i (x_i^\top w_* + \eta_i + \Delta_i) \nonumber\\
&= -w_* + \Sigma^{-1}_t \left(\sum_{i=0}^{t-1} x_i x_i^\top \right) w_* + \Sigma^{-1}_t \sum_{i=0}^{t-1} \eta_i x_i + \Sigma^{-1}_t \sum_{i=0}^{t-1} \Delta_i x_i,\label{eq:w_t_1}
\end{align}
where the second equation is by eq. \eqref{eq:obs2} and the first two terms of eq. \eqref{eq:w_t_1} can be further simplified as
\begin{align*}
-w_* + \Sigma^{-1}_t \left(\sum_{i=0}^{t-1} x_i x_i^\top \right) w_* &= -w_* + \Sigma^{-1}_t \left(\lambda I + \sum_{i=0}^{t-1} x_i x_i^\top - \lambda I \right) w_*\\
&= - w_* + \Sigma_t^{-1} \Sigma_t w_* - \lambda \Sigma_t^{-1} w_*\\
& = - \lambda \Sigma^{-1}_t w_*,
\end{align*}
where the second equation is by definition of $\Sigma_t$ (eq. \eqref{eq:sigma_t}). Therefore, eq. \eqref{eq:w_t_1} can be rewritten as
\begin{align*}
\hat{w}_t - w_* = - \lambda \Sigma^{-1}_t w_* + \Sigma^{-1}_t \sum_{i=0}^{t-1} \eta_i x_i + \Sigma^{-1}_t \sum_{i=0}^{t-1} \Delta_i x_i.
\end{align*}
Multiplying both sides by $\Sigma_t^{\frac{1}{2}}$, we have
\begin{align*}
\Sigma_t^{\frac{1}{2}}(\hat{w}_t - w_*) &= - \lambda \Sigma^{-\frac{1}{2}}_t w_* + \Sigma_t^{-\frac{1}{2}} \sum_{i=0}^{t-1} \eta_i x_i + \Sigma^{-\frac{1}{2}}_t \sum_{i=0}^{t-1} \Delta_i x_i.
\end{align*}
Squaring both sides and applying the generalized triangle inequality, we have
\begin{align}
\|\hat{w}_t - w_*\|^2_{\Sigma_t} & \leq 4 \lambda^2 \|w_*\|^2_{\Sigma_t^{-1}} + 4\left\| \sum_{i=0}^{t-1} \eta_i x_i \right\|^2_{\Sigma_t^{-1}} + 4\left\| \sum_{i=0}^{t-1} \Delta_i x_i \right\|^2_{\Sigma_t^{-1}}.\label{eq:w_t_2}
\end{align}
The remaining task is to bound these three terms separately. The first term of eq. \eqref{eq:w_t_2} is bounded as
\begin{align*}
4\lambda^2 \|w_*\|^2_{\Sigma^{-1}_t} \leq 4 \lambda \|w_*\|^2_2 \leq 4\sigma^2,
\end{align*}
where the first inequality is by definition of $\Sigma_t$ and $\|\Sigma^{-1}_t\|_\mathrm{op} \leq 1/\lambda$ and the second inequality is by choice of $\lambda = \sigma^2/C^2_w$.
The second term of eq. \eqref{eq:w_t_2} can be bounded by Lemma \ref{lem:self_norm} and Lemma \ref{lem:potential}:
\begin{align*}
4 \left\|\sum_{i=0}^{t-1} \eta_i x_i \right\|^2_{\Sigma_t^{-1}} &\leq 4\sigma^2 \log \left(\frac{\det (\Sigma_t) \det(\Sigma_0)^{-1}}{\delta_t^2} \right)\\
&\leq 4\sigma^2 \left(d \log\left(1 + \frac{t C^2_b}{d \lambda} \right) - \log \delta^2_t \right),
\end{align*}
where $\delta_t$ is chosen as $3\delta/(\pi^2 t^2)$ so that the total failure probabilities over $T$ rounds can always be bounded by $\delta/2$:
\begin{align*}
\sum_{t=1}^T \frac{3\delta}{\pi^2 t^2} < \sum_{t=1}^\infty \frac{3\delta}{\pi^2 t^2} = \frac{3\delta \pi^2 }{6 \pi^2} = \frac{\delta}{2}.
\end{align*}
And the third term of eq. \eqref{eq:w_t_2} can be bounded as
\begin{align*}
4 \left \| \sum_{i=0}^{t-1} \Delta_i x_i \right\|^2_{\Sigma^{-1}_t} &= 4\left(\sum_{i=0}^{t-1} \Delta_i x_i \right)^\top \Sigma^{-1}_t \left(\sum_{j=0}^{t-1} \Delta_j x_j \right)\\
&= 4 \sum_{i=0}^{t-1} \sum_{j=0}^{t-1} \Delta_i \Delta_j x_i^\top \Sigma^{-1}_t x_j\\
&\leq 4\sum_{i=0}^{t-1} \sum_{j=0}^{t-1} |\Delta_i| |\Delta_j| \|x_i\|_{\Sigma^{-1}_t} \|x_j\|_{\Sigma^{-1}_t},
\end{align*}
where the last line is by taking the absolute value and Cauchy-Schwarz inequality. Continue the proof and we have
\begin{align*}
4\sum_{i=0}^{t-1} \sum_{j=0}^{t-1} |\Delta_i| |\Delta_j| \|x_i\|_{\Sigma^{-1}_t} \|x_j\|_{\Sigma^{-1}_t} &= 4\left( \sum_{i=0}^{t-1} |\Delta_i| \|x_i\|_{\Sigma^{-1}_t}\right) \left(\sum_{j=0}^{t-1} |\Delta_j| \|x_j\|_{\Sigma^{-1}_t}\right)\\
&= 4\left( \sum_{i=0}^{t-1} |\Delta_i| \|x_i\|_{\Sigma^{-1}_t}\right)^2\\
&\leq 4 \left(\sum_{i=0}^{t-1} |\Delta_i|^2 \right) \left(\sum_{j=0}^{t-1} \|x_j\|_{\Sigma^{-1}_t}^2 \right)\\
&\leq 4 d \rho^2 \sum_{i=0}^{t-1} r_i^2 .
\end{align*}
where the first inequality is due to the Cauchy-Schwarz inequality, and the second uses the self-bounding property $|\Delta_i| \leq \rho r_i$ from Proposition~\ref{prop:perservation} together with Lemma~\ref{lem:sum_pos2}.
To put things together, we have shown that w.p. $> 1-\delta$, for any $t\geq 1$,
\begin{align}
\|\hat{w}_t-w_*\|^2_{\Sigma_t} &\leq 4 \sigma^2 + 4\sigma^2 \left(d\log\left(1+ \frac{t C^2_b }{d \lambda} \right) + 2\log \left(\frac{\pi^2 t^2}{3\delta} \right)\right) + 4\rho^2 d \sum_{i=0}^{t-1} r_i^2, \label{eq:radius}
\end{align}
where we condition on \eqref{eq:radius} for the rest of the proof.
Observe that this implies that the feasibility of $w_*$ in $\mathrm{Ball}_t$ can be enforced if we choose $\beta_t$ to be larger than \eqref{eq:radius}. The feasibility of $w_*$ in turn allows us to apply Lemma~\ref{lem:r_t} to bound the RHS with $\beta_{0},...,\beta_{t-1}$. We will use induction to prove that our choice
$$\beta_t := 2\sigma^2\iota_t \text{ for } t=1,2,...$$ is valid, where short hand $$\iota_t:=4 + 4\left(d\log\left(1+ \frac{t C^2_b }{d \lambda} \right) + 2\log \left(\frac{\pi^2 t^2}{3\delta} \right)\right).$$
For the base case $t=1$, by eq. \eqref{eq:radius} and the definition of $\beta_1$ we directly have $\|\hat{w}_1-w_*\|^2_{\Sigma_1}\leq \beta_1$. Assume our choice of $\beta_i$ is feasible for $i=1,...,t-1$; then we can write
\begin{align*}
\|\hat{w}_t-w_*\|^2_{\Sigma_t} &\leq \sigma^2\iota_t + 4\rho^2 d \sum_{i=1}^{t-1} \beta_i u_i^2 \\
&\leq \sigma^2\iota_t + 4\rho^2 d \beta_{t-1}\sum_{i=1}^{t-1} u_i^2,
\end{align*}
where the second line is due to non-decreasing property of $\beta_t$. Then by Lemma \ref{lem:sum_pos} and Assumption~\ref{ass:rho}, we have
\begin{align}
\|\hat{w}_t-w_*\|^2_{\Sigma_t} &\leq \sigma^2\iota_t +8\rho^2 d^2 \beta_{t-1}\log \left(1+\frac{tC_b^2}{d\lambda} \right) \nonumber\\
&\leq \sigma^2\iota_t + \frac{1}{2} \beta_{t-1} \leq 2\sigma^2\iota_t = \beta_{t}.
\label{eq:known_rho}
\end{align}
The critical difference from the standard LinUCB analysis here is that $\beta_{t-1}$ appears on the right-hand side of the bound; if its coefficient were too large, any valid choice of $\beta_t$ would have to grow exponentially in $t$. This is where Assumption \ref{ass:rho} helps us: it ensures that the coefficient of $\beta_{t-1}$ is smaller than $1/2$, so we can take $\beta_{t-1}\leq \beta_t$ and move the $\beta_t/2$ term to the other side.
\end{proof}
The proof of the previous lemma uses the following two lemmas.
\begin{lemma}[Upper bound of $\sum_{i=0}^{t-1} x^\top_i \Sigma_t^{-1} x_i$]\label{lem:sum_pos2}
\begin{align*}
\sum_{i=0}^{t-1} x^\top_i \Sigma^{-1}_t x_i \leq d.
\end{align*}
\end{lemma}
\begin{proof}
Recall that $\Sigma_t = \sum_{i=0}^{t-1} x_i x_i^\top + \lambda I_d$.
\begin{align*} \sum_{i=0}^{t-1} x^\top_i \Sigma^{-1}_t x_i &= \sum_{i=0}^{t-1}\mathrm{tr}\left[
\Sigma^{-1}_t x_ix_i^\top \right]\\
&= \mathrm{tr}\left[
\Sigma^{-1}_t \sum_{i=0}^{t-1} x_ix_i^\top \right] \\
&= \mathrm{tr}\left[
\Sigma^{-1}_t (\Sigma_t - \lambda I_d)\right] \\
&= \mathrm{tr}\left[I_d\right] - \mathrm{tr}\left[\lambda \Sigma^{-1}_t\right]\leq d.
\end{align*}
The last line follows from the fact that $\Sigma^{-1}_t$ is positive semidefinite.
\end{proof}
\begin{lemma}[Upper bound of $\sum_{i=0}^{t-1} x^\top_i \Sigma_i^{-1} x_i$ (adapted from \citet{abbasi2011improved})]\label{lem:sum_pos}
\begin{align*}
\sum_{i=0}^{t-1} x^\top_i \Sigma^{-1}_i x_i \leq 2d \log \left(1 + \frac{t C_b^2}{d \lambda} \right).
\end{align*}
\end{lemma}
\begin{proof}
First we prove that $\forall i \in \{0, 1,..., t-1\}$, $0\leq x_i^\top \Sigma^{-1}_i x_i < 1$. Recalling the definition of $\Sigma_i$, we know $\Sigma^{-1}_i$ is a positive semidefinite matrix and thus $0 \leq x_i^\top \Sigma^{-1}_i x_i$. To prove $x_i^\top \Sigma^{-1}_i x_i < 1$, we decompose $\Sigma_i$ and write
\begin{align*}
\ x_i^\top \Sigma^{-1}_i x_i &= x_i^\top \left(\lambda I + \sum_{j=0}^{i-1} x_j x^\top_j \right)^{-1} x_i\\
&= x_i^\top \left(x_i x_i^\top - x_i x_i^\top + \lambda I + \sum_{j=0}^{i-1} x_j x^\top_j \right)^{-1} x_i.
\end{align*}
Let $A = - x_i x_i^\top + \lambda I + \sum_{j=0}^{i-1} x_j x^\top_j$ and it becomes
\begin{align*}
x^\top_i \Sigma^{-1}_i x_i = x^\top_i (x_i x^\top_i + A)^{-1} x_i.
\end{align*}
By Sherman-Morrison lemma (Lemma \ref{lem:sm}), we have
\begin{align*}
x^\top_i \Sigma^{-1}_i x_i &= x^\top_i \left(A^{-1} - \frac{A^{-1} x_i x^\top_i A^{-1}}{1 + x^\top_i A^{-1} x_i} \right) x_i\\
&= x^\top_i A^{-1} x_i - \frac{x^\top_i A^{-1} x_i x^\top_i A^{-1} x_i}{1 + x^\top_i A^{-1} x_i}\\
&= \frac{x^\top_i A^{-1} x_i}{1 + x^\top_i A^{-1} x_i} < 1.
\end{align*}
Next we use the fact that $\forall x \in [0, 1), x \leq 2\log(x+1)$ and we have
\begin{align*}
\sum_{i=0}^{t-1} x^\top_i \Sigma^{-1}_i x_i &\leq \sum_{i=0}^{t-1} 2\log \left(1+ x^\top_i \Sigma^{-1}_i x_i \right)\\
&\leq 2 \log \left( \frac{\det(\Sigma_{t})}{\det(\Sigma_0)} \right)\\
&\leq 2 d \log \left( 1 + \frac{t C^2_b}{d \lambda}\right),
\end{align*}
where the last two lines are by Lemma \ref{lem:det} and Lemma \ref{lem:potential}.
\end{proof}
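As an empirical sanity check (ours), one can simulate an arbitrary bounded action sequence and confirm the bound of Lemma~\ref{lem:sum_pos} numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, T, lam, Cb = 5, 2000, 0.1, 1.0
Sigma = lam * np.eye(d)
total = 0.0
for _ in range(T):
    x = rng.standard_normal(d)
    x *= Cb / np.linalg.norm(x)              # enforce ||x_t||_2 <= Cb
    total += x @ np.linalg.solve(Sigma, x)   # u_t^2 = x_t' Sigma_t^{-1} x_t
    Sigma += np.outer(x, x)
print(total, "<=", 2 * d * np.log(1 + T * Cb**2 / (d * lam)))
\end{verbatim}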
\section{Technical Lemmas}\label{sec:tech_lem}
\begin{lemma}[Self-normalized bound for vector-valued martingales (Lemma A.9 of \citet{agarwal2021rl})]\label{lem:self_norm}
Let $\{\eta_i\}_{i=1}^\infty$ be a real-valued stochastic process with corresponding filtration $\{\mathcal{F}_i\}_{i=1}^\infty$ such that $\eta_i$ is $\mathcal{F}_i$ measurable, $\mathbb{E}[\eta_i | \mathcal{F}_{i-1}] = 0$, and $\eta_i$ is conditionally $\sigma$-sub-Gaussian with $\sigma \in \mathbb{R}^+$. Let $\{ X_i \}_{i=1}^\infty$ be a stochastic process with $X_i \in \mathcal{H}$ (some Hilbert space) and $X_i$ being $\mathcal{F}_{i-1}$ measurable. Assume that a linear operator $\Sigma: \mathcal{H} \rightarrow \mathcal{H}$ is positive definite, i.e., $x^\top \Sigma x > 0$ for any $x \in \mathcal{H}$. For any $t$, define the linear operator $\Sigma_t = \Sigma_0 + \sum_{i=1}^t X_i X^\top_i$ (here $xx^\top$ denotes outer-product in $\mathcal{H}$). With probability at least $1-\delta$, we have for all $t \geq 1$:
\begin{align*}
\left\| \sum_{i=1}^t X_i \eta_i \right\|^2_{\Sigma_t^{-1}} \leq \sigma^2 \log \left( \frac{\det(\Sigma_t) \det(\Sigma_0)^{-1}}{\delta^2}\right).
\end{align*}
\end{lemma}
\begin{lemma}[Sherman-Morrison lemma \citep{sherman1950adjustment}]\label{lem:sm}
Let $A$ denote an invertible matrix and $b,c$ denote two vectors with $1 + c^\top A^{-1} b \neq 0$. Then
\begin{align*}
(A + bc^\top)^{-1} = A^{-1} - \frac{A^{-1} bc^\top A^{-1}}{1+ c^\top A^{-1} b}.
\end{align*}
\end{lemma}
\begin{lemma}[Lemma 6.10 of \citet{agarwal2021rl}]\label{lem:det}
Define $u_t = \sqrt{x^\top_t \Sigma^{-1}_t x_t}$. Then we have
\begin{align*}
\det \Sigma_T = \det \Sigma_0 \prod_{t=0}^{T-1} (1 + u^2_t).
\end{align*}
\end{lemma}
\begin{lemma}[Potential function bound (Lemma 6.11 of \citet{agarwal2021rl})]\label{lem:potential}
For any sequence $x_0,...,x_{T-1}$ such that for $t< T, \|x_t\|_2 \leq C_b$, we have
\begin{align*}
\log \left( \frac{\det \Sigma_{T}}{\det \Sigma_0}\right) &= \log \det \left( I + \frac{1}{\lambda} \sum_{t=0}^{T-1} x_t x^\top_t \right) \leq d\log\left(1+ \frac{TC_b^2}{d \lambda} \right).
\end{align*}
\end{lemma}
\section{Conclusion}\label{sec:conlusion}
We study linear bandits where the underlying reward function is non-linear, which falls into the misspecified bandit framework. Existing work on misspecified bandits usually assumes uniform misspecification, where the $\ell_\infty$ distance between the best-in-class function and the true function is upper bounded by the misspecification parameter $\epsilon$. An existing lower bound shows that an $\tilde{\Omega}(\epsilon T)$ term is unavoidable, where $T$ is the time horizon, so the regret bound is always linear. However, in solving optimization problems, one only cares about the approximation error near the global optimum, and the approximation error is allowed to be large in highly suboptimal regions. In this paper, we capture this intuition and define a natural model of misspecification, called $\rho$-gap-adjusted misspecification, which only requires the approximation error at each input $x$ to be proportional to the suboptimality gap at $x$, with $\rho$ being the proportion parameter.
Previous work found that the classical LinUCB algorithm is not robust in $\epsilon$-uniform misspecified linear bandits when $\epsilon$ is large. However, we show that LinUCB is automatically robust against such gap-adjusted misspecification. Under mild conditions, e.g., $\rho \leq O(1/\sqrt{\log T})$, we prove that it achieves the near-optimal $\tilde{O}(\sqrt{T})$ regret for problems where the best previously known regret is almost linear. Moreover, LinUCB does not need knowledge of $\rho$ to run; if an upper bound on $\rho$ is revealed to LinUCB, the $\beta_t$ term can be chosen according to eq. \eqref{eq:known_rho}. Our technical novelty lies in a new self-bounding argument that bounds part of the regret due to misspecification by the regret itself, which can be of independent interest in further settings.
We believe our analysis for LinUCB is tight and the requirement that $\rho = O(1/\sqrt{\log T})$ is essential, but we conjecture that there is a different algorithm that could handle constant $\rho$ or even when $\rho$ approaches $1$ at a rate of $O(1/\sqrt{T})$. We leave the resolution to this conjecture as future work.
More broadly, our paper opens a new door for research on model misspecification, including misspecified linear bandits, misspecified kernelized bandits, and even reinforcement learning with misspecified function approximation. Moreover, we hope our paper prompts a rethinking of the relationship between function optimization and function approximation. Much more can be done in the future. For example, one can design a new no-regret algorithm that works under the gap-adjusted misspecification framework when $\rho$ is a constant, and study $\rho$-gap-adjusted misspecified Gaussian process bandit optimization.
\subsection*{Acknowledgments}
The work was partially supported by NSF Awards \#2007117 and \#2003257.
\section{Introduction}\label{Introduction}
Graph clustering is one of the fundamental tasks in unsupervised learning. The flexibility of modeling any problem as a graph has made graph clustering very popular. Extracting cluster information from a graph is computationally expensive, as it is usually done via eigendecomposition in a method known as spectral clustering. A recently proposed method, named SpectralNet \cite{shaham2018spectralnet}, is able to detect clusters in a graph without passing through the expensive step of eigendecomposition.
SpectralNet starts by learning pairwise similarities between data points using Siamese nets \cite{Bromley1993Signature}. The pairwise similarities are stored in an affinity matrix $A$, which is then passed through a deep network to learn an embedding space. In that embedding space, pairs with large similarities fall in close proximity to each other. Then, similar points can be clustered together by running $k$-means in that embedding space. In order for SpectralNet to produce accurate results, it needs an affinity matrix with rich information about the clusters. Ideally, a pair of points in the same cluster should be connected by an edge carrying a large weight. If the pair belong to different clusters, they should be connected by an edge carrying a small weight, or no weight, which is indicated by a zero entry in the affinity matrix.
SpectralNet uses Siamese nets to learn informative weights that ensure good clustering results. However, the Siamese nets need some information beforehand: some pairs must be labelled as negative and positive pairs. A negative label indicates a pair of points belonging to different clusters, and a positive label indicates a pair of points in the same cluster. Obtaining negative and positive pairs can be done in a semi-supervised or unsupervised manner. The authors of SpectralNet implemented it as both a semi-supervised and an unsupervised method. Using the ground-truth labels to assign negative and positive labels makes SpectralNet semi-supervised. On the other hand, using a distance metric to label closer points as positive pairs and farther points as negative pairs makes it unsupervised. In this study, we are only interested in the unsupervised SpectralNet.
Unsupervised SpectralNet uses a distance metric to assign positive and negative pairs. A common approach is to get the nearest $k$ neighbors of each point and assign those neighbors as positive pairs, while a random selection of farther points is labeled as negative pairs. But this approach restricts all points to have a fixed number of positive pairs, which is unsuitable if clusters have different densities. In this work, we propose a similarity metric based on random projection trees (rpTrees) \cite{Dasgupta2008Random,Freund2008Learning}. An example of an rpTree is shown in Fig.\ \ref{Fig:rpTree}, and a construction sketch is given after the figure. rpTrees do not restrict the number of positive pairs, as this number depends on how many points fall in a leaf node.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth,height=20cm,keepaspectratio]{rpTree.pdf}
\caption{An example of rpTree; points in blue are placed in the left branch and points in orange are placed in the right branch. (Best viewed in color)}
\label{Fig:rpTree}
\end{figure}
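For illustration, here is a minimal Python sketch of the rpTree construction (the basic median-split variant; function and parameter names are ours):
\begin{verbatim}
import numpy as np

def build_rptree(points, idx=None, n0=20, rng=None):
    # Recursively split by a random projection at the median until
    # at most n0 points remain; leaves are lists of point indices.
    rng = rng or np.random.default_rng(0)
    if idx is None:
        idx = np.arange(len(points))
    if len(idx) <= n0:
        return [idx]
    direction = rng.standard_normal(points.shape[1])
    direction /= np.linalg.norm(direction)
    proj = points[idx] @ direction
    med = np.median(proj)
    left, right = idx[proj <= med], idx[proj > med]
    if len(left) == 0 or len(right) == 0:    # degenerate split, stop
        return [idx]
    return (build_rptree(points, left, n0, rng)
            + build_rptree(points, right, n0, rng))

# Points sharing a leaf are treated as positive pairs (Section 3).
leaves = build_rptree(np.random.default_rng(1).standard_normal((200, 5)))
print([len(leaf) for leaf in leaves])
\end{verbatim}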
The main contributions of this work can be summarized in the following points:
\begin{itemize}
\item Proposing a similarity metric for SpectralNet based on random projection trees (rpTrees) that does not restrict the number of positive pairs and produces better clustering accuracy.
\item Investigating the influence of the leaf size parameter $n_0$ on the clustering accuracy.
\item Performing an in-depth analysis of the projection direction in rpTrees, and examine how it influences the clustering accuracy of SpectralNet.
\end{itemize}
\section{Related work}
\label{RelatedWork}
\subsection{Graph neural networks (GNNs)}
GNNs have become the go-to option for graph representation learning. Due to their capability of fusing node attributes and graph structure, GNNs have been widely used in many applications such as knowledge tracing \cite{Song2022Survey} and sentiment analysis \cite{ZHOU2020Modeling}. The most well-known form of GNN is the graph convolutional network (GCN) \cite{kipf2017semi}.
Researchers have been working on improving GCNs. Franceschi et al. \cite{franceschi2019learning} proposed to learn the adjacency matrix $A$ by running a GCN for multiple iterations and adjusting the graph edges in $A$ accordingly. Another problem with GCNs is their vulnerability to adversarial attacks. Yang et al. used a GCN with domain adaptive learning \cite{Yang2022Robust}, which attempts to transfer knowledge from a labeled source graph to an unlabeled target graph. Unseen nodes from the target graph can later be used for node classification.
\subsection{Graph clustering using deep networks}
GCNs perform semi-supervised node classification. Due to the limited availability of labeled data in some applications, researchers developed graph clustering methods using deep networks. Yang et al. developed a deep model for network clustering \cite{Yang2021Variational}. They used a GCN to encode the adjacency matrix $A$ and the feature matrix $X$, and a multilayer perceptron (MLP) to encode the feature matrix $X$. The output is clustered using a Gaussian mixture model (GMM), whose parameters are updated throughout training. A similar approach was used by Wang et al. \cite{WANG2022Learning}, where they used autoencoders to learn a latent representation. Then, they deployed the manifold learning technique UMAP \cite{McInnes2018UMAP} to find a low-dimensional space. The final clustering assignments are given by $k$-means. Affeldt et al. used autoencoders to obtain $m$ representations of the input data \cite{Affeldt2020Spectral}. The affinity matrices of these $m$ representations are merged into a single matrix, on which spectral clustering is performed. One drawback of this approach is that it still needs eigendecomposition to find the embedding space.
SpectralNet is another approach for graph clustering using deep networks, which was proposed by Shaham et al. \cite{shaham2018spectralnet}. They used Siamese nets to construct the adjacency matrix $A$, which is then passed through a deep network. Nodes in the embedding space can be clustered using $k$-means. An extension to SpectralNet was proposed by \citet{Huang2019MultiSpectralNet}, where multiple Siamese nets are trained on multiple views. Each view is passed into a neural network to find an embedding space. All embedding spaces are fused in the final stage, and $k$-means was run to find the cluster labels. Another approach to employ deep learning for spectral clustering was introduced by \citet{Wada2019Spectral}. Their method starts by identifying hub points, which serve as the core of clusters. These hub points are then passed to a deep network to obtain the cluster labels for the remaining points.
\subsection{Graph similarity metrics}
Every graph clustering method needs a metric to construct pairwise similarities. A shared neighbor similarity was introduced by \citet{ZHANG2021Spectral}. They applied their method to \textit{attributed graphs}, a special type of graph where each node has feature attributes, and used shared neighbor similarities to highlight clusters' discontinuity. The concept of shared neighbors can be traced back to the Jarvis–Patrick algorithm \cite{Jarvis1973Clustering}. It is important to mention the higher cost associated with shared neighbor similarity, because all neighbors have to be matched instead of computing a single value such as the Euclidean distance.
Another way of constructing pairwise similarities was introduced by \citet{Wen2020Spectral}, where they utilized Locality Preserving Projection (LPP) and hypergraphs. First, all points are projected onto a space with reduced dimensionality, and the pairwise similarities are constructed using a heat kernel (Equation \ref{Eq-heatkernel}). Second, a hypergraph Laplacian matrix $L_H$ is used to replace the regular graph Laplacian matrix $L$; hypergraphs help to capture higher-order relations between vertices. Two things need to be considered when applying this method: 1) the $\sigma$ parameter in the heat kernel needs careful tuning \cite{Zelnik2005Self}, and 2) the computational cost of the hypergraph Laplacian matrix $L_H$. Density information was incorporated into pairwise similarity construction by \citet{Kim2021KNN}. Their method identifies locally dense points that are separated from each other by locally sparse points. This approach falls under the category of DBSCAN clustering \cite{Ester1996Density}. These methods are iterative by nature and need a stopping criterion to be defined.
\begin{equation}
A_{i,j} = \exp\left(-\frac{\left\Vert x_i - x_j \right\Vert^2_2}{2\sigma ^2}\right)
\label{Eq-heatkernel}
\end{equation}
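A direct NumPy sketch (ours) of Equation~\ref{Eq-heatkernel} for computing the full affinity matrix:
\begin{verbatim}
import numpy as np

def heat_kernel_affinity(X, sigma=1.0):
    # A_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

A = heat_kernel_affinity(np.random.default_rng(0).standard_normal((5, 2)))
print(A.shape, A.diagonal())   # (5, 5), ones on the diagonal
\end{verbatim}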
Considering the literature on graph representation learning, it is evident that SpectralNet \cite{shaham2018spectralnet}: 1) offers a cost-efficient method to perform graph clustering using deep networks, and 2) does not require labeled datasets. The problem is that it uses a $k$-nearest neighbor graph with a distance metric, which prevents points from pairing with more neighbors even when they are in close proximity. A suitable alternative is a similarity metric based on random projection trees \cite{Dasgupta2008Random,Freund2008Learning}. rpTree similarities have already been used in spectral clustering \cite{Yan2009Fast,Yan2019Similarity}, but they are yet to be extended to graph clustering using deep networks.
\section{SpectralNet and pairwise similarities}
\label{ProposedApproach}
The proposed rpTree similarity metric was used in SpectralNet alongside the distance metric that was used for the $k$-nearest neighbor graph. The SpectralNet algorithm consists of four steps: 1) identifying positive and negative pairs, 2) running a Siamese net on the positive and negative pairs to construct the affinity matrix $A$, 3) running SpectralNet to map points onto an embedding space, and 4) detecting clusters by running $k$-means in the embedding space. An illustration of these steps is shown in Fig.\ \ref{Fig:outline}. The next subsection explains the neural networks used (Siamese net and SpectralNet); the discussion of similarity metrics and their complexity follows in the subsequent subsections.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth,height=20cm,keepaspectratio]{outline.pdf}
\caption{An outline of the used algorithm. (Best viewed in color)}
\label{Fig:outline}
\end{figure}
\subsection{SpectralNet}
\label{SpectralNet}
The first step in SpectralNet is the Siamese network, which consists of two or more neural networks with the same structure and parameters. These networks share a single output unit connected to their output layers. For simplicity, let us assume that the Siamese net consists of two neural networks $N_1$ and $N_2$. The networks receive inputs $x_1$ and $x_2$, respectively, and produce two outputs $z_1$ and $z_2$. The output unit compares the two outputs using the Euclidean distance, which should be small if $(x_1,x_2)$ is a positive pair and large if it is a negative pair. The Siamese net is trained to minimize the contrastive loss, which is defined as:
\begin{equation}
L_{\text{contrastive}}=\begin{cases}
\left\Vert z_1 - z_2 \right\Vert^2, & \text{if $(x_1,x_2)$ is a positive pair}\\
\max(c-\left\Vert z_1 - z_2 \right\Vert,0), & \text{if $(x_1,x_2)$ is a negative pair},
\end{cases}
\label{Eq-Siamese}
\end{equation}
\noindent
where $c$ is a constant that is usually set to 1. Then the Euclidean distance obtained via the Siamese net $\left\Vert z_1 - z_2 \right\Vert$ is used in the heat kernel (see Equation \ref{Eq-heatkernel}) to find the similarities between data points and construct the affinity matrix $A$.
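A minimal sketch of this loss for a single pair (with $c=1$ as stated above; the function name is our own) reads:
\begin{verbatim}
import numpy as np

def contrastive_loss(z1, z2, is_positive, c=1.0):
    dist = np.linalg.norm(z1 - z2)
    # squared distance for positive pairs, hinge on the distance otherwise
    return dist**2 if is_positive else max(c - dist, 0.0)
\end{verbatim}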
SpectralNet uses gradient steps to optimize the loss function $L_{SpectralNet}$:
\begin{equation}
L_{SpectralNet} = \frac{1}{m^2} \sum_{i,j=1}^{m} a_{i,j} \left\Vert y_i - y_j \right\Vert^2
\label{Eq-SpectralNet}
\end{equation}
\noindent
where $m$ is the batch size; $a$, of size $m \times m$, is the affinity matrix of the sampled points; and $y_i$ and $y_j$ are the outputs of the network for the samples $x_i$ and $x_j$. The optimization of this function is constrained, since the last layer is set to be a constraint layer that enforces orthogonalization. Therefore, SpectralNet has to alternate between orthogonalization and gradient steps, each using a different random batch of size $m$ from the original data $X$. Once SpectralNet is trained, all samples $x_1, x_2, \cdots, x_n$ are passed through the network to get the predictions $y_1, y_2, \cdots, y_n$. These predictions represent coordinates in the embedding space, where $k$-means operates to find the clustering.
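A minimal sketch of the batch loss in Equation \ref{Eq-SpectralNet} (assuming \texttt{Y} holds the network outputs $y_i$ for one batch and \texttt{A} the corresponding affinities; the orthogonalization step is omitted here):
\begin{verbatim}
import numpy as np

def spectralnet_loss(Y, A):
    m = len(Y)
    # squared distances ||y_i - y_j||^2 between all pairs of outputs
    D2 = np.sum((Y[:, None, :] - Y[None, :, :])**2, axis=-1)
    return np.sum(A * D2) / m**2
\end{verbatim}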
\subsection{Constructing pairwise similarities using $k$-nn}
\label{Constructing-pairs-knn}
\begin{figure}[h]
\begin{minipage}[b]{0.6\linewidth}
\centering
\includegraphics[width=0.99\textwidth,height=20cm,keepaspectratio]{knn-similarity-01.png}
\end{minipage}
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[width=0.9\textwidth,height=20cm,keepaspectratio]{knn-similarity-02.png}
\vspace{14ex}
\end{minipage}
\caption{Constructing positive and negative pairs using $k$-nn search; red points are the nearest neighbors when $k=2$; blue points are selected randomly (Best viewed in color)}
\label{Fig:knn-similarity}
\end{figure}
The original SpectralNet algorithm \cite{shaham2018spectralnet} uses a $k$-nearest neighbor graph with a distance metric to find positive and negative pairs. The positive pairs are the nearest neighbors according to the selected value of $k$; the original method sets $k=2$. The negative pairs are selected randomly from the farther neighbors. An illustration of this process is shown in Fig.\ \ref{Fig:knn-similarity}.
Restricting points to a fixed number of positive pairs is a disadvantage of using $k$-nn, and it is the problem we try to overcome by using rpTrees to construct positive and negative pairs. In rpTrees, there is no restriction on the number of pairs an individual point can form; it depends on how many points end up in the same leaf node.
\subsection{Constructing pairwise similarities using rpTrees}
\label{Constructing-pairs-rpTree}
rpTrees start by choosing a random direction $\overrightarrow{r}$ from the unit sphere $S^{D-1}$, where $D$ is the number of dimensions. All points in the current node $W$ are projected onto $\overrightarrow{r}$. On the projected values, the algorithm chooses the split point $c$ uniformly at random between the $\frac{1}{4}$ and $\frac{3}{4}$ fractiles. The points whose projections are less than the split point ($x < c$) are placed in the left child $W_L$, and the points whose projections are larger ($x > c$) are placed in the right child $W_R$. The algorithm partitions the points recursively and stops when a node contains fewer points than the leaf size parameter $n_0$.
To create positive pairs for the Siamese net, we pair all points in one leaf node. Points that fall into the same leaf node are considered similar, and we mark them as positive pairs. For negative pairs, we pick one leaf node $W_x$, and from the remaining leaf nodes we randomly pick $W_y$. Then we pair all points in $W_x$ with the points in $W_y$ and mark them as negative pairs (Equation \ref{Eq-pn-pairs}); a minimal code sketch is given after the equation. An illustration of this process is shown in Fig.\ \ref{Fig:rpTree-similarity}.
\begin{equation}
\begin{split}
&(p,q) \in E(positive) \Leftrightarrow p \in W_x\ and\ q \in W_x\\
&(p,q) \in E(negative) \Leftrightarrow p \in W_x\ and\ q \in W_y.
\end{split}
\label{Eq-pn-pairs}
\end{equation}
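The following minimal sketch builds the rpTree leaves and extracts the pairs of Equation \ref{Eq-pn-pairs}. It reflects our own implementation choices (the split point is drawn uniformly between the $\frac{1}{4}$ and $\frac{3}{4}$ fractiles of the projections, and all helper names are hypothetical):
\begin{verbatim}
import numpy as np

def rptree_leaves(X, idx=None, n0=20, rng=np.random.default_rng(0)):
    if idx is None:
        idx = np.arange(len(X))
    if len(idx) <= n0:
        return [idx]
    r = rng.normal(size=X.shape[1])
    r /= np.linalg.norm(r)                 # random direction on the unit sphere
    proj = X[idx] @ r
    lo, hi = np.quantile(proj, [0.25, 0.75])
    c = rng.uniform(lo, hi)                # split between 1/4 and 3/4 fractiles
    left, right = idx[proj < c], idx[proj >= c]
    if len(left) == 0 or len(right) == 0:  # degenerate split: stop here
        return [idx]
    return rptree_leaves(X, left, n0, rng) + rptree_leaves(X, right, n0, rng)

def rptree_pairs(X, n0=20, rng=np.random.default_rng(0)):
    leaves = rptree_leaves(X, n0=n0, rng=rng)
    pos, neg = [], []
    for i, Wx in enumerate(leaves):
        # positive pairs: all pairs of points inside the same leaf node W_x
        pos += [(p, q) for a, p in enumerate(Wx) for q in Wx[a + 1:]]
        if len(leaves) > 1:
            # negative pairs: pair W_x with one randomly chosen other leaf W_y
            Wy = leaves[rng.choice([j for j in range(len(leaves)) if j != i])]
            neg += [(p, q) for p in Wx for q in Wy]
    return pos, neg
\end{verbatim}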
\begin{figure}[h]
\begin{minipage}[b]{0.7\linewidth}
\centering
\includegraphics[width=0.99\textwidth,height=20cm,keepaspectratio]{rpTree-similarity-01.png}
\end{minipage}
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[width=0.99\textwidth,height=20cm,keepaspectratio]{rpTree-similarity-02.png}
\vspace{15ex}
\end{minipage}
\caption{Constructing positive and negative pairs using rpTree. (Best viewed in color)}
\label{Fig:rpTree-similarity}
\end{figure}
\subsection{Complexity analysis for computing pairwise similarities}
\label{Complexityanalysis}
We will use the number of positive pairs to analyze the complexity of the similarity metric used in the original SpectralNet method and of the metric proposed in this paper. The original method uses the $k$ nearest neighbors as positive pairs. This grows linearly with $n$, since we have to construct $n \times k$ pairs and pass them to the Siamese net \cite{Bromley1993Signature}.
Before we analyze the proposed metric, we have to find how many points will fall into a leaf node of an rpTree. This question is usually asked for proximity graphs \cite{Gilbert1961Random}. Suppose we place a square tessellation $T$ over the $n$ data points (see Section 9.4.1 by Barthelemy \cite{barthelemy2017morphogenesis} for more detail). $T$ has an area of $n$ and a side length of $\sqrt{n}$, and each small square $s$ in $T$ has an area of $\log(n)$. The probability of having more than $k$ neighbors in $s$ is $P(l>k)$, where $l = k+1,\cdots,n$, and it follows a homogeneous Poisson process. This probability is approximately $\frac{1}{n}$, which is very small, so with high probability a square $s$ contains at most $\log(n)$ neighbors. Since rpTrees follow the same approach of partitioning the search space, it is safe to assume that each leaf node holds at most $\log(n)$ data points.
The proposed metric depends on the number of leaf nodes in the rpTree and the number of points in each leaf node ($n_0$). The leaf size $n_0$ is a parameter that can be tuned before building the rpTree. It also determines the number of leaf nodes in the tree, because the method stops partitioning when the number of points in a node falls below this limit. We then have to pair all the points within each leaf node.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth,height=20cm,keepaspectratio]{similarity-plot.png}
\caption{The expected number of positive pairs using $k$-nn and rpTree similarities. (Best viewed in color)}
\label{Fig:similarity-plot}
\end{figure}
To visualize this effect, we fix all parameters and vary $n$. In Fig.\ \ref{Fig:similarity-plot}, we set $k$ to $2$ and $10$, and the leaf size $n_0$ to $20$ and $100$. The number of points $n$ was in the interval $[100,100000]$. With a $k$-nn graph we need $n \times k$ positive pairs, and with the rpTree similarity we need ${n_0}^2 \times \frac{n}{n_0} = n \times n_0$ positive pairs. So both similarity metrics grow linearly with $n$. The main difference is how the points are partitioned: the $k$-nn graph uses a $kd$-tree, which produces the same partition on every run, so the number of positive pairs is fixed, whereas rpTrees partition the points randomly, so the number of positive pairs deviates around $n \times n_0$.
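The expected counts plotted in Fig.\ \ref{Fig:similarity-plot} can be reproduced directly (a trivial sketch; the parameter values match those listed above):
\begin{verbatim}
import numpy as np

n = np.logspace(2, 5, num=50)                  # 10^2 ... 10^5 points
pairs_knn = {k: n * k for k in (2, 10)}        # fixed on every run (kd-tree)
pairs_rpt = {n0: n * n0 for n0 in (20, 100)}   # deviates between runs
\end{verbatim}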
\begin{figure}[h]
\centering
\includegraphics[width=0.24\textwidth]{Dataset-01.png}
\includegraphics[width=0.24\textwidth]{Dataset-02.png}
\includegraphics[width=0.24\textwidth]{Dataset-03.png}
\includegraphics[width=0.24\textwidth]{Dataset-04.png}
\caption{Synthetic datasets used in the experiments; from left to right \texttt{Dataset 1} to \texttt{Dataset 4}.}
\label{Fig:Datasets}
\end{figure}
\section{Experiments and discussions}
\label{Experiments}
In our experiments we compared the similarity metrics based on $k$-nearest neighbors and rpTrees in terms of: 1) clustering accuracy and 2) storage efficiency. The clustering accuracy was measured using the Adjusted Rand Index (\texttt{ARI}) \cite{Hubert1985Comparing}. Given the true grouping $T$ and the predicted grouping $L$, \texttt{ARI} is computed from pairwise comparisons: $n_{11}$ counts pairs that belong to the same cluster in both $T$ and $L$, $n_{00}$ counts pairs that belong to different clusters in both $T$ and $L$, and $n_{01}$ and $n_{10}$ count pairs on which $T$ and $L$ disagree. \texttt{ARI} is defined as:
\begin{equation}
ARI(T,L)=\frac{2(n_{00}n_{11}-n_{01}n_{10})}{(n_{00}+n_{01})(n_{01}+n_{11})+(n_{00}+n_{10})(n_{10}+n_{11})}\ .
\label{Eq-ARI}
\end{equation}
\noindent
The storage efficiency was measured by the number of total pairs used. We avoid machine-dependent metrics such as running time.
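Since the experiments already rely on scikit-learn, \texttt{ARI} can be computed with its built-in implementation, which agrees with Equation \ref{Eq-ARI} (the toy groupings below are illustrative only):
\begin{verbatim}
from sklearn.metrics import adjusted_rand_score

T = [0, 0, 1, 1, 2, 2]   # true grouping
L = [0, 0, 1, 2, 2, 2]   # predicted grouping
print(adjusted_rand_score(T, L))
\end{verbatim}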
We also ran additional experiments to investigate how the rpTree parameters affect the rpTree-based similarity metric. The first parameter was the leaf size $n_0$, which determines the minimum number of points in a leaf node. The second was how to select the projection direction; there are a number of methods to choose the random direction, and we tested them to see how they affect the performance.
The two-dimensional datasets used in our experiments are shown in Fig.\ \ref{Fig:Datasets}. The remaining datasets were retrieved from the scikit-learn library \cite{scikit-learn, sklearn_api}, except for the \texttt{mGamma} dataset, which was downloaded from the UCI machine learning repository \cite{Dua2019UCI}. All experiments were coded in Python 3 and run on a machine with 20 GB of memory and a 3.10 GHz Intel Core i5-10500 CPU. The code can be found at \url{https://github.com/mashaan14/RPTree}.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Results-synthetic-01-a.png}
\includegraphics[width=\textwidth]{Results-synthetic-01-b.png}
\caption{Experiments with synthetic datasets; \texttt{Method 1} is $k$-nn graph with $k=2$, \texttt{Method 2} is $k$-nn graph with varying $k$, and \texttt{Method 3} is rpTree similarity with $n_0=20$; (top) ARI scores for 10 runs, (bottom) number of total pairs. (Best viewed in color)}
\label{Fig:Results-synthetic-01}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Results-real-01-a.png}
\includegraphics[width=\textwidth]{Results-real-01-b.png}
\caption{Experiments with real datasets; \texttt{Method 1} is $k$-nn graph with $k=2$, \texttt{Method 2} is $k$-nn graph with varying $k$, and \texttt{Method 3} is rpTree similarity with $n_0=20$; (top) ARI scores for 10 runs, (bottom) number of total pairs. (Best viewed in color)}
\label{Fig:Results-real-01}
\end{figure}
\subsection{Experiments using $k$-nn and rpTree similarity metrics}
\label{k-nn-and-rpTree}
Three methods were used in this experiment. \texttt{Method 1} is the original SpectralNet method by Shaham et al.\ \cite{shaham2018spectralnet}. \texttt{Method 2} was developed by Alshammari et al.\ \cite{Alshammari2021Refining}; it sets $k$ dynamically based on the statistics around the points. \texttt{Method 3} is the proposed method, which uses rpTree similarity instead of a $k$-nn graph.
With the four synthetic datasets, all three methods delivered similar performance, as shown in Fig.\ \ref{Fig:Results-synthetic-01}. The exception is \texttt{Dataset 3}, where the rpTree similarity performed worse than the other methods. This could be attributed to how the clusters are distributed in this dataset: the rpTree splits separated points from the same cluster, which lowers the \texttt{ARI}. \texttt{Method 2} produced the largest number of pairs of the three methods. The number of pairs in \texttt{Method 2} and \texttt{Method 3} deviated slightly from the mean, unlike \texttt{Method 1}, which produces the same number of pairs on each run because $k$ was fixed ($k=2$).
The rpTree similarity outperformed the other methods on three of the five real datasets (\texttt{iris}, \texttt{breast cancer}, and \texttt{mGamma}), as shown in Fig.\ \ref{Fig:Results-real-01}. $k$-nn with the Euclidean distance performed poorly on \texttt{breast cancer}, which suggests that connecting to two neighbors was not enough to accurately detect the clusters. Yan et al.\ reported a similar finding, where clustering using rpTree similarity was better than clustering using a Gaussian kernel with the Euclidean distance \cite{Yan2019Similarity}; they showed heatmaps of the similarity matrices generated by the Gaussian kernel and by the rpTree.
As for the number of pairs, the proposed similarity metric used the second-lowest number of total pairs across all five datasets. Because of the randomness involved in rpTree splits, the proposed similarity metric has a higher standard deviation in the number of total pairs.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Results-synthetic-02-a.png}
\includegraphics[width=\textwidth]{Results-synthetic-02-b.png}
\caption{Experiments with synthetic datasets; \texttt{Method 1} is $k$-nn graph with $k=2$, other methods use rpTree similarity with varying $n_0$; (top) ARI scores for 10 runs, (bottom) number of total pairs. (Best viewed in color)}
\label{Fig:Results-synthetic-02}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Results-real-02-a.png}
\includegraphics[width=\textwidth]{Results-real-02-b.png}
\caption{Experiments with real datasets; \texttt{Method 1} is $k$-nn graph with $k=2$, other methods use rpTree similarity with varying $n_0$; (top) ARI scores for 10 runs, (bottom) number of total pairs. (Best viewed in color)}
\label{Fig:Results-real-02}
\end{figure}
\subsection{Investigating the influence of the leaf size parameter $n_0$}
\label{Investigating-w}
One of the important parameters in rpTrees is the leaf size $n_0$, which determines when the rpTree stops growing: if the number of points in a leaf node is less than $n_0$, that leaf node is not split further.
Looking at the clustering performance on the synthetic datasets shown in Fig.\ \ref{Fig:Results-synthetic-02} (top), we can see that we do not gain much by increasing the leaf size $n_0$. In fact, increasing $n_0$ can hurt the clustering accuracy, as happened with \texttt{Dataset 3}. The number of pairs also grows with the leaf size $n_0$, as shown in Fig.\ \ref{Fig:Results-synthetic-02} (bottom).
Increasing the leaf size $n_0$ helped us obtain a higher \texttt{ARI} on \texttt{breast cancer} and \texttt{mGamma}, as shown in Fig.\ \ref{Fig:Results-real-02}. On the other real datasets it did not improve the clustering accuracy as measured by \texttt{ARI}. We also observed that the number of pairs increases as we increase $n_0$.
Ram and Sinha \cite{Ram2019Revisiting} provided a discussion on how to set the parameter $n_0$. They stated that $n_0$ controls the balance between global search and local search, and that, overall, the effect of $n_0$ on search accuracy is ``quite benign''.
\subsection{Investigating the influence of the dispersion of points along the projection direction}
\label{Investigating-r}
The original rpTree algorithm \cite{Dasgupta2008Random} selects the projection direction uniformly at random. A recent application of rpTrees by \citet{Yan2021Nearest}, however, recommended picking three random directions ($nTry=3$) and using the one that provides the maximum spread of the data points. To investigate the effect of this parameter, we used four methods for picking a projection direction: 1) picking one random direction, 2) picking three random directions ($nTry=3$) and using the one with the maximum spread, 3) picking nine random directions ($nTry=9$), and 4) using principal component analysis (PCA) to find the direction with the maximum spread.
Looking at the \texttt{ARI} numbers for the synthetic datasets (Fig.\ \ref{Fig:Results-synthetic-03}) and the real datasets (Fig.\ \ref{Fig:Results-real-03}), we observed that we do not gain much by trying to maximize the spread of the projected points; this parameter has very little effect. Also, all strategies for picking the projection direction used the same number of pairs.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Results-synthetic-03-a.png}
\includegraphics[width=\textwidth]{Results-synthetic-03-b.png}
\caption{Experiments with synthetic datasets; \texttt{Method 1} is $k$-nn graph with $k=2$, other methods use different strategies to pick the projection direction; (top) ARI scores for 10 runs, (bottom) number of total pairs. (Best viewed in color)}
\label{Fig:Results-synthetic-03}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Results-real-03-a.png}
\includegraphics[width=\textwidth]{Results-real-03-b.png}
\caption{Experiments with real datasets; \texttt{Method 1} is $k$-nn graph with $k=2$, other methods use different strategies to pick the projection direction; (top) ARI scores for 10 runs, (bottom) number of total pairs. (Best viewed in color)}
\label{Fig:Results-real-03}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{picture-proj121-39.png}
\includegraphics[width=0.4\textwidth]{picture-121-93.png}
\caption{(left) Sampling 100 projection directions with maximum orthogonality between them; (right) splitting points along the direction with maximum dispersion. (Best viewed in color)}
\label{Fig:sample-directions}
\end{figure}
In a final experiment, we measured the accuracy difference between choosing a random projection direction and an ideal one. As there are infinitely many projection directions, we instead sampled up to 1000 directions uniformly on the unit sphere and picked the best performing among them (see Fig.\ \ref{Fig:sample-directions}). For the tested datasets, we compared the best performing direction against a random direction. We found no significant difference between the means over those 100 or 1000 samples and the random vector, as shown in Fig.\ \ref{Fig:Results-synthetic-04} and Fig.\ \ref{Fig:Results-real-04}. Our finding is supported by recent efforts in the literature \cite{Keivani2021Random} to limit the number of random directions to make rpTrees more storage efficient.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Results-synthetic-04-a.png}
\includegraphics[width=\textwidth]{Results-synthetic-04-b.png}
\caption{Experiments with synthetic datasets; \texttt{random} represents picking one random projection direction, \texttt{$nTry=100$} and \texttt{$nTry=1000$} is the number of sampled directions; (top) ARI scores for 10 runs, (bottom) number of total pairs. (Best viewed in color)}
\label{Fig:Results-synthetic-04}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Results-real-04-a.png}
\includegraphics[width=\textwidth]{Results-real-04-b.png}
\caption{Experiments with real datasets; \texttt{random} represents picking one random projection direction, \texttt{$nTry=100$} and \texttt{$nTry=1000$} is the number of sampled directions; (top) ARI scores for 10 runs, (bottom) number of total pairs. (Best viewed in color)}
\label{Fig:Results-real-04}
\end{figure}
\section{Conclusion}
\label{Conclusion}
The conventional approach to graph clustering involves the expensive step of eigendecomposition. SpectralNet presents a suitable alternative that does not use eigendecomposition; instead, the embedding is produced by neural networks.
The similarity metric used in SpectralNet was a distance metric for the $k$-nearest neighbor graph. This approach restricts points from being paired with further neighbors because $k$ is fixed. A similarity metric based on random projection trees (rpTrees) eases this restriction and allows points to pair with all points falling in the same leaf node. The proposed similarity metric improved the clustering performance on the tested datasets.
A number of parameters are associated with the rpTree similarity metric, such as the minimum number of points in a leaf node $n_0$ and how to select the projection direction used to split the points. After running experiments while varying these parameters, we found that the rpTree parameters have a limited effect on the clustering performance. We therefore recommend keeping the number of points in a leaf node $n_0$ on the order of $\log(n)$. Also, it is more efficient to project the points onto a random direction instead of trying to find the direction with the maximum dispersion. We conclude that random projection trees (rpTrees) can be used as a similarity metric when they are applied efficiently as described in this paper.
This work can be extended by changing how the pairwise similarity is computed inside the Siamese net; currently this is done via a heat kernel. One could also use other random projection methods, such as random projection forests (rpForest) or rpTrees with reduced space complexity. It would be beneficial for the field to see how these space-partitioning trees perform with clustering in deep networks.
\begin{singlespace}
\bibliographystyle{elsarticle-num-names}
|
{
"arxiv_id": "2302.13159",
"language": "en",
"timestamp": "2023-02-28T02:11:30",
"url": "https://arxiv.org/abs/2302.13159",
"yymm": "2302"
} | \section{Introduction}\label{S:intro}
\subsection{Motivation}\label{SS:motiv}\
Introduced almost 50 years ago by Purcell and Pennypacker \cite{PurcellPennypacker1973}, the Discrete Dipole Approximation (DDA) is a classical numerical method in computational electromagnetics that is the subject of a vast and still rapidly growing literature (see the survey \cite{YurkinHoekstra2007}), but is virtually unknown in the mathematical community. It can be considered as a numerical approximation scheme for a strongly singular volume integral equation that, however, is too simple to fit into any known framework for standard approximation schemes for such equations (Galerkin, collocation or Nystr\"om methods etc). In particular, to the authors' knowledge, there does not exist any error estimate or convergence proof for this method.
In the paper \cite{Yurkin:06i}, estimates for a consistency error are derived, and it is observed that to complete the convergence analysis, a uniform estimate for the inverse of the matrix of the linear system (stability estimate) would be needed. In the present paper, we prove first results on the way to such stability estimates for the DDA and related numerical schemes. The class of singular integral equations considered here includes the quasi-static case (i.e.\ zero frequency limit) of the Maxwell volume integral equation that describes the scattering of electromagnetic waves by a penetrable dielectric body in the case of constant electric permittivity. Further stability results for the non-zero frequency case will be the subject of a forthcoming paper.
Because of the simplicity of the class of operators considered here (convolution operators with kernels positively homogeneous of degree $-d$ on a bounded domain $\Omega\subset\R^{d}$), we are able to obtain rather sharp results on the region of stability, by estimating the numerical range of the discretized operator in comparison with the numerical range of the integral operator. It turns out that for some operators, including the quasi-static Maxwell case in dimension $d\ge2$, the stability region is smaller than what one would na\"ively expect.
This corresponds to the fact that the eigenvalues of the system matrix, as the mesh-width of the discretization tends to zero, accumulate on a set that is strictly larger than the convex hull of the essential spectrum of the integral operator.
In the paper \cite{Rahola2000}, motivated by the convergence analysis of iterative solutions of the resulting large linear systems, the essential spectrum of the Maxwell volume integral operator was studied for the case of scattering by a dielectric ball in $\R^{3}$. This is a subset of the segment in the complex plane that corresponds to the essential numerical range of the integral operator. It is now known (see \cite{CoDarSak2012,CoDarSak2015}) that the same form of the essential spectrum is valid for more general bounded Lipschitz domains.
In \cite{Rahola2000}, results of some numerical experiments are then shown that seem to indicate that the eigenvalues of the system matrices accumulate either at isolated points, corresponding to eigenvalues of the integral operator and hence to eigenvalues or resonances of the scattering problem, or at the points of the segment that is spanned by the essential spectrum of the integral operator. Looking closer at Figures 4.2--4.4 of \cite{Rahola2000}, one can detect an ``overshoot'', namely that the observed segment of accumulation points is actually larger than the span of the essential spectrum.
In the paper \cite{YurkinMinHoekstra2010}, there is a discussion of the spectrum of the system matrices of the DDA scheme for the quasi-static Maxwell equations, motivated by the numerical modeling of the scattering of light by dust particles whose size is small with respect to the wavelength of the light (``Rayleigh particles''). Based on extensive experience with numerical computations using the DDA code ADDA, the authors seem to be convinced that the DDA provides a faithful approximation of the solution of the volume integral equation in the sense that, among other things, the spectral measure of the DDA system matrices converges to the spectral measure of the volume integral operator. They provide plots of the spectral density of these matrices, including a zoom on a neighborhood of the lower end of the spectrum, see graph (a) in \cite[FIG. 8]{YurkinMinHoekstra2010}. There one can clearly see that there is an overshoot, namely a part of the spectrum below zero, and that its negative minimum does not disappear as the number of dipoles grows, but rather seems to converge to some number around $-0.09$, in contradiction to the description in the accompanying text.
For the quasi-static Maxwell case we prove below (see Proposition~\ref{P:W(F)Ex5} and \eqref{E:Lambda+-3}) that such an overshoot indeed exists and that it amounts to an almost $20\%$ increase of the length of the segment spanned by the essential spectrum.
This somewhat unexpected result implies that the simple discretization scheme of the DDA does not provide a spectrally correct approximation of the strongly singular volume integral operator. The additional observation, supported by numerical experience, that this concerns only a small neighborhood of the essential spectrum or perhaps even only of the endpoints of this spectrum, whereas discrete eigenvalues and large parts of the spectral density nevertheless are correctly approximated, still awaits a precise description and proof.
It also implies that the DDA scheme is actually unstable in high-contrast situations, namely if the relative permittivity is very small (smaller than $\sim 0.093$) or very large (larger than $\sim 11.8$). We prove this here for the zero-frequency limit, but expect that it is also true for non-zero frequencies.
\subsection{The Discrete Dipole Approximation}\label{SS:DDA}\
As its name indicates, the DDA (sometimes called Coupled Dipole Approximation) can be considered as an approximation of a dielectric continuum described by Maxwell's equations by a different physical system consisting of a finite number of dipoles that are characterized by their polarizability, interacting via electromagnetic fields.
The same mathematical system can be obtained by a procedure more amenable to arguments of numerical analysis, namely by transforming the Maxwell equations for the original dielectric continuum into an equivalent Lippmann-Schwinger style volume integral equation and then discretizing this integral equation by a simple delta-delta approximation on a regular grid
$\{x_{n}\mid n\in\Z^{d}\}$ of meshwidth $h>0$.
Thus a linear integral equation on a bounded domain $\Omega\subset\R^{d}$
\begin{equation}
\label{E:IE}
\lambda u(x) - \int_{\Omega}K(x,y)u(y)dy = f(x) \quad(x\in\Omega)
\end{equation}
will be approximated by the finite dimensional linear system
\begin{equation}
\label{E:DDAp}
\lambda u_{m} - \sum_{x_{n}\in\Omega,n\ne m}h^{d}K(x_{m},x_{n})u_{n} = f(x_{m})
\quad(x_{m}\in\Omega)\,.
\end{equation}
We omit the diagonal term $m=n$ because we will be dealing with singular kernels. Apart from this, \eqref{E:DDAp} looks like a Galerkin method with Dirac deltas as trial and test functions.
Let us briefly describe the construction of the volume integral equation.
We write the time harmonic Maxwell equations with normalized frequency $\kappa\in\C$ as a second order system for the electric field $u$.
\begin{equation}
\label{E:Maxwell}
\operatorname{\rm curl}\curl u -\kappa^{2}\epsilon u= i\kappa J.
\end{equation}
Here it is assumed that the magnetic permeability is constant (normalized to $1$) in the whole space. If one further assumes that the permittivity $\epsilon$ is equal to $1$ outside of a bounded domain and the source current $J$ has compact support, one can write this as a perturbation of the free-space situation
\begin{equation}
\label{E:Maxwellpert}
\operatorname{\rm curl}\curl u -\kappa^{2} u= -\kappa^{2}(1-\epsilon) u + i\kappa J.
\end{equation}
Here the right hand side has compact support, and therefore convolution with the outgoing fundamental solution $g_{\kappa}$ of the Helmholtz equation and application of the operator
$\nabla\div + \kappa^{2}$ leads to the volume integral equation in distributional form
\begin{equation}
\label{E:VIEdist}
u = -(\nabla\div + \kappa^{2})g_{\kappa}\star(1-\epsilon)u + u^{\rm inc} \,.
\end{equation}
Here the incoming field $u^{\rm inc}$ combines the field generated by the current density with possible sourceless full-space solutions of Maxwell's equations (plane waves, etc.).
Equation \eqref{E:VIEdist} can be considered in any dimension $d\ge2$, but only $d=2$ and $d=3$ are relevant for electrodynamics. The equation can be written in the form of a second kind strongly singular integral equation with the $d\times d$ matrix valued kernel
\begin{equation}
\label{E:VIOkernel}
K(x,y) = -(D^{2}+\kappa^{2})g_{\kappa}(x-y)\,.
\end{equation}
The integral operator thus defined involves second order distributional derivatives of the weakly singular kernel $g_{\kappa}(x-y)$. Instead of this form of an integro-differential operator, one can write the strongly singular integral operator also in the form of a Cauchy principal value integral, using the well-known relation (for more details, see section~\ref{SSS:SIOEx5} below)
\begin{equation}
\label{E:D2g-pv}
D^{2}\int_{\R^{d}}g_{\kappa}(x-y)u(y)dy =
\mathop{\rm p.v.}\!\!\int_{\R^{d}} D^{2}g_{\kappa}(x-y)u(y)dy - \frac1d u(x)\,.
\end{equation}
If we further assume that the permittivity $\epsilon$ equals a constant $\epsilon_{r}\in\C\setminus\{1\}$ in $\Omega$, we can divide by $1-\epsilon_{r}$ and arrive at the final form \eqref{E:IE} with the integral understood in the principal value sense, the kernel given by \eqref{E:VIOkernel}, and the spectral parameter $\lambda$ defined by the relation
\begin{equation}
\label{E:ClauMoss}
\lambda = \frac1{1-\epsilon_{r}} - \frac1d = \frac{d-1+\epsilon_{r}}{d(1-\epsilon_{r})}\,.
\end{equation}
For $d=3$, this relation $\lambda=\frac{2+\epsilon_{r}}{3(1-\epsilon_{r})}$ is known in the DDA literature as \emph{Clausius-Mossotti polarizability}, referring to the fact that $\frac1\lambda$ corresponds to the polarizability of the dipoles and to the Clausius-Mossotti equation between the molecular polarizability and the electric permittivity in a dielectric material, see for example \cite[Section 4.5]{Jackson1999}.
The principal part of the volume integral operator is obtained by taking the limit $\kappa\to0$, and we will refer to this situation as the quasi-static Maxwell case.
The resulting kernel is homogeneous of degree $-d$, and this property allows to analyze the corresponding linear system \eqref{E:DDAp} using Fourier analysis of Toeplitz matrices. For this reason we study in this paper a class of strongly singular kernels that includes the quasi-static Maxwell kernel.
\subsection{Outline of the paper}\label{SS:outline}\
In Section~\ref{SS:kernels} we define a class of strongly singular kernels that are homogeneous of degree $-d$ and translation invariant, and we evoke the relation between the numerical range of the corresponding singular integral operator in $L^{2}$ and values of its symbol. The notion of numerical range allows to use a Lax-Milgram type argument to get a resolvent estimate for the restriction of the convolution operator to a bounded domain $\Omega$.
After introducing in Section~\ref{SS:DDD} the delta-delta discretization, we state in Theorem~\ref{T:stabgeneral} the main stability result valid for our class of operators.
In Section~\ref{S:discrete} we study tools for proving stability results, namely infinite Toeplitz matrices and their symbols defined by Fourier series. Here a main difficulty is that one needs precise bounds for the values of a function (numerical symbol) defined by a Fourier series that is not absolutely convergent. We find that one can use Ewald's method for this purpose.
The result is that the symbol of the Toeplitz matrix is a bounded function, and that its range is always a superset of the range of the symbol of the integral operator, but that it might be strictly larger.
If this is the case, then stability of the delta-delta scheme implies well-posedness of the integral equation, but not vice versa: The numerical scheme does then not provide a spectrally correct approximation, and it might be unstable for values of the spectral parameter $\lambda$ for which any Galerkin scheme of the integral equation would be stable.
In Section~\ref{S:Examples} we study in detail five representative examples.
Example \ref{Ex:1} concerns the one-dimensional singular integral equation defined by the finite Hilbert transformation. Here the numerical symbol has a simple explicit expression, and this can be used to get estimates for the resolvent of the discretized operator by the resolvent of the integral operator, with constant equal to $1$. This gives Theorem~\ref{T:Stab1D}, which is the ideal stability result that subsequent results are measured against.
Examples \ref{Ex:2} and \ref{Ex:3} exhibit different behavior of the delta-delta scheme for two strongly singular integral operators in two dimensions. Whereas the two integral operators are equivalent, related by a simple rotation of the coordinate system, the two discrete systems show opposite behavior: We prove that in Example \ref{Ex:2} the ranges of the symbol of the integral operator and of the symbol of the infinite Toeplitz matrix are identical, whereas in Example \ref{Ex:3} there is an overshoot; the region of instability of the approximation scheme is strictly larger than the numerical range of the integral operator.
In Example \ref{Ex:4}, we graphically illustrate the relations, proved in Sections~\ref{SS:kernels} and \ref{S:discrete},
between the spectrum and numerical range of the system matrices and the numerical range of the singular integral operator by considering a non-selfadjoint case. The kernel is a complex-valued function whose real and imaginary parts are given by the kernels of Examples \ref{Ex:3} and \ref{Ex:2}, respectively.
The kernels studied in Examples \ref{Ex:2} and \ref{Ex:3} are also the off-diagonal and diagonal terms, respectively, in the matrix-valued kernel of the quasi-static Maxwell volume integral operator, which is the subject of Example \ref{Ex:5}. We study this for dimensions $d\ge2$ and give more precise results for $d=2$ and $d=3$. In two dimensions we find the same overshoot of the numerical range of the numerical symbol versus the symbol of the integral operator as Example \ref{Ex:3}. In three dimensions this overshoot is even larger, and it can be verified numerically either by computing the numerical symbol using Ewald's method or by studying the asymptotic behavior of the smallest and largest eigenvalues of the matrix of the linear system as the mesh width tends to zero.
\subsection{Notation for Fourier transforms and Fourier series}\label{SS:notation}\
We use the following convention for the Fourier transformation in $\R^{d}$.
\begin{equation}
\label{E:FT}
\hat{f}(\xi)={\mathscr F} f(\xi)=\int_{\R^{d}}f(x)e^{i\xi\cdot x}dx\,.
\end{equation}
Inverse:
\begin{equation}
\label{E:FTinv}
f(x) = {\mathscr F}^{-1}\hat{f}(x)=(2\pi)^{-d}\int_{\R^{d}}\hat f(\xi)e^{-ix\cdot \xi}d\xi\,.
\end{equation}
For Fourier series, we use the following notation.
For a sequence $a:\Z^{d}\to\C$, its Fourier series is defined as
\begin{equation}
\label{E:FS}
\tilde a(\tau) = \sum_{m\in\Z^{d}}a(m) e^{im\cdot \tau}\,,\quad
\tau \in Q=[-\pi,\pi]^{d} .
\end{equation}
Inverse:
\begin{equation}
\label{E:FSinv}
a(m) = (2\pi)^{-d}\int_{Q}\tilde a(\tau) e^{-im\cdot \tau}d\tau.
\end{equation}
The definitions are extended in the usual way from convergent sums and integrals to suitable spaces of functions and distributions. In particular, we have Parseval's theorem
\begin{equation}
\label{E:Parseval}
f\mapsto (2\pi)^{-\frac d2}\hat{f}: L^{2}(\R^{d}) \to L^{2}(\R^{d})
\quad\mbox{ and }\quad
a\mapsto (2\pi)^{-\frac d2}\tilde{a}: \ell^{2}(\Z^{d}) \to L^{2}(Q)
\end{equation}
are unitary (i.e.\ isometric Hilbert space isomorphisms).
Combining the Parseval formula and the convolution theorem gives
\begin{equation}
\label{E:Par+conv,int}
\int_{\R^{2d}}\overline{u(x)}\,k(x-y)\,v(y)\,dy\,dx
= (2\pi)^{-d}\int_{\R^{d}}\overline{\hat u(\xi)}\,\hat k(\xi)\,\hat v(\xi)\,d\xi\,,
\end{equation}
\begin{equation}
\label{E:Par+conv,series}
\sum_{m,n\in\Z^{d}}\overline{a(m)}\,c(m-n)\,b(n) =
(2\pi)^{-d} \int_{Q}\overline{\tilde{a}(\tau)}\,\tilde c(\tau) \,\tilde b(\tau)\, d\tau\,.
\end{equation}
From these formulas follows immediately that the operators of convolution with $k$ in $L^{2}(\R^{d})$ and of discrete convolution with $c$ in $\ell^{2}(\Z^{d})$ are bounded if and only if the ``symbols'' $\hat k$ and $\tilde c$ are bounded functions belonging to $L^{\infty}(\R^{d})$ and $L^{\infty}(Q)$, respectively.
Sufficient conditions for this are that $k\in L^{1}(\R^{d})$ and $c\in\ell^{1}(\Z^{d})$. But these conditions are not necessary, and it is precisely the situation where they are not satisfied that will be relevant in the following.
We will use the \emph{Poisson summation formula} in the form
\begin{equation}
\label{E:Pois}
\sum_{m\in\Z^{d}} f(m) e^{im\cdot \tau} =
\sum_{n\in\Z^{d}} \hat f(\tau+2\pi n)\,.
\end{equation}
A sufficient (but in no way necessary) condition for \eqref{E:Pois} to hold for all $\tau$ is that
$$
f\on{\Z^{d}}\in\ell^{1}(\Z^{d})\quad \mbox{ and }\; \hat f\in L^{1}(\R^{d}).
$$
If we do not assume $f\on{\Z^{d}}\in\ell^{1}$, but only $\hat f\in L^{1}$, then $f$ is bounded, the left hand side of \eqref{E:Pois} converges in the distributional sense and the right hand side converges in $L^{1}(Q)$. Then
\eqref{E:Pois} is true in a weaker sense, the distributional left hand side being equal to the $L^{1}(Q)$ right hand side.
\emph{Example}: Gaussian with parameter $s>0$.
\begin{equation}
\label{E:FGauss}
f(x) = e^{-|x|^{2}s} \quad\Longleftrightarrow\quad
\hat f(\xi)= (\tfrac\pi s)^{\frac d2} \,e^{-\frac{|\xi|^{2}}{4s}}\,.
\end{equation}
For this example the Poisson summation formula takes the form (for $\tau\in\R^{d}$)
\begin{equation}
\label{E:PGauss}
\sum_{m\in\Z^{d}} e^{-|m|^{2}s} e^{im\cdot \tau} =
\sum_{n\in\Z^{d}} (\tfrac\pi s)^{\frac d2} \,e^{-\frac{|\tau+2\pi n|^{2}}{4s}}\,.
\end{equation}
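As a quick numerical sanity check (not part of the derivation; the parameter values are arbitrary), both sides of \eqref{E:PGauss} can be compared after truncating the sums, whose terms decay exponentially:
\begin{verbatim}
import numpy as np

d, s, M = 2, 0.7, 12
tau = np.array([0.3, -1.1])
rng = np.arange(-M, M + 1)
grid = np.stack(np.meshgrid(*([rng] * d), indexing='ij'),
                axis=-1).reshape(-1, d)
# left-hand side: lattice sum of Gaussians times phase factors
lhs = np.sum(np.exp(-np.sum(grid**2, -1) * s) * np.exp(1j * grid @ tau))
# right-hand side: sum of Fourier transforms at shifted frequencies
xi = tau + 2 * np.pi * grid
rhs = np.sum((np.pi / s)**(d / 2) * np.exp(-np.sum(xi**2, -1) / (4 * s)))
print(abs(lhs - rhs))   # ~ machine precision up to truncation error
\end{verbatim}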
\subsection{Kernels and their symbols}\label{SS:kernels}
\subsubsection{Homogeneous kernels}\label{SSS:Homog}
Later on, we will consider a rather restricted class of strongly singular integral operators on $\R^{d}$ that are convolutions with
kernel functions of the form
\begin{equation}
\label{E:Kernels}
K(x) = p(x)\,|x|^{-d-2} \quad \mbox{where $p$ is a homogeneous polynomial of degree $2$}\,.
\end{equation}
But first we recall some well-known general properties of homogeneous functions and distributions that can be found, for example, in \cite[Chap. III]{Gelfand-Shilov1_1964}.
Let $K$ be a function on $\R^{d}$, positively homogeneous of degree $-d$ and smooth outside of the origin.
For a given $\epsilon>0$, one can define a distribution $K_{\epsilon}\in{\mathscr S}'(\R^{d})$ that coincides with $K$ on $\R^{d}\setminus\{0\}$ by its action on a test function $\phi$ as
\begin{equation}
\label{E:Kepsphi}
\langle K_{\epsilon}, \phi \rangle = \int_{|x|<\epsilon}K(x)(\phi(x)-\phi(0))dx
+ \int_{|x|>\epsilon} K(x)\phi(x)\,dx\,.
\end{equation}
This is independent of $\epsilon$ if and only if $K$ satisfies the cancellation condition on the unit sphere $\SS^{d-1}$
\begin{equation}
\label{E:Kvanish}
\int_{\SS^{d-1}}K\, ds=0\,.
\end{equation}
In this case, we denote the distribution simply by $K$, and we can take the limit
$\epsilon\to0$, thus we get the Cauchy principal value.
\begin{equation}
\label{E:pvK}
\langle K, \phi \rangle =
\mathop{\rm p.v.}\!\!\int K(x)\phi(x)\, dx =
\lim_{\epsilon\to0}\int_{|x|>\epsilon} K(x)\phi(x)\, dx\,.
\end{equation}
Another consequence of the cancellation condition \eqref{E:Kvanish} is that the Fourier transform $\hat K$ of the homogeneous distribution $K$ is a bounded function homogeneous of degree $0$, smooth outside of the origin and also satisfying the cancellation condition. The operator $A$ of convolution with $K$ is therefore bounded in $L^{2}(\R^{d})$.
Note that in the absence of condition \eqref{E:Kvanish}, $\hat K_{\epsilon}$ would have a logarithmic singularity at $0$.
The operator $A$ is diagonalized by Fourier transformation:
\begin{equation}
\label{E:FMul}
{\mathscr F} Au = \hat K \,\hat u\,.
\end{equation}
Therefore in $L^{2}(\R^{d})$, we can obtain information about the spectrum $\mathop{\mathrm{Sp}}\nolimits(A)$ and about the numerical range $W(A)$ from the corresponding easily checked information about the operator of multiplication by the symbol $\hat K$.
We recall that the numerical range of $A$ is defined by
$$
W(A) = \{(u,Au) \mid \|u\|=1\}\,,
$$
where $(\cdot,\cdot)$ denotes the Hilbert space inner product. It is convex by the Toeplitz-Hausdorff theorem and it contains the spectrum of $A$.
Denote by $\operatorname{\rm im}(\hat K)=\hat K(\R^{d})$ the image (range) of $\hat K$. This is a compact set.
We note a first result implied by the unitary equivalence \eqref{E:FMul} with the multiplication operator.
\begin{lemma}
\label{L:Sp+W(A)}
The spectrum $\mathop{\mathrm{Sp}}\nolimits(A)$ is the image $\operatorname{\rm im}(\hat K)$, and the closure $\overline{W(A)}$ of the numerical range of $A$ is the convex hull of $\,\operatorname{\rm im}(\hat K)$.
\end{lemma}
It is well known (and easy to prove) that the numerical range allows estimates for the operator norm of the resolvent: For any
$\lambda\in\C\setminus\overline{W(A)}$,
$$
\|(\lambda\Id-A)^{-1}\| \le \mathrm{dist}(\lambda,W(A))^{-1}\,.
$$
It is also monotone with respect to inclusions of subspaces, a property not shared by the spectrum. Given an open set $\Omega\subset\R^{d}$, we denote by $A_{\Omega}$ the restriction of the convolution operator $A$ to $L^{2}(\Omega)$ and consider the strongly singular integral equation $(\lambda \Id -A_{\Omega})u=f$, or in detail
\begin{equation}
\label{E:SIE}
\lambda u(x) - \mathop{\rm p.v.}\!\! \int_{\Omega}K(x-y) u(y)\,dy = f(x)\quad (x\in\Omega).
\end{equation}
From the definition of the numerical range follows immediately the inclusion
$W(A_{\Omega})\subset W(A)$.
We can summarize this discussion:
\begin{proposition}
\label{P:W(AOmega)}
Let $\cC\subset\C$ be a closed convex set such that
$\hat K(\xi)\in \cC$ for all $\xi\in \SS^{d-1}$. Then for all $\lambda\not\in\cC$ and any $f\in L^{2}(\Omega)$, the integral equation \eqref{E:SIE} has a unique solution $u\in L^{2}(\Omega)$, and there is a resolvent estimate in the $L^{2}(\Omega)$ norm
\begin{equation}
\label{E:resAOmega}
\|(\lambda\Id-A_{\Omega})^{-1}\| \le \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
\end{proposition}
\begin{remark}
\label{R:StabGal}
The same argument implies stability for any Galerkin method: Let $X_{h}$ be any closed subspace of $L^{2}(\Omega)$, and let $A_{h}:X_{h}\to X_{h}$ be the operator defined by restricting the sesquilinear form $(u,Av)$ to $X_{h}\times X_{h}$. Then the statement of Proposition~\ref{P:W(AOmega)} remains true if we replace $A_{\Omega}$ by $A_{h}$.
\end{remark}
\begin{remark}
\label{R:WAOmega=WA}
Whereas there is, in general, no simple relation between the spectra $\mathop{\mathrm{Sp}}\nolimits(A_{\Omega})$ and $\mathop{\mathrm{Sp}}\nolimits(A)$, for the numerical ranges of our convolution operators with homogeneous kernels we not only have the inclusion
$W(A_{\Omega})\subset W(A)$, but also the converse. Namely there holds
\begin{equation}
\label{E:WAOmega=WA}
\overline{W(A_{\Omega})} = \overline{W(A)}\quad\mbox{ for any open subset $\Omega\subset\R^{d}$.}
\end{equation}
\begin{proof}
The set of Rayleigh quotients $\frac{(u,Au)}{(u,u)}$, where
$u\in L^{2}(\R^{d})\setminus\{0\}$ has compact support, is a dense subset of $W(A)$.
We show that it is a subset of $W(A_{\Omega})$: Indeed,
let $u$ be such a function and let $\rho>0$ and $a\in\R^{d}$ be chosen such that the support of the function
$u_{\rho,a}$ defined by
$u_{\rho,a}(x)=u(\rho x +a)$ is contained in $\Omega$. Then
$$
\frac{(u,Au)}{(u,u)}=\frac{(u_{\rho,a},Au_{\rho,a})}{(u_{\rho,a},u_{\rho,a})}\in W(A_{\Omega}).
$$
\end{proof}
\end{remark}
\subsubsection{Special kernels}\label{SSS:Specific}
For $d=1$, there is essentially only one non-trivial kernel homogeneous of degree $-d$, namely $K(x)=\frac1x$.
In $\R^{d}$ for $d\ge2$, while some of the following analysis would be possible for more general homogeneous kernels, we focus now on the situation \eqref{E:Kernels}.
This means that from now on, we fix a strongly singular kernel $K$ and a homogeneous polynomial $p$ of degree $2$ with $K(x)=p(x)|x|^{-d-2}$, satisfying \eqref{E:Kvanish}, considered as a distribution on $\R^{d}$ according to \eqref{E:pvK}, and we denote by $\hat K$ its Fourier transform.
\begin{lemma}
\label{L:Khat}
Let $K$ have the form \eqref{E:Kernels} and satisfy \eqref{E:Kvanish}. Then
\begin{equation}
\label{E:FKhat}
\hat K(\xi) = -\nu_{d} \frac{p(\xi)}{|\xi|^{2}}, \quad\mbox{ where }
\nu_{d} = \frac{2\pi^{\frac d2}}{d\,\Gamma(\frac d2)}
\mbox{ is the volume of the unit ball in }\R^{d}\,.
\end{equation}
\end{lemma}
\begin{proof}
We first compute the Fourier transform of $p(x)e^{-|x|^{2}s}$, using \eqref{E:FGauss}
$$
{\mathscr F}_{x\mapsto\xi}[p(x)e^{-|x|^{2}s}] =
(\frac\pi s)^{\frac d2} p(-i\partial_{\xi}) e^{-\frac{|\xi|^{2}}{4s}} \,.
$$
The evaluation of these derivatives leads to the following simple result, as we will show:
\begin{equation}
\label{E:Fpexp}
{\mathscr F}_{x\mapsto\xi}[p(x)e^{-|x|^{2}s}] = -(\frac\pi s)^{\frac d2}\frac1{4s^{2}} p(\xi) e^{-\frac{|\xi|^{2}}{4s}}\,.
\end{equation}
For $j,k\in\{1,\dots,d\}$ with $j\ne k$, let
\begin{equation}
\label{E:ajk,bjk}
a_{jk}(x)=x_{j}^{2}-x_{k}^{2}\,,\;\quad b_{jk}(x)=x_{j}x_{k}\,.
\end{equation}
Any homogeneous polynomial of degree $2$ satisfying the cancellation condition $\int_{\SS^{d-1}}p=0$ is a linear combination of the $a_{jk}$ and $b_{jk}$, so we need to verify \eqref{E:Fpexp} only for these.
Note that $\partial_{\xi_{j}}e^{-\frac{|\xi|^{2}}{4s}}
= -\frac1{2s}\xi_{j} e^{-\frac{|\xi|^{2}}{4s}}$ and
$\partial_{\xi_{j}}^{2}e^{-\frac{|\xi|^{2}}{4s}}
= \big(-\frac1{2s}+ \frac1{4s^{2}}\xi_{j}^{2}\big) e^{-\frac{|\xi|^{2}}{4s}}$.
Then for $p=a_{jk}$, we see
$$
(\partial_{\xi_{k}}^{2}-\partial_{\xi_{j}}^{2}) e^{-\frac{|\xi|^{2}}{4s}} =
\frac1{4s^{2}}(\xi_{k}^{2}-\xi_{j}^{2}) e^{-\frac{|\xi|^{2}}{4s}}\,,
$$
and for $p=b_{jk}$, we see
$$
\partial_{\xi_{j}}\partial_{\xi_{k}} e^{-\frac{|\xi|^{2}}{4s}} =
\frac1{4s^{2}}\xi_{k}\xi_{j} e^{-\frac{|\xi|^{2}}{4s}}\,.
$$
Thus in both cases we have
\begin{equation}
\label{E:p(dxi)Gauss}
p(\partial_{\xi}) e^{-\frac{|\xi|^{2}}{4s}} =
\frac1{4s^{2}} p(\xi) e^{-\frac{|\xi|^{2}}{4s}}\,,
\end{equation}
and \eqref{E:Fpexp} is proved.
Now we use the definition of the Gamma function
$$
\Gamma(a) = \int_{0}^{\infty}t^{a}e^{-t}\tfrac{dt}{t}
= |x|^{2a}\int_{0}^{\infty} s^{a}\,e^{-|x|^{2}s}\tfrac{ds}{s}
$$
to write the kernel as an integral over Gaussians
\begin{equation}
\label{E:K=Gauss}
K(x) = p(x)|x|^{-d-2} =
\frac1{\Gamma(\frac d2+1)}\int_{0}^{\infty} s^{\frac d2+1}p(x)e^{-|x|^{2}s}\tfrac{ds}{s}\,.
\end{equation}
Taking Fourier transforms and using \eqref{E:Fpexp}, we find with $u=\frac{|\xi|^{2}
}{4s}$
\begin{equation}
\label{E:KhatGauss}
-\hat K(\xi) =
\frac{\pi^{\frac d2}}{4\Gamma(\frac d2+1)}\int_{0}^{\infty}\!\!\!\! s^{-1}p(\xi)e^{-\frac{|\xi|^{2}}{4s}}\tfrac{ds}{s}
= p(\xi) |\xi|^{-2} \frac{\pi^{\frac d2}}{\frac d2\Gamma(\frac d2)}\int_{0}^{\infty}\!\!\!\! e^{-u}du
= \nu_{d} \,p(\xi) \,|\xi|^{-2}
\end{equation}
as claimed.
\end{proof}
\subsection{Delta-delta discretization}\label{SS:DDD}\
Let $N\in\N$, fix some origin $a^{N}\in\R^{d}$ and define the cubic grid of meshwidth $h=\frac1N$ by
$$
\Sigma^{N}=\{x^{N}_{m}=a^{N}+\frac{m}N \mid m\in\Z^{d}\}\,.
$$
We further define
$$
\omega^{N} = \{m\in\Z^{d} \mid x^{N}_{m}\in \Omega\}\,.
$$
Then a very simple discretization of the strongly singular integral equation
$(\lambda \Id -A_{\Omega})u=f$
\eqref{E:SIE} is the following
\begin{equation}
\label{E:DDA}
\lambda u_{m} - N^{-d}\!\!\!\!\sum_{n\in\omega^{N},m\ne n}K(x^{N}_{m}-x^{N}_{n})u_{n}= f(x^{N}_{m}) \,,\quad (m\in\omega^{N})\,,
\end{equation}
or in shorthand $(\lambda\Id - T^{N})U=F$.
The name ``delta-delta discretization'' points at the fact that this discretization formally looks like a Galerkin method for the integral equation \eqref{E:SIE} with Dirac deltas as both test and trial functions, except for the diagonal terms of $T^{N}$, where we put zero, which is natural in view of the cancellation condition \eqref{E:Kvanish}.
Our aim in this paper is to analyze the linear system \eqref{E:DDA}, in particular its stability in $\ell^{2}(\omega^{N})$, in the same way as we did above for the integral equation \eqref{E:SIE} in $L^{2}(\Omega)$, and to compare the two.
We state a general result here, which we prove in the next section. More precise results will be given below in Section~\ref{S:Examples} for some examples, in particular those mentioned in Subsection~\ref{SS:motiv}.
\begin{theorem}
\label{T:stabgeneral}
Let $K$ be a strongly singular kernel satisfying \eqref{E:Kernels} and \eqref{E:Kvanish}. Then there exists a compact convex set $\cC\subset\C$ such that for any $\lambda\in\C\setminus\cC$, any $N\in\N$ for which $\omega^{N}$ is non-empty, and for any $F\in\ell^{2}(\omega^{N})$, the system \eqref{E:DDA} has a unique solution, and there is a uniform estimate for the inverse in the $\ell^{2}(\omega^{N})$ operator norm
\begin{equation}
\label{E:resTN}
\|(\lambda\Id-T^{N})^{-1}\|_{\cL(\ell^{2}(\omega^{N}))} \le \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
Furthermore, with the strongly singular integral operator $A$ defined above in Section~\ref{SSS:Homog}, there holds the inclusion
\begin{equation}
\label{E:WAsubsetC}
W(A) \subset \cC\,.
\end{equation}
\end{theorem}
\begin{remark}
\label{R:overshoot}
Note that the inclusion $W(A) \subset \cC$ implies that for $\lambda\not\in\cC$ the singular integral equation is uniquely solvable, too, and provides the a priori estimate \eqref{E:resAOmega}, for any domain $\Omega\subset\R^{d}$. On the other hand, in order to guarantee stability for $\lambda\in\C\setminus\cC$, the inclusion may need to be strict, as we shall see in the examples, and then there may be $\lambda\in\cC\setminus W(A)$ for which the singular integral equation is well posed, but the delta-delta discretization is \emph{unstable}.
\end{remark}
\section{The discrete system}\label{S:discrete}
Let $T^{N}$ be the matrix representing the discretized integral operator in \eqref{E:DDA}:
\begin{equation}
\label{E:TN}
T^{N} = (t^{N}_{mn})_{m,n\in\omega^{N}}
\quad\mbox{ with }\quad
t^{N}_{mn} = \begin{cases}
N^{-d}\, K(x^{N}_{m}-x^{N}_{n}) &(m\ne n)\\
0 &(m=n)
\end{cases}
\,.
\end{equation}
Our aim is to bound the numerical range $W(T^{N})$ independently of $N$.
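For illustration, $T^{N}$ can be assembled directly from \eqref{E:TN}. The sketch below assumes $d=2$ and the example kernel $p(x)=x_1^{2}-x_2^{2}$, $K(x)=p(x)|x|^{-4}$ (one admissible choice satisfying \eqref{E:Kvanish}), with $\Omega$ the unit disk; the eigenvalues of the resulting matrix can then be compared with the bounds on the numerical range derived below:
\begin{verbatim}
import numpy as np

def kernel(x):                       # K(x) = (x1^2 - x2^2) / |x|^4
    r2 = np.sum(x**2, axis=-1)
    return (x[..., 0]**2 - x[..., 1]**2) / r2**2

def assemble_TN(points, N, kernel):
    M, d = points.shape
    diff = points[:, None, :] - points[None, :, :]   # x_m - x_n
    T = np.zeros((M, M))
    off = ~np.eye(M, dtype=bool)                     # zero diagonal (m = n)
    T[off] = N**(-d) * kernel(diff[off])
    return T

N = 16                               # meshwidth h = 1/N
g = (np.arange(-N, N + 1) + 0.5) / N
X, Y = np.meshgrid(g, g, indexing='ij')
pts = np.stack([X.ravel(), Y.ravel()], axis=-1)
pts = pts[np.sum(pts**2, axis=1) < 1.0]             # keep points in Omega
T = assemble_TN(pts, N, kernel)
\end{verbatim}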
\subsection{Toeplitz structure}\label{SS:Toeplitz}\
The matrix elements $t^{N}_{mn}$ of $T^{N}$ do not depend on the choice of the origin $a^{N}$, and
since we assumed that $K$ is homogeneous of degree $-d$, we have
$$
N^{-d}\, K(x^{N}_{m}-x^{N}_{n}) = K(m-n)\,,
$$
hence $T^{N}$ is a finite section of a fixed infinite Toeplitz (discrete convolution) matrix
\begin{equation}
\label{E:Tinf}
T = (t_{mn})_{m,n\in\Z^{d}}
\quad\mbox{ with }\quad
t_{mn} = \begin{cases}
K(m-n) &(m\ne n)\,,\\
0 &(m=n)\,.
\end{cases}
\end{equation}
Theorem~\ref{T:stabgeneral} will be proved if we can show that $T$ defines a bounded linear operator in $\ell^{2}(\Z^{d})$ whose numerical range $W(T)$ contains $W(A)$. We can then choose $\cC$ as the closure of $W(T)$.
We use Fourier series and the convolution theorem to diagonalize the matrix $T$ and to represent the sesquilinear form defined by the matrix $T^{N}$, compare \eqref{E:Par+conv,series}.
For $U=(u_{m})_{m\in\omega^{N}}$, we find
\begin{equation}
\label{E:formTN}
(U,T^{N}U) = (2\pi)^{-d}\int_{Q} F(\tau)\,|\tilde u(\tau)|^{2}\,d\tau\,.
\end{equation}
Here $\tilde u(\tau)=\sum_{m\in\omega^{N}}u_{m}e^{im\cdot \tau}$ and $Q=[-\pi,\pi]^{d}$. $F(\tau)$ is the symbol (characteristic function) of the Toeplitz matrix $T$:
\begin{equation}
\label{E:F(t)}
F(\tau) = \sum_{m\in\Z^{d},m\ne0}K(m)\,e^{im\cdot \tau}\,.
\end{equation}
The problem is now reduced to the study of the operator of multiplication by the function $F$ in $L^{2}(Q)$.
\begin{lemma}
\label{L:TbyF}
The operator $T:\ell^{2}(\Z^{d}) \to \ell^{2}(\Z^{d})$ is bounded if and only if $F\in L^{\infty}(Q)$.\\
The closure of $W(T)$ is the closed convex hull of the range $\operatorname{\rm im}(F)=\{F(\tau)\mid \tau\in Q\}$ and is also equal to the closure of the union
$\bigcup_{N\in\N}W(T^{N})$\,.
\end{lemma}
\begin{proof}
This is immediate from \eqref{E:formTN}.
\end{proof}
\subsection{Ewald method}\label{SS:Ewald}\
The problem that makes the statement $F\in L^{\infty}(Q)$ non trivial is that the Fourier series \eqref{E:F(t)} is not absolutely convergent. The sequence $(K(m))_{m\in\Z^{d}}$ is of order $O(|m|^{-d})$ at infinity and therefore in $\ell^{p}(\Z^{d})$ for all $p>1$, but not for $p=1$. Its membership in $\ell^{2}(\Z^{d})$ implies, for example, that the series converges in the sense of $L^{2}(Q)$.
The slow convergence of the Fourier series for $F$ also makes it unsuitable for numerical computation of bounds for $\operatorname{\rm im}(F)$.
We will use a variant of a method introduced by P.~P.~Ewald~\cite{Ewald1921} in 1921 as a tool to compute slowly converging lattice sums. It has become a routine method for the computation of periodic and quasi-periodic Green functions, with application in numerical electrodynamics and other fields where periodic structures appear. Among the many presentations of the method: Appendix A of the article \cite{Essmann-et-al_1995} or Section 2.13.3 in the book \cite{Ammari-et-al_2018}.
We use it here as a summation method for our slowly converging Fourier series. In our restricted setting it turns out to give surprisingly simple results.
The method introduces a decomposition $K=K^{F}+K^{P}$ for the coefficients and correspondingly $F=F^{F}+F^{P}$ for the Fourier series in such a way that both $K^{F}$ and the Fourier transform $\hat K^{P}$ of $K^{P}$ are exponentially decreasing at infinity, so that both the Fourier series for $F^{F}(\tau)$ and the Poisson sum (compare \eqref{E:Pois}) for $F^{P}(\tau)$ are rapidly convergent, which not only proves the boundedness of $F$, but gives also a fast numerical algorithm for its computation.
In the literature one often labels the two terms in the decomposition ``spatial'' and ``spectral'' sums, but this is not pertinent to our situation, where the lattice sum runs over the Fourier variable, and the Fourier series runs over spatial points. So we will use ``Fourier'' and ``Poisson'' sums as labels.
The idea of Ewald's method is to represent $K(x)$ by an integral over Gaussians from $0$ to $\infty$ as we did already in Section~\ref{SSS:Specific} above:
\begin{equation}
\label{E:intK}
K(x) = p(x)|x|^{-d-2} =
\frac{p(x)}{\Gamma(\frac d2+1)}\int_{0}^{\infty} s^{\frac d2}e^{-|x|^{2}s}\,ds
\end{equation}
and then to split the integral at a point $\beta^{2}>0$:
\begin{align}
\label{E:intKF}
K^{F}(x) &= \frac{p(x)}{\Gamma(\frac d2+1)}\int_{\beta^{2}}^{\infty} s^{\frac d2}e^{-|x|^{2}s}\,ds\,,\\
\label{E:intKP}
K^{P}(x) &= \frac{p(x)}{\Gamma(\frac d2+1)}\int_{0}^{\beta^{2}} s^{\frac d2}e^{-|x|^{2}s}\,ds\,.
\end{align}
We see that $K^{F}$ is simply the product of $K$ by a function exponentially decreasing at infinity
\begin{equation}
\label{E:KF=KxGamma}
K^{F}(x) = K(x) \, \frac{\Gamma(\tfrac d2+1,\beta^{2}|x|^{2})}{\Gamma(\frac d2+1)}
\end{equation}
with the (upper) incomplete Gamma function (see \cite[\S 6.5]{Abramowitz-Stegun_1964})
$$
\Gamma(a,x)=\int_{x}^{\infty}t^{a-1}e^{-t}\,dt\,.
$$
Therefore
$K^{F}(x) = O(|x|^{2}e^{-\beta^{2} |x|^{2}})$ as $|x|\to\infty$\,, and the Fourier series for $F^{F}(\tau)$
\begin{equation}
\label{E:serFF}
F^{F}(\tau) = \sum_{m\in\Z^{d},m\ne0}K^{F}(m)\,e^{im\cdot \tau}
\end{equation}
converges rapidly, implying that $F^{F}$ is an analytic function on $\R^{d}/(2\pi\Z)^{d}$.
Consequently, the Fourier series for $F^{P}(\tau)$ converges as slowly as the one for $F(\tau)$, and we use instead the Poisson summation formula \eqref{E:Pois} and write
\begin{equation}
\label{E:serFP}
F^{P}(\tau) = \sum_{n\in\Z^{d}}\hat K^{P}(\tau+2\pi n)\,.
\end{equation}
We can evaluate $\hat K^{P}$ with the formulas used for $\hat K$ in Lemma~\ref{L:Khat}. As in \eqref{E:KhatGauss} we obtain
\begin{equation}
\begin{aligned}
\label{E:KPhat=KhatxG}
\hat K^{P}(\xi) &=
\frac{-\pi^{\frac d2}}{4\Gamma(\frac d2+1)}\int_{0}^{\beta^{2}}\!\!\!\! s^{-1}p(\xi)e^{-\frac{|\xi|^{2}}{4s}}\tfrac{ds}{s}
= -p(\xi) |\xi|^{-2} \frac{\pi^{\frac d2}}{\frac d2\Gamma(\frac d2)}\int_{\frac{|\xi|^{2}}{4\beta^{2}}}^{\infty} e^{-u}du\\
&= \hat K(\xi) e^{-\frac{|\xi|^{2}}{4\beta^{2}}}\,.
\end{aligned}
\end{equation}
We thus obtain a very simple form for the Fourier transform: $\hat K^{P}$ is just the symbol of $A$ cut off at infinity. Therefore the series \eqref{E:serFP} converges absolutely and uniformly. At most one term in the sum may be discontinuous, namely when $\tau+2\pi n=0$; for $\tau\in Q$ this is the term with $n=0$.
We can summarize the result.
\begin{proposition}
\label{P:NumSym}
The symbol $F(\tau)$ of the infinite Toeplitz matrix $T$ is a bounded function given for any $\beta>0$ by the exponentially convergent sums
\begin{equation}
\label{E:Fewald}
F(\tau) =
\sum_{m\in\Z^{d},m\ne0}K(m)\tfrac{\Gamma(\tfrac d2+1,\beta^{2}|m|^{2})}{\Gamma(\frac d2+1)}e^{im\cdot \tau} +
\sum_{n\in\Z^{d}}\hat K(\tau+2\pi n)\,e^{-\frac{|\tau+2\pi n|^{2}}{4\beta^{2}}}\,.
\end{equation}
In the period cube $Q=[-\pi,\pi]^{d}$, it is $C^{\infty}$ outside of $\,0$, and it has the form
\begin{equation}
\label{E:F=F0+Khat}
F(\tau) = \hat K(\tau) + F_{0}(\tau) \quad\mbox{ where $F_{0}$ is analytic in $Q$ and $F_{0}(0)=0$}.
\end{equation}
\end{proposition}
\begin{proof}
We have proved equation \eqref{E:Fewald} above, except for one point: From Poisson's summation formula it follows that the Poisson sum \eqref{E:serFP} equals the Fourier series with coefficients $K^{P}(m)$, $m\in\Z^{d}$, including $m=0$. But in the Fourier series \eqref{E:F(t)} defining $F(\tau)$ as well as in \eqref{E:serFF} defining $F^{F}(\tau)$, we have excluded $m=0$. So we should compensate for $K^{P}(0)$, which according to \eqref{E:intKP} equals
$$
K^{P}(0) = \frac{p(0)\beta^{d}}{\Gamma(\frac d2+1)}\,.
$$
Now, since we assumed $p(x)$ to be a homogeneous polynomial of degree $2$, we have $p(0)=0$ and hence no compensation is needed.
Representing $F_{0}$ as
$$
F_{0}(\tau)=
\sum_{m\in\Z^{d},m\ne0}K(m)\tfrac{\Gamma(\tfrac d2+1,\beta^{2}|m|^{2})}{\Gamma(\frac d2+1)}e^{im\cdot \tau} +
\sum_{n\in\Z^{d},n\ne0}\hat K(\tau+2\pi n)e^{-\frac{|\tau+2\pi n|^{2}}{4\beta^{2}}}
+ \hat K(\tau)\big(e^{-\frac{|\tau|^{2}}{4\beta^{2}}}-1\big)\,,
$$
we see immediately that it is analytic. For finding $F_{0}(0)$, we can use the following observation.
\begin{lemma}
\label{L:cubicsym}
Let $S\subset\R^{d}$ be a finite set that is cubically symmetric, i.\ e.\ invariant under reflections at coordinate planes and under permutations of the coordinates, and let $p$ be a homogeneous polynomial of degree $2$ satisfying the cancellation condition $\int_{\SS^{d-1}}p=0$. Then
$$
\sum_{x\in S}p(x) = 0.
$$
\end{lemma}
This is immediately clear when $p$ is one of the $a_{jk}$ or $b_{jk}$ from \eqref{E:ajk,bjk}; since every $p$ satisfying the (spherical) cancellation condition is a linear combination of these, the claim follows in general.
For any $M\in\R$, the set $\{m\in\Z^{d}\mid |m|^{2}=M\}$ is either empty or cubically symmetric.
Therefore for $\tau=0$, the two sums in the representation of $F_{0}(\tau)$ are $0$. The last term
$$
\hat K(\tau)\big(e^{-\frac{|\tau|^{2}}{4\beta^{2}}}-1\big)
= -\nu_{d}\,p(\tau)\,\frac{e^{-\frac{|\tau|^{2}}{4\beta^{2}}}-1}{|\tau|^{2}}
$$
tends to $\nu_{d}\,p(0)/(4\beta^{2})=0$ as $\tau\to0$, and hence $F_{0}(0)=0$.
\end{proof}
Proposition~\ref{P:NumSym} implies Theorem~\ref{T:stabgeneral}, where $\cC$ is the closed convex hull of $\operatorname{\rm im}(F)$. The inclusion $W(A)\subset\cC$ is easy to see from \eqref{E:F=F0+Khat}:\\
Given $\epsilon>0$, let $\delta>0$ be such that for $|\tau|<\delta$ we have $|F_{0}(\tau)|<\epsilon$.
Since $\hat K$ is homogeneous of degree zero, it takes all of its values already on the ball $B_{\delta}(0)$ of radius $\delta$. Thus
$$
\operatorname{\rm im}(\hat K) \subset F(B_{\delta}(0)) + B_{\epsilon}(0) \subset \operatorname{\rm im}(F) + B_{\epsilon}(0)\,.
$$
Taking convex hulls shows that
$$
W(A) \subset \cC + B_{\epsilon}(0) \quad\mbox{ for all }\;\epsilon>0\,,
$$
and since $\cC$ is closed, the inclusion $W(A)\subset\cC$ follows.
\begin{remark}
\label{R:simple}
The very simple form of the Ewald representation \eqref{E:Fewald} comes from the very simple form of the Fourier transforms \eqref{E:FKhat} and \eqref{E:Fpexp}, which in turn rely on the cancellation condition \eqref{E:Kvanish}. Now for the kernel $K$ this condition is natural, because it is necessary in order to represent $K$ as a homogeneous distribution and to have a bounded Fourier transform. But for the symbol $\hat K$ it is not as natural. We can add a constant and still have a function homogeneous of degree zero, which will then not satisfy the cancellation condition. An example is $\xi_{j}\xi_{k}|\xi|^{-2}$ for all $j,k$, even for $j=k$.
On the other hand, the representation $K(x)=p(x)|x|^{-d-2}$ may not be the most natural, one may come across cases (see Example~\ref{Ex:5} below) like
$$K_{jk}(x)=\delta_{jk}|x|^{-d}-d\,x_{j}x_{k}|x|^{-d-2}\,,
$$
where for $j=k$ the two terms in the sum do not separately satisfy \eqref{E:Kvanish}. This fits into our framework, however, because
$$
K_{jk}(x)=-d\,b_{jk}(x)|x|^{-d-2}\quad\mbox{ for $j\ne k$, \quad and }\quad
K_{kk}(x)=\sum_{j=1}^{d}a_{jk}(x)|x|^{-d-2}\,.
$$
If one treats the two terms individually, one may get formulas for Fourier transforms and for the Ewald splitting that are less symmetric than what we presented above.
\end{remark}
\subsection{An integral representation}\label{SS:FInt}\ \nopagebreak
We take another look at the Ewald splitting for the numerical symbol $F(\xi)=F^{F}(\xi)+F^{P}(\xi)$ described in \eqref{E:intKF}--\eqref{E:Fewald}:
\begin{align}
\label{E:FFsumint}
F^{F}(\xi) &= \sum_{m\in\Z^{d}}
\frac{p(m)}{\Gamma(\frac d2+1)}\int_{\beta^{2}}^{\infty} s^{\frac d2}e^{-|m|^{2}s}\,ds
\,e^{im\cdot \xi} \\
\label{E:FPsumint}
F^{P}(\xi) &= \sum_{n\in\Z^{d}}
\frac{-\pi^{\frac d2}}{4\Gamma(\frac d2+1)}\int_{0}^{\beta^{2}}\!\!\!\! s^{-2}p(\xi+2\pi n)e^{-\frac{|\xi+2\pi n|^{2}}{4s}}\,ds\,.
\end{align}
These formulas are valid for any $0<\beta<\infty$. All the sums and integrals are converging absolutely here, and therefore we can interchange sums and integrals.
\begin{align}
\label{E:FFintsum}
F^{F}(\xi) &= \int_{\beta^{2}}^{\infty}\!\!\! H^{F}(\xi,s)\,ds \quad\mbox{ with }\quad
H^{F}(\xi,s) = \sum_{m\in\Z^{d}}
\frac{p(m)}{\Gamma(\frac d2+1)} s^{\frac d2}e^{-|m|^{2}s}
\,e^{im\cdot \xi}\\
\label{E:FPintsum}
F^{P}(\xi) &= \int_{0}^{\beta^{2}}\!\!\! H^{P}(\xi,s)\,ds \quad\mbox{ with }\quad
H^{P}(\xi,s) =
\sum_{n\in\Z^{d}}
\frac{-\pi^{\frac d2}p(\xi+2\pi n)}{4\Gamma(\frac d2+1)} s^{-2}e^{-\frac{|\xi+2\pi n|^{2}}{4s}}\,.
\end{align}
From the definition \eqref{E:FFintsum} of $H^{F}$ and the fact that $|m|\ge1$ in the sum, it follows without difficulty that for any $0<\gamma<1$ there exists a constant $C$ such that
\begin{equation}
\label{E:HFstoinf} |H^{F}(\xi,s)| \le C\,e^{-\gamma s} \quad\mbox{ for all }s\ge1,\; \xi\in\R^{d}.
\end{equation}
To see the behavior of $H^{P}(\xi,s)$ from \eqref{E:FPintsum}, we decompose
$$
H^{P}(\xi,s) = H_{0}(\xi,s) + H_{1}(\xi,s)
$$
with
\begin{align}
\label{E:H0}
H_{0}(\xi,s) &= -\tfrac{\pi^{\frac d2}}{4\Gamma(\frac d2+1)}
\!\!\!
\sum_{n\in\Z^{d},n\ne0} \!\!\!
p(\xi+2\pi n) s^{-2}e^{-\frac{|\xi+2\pi n|^{2}}{4s}}\,,\\
\label{E:H1}
H_{1}(\xi,s) &= -\tfrac{\pi^{\frac d2}}{4\Gamma(\frac d2+1)}
p(\xi) s^{-2}e^{-\frac{|\xi|^{2}}{4s}}\,.
\end{align}
Now we use the fact that for $\xi\in Q$ and $n\ne0$ we have $|\xi+2\pi n|\ge\pi$.
Therefore for any $\delta<\frac{\pi^{2}}{4}$ there is a constant $C$ such that
\begin{equation}
\label{E:H0at0}
|H_{0}(\xi,s)| \le C\, e^{-\frac\delta s} \quad\mbox{ for all }0<s\le1,\; \xi\in Q,
\end{equation}
and $H_{0}(\xi,s)$ is analytic in $\xi$ for all $s$.
It remains to analyze the term with $n=0$, i.e.\ $H_{1}$. It clearly vanishes for $\xi=0$, and for every $\xi\ne0$ there exist a constant $C_{\xi}$ and some $0<\gamma<\frac{|\xi|^{2}}4$ such that
\begin{equation}
\label{E:H1alls}
|H_{1}(\xi,s)| \le C_{\xi} \min\{s^{-2}, e^{-\frac\gamma s} \}
\quad\mbox{ for all } s\in(0,\infty)\,.
\end{equation}
Thus $H_{1}(\xi,s)$ is integrable over $s\in(0,\infty)$ for all $\xi$, but there is no uniform bound for $C_{\xi}$:
Considering $\sup_{s>0}s^{-2}e^{-\frac{|\xi|^{2}}{4s}}$, one sees that
$C_{\xi}=O(|\xi|^{-2})$ as $\xi\to0$.
Thus we see that $H^{F}$ is integrable as $s\to\infty$ according to \eqref{E:HFstoinf}, and $H^{P}$ is integrable as $s\to0$ according to \eqref{E:H0at0} and \eqref{E:H1alls}, but, because of Poisson's summation formula, they are in fact the same
$$
H^{F}(\xi,s) = H^{P}(\xi,s)\,,
$$
so we can use all of the above estimates for both of them.
We can summarize
\begin{proposition}
\label{P:FInt}
The symbol $F(\xi)$ has the integral representation
\begin{equation}
\label{E:F=intH}
F(\xi) = \int_{0}^{\infty} H(\xi,s)\,ds\,,
\end{equation}
where $H(\xi,s)$ is given either by the Fourier series $H^{F}$ in \eqref{E:FFintsum} or, equivalently, by the lattice sum $H^{P}$ in \eqref{E:FPintsum}.
The decomposition $F=F_{0}+\hat K$ in Proposition~\ref{P:NumSym} corresponds to the decomposition $H=H_{0}+H_{1}$ with $H_{0}$ and $H_{1}$ defined in \eqref{E:H0} and \eqref{E:H1}, and there holds
\begin{equation}
\label{E:F01Int}
F_{0}(\xi) = \int_{0}^{\infty} H_{0}(\xi,s)\,ds \qquad
\mbox{ and }\quad
\hat K(\xi) = \int_{0}^{\infty} H_{1}(\xi,s)\,ds \,.
\end{equation}
In these integrals, the functions $s\mapsto H_{0}(\xi,s)$, $s\mapsto H_{1}(\xi,s)$, and $s\mapsto H(\xi,s)$ are integrable on $(0,\infty)$ for any $\xi\in Q$, for any $\xi\in\R^{d}\setminus\{0\}$, and for any $\xi\in Q\setminus\{0\}$, respectively.
\end{proposition}
The integral representations \eqref{E:F=intH} and \eqref{E:F01Int} will be used below to get bounds for the function $F(\xi)$ from estimates for $H(\xi,s)$. The latter will be a consequence of the following observation that can be proved using Fourier representations \eqref{E:FFintsum} for $H$ and \eqref{E:Fpexp} for $H_{1}$.
\begin{lemma}
\label{L:heat}
The functions
$$
(\xi,s)\mapsto s^{-\frac d2}H_{0}(\xi,s),\quad
(\xi,s)\mapsto s^{-\frac d2}H_{1}(\xi,s),\quad
(\xi,s)\mapsto s^{-\frac d2}H(\xi,s)
$$
are solutions of the heat equation
$$
(\partial_{s}-\Delta_{\xi})u(\xi,s)=0\quad\mbox{ in }\; Q\times(0,\infty).
$$
\end{lemma}
\subsection{Matrix-valued kernels}\label{SS:matrix}\
Until now, we have considered kernel functions with values in $\C$ and integral operators acting on scalar functions. The generalization to vector-valued functions and matrix-valued kernels is simple and straightforward, and we do not find it necessary to introduce typographic distinctions for the vector-valued objects. The main difference is that in the general theory of Section~\ref{SS:kernels}, one has to use the numerical range $W(K(x))$ of the matrix $K(x)$ instead of the value $K(x)$ in statements such as Lemma~\ref{L:Sp+W(A)} and Proposition~\ref{P:W(AOmega)}. In particular
\begin{equation}
\label{E:NumranGen}
\mbox{$\overline{W(A)}$\; is the closed convex hull of }\;
\bigcup\nolimits_{\xi \in\R^{d}}W(\hat K(\xi))\,.
\end{equation}
Theorem~\ref{T:stabgeneral} remains literally true, but for the construction of the set $\cC$ one again has to use the numerical range $W(F(\tau))$ of the matrix-valued function $F$.
In Lemma~\ref{L:TbyF}, the characterization of the numerical range $W(T)$ is to be understood as follows.
\begin{lemma}
\label{L:TbyFEx5}
The closure of $W(T)$ is the closed convex hull of $\bigcup_{\tau\in Q}W(F(\tau))$ and is also equal to the closure of the union
$\bigcup_{N\in\N}W(T^{N})$.
\end{lemma}
The basic Parseval-convolution formula \eqref{E:Par+conv,series} now has to be written, instead of the scalar version \eqref{E:formTN}, as
\begin{equation}
\label{E:formTNEx5}
(U,T^{N}U) = (U,TU) = (2\pi)^{-d}\int_{Q} \overline{\tilde u(\tau)}^{\top} F(\tau)\,\tilde u(\tau)\,d\tau\,.
\end{equation}
Here $\tilde u(\tau)=\sum_{m\in\omega^{N}}u_{m}e^{im\cdot \tau}$, and $F(\tau)$ is the matrix-valued symbol of the block Toeplitz matrix
$T=\big(K(m-n)\big)_{m,n\in\Z^{d}}$:
\begin{equation}
\label{E:F(t)Ex5}
F(\tau) = \sum_{m\in\Z^{d},m\ne0}K(m)\,e^{im\cdot \tau}\,.
\end{equation}
From \eqref{E:formTNEx5} one can immediately read the properties of the numerical range stated in Lemma~\ref{L:TbyFEx5}.
In this paper, most of the kernels considered in the examples are real-valued and the matrices symmetric; in this case the integral operators are selfadjoint, and the numerical ranges are intervals in the real line.
\section{Examples}\label{S:Examples}
\newcounter{Ex}
\refstepcounter{Ex}\label{Ex:1}
\subsection{Example 1. Dimension $d=1$. Finite Hilbert transformation}\label{SS:1d}\
We start with the simplest example of a strongly singular integral equation and show that the stability of its delta-delta approximation can be completely analyzed, resulting in a kind of ideal stability theorem.
\subsubsection{The singular integral equation}\label{SS:SIE1D}
Let $a,b\in\R$ with $a<b$. On the interval $\Omega=(a,b)$ we consider the singular integral equation, abbreviated as $(\lambda\Id-A_{\Omega})u=f$,
\begin{equation}
\label{E:IE1D}
\lambda u(x) - \frac1{i\pi}\int_{\Omega}\frac{u(y)}{x-y}dy = f(x)\,,\quad x\in\Omega\,.
\end{equation}
The integral is understood in the Cauchy principal value sense.
The kernel function $K(x)=\frac1{i\pi x}$ has the Fourier transform
$$
\hat K(\xi)=\mathrm{sign}\,\xi.
$$
The operator $A$ of convolution with $K$ on $\R$ is the Hilbert transformation. It satisfies $A^{2}=\Id$, and its spectrum (in a large class of function spaces, for instance $L^{p}(\R)$ with $1<p<\infty$) is $\{-1,1\}$, consisting of two eigenvalues of infinite multiplicity.
The finite Hilbert transformation $A_{\Omega}$ and its spectral theory are also well-studied classical objects, see for example \cite{KoppelmanPincus1959}. Here the spectrum depends on the function space; for $L^{p}(\Omega)$ it is strongly dependent on $p$, but not on $\Omega$, as long as $\Omega$ is a proper subinterval of $\R$. For $p=2$ one has the following description.
\begin{lemma}
\label{L:SpAOmega}
The finite Hilbert transformation $A_{\Omega}$ is a bounded selfadjoint operator in $L^{2}(\Omega)$, unitarily equivalent to the operator of multiplication by $\sigma$ in $L^{2}(-1,1)$ with $\sigma(\xi)=\xi$.
Both the spectrum $\mathop{\mathrm{Sp}}\nolimits(A_{\Omega})$ and the closure of the numerical range $\overline{W(A_{\Omega})}$ are equal to
$\cC=[-1,1]$.
For all $\lambda\in\C\setminus\cC$ and any $f\in L^{2}(\Omega)$, the integral equation \eqref{E:IE1D} has a unique solution $u\in L^{2}(\Omega)$, and for the resolvent one has in the $L^{2}(\Omega)$ operator norm
\begin{equation}
\label{E:resAOmega1D}
\|(\lambda\Id-A_{\Omega})^{-1}\| = \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
\end{lemma}
Explicit formulas for the resolvent are known. For the infinite Hilbert transformation this is trivially obtained by algebra:
$$
(\lambda\Id - A)^{-1} =
\frac1{\lambda^{2}-1} (\lambda\Id + A)\,,
$$
and for the finite Hilbert transformation, formulas for the resolvent can be found for example in \cite{Soehngen1954} or \cite{Tricomi1957}.
\subsubsection{The discrete system}\label{SS:DA1D}
We use the notation of Section \ref{SS:DDA} with $d=1$, in particular $x^{N}_{m}=a^{N}+\frac{m}N$ and $\omega^{N} = \{m\in\Z \mid x^{N}_{m}\in \Omega\}$. The simple delta-delta discretization of our singular integral equation \eqref{E:IE1D} is
\begin{equation}
\label{E:DDA1D}
\lambda u_{m} - \frac1{i\pi N}\sum_{n\in\omega^{N},m\ne n}\frac{u_{n}}{x^{N}_{m}-x^{N}_{n}}= f(x^{N}_{m}) \,,\quad (m\in\omega^{N})\,.
\end{equation}
The system matrix $T^{N}$ with matrix elements $\frac1{i\pi N}\frac1{x^{N}_{m}-x^{N}_{n}}$ ($m,n\in\omega^{N}$) is a finite section of the infinite Toeplitz matrix
$$
T = \big( \frac1{i\pi(m-n)} \big)_{m,n\in\Z}\quad
\mbox{ with zero on the diagonal.}
$$
The symbol $F(\tau)$ is now given by the Fourier series
\begin{equation}
\label{E:F1D}
F(\tau) = \sum_{m\in\Z,m\ne0}\frac{e^{im\tau}}{i\pi m}
= \sum_{m=1}^{\infty} \frac{2\sin m\tau}{\pi m}\,,\quad \tau\in Q=[-\pi,\pi]\,.
\end{equation}
This series converges for all $\tau\in Q$ to the well-known sawtooth function
\begin{equation}
\label{E:sawtooth}
F(\tau) = \mathrm{sign} \,\tau - \frac \tau\pi\quad (\tau\ne0)\,,\quad F(0)=0\,.
\end{equation}
The range of this function is the interval $(-1,1)$.
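As an aside, the identity \eqref{E:sawtooth} is easy to check numerically. The following Python lines are a minimal sketch of ours (not part of the original computations); the sample points and the truncation order are our choices.
\begin{verbatim}
import numpy as np

# Compare partial sums of the Fourier series (E:F1D) with the
# sawtooth closed form sign(tau) - tau/pi from (E:sawtooth).
m = np.arange(1, 100001)                 # truncation order M = 10^5
for tau in (0.5, 1.0, -2.0, 3.0):
    partial = (2.0 / np.pi) * np.sum(np.sin(m * tau) / m)
    exact = np.sign(tau) - tau / np.pi
    print(tau, partial, exact)
\end{verbatim}
Away from the jump at $\tau=0$ the partial sums agree with \eqref{E:sawtooth} to several digits; near the jump the convergence is only pointwise, with the usual Gibbs oscillations.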
Properties of the matrix $T$ follow immediately from this symbol $F$ and can be summarized as follows.
\begin{lemma}
\label{L:SpT}
The infinite Toeplitz matrix $T$ defines a bounded selfadjoint operator in $\ell^{2}(\Z)$, unitarily equivalent to the operator of multiplication by $F$ in $L^{2}(-\pi,\pi)$ with $F$ given in \eqref{E:sawtooth}.
Both the spectrum $\mathop{\mathrm{Sp}}\nolimits(T)$ and the closure of the numerical range $\overline{W(T)}$ are equal to
$\cC=[-1,1]$.
For all $\lambda\in\C\setminus\cC$ the operator $\lambda\Id-T$ is invertible in $\ell^{2}(\Z)$, and for the resolvent one has in the $\ell^{2}(\Z)$ operator norm
\begin{equation}
\label{E:resT1D}
\|(\lambda\Id-T)^{-1}\| = \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
\end{lemma}
\begin{corollary}
\label{C:StabTN1D}
The matrix $T^{N}$ of the system \eqref{E:DDA1D} is selfadjoint with its eigenvalues in $\cC=[-1,1]$. For $\lambda\in\C\setminus\cC$, there is a uniform resolvent estimate in the $\ell^{2}$ operator norm
\begin{equation}
\label{E:resTN1D}
\|(\lambda\Id-T^{N})^{-1}\| \le \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
\end{corollary}
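As a quick illustration (a sketch of ours, not part of the original computations), the spectrum of a moderate finite section already fills most of $\cC=[-1,1]$:
\begin{verbatim}
import numpy as np

# Eigenvalues of a finite section T^N of the Toeplitz matrix
# T = (1/(i pi (m-n))) with zero diagonal; they lie in (-1, 1).
n = np.arange(200)
D = n[:, None] - n[None, :]                    # differences m - n
with np.errstate(divide="ignore", invalid="ignore"):
    T = np.where(D != 0, 1.0 / (1j * np.pi * D), 0.0)
ev = np.linalg.eigvalsh(T)                     # T is Hermitian
print(ev.min(), ev.max())                      # inside (-1, 1)
\end{verbatim}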
The converse is also true: If there is a uniform stability estimate
$$
\|(\lambda\Id-T^{N})^{-1}\| \le C \quad\mbox{ for all $N$ },
$$
then one also has (by a standard Galerkin argument)
$\|(\lambda\Id-T)^{-1}\|\le C$, hence $\mathrm{dist}(\lambda,\cC)\ge\frac1C$ and $\lambda\not\in\cC$.
Combining this with Lemma~\ref{L:SpAOmega}, we obtain the following description of the stability result for our delta-delta discretization of the finite Hilbert transformation.
\begin{theorem}
\label{T:Stab1D}
For $\lambda\in\C$ the following are equivalent:\\
(i) The singular integral equation \eqref{E:IE1D} has a unique solution $u\in L^{2}(\Omega)$ for any $f\in L^{2}(\Omega)$.\\
(ii) The discretization method \eqref{E:DDA1D} is stable in the $\ell^{2}$ norm.\\
(iii) $\lambda\not\in\cC$, where $\cC=[-1,1]$.\\
For such $\lambda$, there is an estimate for the operator norms
\begin{equation}
\label{E:Res1D}
\|(\lambda\Id-T^{N})^{-1}\|_{\cL(\ell^{2}(\omega^{N}))} \le \|(\lambda\Id-A_{\Omega})^{-1}\|_{\cL(L^{2}(\Omega))}\,.
\end{equation}
\end{theorem}
\refstepcounter{Ex}\label{Ex:2}
\subsection{Example 2. Dimension $d=2$, kernel $x_{1}x_{2}|x|^{-4}$}\label{SS:2db12}\
We consider now the simplest higher-dimensional example where in the notation of Section~\ref{SSS:Specific} $d=2$ and $p(x)=-\frac1\pi b_{12}(x)$, see \eqref{E:ajk,bjk}.
We show that the stability of its delta-delta approximation follows a pattern similar to that of the previous one-dimensional example, although the proof is non-trivial.
\subsubsection{The singular integral equation}
The kernel and its Fourier transform are
\begin{equation}
\label{E:KandKhatforb}
K(x) = -\frac{x_{1}x_{2}}{\pi|x|^{4}}\,,\qquad
\hat K(\xi) = \frac{\xi_{1}\xi_{2}}{|\xi|^{2}}\,.
\end{equation}
For $\Omega\subset\R^{2}$, we consider the singular integral equation
$(\lambda \Id -A_{\Omega})u=f$ as in \eqref{E:SIE}
\begin{equation}
\label{E:IEEx2}
\lambda u(x) - \mathop{\rm p.v.}\!\! \int_{\Omega}K(x-y) u(y)\,dy = f(x).
\end{equation}
Observing that the range of the function $\hat K$ is the interval
$[-\frac12,\frac12]$, we can formulate the result of Proposition~\ref{P:W(AOmega)} as follows.
\begin{lemma}
\label{L:AOmegaEx2}
Let $\cC=[-\frac12,\frac12]$. For $\Omega=\R^{2}$, both the spectrum $\mathop{\mathrm{Sp}}\nolimits(A_{\Omega})$ and the closure of the numerical range $\overline{W(A_{\Omega})}$ in $L^{2}(\Omega)$ are equal to $\cC$.
For any open subset $\Omega\subset\R^{2}$, the closure of the numerical range
in $L^{2}(\Omega)$ satisfies $\overline{W(A_{\Omega})}\subset\cC$, and there is a resolvent estimate in the $L^{2}(\Omega)$ operator norm
\begin{equation}
\label{E:resAOmega2DEx2}
\|(\lambda\Id-A_{\Omega})^{-1}\| \le \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
\end{lemma}
\subsubsection{The discrete system}\label{SS:DA2dEx2}
Let now $\Omega$ be a bounded domain in $\R^{2}$.
In the notation of Section \ref{SS:DDD} with $d=2$, the regular grid consists of the points $x^{N}_{m}=a^{N}+\frac{m}N$, indexed by $\omega^{N} = \{m\in\Z^{2} \mid x^{N}_{m}\in \Omega\}$. The simple delta-delta discretization of our singular integral equation \eqref{E:IEEx2} is
\begin{equation}
\label{E:DDA2dEx2}
\lambda u_{m} + \frac{1}{\pi} N^{-2}
\sum_{n\in\omega^{N},m\ne n}
\frac{(x^{N}_{m,1}-x^{N}_{n,1})(x^{N}_{m,2}-x^{N}_{n,2})}{|x^{N}_{m}-x^{N}_{n}|^{4}}
u_{n}
= f(x^{N}_{m}) \,,\quad (m\in\omega^{N})\,.
\end{equation}
The system matrix $T^{N}$ is now a finite section of the infinite Toeplitz matrix
$$
T = -\frac1\pi \Big(
\frac{(m_{1}-n_{1})(m_{2}-n_{2})}{|m-n|^{4}}
\Big)_{m,n\in\Z^{2}}\quad
\mbox{ with zero on the diagonal.}
$$
Its symbol is therefore given by the Fourier series for $\tau\in Q=[-\pi,\pi]^{2}$
\begin{equation}
\label{E:FEx2}
F(\tau) = -\sum_{m\in\Z^{2},m\ne0}\frac{m_{1}m_{2}}{\pi|m|^{4}}\,e^{im\cdot \tau}
= \frac4\pi \sum_{m_{1},m_{2}=1}^{\infty}
\frac{m_{1}m_{2}}{(m_{1}^{2}+m_{2}^{2})^{2}}
\sin(m_{1}\tau_{1}) \sin(m_{2}\tau_{2})
\,.
\end{equation}
Whereas we do not know an explicit closed form expression for this function, we know from the results of Section~\ref{SS:Ewald} using Ewald's method that it is bounded and that it can be written as in equation~\eqref{E:F=F0+Khat}
\begin{equation}
\label{E:F=F0+KhatEx2}
F(\tau) = \hat K(\tau) + F_{0}(\tau) \quad
\mbox{ where $F_{0}$ is analytic in $Q$ and $F_{0}(0)=0$}.
\end{equation}
In addition, we know from \eqref{E:FEx2} that $F$ vanishes on the boundary of $Q$, hence
\begin{equation}
\label{E:F0boundary}
F_{0}(\tau) = -\hat K(\tau) \qquad (\tau\in\partial Q)\,.
\end{equation}
In the previous example, we used the explicit expression of $F(\tau)$ for finding the range of $F$. In fact, the function $F_{0}$ in that case was just the linear interpolation between the two values of the symbol $\hat K$ on $\partial Q$, which implied that the closed convex hull of $\operatorname{\rm im}(F)$ was the same as the convex hull of $\operatorname{\rm im}(\hat K)$. In the present case, we do not have a simple formula, but we can still prove that the conclusion is true.
\begin{lemma}
\label{L:im(Fex2)}
Let $F(\tau)$ be as defined in \eqref{E:FEx2}. Then for any $\tau\in Q$
\begin{equation}
\label{E:Fpos}
F(\tau) \in \cC=[-\frac12,\frac12].
\end{equation}
\end{lemma}
The proof is not obvious, although the claim is numerically evident if we compute $F$ using Ewald's method and plot its graph, see the contour plot in Figure~\ref{F:ContEx2}.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{FigExpl2_right.pdf}
\caption{Contour plot of $F(\xi)$ on the quarter square $Q_{++}$, Example \ref{Ex:2}.}
\label{F:ContEx2}
\end{figure}
Before we give the proof, let us draw the conclusion for the stability of the numerical scheme~\eqref{E:DDA2dEx2}.
\begin{corollary}
\label{C:stabEX2}
Let $\cC=[-\frac12,\frac12]$ and $\lambda\in\C\setminus\cC$. Then for any $N$ the linear system~\eqref{E:DDA2dEx2} has a unique solution, and there is a uniform resolvent estimate \begin{equation}
\label{E:ResTNex2}
\|(\lambda\Id-T^{N})^{-1}\|_{\cL(\ell^{2}(\Omega^{N}))} \le \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
\end{corollary}
\begin{proof}
We now prove Lemma~\ref{L:im(Fex2)}. For symmetry reasons, it is sufficient to prove \eqref{E:Fpos} for $\tau\in Q_{++}=(0,\pi)^{2}$.
We will show the following:
\begin{equation}
\label{E:FposF0neg}
\mbox{ For any }\xi\in Q_{++}\,,\quad
F(\xi)\ge0 \;\mbox{ and } \; F_{0}(\xi)\le0\,.
\end{equation}
This implies $0\le F(\xi)\le \hat K(\xi)\le\frac12$, hence \eqref{E:Fpos}\,.
We use the integral representations from Proposition~\ref{P:FInt}
\begin{equation}
\label{E:F0Int}
F_{0}(\xi) = \int_{0}^{\infty} H_{0}(\xi,s)ds\,,\quad
\hat K(\xi) = \int_{0}^{\infty} H_{1}(\xi,s)\,ds\,,\quad
F(\xi) = \int_{0}^{\infty} H(\xi,s)\,ds\,.
\end{equation}
According to \eqref{E:H0}, $H_{0}(\xi,s)=H(\xi,s)-H_{1}(\xi,s)$ with
\begin{equation}
\label{E:H0H1Ex2}
H_{1}(\xi,s)= \frac{\xi_{1}\xi_{2}}{4s^{2}} e^{-\frac{|\xi|^{2}}{4s}}\,,\quad
H(\xi,s)= \sum_{n\in\Z^{2}} H_{1}(\xi+2\pi n,s)\,.
\end{equation}
Let $0\le\epsilon<T$ and $\Sigma_{\epsilon}^{T}=Q_{++}\times (\epsilon,T)$.
In $\Sigma_{\epsilon}^{T}$, we want to use the maximum principle for the heat equation (see Lemma~\ref{L:heat}) for the functions
$\tilde H_{0}(\xi,s)=s^{-1}H_{0}(\xi,s)$ and $\tilde H(\xi,s)=s^{-1}H(\xi,s)$.
Since $\tilde H_{1}(\xi,s)=s^{-1}H_{1}(\xi,s)$ is continuous for $(\xi,s)\in\R^{2}\times[0,\infty)\setminus\{(0,0)\}$ and the Poisson series
$$
\tilde H_{0}(\xi,s) = \sum_{n\in\Z^{2},n\ne0} \tilde H_{1}(\xi+2\pi n,s)
$$
converges uniformly for $(\xi,s)\in\overline{\Sigma_{0}^{T}}$ for all $T>0$, we see that
$\tilde H_{0}$ is continuous in $\overline{\Sigma_{0}^{T}}$ with initial value
$\tilde H_{0}(\xi,0)=0$.
On the lateral boundary we use the Fourier representation (see \eqref{E:FFintsum})
$$
\tilde H(\xi,s) = \frac4\pi \sum_{m_{1},m_{2}=1}^{\infty}
m_{1}m_{2} e^{-|m|^{2}s}
\sin (m_{1}\xi_{1}) \sin(m_{2}\xi_{2})\,.
$$
If $\xi_{1}$ or $\xi_{2}$ is in $\{0,\pi\}$, this implies that $\tilde H=0$ and therefore
$$
\tilde H_{0}(\xi,s) = -\tilde H_{1}(\xi,s)\le0
\quad\mbox{ for }(\xi,s)\in\partial Q_{++}\times(0,T]\,.
$$
According to Lemma \ref{L:heat}, $\tilde H_{0}$ satisfies the heat equation
$(\partial_{s}-\Delta_{\xi})\tilde H_{0}=0$ in $\Sigma_{0}^{T}$.
Thus we can apply the maximum principle to $\tilde H_{0}$ and obtain
$\tilde H_{0}(\xi,s)\le0$ in $\Sigma_{0}^{T}$, hence also $H_{0}(\xi,s)\le0$.
Integrating over $s\in(0,\infty)$ yields
$$
F_{0}(\xi) \le 0 \quad\mbox{ for }\xi\in Q_{++}\,.
$$
For $\tilde H$, we cannot apply the maximum principle directly in $\Sigma_{0}^{T}$, because $\tilde H$ is not continuous at $(0,0)\in\overline{\Sigma_{0}^{T}}$, but we can apply it in
$\Sigma_{\epsilon}^{T}$ for any $0<\epsilon<T$. On the lateral boundary, $\tilde H$ vanishes as seen above, and for the initial value at $s=\epsilon$ we have
$$
\tilde H(\xi,\epsilon) = \tilde H_{0}(\xi,\epsilon) + \tilde H_{1}(\xi,\epsilon)
\ge \tilde H_{0}(\xi,\epsilon) \ge \delta(\epsilon)
$$
with $\delta(\epsilon)=\inf_{\xi\in Q_{++}}\tilde H_{0}(\xi,\epsilon)$.
Hence by the maximum principle, in $\overline{\Sigma_{\epsilon}^{T}}$ we have
$$
\tilde H(\xi,s) \ge \min\{0,\delta(\epsilon)\}\,.
$$
Now, as we have seen above, $\tilde H_{0}(\cdot,s)$ tends to $0$ uniformly as $s\to0$, hence $\delta(\epsilon)\to0$ as $\epsilon\to0$, which implies $\tilde H(\xi,s)\ge0$ for any $s>0$ and $\xi\in Q_{++}$. After integrating over $s$, we finally get $F(\xi)\ge0$ for $\xi\in Q_{++}$, and the proof of the Lemma is complete.
\end{proof}
\begin{remark}
\label{R:idealEx2}
In conclusion, for this example we find the same ``ideal'' stability estimate as in the previous one-dimensional example.
\end{remark}
\refstepcounter{Ex}\label{Ex:3}
\subsection{Example 3. Dimension $d=2$, kernel $(x_{1}^{2}-x_{2}^{2})|x|^{-4}$}\label{SS:2da12}\
We consider another two-dimensional example where in the notation of Section~\ref{SSS:Specific} $d=2$ and $p(x)=-\frac1{2\pi} a_{12}(x)$, see \eqref{E:ajk,bjk}.
We show that the complement of the stability zone in this case is strictly larger than the image of the symbol of the integral operator.
\subsubsection{The singular integral equation}\label{SSS:SIOEx3}
We use the same notation for analogous objects as in the preceding example. Therefore in this section, the letters $K$, $\hat K$, $T$ etc.\ are redefined to have new meanings.
The kernel and its Fourier transform are now
\begin{equation}
\label{E:KandKhatfora}
K(x) = \frac{x_{2}^{2}-x_{1}^{2}}{2\pi|x|^{4}}\,,\qquad
\hat K(\xi) = \frac{\xi_{1}^{2}-\xi_{2}^{2}}{2|\xi|^{2}}
= \frac{\xi_{1}^{2}}{|\xi|^{2}} - \frac12\,.
\end{equation}
The normalization is chosen so that the range of the function $\hat K$ is again the interval
$[-\frac12,\frac12]$.
In fact, this kernel is the same as in the previous example \eqref{E:KandKhatforb} after a $45^{\circ}$ rotation of the coordinate system.
Therefore if we write the singular integral equation as in \eqref{E:IEEx2}, we can copy verbatim the statement of the previous example concerning the numerical range of the integral operator $A_{\Omega}$ (see Lemma~\ref{L:AOmegaEx2}) and the resolvent estimate \eqref{E:resAOmega2DEx2}.
\begin{lemma}
\label{L:AOmegaEx3}
Lemma \ref{L:AOmegaEx2} is true for the singular integral equation~\eqref{E:IEEx2} defined with the kernel~\eqref{E:KandKhatfora}.
\end{lemma}
\subsubsection{The discrete system}\label{SS:DA2dEx3}
To the delta-delta discretization
\begin{equation}
\label{E:DDA2dEx3}
\lambda u_{m} - N^{-2}\sum_{n\in\omega^{N},m\ne n} K(x^{N}_{m}-x^{N}_{n}) u_{n}
= f(x^{N}_{m}) \quad (m\in\omega^{N})
\end{equation}
corresponds the finite section $T^{N}$ of the infinite Toeplitz matrix
$$
T = \frac1{2\pi} \Big(
\frac{(m_{2}-n_{2})^{2}-(m_{1}-n_{1})^{2}}{|m-n|^{4}}
\Big)_{m,n\in\Z^{2}}\quad
\mbox{ with zero on the diagonal.}
$$
The numerical symbol (symbol of $T$) is now defined as
\begin{equation}
\label{E:FEx3}
F(\tau) = \sum_{m\in\Z^{2},m\ne0}\frac{m_{2}^{2}-m_{1}^{2}}{2\pi|m|^{4}}\,e^{im\cdot \tau}
\,.
\end{equation}
\begin{lemma}
\label{L:im(Fex3)}
Let $$\Lambda_{0}=\dfrac{\Gamma(\frac14)^{4}}{32\pi^{2}}=0.5471...\,.$$
Let $F(\tau)$ be as defined in \eqref{E:FEx3}.
Then there exists $\Lambda_{+}\ge\Lambda_{0}$ such that
$F(Q)=\cC=[-\Lambda_{+},\Lambda_{+}]$.
\end{lemma}
\begin{conjecture}
\label{C:Lambda+=Lambda0}
Numerical evidence suggests equality
\begin{equation}
\label{E:Lambda+=Lambda0}
\Lambda_{+}=\Lambda_{0}\,.
\end{equation}
\end{conjecture}
\begin{proof}
The function $F$ is odd with respect to permutation of $\xi_{1}$ and $\xi_{2}$. The decomposition $F=F_{0}+\hat K$ with $F_{0}$ continuous on $Q$ implies that $F$ takes its maximum $\Lambda_{+}$ on $Q$. Therefore its image $F(Q)$ is a closed symmetric interval
$\cC=[-\Lambda_{+},\Lambda_{+}]$. We are going to show that
\begin{equation}
\label{E:F(pi)=Lambda0}
F(\pi,0) = \Lambda_{0}\,.
\end{equation}
The conjecture \eqref{E:Lambda+=Lambda0} then corresponds to the claim that $F$ attains its maximum on $Q$ in the point $\tau=(\pi,0)$.
To prove \eqref{E:F(pi)=Lambda0}, we first transform the slowly converging double Fourier series
\begin{equation}
\label{E:FLambda0}
F(\pi,0) =
\sum_{m\in\Z^{2},m\ne0}(-1)^{m_{1}}\frac{m_{2}^{2}-m_{1}^{2}}{2\pi(m_{1}^{2}+m_{2}^{2})^{2}}
\end{equation}
into a rapidly convergent single series. One way to get this is to start with the Poisson summation formula applied to the function $f(x)=(x-iy)^{-1}$ whose Fourier transform is
$\hat f(\xi)=2\pi i \mathds{1}_{+}(\xi)e^{-y\xi}$ for $y>0$. The result is then valid for all $y\ne0$. It can be written for $t\in[-\pi,\pi]$ as
\begin{equation}
\label{E:1Ex3}
\sum_{n\in\Z}\frac{e^{int}}{n-iy} =
i \pi \frac{e^{y\sigma(t)}}{\sinh(\pi y)}
\quad \mbox{ with }
\sigma(t) = -t + \pi\,\mathrm{sign}\, t\,.
\end{equation}
Remark: Euler's formula \eqref{E:sawtooth} is a simple consequence of this.\\
Taking the derivative with respect to $y$ and subtracting the formulas for $y$ and $-y$ leads to
\begin{equation}
\label{E:sum1Ex3}
\sum_{n\in\Z}\frac{n^{2}-y^{2}}{(n^{2}+y^{2})^{2}}e^{int} =
\pi \,
\frac{\sigma(t)\sinh(\sigma(t)y)\sinh(\pi y)-\pi\cosh(\sigma(t)y)\cosh(\pi y)}{\sinh^{2}\pi y} .
\end{equation}
This can be used to reduce the double Fourier series for $F(\xi)$ to a single rapidly convergent Fourier series. We are here only interested in the limit $t\to0$:
\begin{equation}
\label{E:sum0Ex3}
\sum_{n\in\Z}\frac{n^{2}-y^{2}}{(n^{2}+y^{2})^{2}} =
\frac{-\pi^{2}}{\sinh^{2}\pi y} \,.
\end{equation}
Hence, by decomposing the double sum $\sum_{m\in\Z^{2}\setminus\{0\}}$ as
$\sum_{m_{1}=0,m_{2}\in\Z\setminus\{0\}}+\sum_{m_{1}\in\Z\setminus\{0\},m_{2}\in\Z}$ and using
$\sum_{n=1}^{\infty}\frac1{n^{2}}=\frac{\pi^{2}}6$ (which can also be obtained from \eqref{E:sum0Ex3} by looking at the pole at $y=0$)
we finally get
\begin{equation}
\label{E:Lambda0}
\Lambda_{0}=
\sum_{m\in\Z^{2},m\ne0}(-1)^{m_{1}}\frac{m_{2}^{2}-m_{1}^{2}}{2\pi(m_{1}^{2}+m_{2}^{2})^{2}}
= \frac\pi6 - \sum_{n=1}^{\infty}\frac{(-1)^{n}\pi}{\sinh^{2}\pi n}\,.
\end{equation}
This series converges rapidly, with $5$ terms giving $15$ significant digits:
$\Lambda_{0}=0.547109903806619...$
The series is covered by the formulas for $\mathrm{IX}_{s}$ with $s=2$ in \cite{Zucker1979}.
The explicit expression for $\Lambda_{0}$ given in the Lemma can be deduced from this.
\end{proof}
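The rapid convergence is easily reproduced. The following Python lines are a sketch of ours (for illustration only) evaluating the series \eqref{E:Lambda0} with five terms and comparing it with the closed form for $\Lambda_{0}$ stated in the Lemma.
\begin{verbatim}
from math import pi, sinh, gamma

# Five terms of the rapidly convergent series (E:Lambda0) ...
series = pi / 6 - sum((-1) ** n * pi / sinh(pi * n) ** 2
                      for n in range(1, 6))
# ... against the closed form Gamma(1/4)^4 / (32 pi^2).
closed = gamma(0.25) ** 4 / (32 * pi ** 2)
print(series, closed)   # both 0.547109903806619...
\end{verbatim}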
\begin{remark}
\label{R:confrad}
The conformal radius (or logarithmic capacity) of the unit square is known to be \cite[Tables]{PolyaSzego1951}
$$
R_{\Box}= \frac{\Gamma(\frac14)^{2}}{4\pi^{\frac32}}\,.
$$
This implies the remarkable relation
\begin{equation}
\label{E:ConfArea}
\pi R_{\Box}^{2}= 2 \Lambda_{0} \,.
\end{equation}
\end{remark}
The conjecture that $\Lambda_{+}=\Lambda_{0}$ is clearly supported by numerical evidence. Here are the results of two different approaches for the approximation of $\Lambda_{+}$:
In Table~\ref{T:supF} we approximate the numerical symbol $F$ from \eqref{E:FEx3} using the Ewald method \eqref{E:Fewald} from Proposition~\ref{P:NumSym}.
\begin{equation}
\label{E:FewaldEx3}
F(\tau) \approx
\sum_{|m_{1}|,|m_{2}|\le M,m\ne0}K(m)\Gamma(2,\pi|m|^{2})e^{im\cdot \tau} +
\sum_{|n_{1}|,|n_{2}|\le M}\hat K(\tau+2\pi n)\,e^{-\frac{|\tau+2\pi n|^{2}}{4\pi}}\,.
\end{equation}
We take the maximum of $F(\tau)$ over a regular $N\times N$ grid discretizing the period square $Q=[-\pi,\pi]^{2}$. Results are shown for $N=1001$, so that the point $(\pi,0)$ is included. One sees the rapid convergence of the sums in the Ewald method.
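For the reader who wishes to reproduce this computation, the following Python sketch is our own implementation of \eqref{E:FewaldEx3}; it assumes the choice $\beta^{2}=\pi$ made there and uses the elementary identity $\Gamma(2,x)=(1+x)e^{-x}$ for the upper incomplete Gamma function.
\begin{verbatim}
import numpy as np

# Ewald evaluation (E:FewaldEx3) of the numerical symbol F of
# Example 3 on an N x N grid of Q = [-pi,pi]^2, with beta^2 = pi.
M, N = 4, 1001
t = np.linspace(-np.pi, np.pi, N)
T1, T2 = np.meshgrid(t, t, indexing="ij")
F = np.zeros_like(T1)
for m1 in range(-M, M + 1):
    for m2 in range(-M, M + 1):
        r2 = float(m1 * m1 + m2 * m2)
        if r2 > 0:   # Fourier sum, m != 0; Gamma(2,x) = (1+x)e^{-x}
            K = (m2 * m2 - m1 * m1) / (2 * np.pi * r2 * r2)
            F += (K * (1 + np.pi * r2) * np.exp(-np.pi * r2)
                  * np.cos(m1 * T1 + m2 * T2))
        # Poisson sum over all n; the n = 0 term is set to 0 at tau = 0
        x1, x2 = T1 + 2 * np.pi * m1, T2 + 2 * np.pi * m2
        s2 = x1 * x1 + x2 * x2
        with np.errstate(invalid="ignore", divide="ignore"):
            Khat = np.where(s2 > 0, (x1 * x1 - x2 * x2) / (2 * s2), 0.0)
        F += Khat * np.exp(-s2 / (4 * np.pi))
print(F.max())   # for M >= 3 this matches Lambda_0 to machine precision
\end{verbatim}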
\begin{table}[htbp]
\centering
\begin{tabular}{@{} lcr @{}}
\toprule
$M$ & Maximum & diff with $\Lambda_{0}$\\
\midrule
1 & 0.5466820485568409 & -0.00043 \\
2 & 0.5471099022284376 & -1.578e-9\\
3 & 0.5471099038066192 & 1.11e-16\\
4 & 0.5471099038066192 & 1.11e-16\\
\bottomrule
\end{tabular}\vglue1ex
\caption{Computation of $\Lambda_{+}$}
\label{T:supF}
\end{table}
In Table~\ref{T:maxEv} we show the maximum eigenvalue of the matrix $T^{N}$ where $\Omega$ is the unit square, together with an extrapolated value and its difference with $\Lambda_{0}$.
\begin{table}[htbp]
\centering
\begin{tabular}{@{} lccr @{}}
\toprule
$N$ & $\lambda_{\max}(T^{N})$ & extrap. & diff with $\Lambda_{0}$\\
\midrule
16 & 0.541802946417726 & &\\
24 & 0.544571778645890 & & \\
36 & 0.545922219922679 & 0.547207966733364 & 9.81e-5\\
54 & 0.546562896841136 & 0.547141211191569 & 3.13e-5\\
81 & 0.546860792009930 & 0.547119678405314 & 9.77e-6\\
\bottomrule
\end{tabular}\vglue1ex
\caption{Computation of $\max(\mathop{\mathrm{Sp}}\nolimits(T^{N}))$, Example \ref{Ex:3}}
\label{T:maxEv}
\end{table}
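Since $t^{N}_{mn}=K(m-n)$ depends only on $m-n$, the finite sections are easy to assemble directly. The following Python sketch (ours) computes $\lambda_{\max}(T^{N})$ for the unit square; the offset $a^{N}$, and hence the exact index set $\omega^{N}$, is an assumption on our part, so the printed value need not match Table~\ref{T:maxEv} in all digits.
\begin{verbatim}
import numpy as np

# Finite section T^N = (K(m-n)) of Example 3 on the unit square;
# we place the N^2 grid points at the cell midpoints.
N = 24
idx = np.array([(m1, m2) for m1 in range(N) for m2 in range(N)])
d = idx[:, None, :] - idx[None, :, :]       # all differences m - n
r2 = (d ** 2).sum(axis=2).astype(float)
with np.errstate(divide="ignore", invalid="ignore"):
    T = (d[..., 1] ** 2 - d[..., 0] ** 2) / (2 * np.pi * r2 ** 2)
np.fill_diagonal(T, 0.0)                    # zero diagonal (m = n)
print(np.linalg.eigvalsh(T).max())          # compare the N = 24 row
\end{verbatim}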
For comparison, we show in Table~\ref{T:maxEvEx2} the analogous computations for the matrices from Example \ref{Ex:2}, where $\Lambda_{+}=0.5$.
\begin{table}[htbp]
\centering
\begin{tabular}{@{} lccr @{}}
\toprule
$N$ & $\lambda_{\max}(T^{N})$ & extrap. & diff with $0.5$\\
\midrule
16 & 0.4299869696672885 & &\\
24 & 0.4526591158216325 & & \\
36 & 0.4683227545642122 & 0.5033301483277116 & 3.33e-3\\
54 & 0.4789372344435390 & 0.5012512843991882 & 1.25e-3\\
81 & 0.4860451011088278 & 0.5004526691660266 & 4.53e-4\\
\bottomrule
\end{tabular}\vglue1ex
\caption{Computation of $\max(\mathop{\mathrm{Sp}}\nolimits(T^{N}))$, Example \ref{Ex:2}}
\label{T:maxEvEx2}
\end{table}
In the previous Example \ref{Ex:2}, we were able to prove the equation $\Lambda_{+}=0.5$ using an argument involving the maximum principle for the heat equation, see the proof of Lemma~\ref{L:im(Fex2)}, in particular \eqref{E:FposF0neg}.
While we have no proof for the equation $\Lambda_{+}=\Lambda_{0}$ here,
it is possible to use an analogous argument to obtain an upper bound for $\Lambda_{+}$. The square $Q_{++}$ (of area $\pi^{2}$) of the previous example now has to be rotated by $45^{\circ}$ and replaced by the lozenge (a square of area $2\pi^{2}$)
$$
Q_{\lozenge} = \big\{\xi\in\R^{2} \mid 0<\xi_{1}<2\pi;\; |\xi_{2}|<\min\{\xi_{1}, 2\pi-\xi_{1}\}\big\}.
$$
Then one can see again that $F(\xi)=0$ on $\partial Q_{\lozenge}$.
But now $\partial Q_{\lozenge}$ contains two points of discontinuity of $F$, the origin $(0,0)$ and the point $(2\pi,0)$. Therefore the decomposition $F=F_{0}+\hat K$ has to be refined into
$$
F(\xi) = F_{00}(\xi) + \hat K(\xi) + \hat K(\xi-(2\pi,0))\,.
$$
The function $F_{00}$ defined by this will then be continuous on the closure of $Q_{\lozenge}$. Now one can use the integral representation from Proposition~\ref{P:FInt} similarly to \eqref{E:F0Int} and use the maximum principle for the heat equation as before to conclude that
$$
\mbox{ For any }\xi\in Q_{\lozenge}\,,\quad
F(\xi)\ge0 \;\mbox{ and } \; F_{00}(\xi)\le0\,.
$$
This implies $0\le F(\xi)\le \hat K(\xi)+ \hat K(\xi-(2\pi,0))$ in $Q_{\lozenge}$, and hence by taking the maximum,
\begin{equation}
\label{E:Lambda+Ex3}
\Lambda_{+}\le 1\,.
\end{equation}
Unfortunately, this estimate is much less sharp than the estimate by $\frac12$ in the previous example.
To illustrate the behavior of the numerical symbol $F(\xi)$, we present in Figure~\ref{F:NumSymEx3} a surface graph of $F$ on the square $Q$ and a contour plot on the lozenge $Q_{\lozenge}$. The parts exceeding the range of the symbol $\hat K$ are indicated in bright red hues. The maximum at the midpoint $(\pi,0)$ of $Q_{\lozenge}$ is clearly visible.
\begin{figure}[h]
\centering
\includegraphics[width=0.495\textwidth]{FigExpl3_right_surf.png}
\includegraphics[width=0.495\textwidth]{FigExpl3_right_contour.pdf}
\caption{Numerical symbol for Example \ref{Ex:3}.
Left: $F$ on $Q$, right: $F$ on $Q_{\lozenge}$.}
\label{F:NumSymEx3}
\end{figure}
Let us summarize the stability result obtained for this example.
\begin{corollary}
\label{C:stabEX3}
Let $\cC=[-\Lambda_{+},\Lambda_{+}]$ and $\lambda\in\C\setminus\cC$. Then for any $N$ the linear system~\eqref{E:DDA2dEx3} has a unique solution, and there is a uniform resolvent estimate
\begin{equation}
\label{E:ResTNex3}
\|(\lambda\Id-T^{N})^{-1}\|_{\cL(\ell^{2}(\Omega^{N}))} \le \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
For $\lambda\in[-\Lambda_{+},-\frac12)\cup(\frac12,\Lambda_{+}]$ the integral equation
$(\lambda \Id -A_{\Omega})u=f$ with kernel \eqref{E:KandKhatfora} is well-posed in $L^{2}(\Omega)$, but the corresponding delta-delta approximation scheme \eqref{E:DDA2dEx3} is unstable.
\end{corollary}
\refstepcounter{Ex}\label{Ex:4}
\subsection{Example 4. Dimension $d=2$, kernel $(x_{1}+ix_{2})^{2}|x|^{-4}$}\label{SS:2da+ib}\
Let $d=2$ and $p(x)=-\frac1{\pi} (a_{12}(x)+2ib_{12}(x))$. The corresponding kernel and its Fourier transform are
\begin{equation}
\label{E:KandKhatforEx4}
K(x) = \frac{x_{2}^{2}-x_{1}^{2}-2ix_{1}x_{2}}{\pi|x|^{4}}\,,\qquad
\hat K(\xi) = \frac{(\xi_{1}+i\xi_{2})^{2}}{|\xi_{1}+i\xi_{2}|^{2}}.
\end{equation}
The normalization is chosen so that $|\hat K(\xi)|=1$ for $\xi\in\R^{2}$.
We include this example, which has features combining those of the two preceding examples, mainly for purposes of illustration. Because the singular integral operator and the system matrices of the corresponding delta-delta discretization in this case are non-selfadjoint, we expect to see less trivial relations between spectra and numerical ranges than in the selfadjoint case.
It is obvious from the definition \eqref{E:KandKhatforEx4} that the spectrum $\mathop{\mathrm{Sp}}\nolimits(A)$ of the operator of convolution with $K$ in $L^{2}(\R^{2})$ is the unit circle
$\{z\in\C\mid |z|=1\}$ and that its numerical range is the unit disk. Whereas we do not know the spectrum $\mathop{\mathrm{Sp}}\nolimits(A_{\Omega})$ for a bounded domain $\Omega\subset\R^{2}$, the numerical range is still the unit disk, compare \eqref{E:WAOmega=WA},
\begin{equation}
\label{E:SpW(AOmega)Ex4}
\mathop{\mathrm{Sp}}\nolimits(A_{\Omega})\subset W(A_{\Omega}) = W(A) = \{z\in\C\mid |z|\le1\}.
\end{equation}
For the system matrices $T^{N}$ of the delta-delta discretization scheme, Theorem~\ref{T:stabgeneral} and Lemma~\ref{L:TbyF} provide the following relations.
\begin{equation}
\label{E:SpW(TN)Ex4}
\mathop{\mathrm{Sp}}\nolimits(T^{N})\subset W(T^{N}) \subset \overline{W(T)} = \overline{\mathop{\rm conv}}\bigcup_{M\in\N}W(T^{M})\quad\mbox{ and }\quad
W(A) \subset \overline{W(T)}\,.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{Ex_complex,N=8.pdf}
\includegraphics[width=0.49\textwidth]{Ex_complex,N=32.pdf}
\caption{Spectrum and numerical range.
Left: $N=8$, right: $N=32$.}
\label{F:Ex4}
\end{figure}
In Figure~\ref{F:Ex4} we show for the case of a square domain $\Omega$ and two values of $N$ the spectrum $\mathop{\mathrm{Sp}}\nolimits(T^{N})$ (red points), the boundary of the numerical range $W(T^{N})$ (red line), and the unit circle, which is the boundary of $W(A)$ (green line).
We can see the inclusions from \eqref{E:SpW(TN)Ex4} between $\mathop{\mathrm{Sp}}\nolimits(T^{N})$ and $W(T^{N})$, and we can observe the asymptotic inclusion of $W(A)$ in $W(T^{N})$ as $N$ tends to infinity.
We can also see that the eigenvalues of the matrices, in contrast to the numerical range, will not fill the whole unit disk asymptotically. On the other hand, we clearly see the overshoot
$W(T)\setminus W(A_{\Omega})$, that is, the region of $\lambda$ where the volume integral equation is uniquely solvable and the operator $\lambda\Id-A_{\Omega}$ is sectorial, so that every $L^{2}$-conforming Galerkin method would converge, whereas the delta-delta scheme is unstable. It appears that the limits for the real part of this overshoot are the same (scaled by a factor $2$) as in the previous example, that is
$\pm2\Lambda_{0}$ with $\Lambda_{0}$ defined in Lemma~\ref{L:im(Fex3)}.
\refstepcounter{Ex}\label{Ex:5}
\subsection{Example 5. Dimension $d\ge2$. Volume Integral Equation for the Quasi-static Maxwell system}\label{SS:QSM}\
In the quasi-static Maxwell volume integral equation (see Section~\ref{SS:DDA}), the right hand side and the solution are $\C^{d}$-valued functions, and the singular integral operator is defined as the matrix of second distributional derivatives of the convolution with the free-space Green function $g$ for the Laplace operator, see equation \eqref{E:VIEdist}.
If we call this operator $A^{0}$, then it is not the same as the operator $A$ defined by the Cauchy principal value of the integral with the same kernel, but there is a simple relation:
Let
\begin{equation}
\label{E:KmaxQS}
K(x)= -D^{2}g(x) = \big(-\partial_{i}\partial_{j}g(x)\big)_{i,j=1,\dots,d} \quad\mbox{ for } x\ne0\,.
\end{equation}
Then
\begin{equation}
\label{E:D2VSPV}
A^{0}u(x) = -\nabla\div\int_{\R^{d}}g(x-y)u(y)dy =
\mathop{\rm p.v.}\!\!\int_{\R^{d}} K(x-y)u(y)dy + \frac1d u(x) =
(A+\frac1d\Id)u(x)\,.
\end{equation}
This is most easily seen by first using the symmetries of the kernel with respect to reflections at coordinate axes and permutations of the variables in order to deduce that the distribution kernel of $A^{0}-A$ must be a scalar multiple of the $d\times d$ identity matrix
$\Id_{d}$ times the Dirac distribution $\delta_{0}$, and then determining this multiple by taking traces:
$\mathop{\mathrm{tr}}(-D^{2}g)=-\Delta g =\delta_{0}= \mathop{\mathrm{tr}}(\frac1d \Id_{d}\delta_{0})$.
\subsubsection{The singular integral equation}\label{SSS:SIOEx5}
We consider the strongly singular integral equation, still written as
$(\lambda \Id -A_{\Omega})u=f$,
\begin{equation}
\label{E:VIOMQS}
\lambda u(x) - \mathop{\rm p.v.}\!\! \int_{\Omega}K(x-y) u(y)\,dy = f(x) \quad
\mbox{ with } K \mbox{ given in \eqref{E:KmaxQS}}\,.
\end{equation}
The function space is now $L^{2}(\Omega;\C^{d})$.
Let us note the explicit form of the kernel, valid in any dimension $d\ge2$, where we consider points in $\R^{d}$ as column vectors,
\begin{equation}
\label{E:Kexplicit}
K(x) = -\frac1{\nu_{d}} (x\,x^{\top}-\frac1d\Id_{d}|x|^{2})\,|x|^{-d-2}\,,
\quad\mbox{ with }\;
\nu_{d} = \frac{2\pi^{\frac d2}}{d\,\Gamma(\frac d2)}\,.\end{equation}
The simplest way to see this is to first look at the symbol of the operator. For this we employ $d$-dimensional Fourier transformation and use the fact that $\hat g(\xi)=|\xi|^{-2}$, hence
\begin{equation}
\label{E:SymMQS}
{\mathscr F}(-D^{2}g)(\xi) = \frac{\xi\,\xi^{\top}}{|\xi|^{2}} \quad\mbox{ and }\;
\hat K(\xi) = \frac{\xi\,\xi^{\top}-\frac1d\Id_{d}|\xi|^{2}}{|\xi|^{2}}\,.
\end{equation}
We check that $\mathop{\mathrm{tr}}\hat K=0$ and that $\hat K$ satisfies the spherical cancellation condition. Indeed, in the notation of Lemma~\ref{L:Khat}, the (matrix-valued) polynomial $p(\xi)$ is given by
\begin{equation}
\label{E:pxiMQS}
p(\xi)= -\frac1{\nu_{d}} (\xi\,\xi^{\top}-\frac1d\Id_{d}|\xi|^{2})\,.
\end{equation}
Thus the off-diagonal elements of the matrix $p(x)$ are given by
$$
-\frac1{\nu_{d}}x_{j}x_{k} = -\frac{b_{jk}(x)}{\nu_{d}},
$$
and the diagonal elements by
$$
-\frac1{\nu_{d}}(x_{k}^{2}-\frac1d|x|^{2})= \frac1{d\,\nu_{d}}\sum_{j=1}^{d}a_{jk}(x)\,,
$$
compare Remark~\ref{R:simple}.
From our formulas of Section~\ref{SSS:Homog} we find the explicit form \eqref{E:Kexplicit} for our kernel. For $d=2$, we have $\nu_{d}=\pi$, and we recognize the kernels studied in the Examples~\ref{Ex:2} and \ref{Ex:3}.
The matrix $\frac{\xi\,\xi^{\top}}{|\xi|^{2}}$ is an orthogonal projection matrix, hence its numerical range is the interval $[0,1]$. Therefore $W(\hat K(\xi))=[-\frac1d,1-\frac1d]$ for any $\xi\ne0$. We immediately get the following instance of Proposition~\ref{P:W(AOmega)}.
\begin{lemma}
\label{L:WAMQS}
Let $\cC=[-\frac1d,1-\frac1d]$. Then for all $\lambda\not\in\cC$ and any $f\in L^{2}(\Omega;\C^{d})$, the integral equation \eqref{E:VIOMQS} has a unique solution $u\in L^{2}(\Omega;\C^{d})$, and there is a resolvent estimate in the $L^{2}(\Omega;\C^{d})$ operator norm
\begin{equation}
\label{E:resAOmegaMQS}
\|(\lambda\Id-A_{\Omega})^{-1}\| \le \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
\end{lemma}
\subsubsection{The discrete system}\label{SS:DA2dEx5}
With the $d\times d$ matrix-valued kernel $K$ and vector-valued functions $u$ and $f$, we can write the delta-delta discretization $(\lambda\Id - T^{N})U=F$ of the integral equation \eqref{E:VIOMQS} in the same form as in the scalar case
\begin{equation}
\label{E:DDAEx5}
\lambda u_{m} - N^{-d}\!\!\!\!\sum_{n\in\omega^{N},m\ne n}K(x^{N}_{m}-x^{N}_{n})u_{n}= f(x^{N}_{m}) \,,\quad (m\in\omega^{N})\,,
\end{equation}
where now the system matrix $T^{N}$ is of size $d|\omega^{N}|\times d|\omega^{N}|$ and is considered as a linear operator in $\ell^{2}(\omega^{N};\C^{d})$.
We recall the discussion of matrix-valued kernels in Section~\ref{SS:matrix} above, in particular the properties of the numerical range stated in Lemma~\ref{L:TbyFEx5}.
The basic stability estimate follows.
\begin{proposition}
\label{P:W(F)Ex5}
Let $K$ be the kernel defined in \eqref{E:KmaxQS}, \eqref{E:Kexplicit}. Then there exist
$\Lambda_{-}^{(d)},\Lambda_{+}^{(d)}\in\R$ with
\begin{equation}
\label{E:Lambda+-Ex5}
\Lambda_{-}^{(d)}\le-\frac1d\,,\qquad \Lambda_{+}^{(d)}\ge 1-\frac1d
\end{equation}
such that the following holds.\\
(i)
For $\tau\in Q=[-\pi,\pi]^{d}$, $\tau\ne0$, $F(\tau)$ is a real symmetric matrix with eigenvalues contained in the interval $\cC=[\Lambda_{-}^{(d)},\Lambda_{+}^{(d)}]$,
\begin{equation}
\label{E:Lambda+-SpF}
\Lambda_{-}^{(d)} = \inf_{\tau\in Q}\min\big(\mathop{\mathrm{Sp}}\nolimits(F(\tau))\big)\,,\qquad
\Lambda_{+}^{(d)} = \sup_{\tau\in Q}\max\big(\mathop{\mathrm{Sp}}\nolimits(F(\tau))\big)\,.
\end{equation}
(ii)
For any $N$, the numerical range $W(T^{N})$ is contained in $\overline{W(T)}=\cC$.
\begin{equation}
\label{E:Lambda+-SpT}
\Lambda_{-}^{(d)} = \inf_{N\in\N}\min(\mathop{\mathrm{Sp}}\nolimits(T^{N}))\,,\qquad
\Lambda_{+}^{(d)} = \sup_{N\in\N}\max(\mathop{\mathrm{Sp}}\nolimits(T^{N}))\,.
\end{equation}
(iii)
The delta-delta scheme \eqref{E:DDAEx5} is stable if and only if $\lambda\in\C\setminus\cC$,
and one has the stability estimate in the $\ell^{2}(\omega^{N};\C^{d})$ operator norm
\begin{equation}
\label{E:ResTNex5}
\|(\lambda\Id-T^{N})^{-1}\| \le \mathrm{dist}(\lambda,\cC)^{-1}\,.
\end{equation}
\end{proposition}
\begin{proof}
The matrix $K(x)$ is symmetric for $x\ne0$, implying that $F(\tau)$ is also a symmetric matrix for $\tau\ne0$. The symmetry $K(-x)=K(x)$ implies that the matrix elements of $F(\tau)$ are real.
Therefore the numerical range of $F(\tau)$ is the interval $[\lambda_{-}(\tau),\lambda_{+}(\tau)]$, where
$$
\lambda_{-}(\tau) = \min\big(\mathop{\mathrm{Sp}}\nolimits(F(\tau))\big)\,,\qquad
\lambda_{+}(\tau) = \max\big(\mathop{\mathrm{Sp}}\nolimits(F(\tau))\big)\,.
$$
This justifies \eqref{E:Lambda+-SpF}.
All the other statements of the proposition are instances of the statements of Section~\ref{SS:DDD}, in particular Theorem~\ref{T:stabgeneral}, and their proofs in Section~\ref{S:discrete}, based on Ewald's method.
\end{proof}
What remains is to get information on the numbers $\Lambda_{\pm}^{(d)}$ and to see whether the inequalities \eqref{E:Lambda+-Ex5} are strict. In that case,
for $\lambda\in[\Lambda_{-}^{(d)},-\frac1d)\cup(1-\frac1d,\Lambda_{+}^{(d)}]$,
the integral equation is well-posed in $L^{2}(\Omega;\C^{d})$, but the delta-delta discretization scheme is unstable in $\ell^{2}(\Omega^{N};\C^{d})$.
We will discuss this for the practically relevant cases $d=2$, where we get rather precise information, and $d=3$, which is the most important case because of its relevance for the DDA method in computational electromagnetics.
\subsubsection{Dimension $d=2$}
Here the numerical symbol has the form
$$
F(\tau) =
\begin{pmatrix}
a(\tau) & b(\tau)\\
b(\tau) & -a(\tau)
\end{pmatrix}
$$
with real-valued functions $a$ and $b$. The eigenvalues are
$\lambda_{\pm}(\tau)=\pm\sqrt{a(\tau)^{2}+b(\tau)^{2}}$, implying
$\Lambda_{-}^{(2)}=-\Lambda_{+}^{(2)}$.
The functions $a$ and $b$ have been studied in the previous examples, $a$ in Example~\ref{Ex:3} and $b$ in Example~\ref{Ex:2}.
In particular, $b(\tau)=0$ for $\tau\in\partial Q$, and therefore
\begin{equation}
\label{E:lambda+=Lambda0}
\mbox{ for }\:\tau=(\pi,0),\quad \lambda_{+}(\tau) = a(\tau) = \Lambda_{0}
\end{equation}
with the number $\Lambda_{0}=0.5471...$ encountered in Example \ref{Ex:3}, Lemma~\ref{L:im(Fex3)}.
This implies
$$\Lambda_{+}^{(2)}\ge\Lambda_{0},$$
and we are in the same situation as in Example~\ref{Ex:3}:
Strong numerical evidence suggests that the function $\lambda_{+}$ attains its maximum in the point $\tau=(\pi,0)$ and therefore $\Lambda_{+}^{(2)}=\Lambda_{0}$, but we do not have a formal proof for this. The positive eigenvalue $\lambda_{+}$ is plotted in Figure~\ref{F:Ex52D}.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{FigExpl5_Max2D_surf.png}
\caption{$d=2$. Eigenvalue $\lambda_{+}$ on $Q$.}
\label{F:Ex52D}
\end{figure}
In any case, we have proved that in dimension $d=2$ for any bounded open set $\Omega\subset\R^{2}$ and any
$\lambda\in(-0.5471,-0.5)\cup(0.5,0.5471)$ the delta-delta scheme
$(\lambda\Id-T^{N})U=F$ is unstable in $\ell^{2}(\Omega^{N};\C^{2})$ as $N\to\infty$, whereas the integral equation $(\lambda\Id-A_{\Omega})u=f$ is well posed in $L^{2}(\Omega;\C^{2})$.
\subsubsection{Dimension $d=3$}
The three eigenvalues $\lambda_{j}$ of $F(\tau)$ satisfy
$\lambda_{1}+\lambda_{2}+\lambda_{3}=0$.
Numerically, one sees that the minimal and maximal values are attained on the intersection of the boundary of $Q=[-\pi,\pi]^{3}$ with the coordinate planes.
In Figure~\ref{F:3EVs} we show a graph of the three eigenvalues on the line $\{(\pi,y,0)\mid y\in[-\pi,\pi]\}$. The values $-\frac13$ and $\frac23$ are shown as dashed lines.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{3EVs3D.pdf}
\caption{$d=3$. Eigenvalues of $F(\tau)$ on middle line of face of $Q$.}
\label{F:3EVs}
\end{figure}
This suggests
$\Lambda_{-}^{(3)}=\min\mathop{\mathrm{Sp}}\nolimits(F((\pi,\pi,0)))$ and $\Lambda_{+}^{(3)}=\max\mathop{\mathrm{Sp}}\nolimits(F((\pi,0,0)))$.
The computed values are
\begin{equation}
\label{E:Lambda+-3}
\Lambda_{-}^{(3)}=-0.4260241507272727\,,
\qquad
\Lambda_{+}^{(3)}=0.7709022227747195\,.
\end{equation}
This implies a length of $W(T)$ of
$\Lambda_{+}^{(3)}-\Lambda_{-}^{(3)}=1.1969263735019922$ instead of $1$, which is the length of $W(A)$, thus an overshoot of almost $20\%$.
Under this assumption, one can write simple series expansions for the numbers $\Lambda_{\pm}^{(3)}$. If all the coordinates of $\tau$ are $0$ or $\pi$, then the off-diagonal elements of the matrix $F(\tau)$ vanish and the $3$ eigenvalues are the diagonal elements. Therefore the Fourier series for $F(\tau)$ gives
\begin{equation}
\label{E:Fpm3series}
\Lambda_{-}^{(3)} = \!\!\!\sum_{m\in\Z^{3},m\ne0} \!\!\!
\frac{(-1)^{m_{1}+m_{2}}}{4\pi} \frac{m_{1}^{2}+m_{2}^{2}-2m_{3}^{2}}{(m_{1}^{2}+m_{2}^{2}+m_{3}^{2})^{\frac52}}\,,\quad
\Lambda_{+}^{(3)} = \!\!\!\sum_{m\in\Z^{3},m\ne0} \!\!\!
\frac{(-1)^{m_{3}}}{4\pi} \frac{m_{1}^{2}+m_{2}^{2}-2m_{3}^{2}}{(m_{1}^{2}+m_{2}^{2}+m_{3}^{2})^{\frac52}}\,.
\end{equation}
These sums, although not absolutely convergent, appear to converge quite well in the sense of partial sums over cubes,
$$
\sum_{m\in\Z^{3},m\ne0} = \lim_{N\to \infty}\sum_{\max\limits_{j}|m_{j}|\le N,m\ne0} \,.
$$
We do not know whether explicit expressions for these sums exist.
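The following Python sketch (ours) evaluates these cube partial sums for a modest cube size; the printed values should approach \eqref{E:Lambda+-3} as $N$ grows, within the limits set by the merely conditional convergence.
\begin{verbatim}
import numpy as np

# Cube partial sums of (E:Fpm3series) for Lambda_-^{(3)}, Lambda_+^{(3)}.
N = 50
r = np.arange(-N, N + 1)
m1, m2, m3 = np.meshgrid(r, r, r, indexing="ij")
r2 = (m1 ** 2 + m2 ** 2 + m3 ** 2).astype(float)
r2[N, N, N] = 1.0                           # placeholder at m = 0
term = (m1 ** 2 + m2 ** 2 - 2 * m3 ** 2) / (4 * np.pi * r2 ** 2.5)
term[N, N, N] = 0.0                         # exclude the m = 0 term
s12 = 1.0 - 2.0 * ((m1 + m2) % 2)           # (-1)^(m1 + m2)
s3 = 1.0 - 2.0 * (m3 % 2)                   # (-1)^(m3)
print((s12 * term).sum())                   # -> Lambda_-^{(3)} ~ -0.4260
print((s3 * term).sum())                    # -> Lambda_+^{(3)} ~  0.7709
\end{verbatim}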
By means of the Clausius-Mossotti relation \eqref{E:ClauMoss} one can express the stability results equivalently in terms of the relative permittivity $\epsilon_{r}$. Let
\begin{equation}
\label{E:epsminmax}
\epsilon_{\min} = \frac{3\Lambda_{+}-2}{1+3\Lambda_{+}}=0.0943961\dots
\,,\qquad
\epsilon_{\max} = \frac{3\Lambda_{-}-2}{1+3\Lambda_{-}}=11.788555\dots\,.
\end{equation}
Values of $\lambda$ in the numerical range $[-\frac13,\frac23]$ of the quasi-static Maxwell volume integral operator correspond to $\epsilon_{r}\le0$. The volume integral equation is therefore well posed in $L^{2}(\Omega)$ if the relative permittivity $\epsilon_{r}$ is either non-real or positive.
On the other hand, the corresponding DDA scheme is stable in $\ell^{2}(\Z^{3})$ if and only if $\epsilon_{r}$ is either non-real or contained in the interval
$(\epsilon_{\min},\epsilon_{\max})$.
For $\epsilon_{r}\in(0,\epsilon_{\min}]\cup[\epsilon_{\max},\infty)$ the integral equation (and therefore the dielectric scattering problem) is well-posed, but the DDA scheme is unstable.
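For completeness, the conversion \eqref{E:epsminmax} amounts to the following two-line computation (a sketch of ours, using the values from \eqref{E:Lambda+-3}).
\begin{verbatim}
# Clausius-Mossotti conversion (E:epsminmax) of the stability bounds.
Lm, Lp = -0.4260241507272727, 0.7709022227747195
print((3 * Lp - 2) / (1 + 3 * Lp))   # eps_min = 0.0943961...
print((3 * Lm - 2) / (1 + 3 * Lm))   # eps_max = 11.788555...
\end{verbatim}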
To conclude this discussion, we show in Table~\ref{T:maxminEVEx5} the result of some computations for the spectrum of the system matrix $T^{N}$ for a cube in three dimensions.
One can see convergence to the expected values~\eqref{E:Lambda+-3}, even for rather modest values of $N$. Compare also \cite[FIG. 8]{YurkinMinHoekstra2010}.
\begin{table}[htbp]
\centering
\begin{tabular}{@{} rccc @{}}
\toprule
$N$ & $\lambda_{\max}(T^{N})$ & $\lambda_{\min}(T^{N})$ &
$\lambda_{\max}(T^{N})-\lambda_{\min}(T^{N})$\\
\midrule
4 & 0.67730278666935 & -0.3896455148525014 & 1.06694830152185\\
8 & 0.73653727456221 & -0.4130173055963489 & 1.14955458015856\\
12 & 0.75323748914578 & -0.4193953119966648 & 1.17263280114245\\
16 & 0.76017444184544 & -0.4220149407429199 & 1.18218938258836\\
\bottomrule
\end{tabular}\vglue1ex
\caption{Computation of $\max(\mathop{\mathrm{Sp}}\nolimits(T^{N}))$ and $\min(\mathop{\mathrm{Sp}}\nolimits(T^{N}))$, Example \ref{Ex:5}}
\label{T:maxminEVEx5}
\end{table}
\bibliographystyle{acm}
\section{Introduction}
\label{sec:intro}
In this Introduction, we first consider briefly traditional stellar dynamical modeling, and then point out how some aspects of the modeling may be dealt with using machine learning techniques. We complete the Introduction by setting our objectives for the investigation described in this article. We describe our approach to meeting the objectives in the next section.
The stellar dynamics of galaxies are explored by developing some model of a galaxy, constraining it with observational data, and then examining the model to see what might be learnt about the real galaxy. \cite{BT2008} describe the theory behind many of the modeling techniques that might be used.
A key point which must be understood about external galaxies is that our models are only indicative and illustrative of the real galaxies because of the current technical limitations of the instruments we employ for observations: in essence, we cannot obtain 3D galaxy data. Observing and modeling our own Galaxy is different, as we do have the capability to collect 3D spatial and velocity data.
Theoretically, the collisionless Boltzmann equation (CBE), by linking a galaxy's gravitational potential and its phase space distribution function $f(\textbf{x},\textbf{v},t)$, gives us a starting point for modeling. Integrating the distribution function (a probability density function) in various ways gives us a means of creating model observables which can then be compared with real stellar observables (that is, the observed data). Equations of motion (if they are needed) come from Hamiltonian mechanics, giving a 6D phase space. Constants or integrals of motion come from considering the Jeans theorems, and lead to isolating integrals which partition phase space, and non-isolating integrals which do not. Typical isolating integrals are energy, and some or all of the components of angular momentum. Spatial resonances may arise from the equations of motion under certain circumstances, and resonant orbits may have trajectories which influence the shape of sub-structures present in the galaxy.
A steady state, where the potential and distribution function are not time dependent, is a major simplifying assumption which is often used. In a similar vein, the form of the potential is frequently assumed from the shape of the galaxy.
Modeling methods typically come from the Jeans equations, or from processes trying to avoid knowing the distribution function, or from using processes where the form of the distribution function is assumed. Schwarzschild's method \citep{Schwarz1979} and the made-to-measure (M2M) method of \cite{Syer1996} are two methods which use weighted orbits or particles to tailor models with the intention that, by adjusting the weights, model observations will match real observations. While the weights can be interpreted astrophysically, their values are in fact set by purely numerical methods \citep{Long2016, Long2018, Long2021}. We will refer to the M2M method again later. Jeans equation methods may yield non-physical models \citep[for example,][]{vandeVen2003, Cappellari2008} so we will not discuss them further.
Having set the context using a more traditional stellar dynamical modeling approach, we now consider what has been achieved and might be achieved by using machine learning. Bulk processing of noisy image data at radio and optical wavelengths to classify galaxies is well-advanced - see, for example, \cite{Clarke2020}, \cite{Canducci2022} and \cite{Tang2022}. For single galaxies, orbit trajectory data and normalizing flows provide the capability to determine distribution functions and accelerations from the gravitational potential - see \cite{Green2021}, \cite{An2021} and \cite{Naik2022}. \cite{An2021} used a \cite{Hernquist1990} model in their experiments. From our own experiences (as yet unpublished), converting to other models appears to work satisfactorily. Training models to learn and follow conservation laws in a Hamiltonian context was accomplished by \cite{Greydanus2019}, with a Lagrangian implementation in \cite{Cranmer2020}. Data driven discovery of coordinates and equations of motion using an autoencoder architecture with symbolic regression was investigated by \cite{Champion2019}. Symbolic regression \citep[see review by][]{Cava2021} is concerned with taking numerical input and output data and producing an algebraic formula to convert the input to the output. Its value in a stellar dynamical modeling context is appreciable: it means, for example, that we should be able to take the weights from a M2M modeling run and produce an algebraic formula for the galaxy distribution function. Both \cite{Nature2021} and \cite{Meng2022} deal with physics-informed machine learning, with the latter surveying the methods and techniques available. What is required in due course is that the physics-informed techniques are investigated for their applicability in an astronomy and stellar dynamics context.
What is hopefully now clear is that there is sufficient overlap between the traditional stellar dynamical modeling techniques and machine learning techniques to warrant further investigation.
The end position we should be aiming for is to see how far we can get in developing new stellar dynamical modeling tools based on machine learning to augment if not replace the existing traditional tooling. The gap is not that large in that appropriate machine learning based tools are emerging, and the existing traditional M2M scheme already has some features found in neural networks, for example back propagation \citep{Rumelhart1986}.
Turning now to this paper, as we said in the Abstract, knowing the conserved quantities that a galaxy's stellar orbits conform to is important in helping us understand the stellar distribution and structures within the galaxy. Isolating integrals of motion and resonances are particularly important, non-isolating integrals less so. The first step is knowing just how many conserved quantities an orbit conforms to, and we will do this by examining and comparing two methods for counting such quantities, the first based on a traditional approach and the second on machine learning. The traditional method is closely related to the correlation integral method described in \cite{Carpintero2008} which is based on the work of \cite{Carnevali1984} and \cite{Barnes2001}, which in turn are based on \cite{Grassberger1983}. The machine learning method is as described in \cite{Liu2021}, and takes a manifold dimensionality approach enabled by \cite{Saremi2019}. \cite{Liu2021} was developed in a physics rather than an astronomy context, and we are not aware of any previous galaxy-based usage.
As a consequence, our research objectives are to compare and contrast methods (with and without the exploitation of machine learning) for determining the number of integrals of motion and resonances from galaxy stellar orbit trajectories. These trajectories will be created by integrating the equations of motion in various theoretical gravitational potentials representing galaxies, where the expected number of isolating integrals is known in advance.
The structure of our paper is as follows. Section \ref{sec:approach} describes at a top level the approach we will take to our investigations. Relevant theory and descriptions of the methods are in Section \ref{sec:theory}. Our results and subsequent discussion are in Sections \ref{sec:results} and \ref{sec:discussion}, with our conclusions in Section \ref{sec:conclusions}.
\section{Approach}
\label{sec:approach}
The motion of stellar objects within a galaxy is usually described and analyzed mathematically by using Hamiltonian mechanics \citep{BT2008}.
Galaxy stellar orbit phase space trajectories define manifolds \citep{Arnold1989}, and we use the local properties of manifolds to help us determine
the number of conserved quantities present in an orbit trajectory. In particular, manifolds are locally homeomorphic to a Euclidean space, and conserved integral quantities act to reduce the effective dimensionality of the manifold, as do resonances. As indicated in the Introduction, Section \ref{sec:intro}, we are comparing two
different methods, a recent method
by \cite{Liu2021} using manifold machine learning techniques, and an earlier, correlation integral method based on the work of \cite{Grassberger1983}.
\cite{Liu2021} in their supplementary material do mention briefly the correlation integral method but refer to it as the `fractal' method. In this
paper, our correlation integral methodology is mainly influenced by \cite{Carpintero2008}.
Our comparison utilizes theoretical galaxy models of increasing complexity, where the normally expected number of isolating integrals is known. If more than the expected number are found, this implies that a trajectory has additional conserved quantities, for example, resonances \citep{Carpintero2008}. If fewer are found, then we are dealing with irregular or chaotic orbits. In 6D phase space, the minimum number of conserved quantities is one (corresponding to energy) and the maximum is five (which would imply a 1D orbit). It is important to note that the methods operate with the trajectory of a single orbit. Even though we will talk about galaxy models and orbits from those models, in no sense do the methods operate at a galaxy level utilizing many orbits in one model: the methods operate on single orbits only.
Our galaxy models have various morphologies and include spherical, axisymmetric (oblate), and triaxial (ellipsoidal) theoretical models. We have deliberately picked models that have been used elsewhere for similar related work. Our spherical models are taken from \cite{Plummer1911} and \cite{Hernquist1990}, and our axisymmetric model is the logarithmic model from \cite{Richstone1980} in both cored and singular forms. For our triaxial model, we use the perfect ellipsoid described in \cite{deZeeuw1985}.
Both methods take orbit trajectories as their only data input. We create 6D phase space orbit trajectories in Cartesian coordinates, but also use coordinates specific to the morphologies of our galaxy models to help us shed light on the functioning of the methods. For spherical models, we use 2D Cartesian configuration coordinates in the individual orbital planes of the trajectories, and similarly for axisymmetric models where we use the meridional plane. For triaxial models, we also use 3D confocal ellipsoidal coordinates. For operational time-efficiency reasons, we limit our work to 64 orbit trajectories per galaxy model. The trajectories are the same for both modeling schemes. Note that resonances, if present, may manifest themselves differently in different coordinate systems \citep{Papa1996}.
For each model within the three galaxy morphologies mentioned, we execute four modeling runs made up of the two methods with two coordinate schemes, all utilizing the same 64 orbits. In the comparison, we are looking for a high level of consistency between the two methods with the expectation that the number of conserved quantities for an individual orbit will be correct and the same (within, of course, the limitations of the analysis techniques available to us).
Our work is limited to a comparison between methods using trajectory data from simple theoretical models. We are concerned with understanding whether the individual methods function correctly and consistently with 6D phase space data. As a consequence, more complex scenarios, such as the following, are out of scope:
\begin{itemize}
\item perturbed trajectories,
\item general time-varying systems where the gravitational potential is a function of time,
\item time-averaged conserved quantities as in \cite{Qin2021} for example,
\item rotating bar potentials as in \cite{Barnes2001}, and
\item the impact of adiabatic variations on orbits.
\end{itemize}
In addition, we do not attempt to determine formulae for conserved quantities: various schemes are described in the literature but, from our own experiences with them, no robust, reliable method exists as yet to do so for multiple conserved quantities in 6D phase space.
For the remainder of this paper, we refer to the machine learning method as the ML method, and the more traditional method using the correlation integral as the CI method. To clarify, since we are not able to determine formulae and therefore are unable to confirm quite what we have counted, we use the term `conserved quantities' (CQ) to cover both integrals and resonances. This lack of formulae is not specific to our work, and is present in earlier CI work. We use the notation $CQ = n$ to indicate $n$ conserved quantities.
To be clear, our work is concerned with stellar dynamics in a galaxy context. Use of methods and the results achieved must be interpreted in that context: they may not be applicable in other contexts.
\section{Theory and Methods}
\label{sec:theory}
\subsection{The CI Method}
\label{sec:CImethod}
Our design and implementation of the correlation integral (CI) method is based on that in \cite{Carpintero2008}.
The method aims to determine the number of effective dimensions $N_{\rm eff}$ of an orbit.
Having done that, the number of conserved quantities $CQ$ for the orbit is just
\begin{equation}
\label{eqn:CQeqn}
CQ = N_{\rm PS} - N_{\rm eff},
\end{equation}
where $N_{\rm PS}$ is the number of phase space dimensions of the orbit.
Thinking now of the manifold that the phase space orbit trajectory defines, and given that conserved quantities reduce the dimensionality of the trajectory, if we define a hypersphere of radius $r_{\rm s} \ll 1$ around a trajectory point,
the number of other trajectory points $N_{\rm pts}$ in the hypersphere should increase as $r_{\rm s}^{N_{\rm eff}}$ as $r_{\rm s}$ is increased.
So, for increasing $r_{\rm s}$
\begin{equation}
\label{eqn:prop}
N_{\rm pts} \propto r_{\rm s} ^{N_{\rm eff}},
\end{equation}
a power law relationship which allows $N_{\rm eff}$ and thus $CQ$ to be determined.
More formally, the correlation integral can be written as
\begin{equation}
\label{eqn:formal}
C(r_{\rm s}) = \lim_{N \to \infty} \frac{2}{N(N-1)} \sum ^N_{i=1} \sum^N_{j>i} \Theta(r_{\rm s} - |\mathbf{X}_i - \mathbf{X}_j|),
\end{equation}
where $\mathbf{X}_i$ and $\mathbf{X}_j$ are phase space trajectory points, $\Theta$ is the Heaviside function, and $N$ is the number of trajectory points contributing to the integral. Changing terminology slightly, it should be clear that the correlation integral is just the 2-point correlation function of the trajectory points. Taking Equations (\ref{eqn:prop}) and (\ref{eqn:formal}) together, $N_{\rm pts}$ is equivalent to $C(r_{\rm s})$. From Equation (\ref{eqn:formal}), the single-processor runtime of the method nominally scales as $N^2$ but the $C(r_{\rm s})$ calculation is readily amenable to parallelization: in computing terms, it is just a \textit{for} loop within a \textit{for} loop.
For $r_{\rm s}$, from experimentation, we take $80$ values uniformly spaced logarithmically
in the range [$10^{-3}$, $1$]. In subsequent sections, we will refer to $r_{\rm s}$ as a scale length.
Prior to calculating the correlation integral, we normalize the coordinates of a trajectory's points using min-max normalization as in \cite{Floss2018}.
Distances $d$ between two points in phase space (on the manifold) are calculated as
\begin{equation}
d = \sqrt{\delta x^2 + \delta y^2 + \delta z^2 + \delta v_x^2 + \delta v_y^2 + \delta v_z^2},
\end{equation}
where the difference in positions is $(\delta x,\delta y,\delta z)$ and in velocities $(\delta v_x,\delta v_y,\delta v_z)$.
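To make the computation concrete, the following Python sketch evaluates the correlation integral for a set of 6D trajectory points. It is a minimal illustration rather than our production implementation (which is parallelized), and the function names are ours.
\begin{verbatim}
import numpy as np

def correlation_integral(X, scales):
    # X: (n_pts, 6) array of phase space trajectory points.
    # Returns C(r_s) for each scale length r_s in `scales`.
    X = np.asarray(X, dtype=float)
    # Min-max normalize each coordinate before taking distances.
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    n = len(X)
    # All pairwise distances: the 'for loop within a for loop',
    # vectorized here; long trajectories would need chunking.
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    pair_d = d[np.triu_indices(n, k=1)]        # pairs with j > i
    return np.array([2.0 * np.count_nonzero(pair_d < r)
                     / (n * (n - 1)) for r in scales])

scales = np.logspace(-3, 0, 80)   # 80 log-spaced scale lengths
\end{verbatim}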
We utilize two approaches in determining an integer value for $N_{\rm eff}$ and subsequently $CQ$. We can work with the correlation integral directly, or we can work with its (numerically determined) gradient. We have implemented and use both approaches but our preference is the gradient approach. All the CI results described here are based on that approach, with the direct approach being used in a supporting role.
$N_{\rm eff}$ is the gradient in a log-log formulation of Equation (\ref{eqn:prop}). This gradient varies with $r_{\rm s}$, is not a constant, and does
not have an integer value that we can immediately associate with $N_{\rm eff}$. We arrive at a constant integer value by fitting possible
integer values ($1$ to $5$) of the gradient, and taking the value with the smallest squared residual against the gradient of the integral. We do not use all the gradient
points in a single fitting process but use a number of successive gradient points (eight in our case), and perform multiple rolling fits
of the gradient by advancing the start point by one position along the gradient for each fit. As part of the fitting process, we employ a range constraint on the selected points to deal with binning noise in constructing the correlation integral and its gradient: the minimum to maximum difference has to be less than some tolerance ($0.2$ initially). If no gradient can be found using a given tolerance, the tolerance is relaxed slightly (by $0.05$) until an integer gradient is found.
The approach using the correlation integral directly also employs the rolling fit tactic but fits a straight line to the integral points themselves, and uses the line's gradient to arrive at a value for $N_{\rm eff}$. \cite{Carpintero2008} use a similar process.
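The rolling-fit gradient approach can be summarized in a few lines of Python (again, a sketch for illustration rather than our actual implementation):
\begin{verbatim}
import numpy as np

def best_integer_gradient(log_r, log_C, window=8, tol=0.2):
    g = np.gradient(log_C, log_r)   # slope of the log-log curve
    while True:
        best = None                 # (squared residual, N_eff)
        for s in range(len(g) - window + 1):
            w = g[s:s + window]     # rolling window of 8 points
            if w.max() - w.min() > tol:
                continue            # range constraint (binning noise)
            for n_eff in range(1, 6):
                res = float(np.sum((w - n_eff) ** 2))
                if best is None or res < best[0]:
                    best = (res, n_eff)
        if best is not None:
            return best[1]
        tol += 0.05                 # relax tolerance and retry
\end{verbatim}
The number of conserved quantities then follows from Equation (\ref{eqn:CQeqn}) as $CQ = N_{\rm PS} - N_{\rm eff}$.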
Figure \ref{fig:ciplots} illustrates the CI method for an orbit and contains two plots, one showing the correlation integral, and a second showing the best-fitting integer gradient. The points contributing to the best-fitting gradient are highlighted in yellow on both plots. 6D phase space is being used so, in this case, a gradient of $2$ corresponds to the number of conserved quantities being $4$.
\begin{figure}[h]
\centering
\caption{CI Method - example plots}
\label{fig:ciplots}
\begin{tabular}{cc}
\includegraphics[width=55mm]{CImethod/ci_ai_28b.png} & \includegraphics[width=55mm]{CImethod/gradient_ai_28b.png} \\
\end{tabular}
\medskip
Example plots from using the CI method on trajectory data taken from a triaxial model orbit. The left hand plot shows the correlation integral plotted against scale length as a log-log plot. The right hand plot shows the gradient of the curve on the left with the points contributing to the best-fitting integer gradient ($2$) marked in yellow. In this case, the gradient of $2$ corresponds to the number of conserved quantities being $4$. The fitting process is described in Section \ref{sec:CImethod}.
\end{figure}
\subsection{The ML method}
\label{sec:MLmethod}
Functionally, the ML method has similar aims to the CI method but, whereas the CI method attempts to determine the number of effective dimensions of an orbit trajectory, the ML method attempts to determine the number of conserved quantities an orbit trajectory conforms to. Using Equation (\ref{eqn:CQeqn}) from the previous section, the ML method obtains a value for $CQ$ and from it $N_{\rm eff}$. Cross-matching mathematical symbols with \cite{Liu2021}, their $\hat{s}$ is the equivalent of our $N_{\rm eff}$, and their $n_{\rm eff}$ is our $CQ$.
Since we use their main algorithm and software largely unchanged, we only briefly outline the method and point the reader to \cite{Liu2021} and \cite{Saremi2019} for a fuller explanation. The method has three main phases,
\begin{enumerate}
\item preprocessing involving trajectory data whitening, and (optional) removal of any linear conserved quantities using Principal Component Analysis (PCA),
\item trajectory manifold sampling using a neural empirical Bayes technique \citep{Saremi2019} to characterize the local tangent space by perturbing the trajectory data, and
\item use of PCA explained variance ratios to estimate the number of effective dimensions and the number of conserved quantities.
\end{enumerate}
It is important to note that phase (2) above is repeated multiple times using different degrees of perturbation controlled by a scale length parameter.
We use the AI Poincar\'e Python software package produced by \cite{Liu2021} unchanged except for making the user interface more amenable to batch operation, and splitting the package so that the plots are produced in a script separately runnable from the main modeling code. The original package is publicly available on the internet (github.com/KindXiaoming/aipoincare). In training the neural networks, we use the `Adam' optimizer \citep{Adam2014} and mean square error for the loss function.
In the machine learning literature generally, there is no precise guidance on setting the number of hidden layers and the number of nodes in neural networks other than that more complex modeling requires more hidden layers and nodes. After some experimentation, we find this to be true in our work, where we use two layers of 128 nodes for spherical models, four layers of 192 nodes for our axisymmetric model, and four layers of 320 nodes for the triaxial model. Similarly, we experiment with the values of the training learning rate and the number of training iterations, and arrive at a value of $0.001$ for the learning rate and values of 500 (spherical models) and 2000 (other models) for the training iterations. We take values of $[0.001, 0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.25, 0.5, 1.0]$ for the scale (perturbation) lengths.
For the ML method, we use the plots described by \cite{Liu2021} to monitor the behavior of the method (see their explained ratio diagrams), and add a further plot to examine training loss reduction and convergence. Figure \ref{fig:mlplots} shows example plots taken from our work.
\begin{figure}[h]
\centering
\caption{ML Method - example plots}
\label{fig:mlplots}
\begin{tabular}{ccc}
\includegraphics[width=50mm]{MLmethod/ERD.png} & \includegraphics[width=50mm]{MLmethod/CQ.png} & \includegraphics[width=50mm]{MLmethod/losses_0.100.png} \\
\end{tabular}
\medskip
Example plots from using the ML method on trajectory data taken from a triaxial model. The left and middle plots are as described by \cite{Liu2021}. The right plot shows the change in the loss function as the neural net for a given scale (perturbation) length is being trained. The scale length in this case is $0.1$. The gradient is calculated from fitting a straight line to the last 20\% of the loss values from the training iterations (the short red line).
\end{figure}
In PCA terms, explained variance is the variance attributed to individual components, and the ratio is this variance as a fraction of the total variance across all components. As set out in \cite{Liu2021}, low ratios for a component over a number of consecutive scale factors are taken to indicate a conserved quantity (component), and high ratios unconserved quantities. We also see in our results the valley that \cite{Liu2021} see around a scale factor of $0.1$. Their ratio cut-off of $0.1/N$ is not met consistently in our results for 6D data modeling (in particular, in our axisymmetric and triaxial galaxy models) but is met when modeling with 4D data. As a consequence, we treat the explained ratio diagrams as informative but prefer to quote results based on $CQ$ estimation as described in the text surrounding the cosine formula in equation (3) of \cite{Liu2021}. The key difference between the two mechanisms is that the explained ratio diagrams are based on perturbing a single data point from a trajectory while the $CQ$ estimation uses multiple data points. In our case, we use $200$ data points perturbed $2000$ times.
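To make the ratio-based counting concrete, a toy distillation of the final PCA phase might look as follows. This is our own sketch, not the AI Poincar\'e code; \texttt{local\_cloud} stands for a denoised sample of the trajectory manifold around a trajectory point.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def cq_from_ratios(local_cloud, n_ps=6, cut=None):
    ratios = PCA().fit(local_cloud).explained_variance_ratio_
    if cut is None:
        cut = 0.1 / n_ps               # the 0.1/N cut-off
    n_eff = int(np.sum(ratios > cut))  # unconstrained directions
    return n_ps - n_eff                # conserved quantities
\end{verbatim}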
\subsection{Galaxy Models}
\label{sec:toys}
In this section, we cover briefly the theoretical galaxy models and potentials we use in creating the orbit trajectories. We include any parameter values required to use the models, and indicate the number of isolating integrals expected. In what follows $(x, y, z)$ are Cartesian coordinates and $r$ is the spherical radius. For the axisymmetric model, $(R, z)$ are taken from cylindrical polar coordinates aligned along the symmetry axis of the galaxy. For all models, the total mass $M = 3$, and the gravitational constant $G = 1$. All units are theoretical.
For the spherical \cite{Plummer1911} galaxy model, the potential $\phi$ is
\begin{equation}
\phi(r) = - \frac{GM}{\sqrt{r^2 + b^2}},
\end{equation}
where $b$ is the scale length ($b = 0.5$ in our models).
For the \cite{Hernquist1990} model,
\begin{equation}
\phi(r) = - \frac{GM}{r + a},
\end{equation}
where $a$ is the scale length ($a = 0.5$ in our models). For both potentials, we expect four integrals to be counted by our modeling, energy and the three components of angular momentum. Given that angular momentum is conserved, an orbit's trajectory will be in a plane orthogonal to the angular momentum vector. In such an orbit plane, trajectories have a rosette appearance, for example as in figure 3.1 in \cite{BT2008} and here in our Figure \ref{fig:resplumcq3}, and are confined to the region between an inner and an outer circle.
For the axisymmetric logarithmic models based on \cite{Richstone1980},
\begin{equation}
\phi(R, z) = \frac{1}{2} v_0^2\log (R^2 + \frac{z^2}{q^2} + R_c^2),
\end{equation}
where $v_0$ is the scale velocity, $q$ is the $z$-axis flattening, and $R_c$ is the core radius. In our models, $v_0 = 1$ and $q = 0.9$, and $R_c = 0.1$ or $R_c = 0$ depending on whether a cored or singular (uncored) model is being used. We expect three integrals (energy, the $z$ component of angular momentum, and a `non-classical' third integral) to be counted for models in 3D Cartesian coordinates, but only two (energy and the third integral) for 2D meridional plane $(R, z)$ coordinates. Orbit trajectories are typically tube orbits about the $z$-axis, for example, as in figure 3.4 in \cite{BT2008}.
For the triaxial, perfect ellipsoid galaxy model \citep{deZeeuw1985},
\begin{equation}
\phi(x, y, z) = - \frac{GM}{\pi} \int_0^{\infty}\frac{1}{1 + m^2(u)}\frac{du}{\sqrt{(a^2 + u)(b^2 + u)(c^2 + u)}},
\end{equation}
where
\begin{equation}
m^2(u) = \frac{x^2}{a^2 + u} + \frac{y^2}{b^2 + u} + \frac{z^2}{c^2 + u},
\end{equation}
with $a \geq b \geq c > 0$. For consistency with table 2 of \cite{deZeeuw1985}, we take $a = 1$, $b = 0.625$, and $c = 0.5$. The relevant equations in \cite{deZeeuw1985} are, for ellipsoidal coordinates, equations 6 to 10, for the potential, and equations 13 to 17 and section 4.2 for the integrals of motion. We expect three integrals of motion to be counted by our modeling, energy and two related to angular momenta. These integrals are consistent with the St\"{a}ckel formulation of the perfect ellipsoid potential. Depending on the values of the integrals, orbit trajectories are associated with one of four families - boxes, inner long axis tubes, outer long axis tubes, and short axis tubes as described in \cite{deZeeuw1985} section 5.1. We adopt the same integration style for the potential and its Cartesian coordinate derivatives as in section 2 of \cite{Merritt1996}, where a substitution is used to make integration bounds finite. Although trajectories in ellipsoidal coordinates are unlikely observationally, we do use them in some of our models for completeness.
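For reference, the four potentials, with the parameter values quoted above, can be coded directly. This is a sketch only; note that, unlike the substitution approach of \cite{Merritt1996} which we adopt in our own software, it lets \texttt{scipy} handle the infinite integration bound.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

G, M = 1.0, 3.0                            # units as in the text

def phi_plummer(r, b=0.5):
    return -G * M / np.sqrt(r**2 + b**2)

def phi_hernquist(r, a=0.5):
    return -G * M / (r + a)

def phi_log(R, z, v0=1.0, q=0.9, Rc=0.1):  # Rc = 0: singular model
    return 0.5 * v0**2 * np.log(R**2 + (z / q)**2 + Rc**2)

def phi_perfect_ellipsoid(x, y, z, a=1.0, b=0.625, c=0.5):
    def integrand(u):
        m2 = (x**2 / (a**2 + u) + y**2 / (b**2 + u)
              + z**2 / (c**2 + u))
        return 1.0 / ((1.0 + m2)
                      * np.sqrt((a**2 + u) * (b**2 + u) * (c**2 + u)))
    val, _ = quad(integrand, 0.0, np.inf)
    return -G * M / np.pi * val
\end{verbatim}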
\subsection{Orbit Initial Conditions and Trajectories}
\label{sec:ics}
The same initial conditions (spatial positions and velocities) for the trajectories are used for each galaxy, regardless of which modeling method is being utilized. For the spherical models, we create a spatial distribution of positions that matches the theoretical mass density distribution associated with the galaxy, and use Gaussian sampling with mean zero and the theoretical velocity dispersion to allocate velocity values. For the axisymmetric model, we use a three isolating integral scheme as employed in, for example, \cite{Cappellari2006} or \cite{Long2012}. For the triaxial model, we match its density distribution to create spatial positions and then allocate velocities uniformly randomly having previously assigned kinetic energies to the initial spatial positions of the orbits. In creating the initial conditions, no attempt has been made to create orbits with specific characteristics: we have just reused our existing stellar dynamical modeling software.
Orbit trajectories are created in a given galaxy potential using a 3rd order leapfrog integrator \citep{Ruth1983} with a time step of $10^{-2}$ units. With such a scheme, the known isolating integrals are reproduced to approximately 1 part in $10^{7}$. The number of trajectory points varies according to the orbital period of an orbit. Orbital periods are determined for spherical and axisymmetric models using an epicyclic approximation \citep[see][section 3.2.3]{BT2008} with an equivalent long axis tube orbit approximation being employed for the triaxial model \citep{Valluri1998}. Based on our experiences, we limit orbit trajectories to $75$ orbital periods \citep[][used $50$]{Barnes2001}, and set an overall upper limit on the number of trajectory points per orbit to $640K$. Using more orbital periods has little impact on our results. To avoid any short-distance bias in the modeling calculations, only every $n$th point is actually utilized. From experimentation, we take $n=16$ for triaxial models, $n=8$ for spherical models, and $n=4$ for axisymmetric models. Trajectories are always constructed using Cartesian coordinates, and then converted later to other coordinate systems as required by the modeling.
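To illustrate the style of integrator involved, the sketch below shows a standard second-order kick-drift-kick leapfrog step, paired with the Plummer acceleration for the parameters of Section \ref{sec:toys}; our actual runs use the third-order scheme of \cite{Ruth1983}.
\begin{verbatim}
import numpy as np

def leapfrog_step(x, v, acc, dt=1e-2):
    # One kick-drift-kick step; acc(x) returns -grad(phi) at x.
    v_half = v + 0.5 * dt * acc(x)
    x_new = x + dt * v_half
    return x_new, v_half + 0.5 * dt * acc(x_new)

def acc_plummer(x, GM=3.0, b=0.5):
    # Acceleration for the Plummer potential, G = 1, M = 3, b = 0.5.
    return -GM * x / (np.dot(x, x) + b**2) ** 1.5
\end{verbatim}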
The software base for initial conditions and trajectories is taken from the lead author's implementation of the \cite{Syer1996} made-to-measure stellar dynamical modeling method and the \cite{Schwarz1979} orbit based modeling method. This software was first used in \cite{Long2010}, and most recently in \cite{Long2021}.
\subsection{Resonances}
\label{sec:res}
Resonances are concerned with integer relationships between the main frequencies of the coordinate components of an orbit's trajectory. A resonance is defined by
\begin{equation}
\label{eqn:res}
\sum_{k=1}^{K} m_k w_k = 0,
\end{equation}
where the $w_k$ are the main frequencies, $m_k$ are integer coefficients, and $K$ is the dimension of configuration space ($2$ or $3$ in our work). We determine component frequencies using the Python NAFF implementation by \cite{Zis2019} (NAFF is the Numerical Analysis of Fundamental Frequencies described in \cite{Laskar1990, Laskar1993}). We will not describe NAFF in any detail: there is plenty of material regarding its theory and operational use in the literature, for example, \cite{Papa1996, Papa1998}, \cite{Valluri1998}, \cite{Merritt1999}, and \cite{Wang2016}. We concentrate on describing how we provide input to NAFF, and use its output to find resonances. There is an alternative approach in \cite{Carpintero1998} for obtaining resonances which we do not utilize.
We provide input to NAFF in complex form ($z_k$) combining component positions ($x_k$) as the real part and velocities ($v_k$) as the imaginary part,
\begin{equation}
z_{k,j} = x_{k,j} + iv_{k,j},
\end{equation}
where subscript $j$ indicates the $j$th point along the orbit's phase space trajectory. The outputs from NAFF relevant to our purposes are the component amplitudes (the real amplitudes, to be precise) and their frequencies. Practically, it is convenient to sort the component amplitudes and their frequencies into descending amplitude order to facilitate the searching for frequencies that is required. Quite how we select the main frequencies depends on whether we are working with 2D or 3D orbits. By default, we use 3D orbits but there are times when using 2D alternatives is appropriate, for example, when we are working with orbital plane or meridional plane coordinates.
For 2D orbits, the first component's main frequency ($w_1$) is taken as the frequency associated with the first component's maximum amplitude. Searching the second component frequencies (searching as described above), the second main frequency ($w_2$) is taken as the first encountered frequency that is not equal to the first component's main frequency ($w_1$).
For 3D orbits, the first main frequency ($w_1$) is taken as the maximum amplitude frequency from the component that has the largest amplitude across all three components. The second main frequency ($w_2$) comes from the component that has the second largest amplitude across all three components, and is the frequency (after searching) that does not have an integer relationship with the first main frequency ($w_1$). The third main frequency ($w_3$) comes from the remaining component, and is the frequency that does not have an integer linear relationship with the two main frequencies previously determined ($w_1$ and $w_2$).
The final step is to determine whether or not integer coefficients ($m_k$) can be found such that the main frequencies ($w_k$) have an integer relationship as in Equation (\ref{eqn:res}). We do this using the integer programming capabilities of the publicly available Python package OR-Tools (github.com/google/or-tools). If we are able to find such integer coefficients, then we have found a resonance.
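For illustration, a brute-force search for such coefficients can be written as below. This is our own sketch, with the tolerance an arbitrary choice; the OR-Tools integer-programming formulation we actually use is a different technique and is more general.
\begin{verbatim}
import itertools
import numpy as np

def find_resonance(w, max_coeff=10, tol=1e-4):
    # Scan small integer coefficient vectors m for
    # sum_k m_k w_k ~ 0; returns None if no resonance is found.
    w = np.asarray(w, dtype=float)
    rng = range(-max_coeff, max_coeff + 1)
    for m in itertools.product(rng, repeat=len(w)):
        if not any(m):
            continue        # skip the trivial all-zero vector
        if abs(np.dot(m, w)) < tol * np.max(np.abs(w)):
            return m
    return None
\end{verbatim}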
\section{Results}
\label{sec:results}
We group our results by galaxy model, and include results from both modeling methods in the same figures for ease of comparison. In these plots and figures, results are color-coded with blue being used for the CI method, and orange for the ML method. Most of the comparison plots display percentages. With 64 orbits in total for each galaxy model, $1$ orbit equates to $\approx 1.6\%$ of the total. For individual orbits, we show typical results plots in Figures \ref{fig:ciplots} and \ref{fig:mlplots}, in Sections \ref{sec:CImethod} and \ref{sec:MLmethod}. The galaxy models we have employed mean that we know the number of isolating integrals that should be found. Ignoring any non-isolating integrals and resonances, this gives us a basis for examining our results, and enables us to discuss whether more or fewer conserved quantities than expected have been detected by the methods.
\subsection{Spherical Galaxy Models}
Results for the Plummer and Hernquist spherical galaxy models, using both the CI and ML methods with 2D and 3D configuration spaces, are shown in Figure \ref{fig:resplumhern}.
\begin{figure}[h]
\centering
\caption{Spherical Galaxy Models - Plummer and Hernquist}
\label{fig:resplumhern}
\begin{tabular}{cM{65mm}M{65mm}M{65mm}}
& \textit{Plummer} & \textit{Hernquist} \\
\textit{3D} & \includegraphics[width=65mm]{Plummer/a1F640K_PF640K.png} & \includegraphics[width=65mm]{Hernquist/a1F640K_PF640K.png} \\
\textit{2D} & \includegraphics[width=65mm]{Plummer/a1S640K_PS640K.png} & \includegraphics[width=65mm]{Hernquist/a1S640K_PS640K.png} \\
\end{tabular}
\medskip
Plummer and Hernquist models in 3D Cartesian coordinates and 2D orbital plane coordinates. The plots show histograms of the percentages of orbits having the indicated numbers of conserved quantities. As stated at the start of Section \ref{sec:results}, CI results are in blue and ML results in orange. We expect the peak percentages to be associated with $CQ=4$ for 3D modeling and with $CQ = 2$ for 2D modeling, and this is met in practice.
\end{figure}
Considering first the Plummer model 3D results, they are largely as anticipated with all the orbits having either $CQ = 4$ or $CQ = 5$ conserved quantities. The CI and ML methods achieve the same CQ for $86\%$ (55 out of 64) of orbits. Where the methods differ is in the number of orbits with more conserved quantities than anticipated ($CQ = 5$), with 12 for the CI method and 3 for the ML method. The 3 ML orbits are found by the CI method as well. This difference ($12$ vs $3$ orbits) may be related to whether or not the methods are able to handle non-isolating integrals as conserved quantities. Certainly, `ring' orbits, where the orbital plane inner radius is approximately equal to the outer radius, are present in the orbit sample. Such orbits are described as having non-isolating integrals in \cite{BT2008} section 3.1, equation 3.62. The 3 common $CQ = 5$ orbits are `thin' ring orbits (see Fig. \ref{fig:resplumcq5}) whereas the additional 9 orbits found by the CI method have slightly `fatter' rings.
\begin{figure}[h]
\centering
\caption{Plummer `Thin' Ring $CQ = 5$ Orbits}
\label{fig:resplumcq5}
\begin{tabular}{ccc}
\includegraphics[width=50mm]{Plummer/zorbit_8c.png} & \includegraphics[width=50mm]{Plummer/zorbit_24c.png} & \includegraphics[width=50mm]{Plummer/zorbit_45c.png}\\
\end{tabular}
\medskip
Plummer model `thin' ring $CQ = 5$ orbits plotted in their orbital planes. Both the CI and ML methods agree on the categorization of these orbits as $CQ = 5$. As such the orbits have more conserved quantities than anticipated. In this case the categorization appears to be as a result of the orbits having a non-isolating integral of motion.
\end{figure}
Almost all the 2D orbital plane Plummer orbits have the expected number of conserved quantities ($CQ = 2$) with only 3 orbits being identified by the CI method as having more quantities ($CQ = 3$). None of these orbits is part of the $CQ = 5$ set of orbits from the 3D work above.
Also, as can be seen from Figure \ref{fig:resplumcq3}, none of the orbits has an obvious `thin' ring structure. Without having a mechanism to attempt to determine the formulae for the conserved quantities, it is not clear why the CI method identified these orbits as having more conserved quantities.
Similarly, it is unclear why the ML method has not identified any such orbits as it was able to do when modeling in 3D.
\begin{figure}[h]
\centering
\caption{Plummer 2D Orbital Plane - Orbits with additional CQs}
\label{fig:resplumcq3}
\begin{tabular}{ccc}
\includegraphics[width=50mm]{Plummer/zorbit_34c.png} & \includegraphics[width=50mm]{Plummer/zorbit_44c.png} & \includegraphics[width=50mm]{Plummer/zorbit_49c.png}\\
\end{tabular}
\medskip
Plummer model 2D orbital plane orbits having more conserved quantities ($CQ > 2$) than expected. The orbit trajectories have been restricted to 10K data points so that the orbit structures are visible.
As may be seen, none of the orbits has a `ring' structure. Without formulae for the conserved quantities, it is not clear why the CI method has determined the orbits have $CQ = 3$ rather than the anticipated $CQ = 2$.
\end{figure}
The Hernquist model results, both 2D and 3D, are very similar to the Plummer results. However, the ML method for the Hernquist model finds no orbits with more conserved quantities than anticipated, that is, no orbits with $CQ = 5$ (3D) nor with $CQ = 3$ (2D). The CI method is able to find $8$ $CQ = 5$ orbits with a ring structure, of which $5$ are considered to be `thin'. In other words, the CI method is able to identify at least some orbits as having non-isolating integrals. The ML method appears to find none for the Hernquist models but was able to do so for the Plummer model.
\subsection{Axisymmetric Galaxy Models}
Results for the singular and cored axisymmetric logarithmic galaxy models, using both the CI and ML methods with 2D and 3D configuration spaces, are shown in Figure \ref{fig:resaxi} and Tables \ref{tab:axiresuncore} and \ref{tab:axirescore}.
\begin{figure}[h]
\centering
\caption{Axisymmetric Logarithmic Models - Singular and Cored}
\label{fig:resaxi}
\begin{tabular}{cM{65mm}M{65mm}M{65mm}}
& \textit{Singular} & \textit{Cored} \\
\textit{3D} & \includegraphics[width=65mm]{Axilog/a1F640K_PF640K.png} & \includegraphics[width=65mm]{CAxilog/a1F640K_PF640K.png} \\
\textit{2D} & \includegraphics[width=65mm]{Axilog/a1R640K_PR640K.png} & \includegraphics[width=65mm]{CAxilog/a1R640K_PR640K.png} \\
\end{tabular}
\medskip
Singular and cored axisymmetric logarithmic models in 3D Cartesian coordinates and 2D meridional plane coordinates. We expect the peak percentages in the histograms to be associated with $CQ=3$ for 3D modeling and with $CQ = 2$ for 2D modeling. For the 3D CI models, this peak expectation is not met.
\end{figure}
\begin{table}[h]
\centering
\caption{Axisymmetric Logarithmic Models - Singular}
\label{tab:axiresuncore}
\begin{tabular}{cc|cc|cc|cc}
\hline
& \textbf{CQ} & \multicolumn{2}{c|}{\textbf{CI Method}} & \multicolumn{2}{c|}{\textbf{ML Method}} & \multicolumn{2}{c}{\textbf{Common}} \\
& & Orbits & Resonances & Orbits & Resonances & Orbits & Resonances \\
\hline
& 1 & 0 & 0 & 0 & 0 & 0 & 0\\
& 2 & 3 & 2 & 1 & 1 & 1 & 1\\
\textit{3D} & 3 & 25 & 4 & 51 & 18 & 21 & 3\\
& 4 & 28 & 11 & 12 & 3 & 8 & 2\\
& 5 & 8 & 5 & 0 & 0 & 0 & 0\\
\hline
& 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\textit{2D} & 2 & 59 & 19 & 60 & 19 & 58 & 19\\
& 3 & 4 & 2 & 3 & 2 & 2 & 2 \\
\hline
\end{tabular}
\medskip
Axisymmetric singular logarithmic models showing the distribution of orbits and resonances by model-determined CQ value. The \textit{Orbits} and \textit{Resonances} columns give the number counts of orbits, and orbits with resonances, for each $CQ$ value. The \textit{Common} column gives the number of times the CI and ML methods agree on the $CQ$ categorization. The main points to note are that some orbits have more conserved quantities than might be expected, and this cannot be explained by resonances alone; resonances, with the exception of the CI-3D results, do not appear to be identified very well; and, lastly, the 2D meridional results show less variation (a possible lack of discrimination) than the 3D results.
\end{table}
The pattern of results is similar between the singular and cored models. We examine the singular results first and then deal with any differences shown by the cored results. Looking at the plot of the singular modeling with 3D Cartesian coordinates in Figure \ref{fig:resaxi}, it is clear that orbits have predominantly $CQ \geq 3$ (as expected) but that the ML results are more peaked at $CQ = 3$ than the CI results: the CI results have significantly more orbits with additional conserved quantities ($CQ > 3$). The two methods only achieve the same results for $47\%$ of the orbits (30 out of 64). Looking at the 2D meridional plane results in Figure \ref{fig:resaxi}, the agreement is much higher with $95\%$ of the orbits (61 out of 64) having the same results but with few orbits having additional conserved quantities.
A resonance analysis (Sect. \ref{sec:res}) of the singular model orbits shows that $22$ orbits are resonant, and might be expected to have an increased number of conserved quantities ($CQ > 3$). The CI method 3D Cartesian results have $11$ orbits with $CQ = 4$, and $5$ with $CQ = 5$. By comparison, the ML method results have only $3$ orbits with $CQ = 4$; of the remaining $19$, $18$ have just the anticipated number of conserved quantities ($CQ = 3$) which seems not to be correct. Examining the 2D meridional plane results, the CI method has $2$ resonant orbits (see Fig. \ref{fig:resaxicq3}) having an increased number of conserved quantities ($CQ = 3$), with the ML method having the same orbits also with $CQ = 3$. In the 3D results, both methods show orbits with increased conserved quantities ($CQ = 4$).
\begin{figure}[h]
\centering
\caption{Singular Axisymmetric Model - Orbits with additional CQs}
\label{fig:resaxicq3}
\begin{tabular}{cc}
\includegraphics[width=50mm]{Axilog/zorbit_49c.png} & \includegraphics[width=50mm]{Axilog/zorbit_52c.png} \\
\end{tabular}
\medskip
Singular axisymmetric model orbits with additional conserved quantities. Both the orbits are categorized as $CQ = 3$ (2D modeling) and $CQ = 4$ (3D modeling) by both methods. As such, these orbits have more conserved quantities than might be expected from considering only isolating integrals. The resonance analysis of the orbits indicates both are resonant.
\end{figure}
From the previous paragraphs in this section, it is hopefully clear that the numbers of orbits with increased conserved quantities ($CQ \geq 4$) in the 3D singular runs cannot be explained by resonances alone. The numbers imply that an appreciable number of orbits must have non-isolating integrals. Having formulae for the conserved quantities would enable these orbits to be investigated.
The $3$ singular 3D model orbits with fewer conserved quantities ($CQ = 2$) than expected are shown in Figure \ref{fig:resaxicq2}. The left hand plot orbit trajectory visually appears to be similar to the chaotic orbit in figure 7 of \cite{Zotos2014}, and has both the CI and ML methods agreeing on its categorization. Also, the orbit is categorized similarly in the 2D meridional plane modeling. Quite why the other $2$ orbits have category $CQ = 2$ is unclear.
\begin{figure}[h]
\centering
\caption{Singular Axisymmetric Model - Orbits with fewer CQs}
\label{fig:resaxicq2}
\begin{tabular}{ccc}
\includegraphics[width=50mm]{Axilog/zorbit_24c.png} & \includegraphics[width=50mm]{Axilog/zorbit_26c.png} & \includegraphics[width=50mm]{Axilog/zorbit_45c.png}\\
\end{tabular}
\medskip
Singular axisymmetric model orbits with fewer conserved quantities. From the 3D modeling, both the CI and ML methods agree these three orbits have fewer conserved quantities than might be expected from considering only isolating integrals ($CQ < 3$). The orbit in the left hand plot has similarities with the chaotic orbit in figure 7 of \cite{Zotos2014}. The remaining orbits do not appear to be chaotic but may have a reduced number of isolating integrals. The 2D meridional plane modeling also indicates that the left hand plot orbit has fewer conserved quantities ($CQ = 1$).
\end{figure}
\begin{table}[h]
\centering
\caption{Axisymmetric Logarithmic Models - Cored}
\label{tab:axirescore}
\begin{tabular}{cc|cc|cc|cc}
\hline
& \textbf{CQ} & \multicolumn{2}{c|}{\textbf{CI}} & \multicolumn{2}{c|}{\textbf{ML}} & \multicolumn{2}{c}{\textbf{Common}} \\
& & Orbits & Resonances & Orbits & Resonances & Orbits & Resonances \\
\hline
& 1 & 0 & 0 & 0 & 0 & 0 & 0\\
& 2 & 3 & 2 & 3 & 1 & 0 & 0\\
\textit{3D} & 3 & 21 & 7 & 48 & 18 & 20 & 7\\
& 4 & 22 & 9 & 13 & 7 & 8 & 4\\
& 5 & 18 & 8 & 0 & 0 & 0 & 0\\
\hline
& 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\textit{2D} & 2 & 60 & 23 & 61 & 24 & 60 & 23\\
& 3 & 4 & 3 & 3 & 2 & 3 & 2 \\
\hline
\end{tabular}
\medskip
Axisymmetric cored logarithmic models showing the distribution of orbits and resonances by model-determined CQ value. The columns are as defined for Table \ref{tab:axiresuncore}. Note that, unsurprisingly given the similarities in gravitational potentials, the results are consistent with the singular modeling results in Table \ref{tab:axiresuncore}.
\end{table}
The results for the cored axisymmetric model are very similar to the singular results, so we will focus on just the resonance related matters in Table \ref{tab:axirescore}. For the 3D cored model, a resonance analysis shows that there are $26$ resonant orbits. $17$ of these orbits ($65\%$) have been categorized by the CI method as having an increased number of conserved quantities ($CQ \geq 4$). The ML method is only able to categorize $7$ resonant orbits as $CQ = 4$. More broadly, the CI method has $40$ orbits
with $CQ \geq 4$, while the ML method has $13$ orbits. Even with the resonant orbits subtracted, there is still an appreciable number left with an increased number of conserved quantities that require explaining. Having formulae for the conserved quantities would assist matters.
\subsection{Triaxial Galaxy Model}
Results for the perfect ellipsoid triaxial galaxy model, using both Cartesian and ellipsoidal coordinates, are shown in Figure \ref{fig:restri} and Table \ref{tab:trires}. The 64 orbit sample for this triaxial galaxy comprises 37 X-tube orbits (58\%), 10 Z-tube orbits (16\%), and 17 box orbits (26\%).
\begin{figure}[h]
\centering
\caption{Triaxial Model - Perfect Ellipsoid}
\label{fig:restri}
\begin{tabular}{cc}
\textit{3D Cartesian Coordinates} & \textit{3D Ellipsoidal Coordinates} \\
\includegraphics[width=70mm]{Triaxial/a1F640K_PF640K.png} & \includegraphics[width=70mm]{Triaxial/a1E640K_PE640K.png} \\
\end{tabular}
\medskip
Perfect ellipsoid triaxial models in 3D Cartesian coordinates and 3D ellipsoidal coordinates. We expect the peak percentages in the histograms to be associated with $CQ=3$, and this is met in practice.
\end{figure}
\begin{table}[h]
\centering
\caption{Triaxial Model - Perfect Ellipsoid}
\label{tab:trires}
\begin{tabular}{cc|cc|cc|cc}
\hline
& \textbf{CQ} & \multicolumn{2}{c|}{\textbf{CI}} & \multicolumn{2}{c|}{\textbf{ML}} & \multicolumn{2}{c}{\textbf{Common}} \\
& & Orbits & Resonances & Orbits & Resonances & Orbits & Resonances \\
\hline
& 1 & 0 & 0 & 0 & 0 & 0 & 0\\
& 2 & 2 & 0 & 7 & 3 & 0 & 0\\
\textit{Cartesian} & 3 & 50 & 11 & 55 & 13 & 44 & 9\\
& 4 & 11 & 6 & 2 & 2 & 2 & 2\\
& 5 & 1 & 1 & 0 & 0 & 0 & 0\\
\hline
& 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
& 2 & 0 & 0 & 0 & 0 & 0 & 0 \\
\textit{Ellipsoidal} & 3 & 47 & 13 & 63 & 17 & 47 & 13 \\
& 4 & 17 & 5 & 1 & 1 & 1 & 1\\
& 5 & 0 & 0 & 0 & 0 & 0 & 0\\
\hline
\end{tabular}
\medskip
Perfect ellipsoid triaxial models showing the distribution of orbits and resonances by model-determined CQ value. The columns are as defined for Table \ref{tab:axiresuncore}. The distribution patterns are consistent with those for the axisymmetric models - see Tables \ref{tab:axiresuncore} and \ref{tab:axirescore}.
\end{table}
The results for the triaxial modeling are generally as anticipated, with most orbits having $CQ = 3$, as expected, or $CQ = 4$, indicating an increased number of conserved quantities. Both methods have categorized some orbits as having fewer conserved quantities than expected ($CQ = 2$), indicating irregular orbits.
A resonance analysis (Sect. \ref{sec:res}) of the orbits shows that $18$ are resonant. Using 3D Cartesian coordinates, the CI method categorizes $7$ of these orbits in the $CQ = 4$ or $CQ = 5$ bands where they might be expected to be. By comparison, the ML method only places $2$ orbits in the $CQ = 4$ band.
This would appear to leave $11$ orbits (CI method) and $16$ orbits (ML method) unrecognized as resonant orbits. The equivalent numbers of unrecognized orbits when ellipsoidal coordinates are used are $13$ (CI method) and $17$ (ML method).
The CI and ML methods reach the same CQ categorization for $72\%$ (46 out of 64) of the orbits when Cartesian coordinates are in use, and $75\%$ (48 out of 64) when using ellipsoidal coordinates. The $CQ = 2$ orbits, indicating orbits with fewer conserved quantities than expected, are a concern, particularly for the ML method, as all the orbits for the triaxial galaxy model have $3$ isolating integrals by construction (which were explicitly confirmed as conserved during trajectory construction - see Sect. \ref{sec:ics}).
Note that no $CQ = 2$ orbits have been counted when modeling with ellipsoidal coordinates. Also, it should be noted that the $CQ = 4$ orbits which are not resonant may have non-isolating integrals.
Having formulae for the conserved quantities would enable these unexplained $CQ = 2$ and $CQ = 4$ orbits to be investigated.
\subsection{Computer Utilization}
\label{sec:computil}
Modeling runs were performed on a $20$-core desktop PC. No attempt was made to use graphics processing units (GPUs) to improve performance, but their use is not precluded. All software used is Python 3 based, with some use of Cython for performance-critical code. PyTorch is used in the machine learning scripts. Multiprocessing, limited to a maximum of $10$ processes, was used for both the CI and ML implementations.
\begin{table}[h]
\centering
\caption{Modeling Runtimes per Orbit}
\label{tab:perfpo}
\begin{tabular}{r|cc|cc}
\hline
& \multicolumn{2}{c|}{\textbf{3D Cartesian}} & \multicolumn{2}{c}{\textbf{Alternative}} \\
& \multicolumn{2}{c|}{\textbf{Coordinates}} & \multicolumn{2}{c}{\textbf{Coordinates}} \\
& CI & ML & CI & ML \\
\hline
\textbf{Spherical Models} &&&&\\
Plummer & 1s & 5s & 1s & 5s \\
Hernquist & 1s & 5s & 1s & 5s \\
\textbf{Axisymmetric Models} &&&& \\
Singular & 5s & 38s & 4s & 18s \\
Cored & 10s & 39s & 10s & 18s \\
\textbf{Triaxial Model} &&&& \\
Ellipsoid & 8s & 58s & 8s & 28s \\
\hline
\end{tabular}
\medskip
Processing runtimes per orbit for the different galaxy models. The alternative coordinates are 2D orbital or meridional plane coordinates for the spherical and axisymmetric models respectively, while for triaxial the coordinates are 3D ellipsoidal coordinates. Times for the ML method are appreciably higher than the CI method: this is explained in Section \ref{sec:computil} by the need to train neural networks.
\end{table}
In Table \ref{tab:perfpo}, we show the processing runtimes for the various galaxy models using both methods up to an equivalent point in each method. For the CI method this point is just after the correlation integral has been computed, and for the ML method just after neural net training has been completed.
Internally, the ML method trains one neural network per scale length (10 in total for our purposes). For the CI method, the correlation integral is computed from a single pass of the trajectory data using multiple scale lengths: it does not need to be repeated for each scale length. Given that both methods use the same number of computer cores ($10$), training costs explain why the ML method takes longer. Note that the performance achieved is dependent on how the methods are configured: changing various hyperparameters (for example, the number of hidden neural net layers, or the number of orbital periods) will change the performance profiles.
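For orientation, the sketch below shows a minimal NumPy implementation of a multi-scale correlation integral (the standard pairwise-counting estimator). It is illustrative only, and is not our Cython production code.
\begin{verbatim}
import numpy as np

def correlation_integral(points, scales):
    """C(r) for several scale lengths r from one pass over the pair distances.

    points: (N, d) phase-space trajectory samples; scales: 1-D array of r.
    """
    n = len(points)
    # pairwise distances; for large N this would be blocked or subsampled
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(n, k=1)              # count each pair once
    pair_d = dists[iu]
    counts = np.array([(pair_d < r).sum() for r in scales])
    return 2.0 * counts / (n * (n - 1))
\end{verbatim}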
In looking at the table, it should be understood that we have used only a few orbits (64) per galaxy model. If we were looking to analyze galaxy orbits from, say, a cosmological simulation, we might want to analyze several thousand in a reasonable time frame of perhaps just a few hours (less than 8 hours, for example).
\section{Discussion}
\label{sec:discussion}
As will be clear from the preceding sections, our research would benefit from robust methods and software to determine algebraic formulae for \textit{all} conserved quantities in a computationally cost-effective and timely manner. The key word here is \textit{all}: software which will nominally find a \textit{single} conserved quantity does exist (but does not always perform in a timely manner). A very recent article \cite{Liu2022}, which appeared once the work here was substantially complete, may suggest a solution. Given the problem domain in which we are operating (galaxies), we are able to provide support to any symbolic regression package trying to find algebraic formulae from trajectory data. For example, once a coordinate system has been determined for a galaxy, potentially conserved quantities such as momenta can readily be calculated from trajectory coordinates; total energy cannot be, but the kinetic component can. The literature of the 1990s on genetic algorithms also offers methods capable of identifying multiple solutions. A further consequence of the lack of a symbolic regression capability is that no false-positive or false-negative analyses have been possible.
Claims in \cite{Liu2021} that the ML method outperforms the correlation integral (fractal) method have not been confirmed by our work. In our particular galaxy context, the ML method is slower than the CI method.
Resonances are not handled well by either method, though the CI method does appear to outperform the ML method, unless of course some coincidental hidden factor is in operation. Non-isolating integrals do seem to be recognized for the spherical models, but the position is much less clear for our axisymmetric and triaxial models. It may be that spherical-model non-isolating integrals are easy to detect because of their simple ring structure, while more complex structures are inherently harder to detect.
What is clear is that for orbits with more than the expected number of conserved quantities (based on counting isolating integrals), the increase cannot be explained by resonances alone: it does appear as if some other factor is contributing, which could be non-isolating integrals.
Also, it must not be forgotten that functions of conserved quantities are also conserved.
We have previously commented on increasing the number of hidden neural network layers and their nodes as complexity rises from spherical to triaxial orbit modeling. Learning rate is another parameter to experiment with, but we found no benefit from doing so. Our approach to setting hyperparameters has been to assume that using the same parameter values for all orbits from a given galaxy model is acceptable, but it may not be. It may be appropriate to set parameters for individual orbits, for example, for orbits that remain very close to the center of the galaxy.
It has been useful to use 2D orbits in this research to help examine the behaviors of the methods. When working with observed Galaxy data or with cosmological simulations, say, it is likely that 3D orbits will be more usual. Related to the orbits is the number of orbit trajectory points available. What would be appropriate in the future would be to investigate how few trajectory points can be used both to count conserved quantities and to determine their formulae.
Notwithstanding the issues we have identified, it may be worth considering using the CI and ML methods in this paper as part of a set of phase-space trajectory tools. For example, such a tool set might comprise:
\begin{itemize}
\item the CI and ML methods to count CQs;
\item a trajectory-based angular momentum conservation analyzer;
\item resonance analysis as per Section \ref{sec:res};
\item a kinetic energy calculator (kinetic energy is not conserved but is easy to compute from the trajectory, for use elsewhere); and
\item symbolic regression to find the total energy, from which subtracting the kinetic energy gives the gravitational potential.
\end{itemize}
Work on the last point above is already well advanced and will be submitted for publication in the near future.
\section{Conclusions}
\label{sec:conclusions}
We have met the objectives set out in the Introduction, Section \ref{sec:intro}, in that we have compared two schemes for determining the number of integrals and resonances: one scheme utilizes machine learning, while the other relies on more traditional methods.
Overall, the results are mixed - neither scheme appears to be clearly better than the other in all aspects. Both fail to deal with resonances adequately, though the correlation integral approach does appear to perform somewhat better. It is tempting to say that the machine learning approach is better able to identify isolating integrals, but this would have to be in the absence of any non-isolating integrals and resonances. Equally, it could be said that the correlation integral approach appears to be finding some non-isolating integrals, but it is difficult to know truly whether they are non-isolating integrals and whether all such integrals have been found.
The reason we say `appears to' above is that no robust mechanism yet exists for determining the algebraic formulae for multiple conserved quantities from a single trajectory in a timely manner, though the investigations of \cite{Liu2022} may suggest a viable solution. This symbolic regression issue needs to be tackled with some urgency, and this or a similar investigation then repeated.
While the issues we have noted are being addressed, the CI and ML methods might form part of a trajectory tool kit for simulated galaxies.
\begin{acknowledgements}
We thank the referee for their comments which have helped improve the paper. This work is partly supported by the National Key Basic Research and Development Program of China (No. 2018YFA0404501 to Shude Mao), and by the National Science Foundation of China (Grant Nos. 11821303, 11761131004 and 11761141012 to Shude Mao). Yougang Wang acknowledges support by the CAS Interdisciplinary Innovation Team (JCTD-2019-05).
Thanks are due to Ziming Liu for making the AI Poincare software publicly available.
\end{acknowledgements}
\bibliographystyle{raa}
|
{
"arxiv_id": "2302.13234",
"language": "en",
"timestamp": "2023-02-28T02:13:41",
"url": "https://arxiv.org/abs/2302.13234",
"yymm": "2302"
} | \section{Introduction}\label{sec:introduction}
Advanced Persistent Threat (APT) refers to a hidden and persistent process of computer network intrusion, usually with commercial or political motives, targeting specific organizations or countries and maintaining high concealment over a long period \cite{c1}. APT consists of three elements: advanced, long-term, and threat. \textit{Advanced} implies the use of sophisticated malware and techniques to exploit vulnerabilities in the system. \textit{Long-term} implies that an external force continuously monitors a specific target and obtains data from it. \textit{Threat} refers to attack behavior planned by humans. Carrying out APT traffic detection in enterprise network environments is therefore of great significance in the network security domain.
Traditional APT traffic detection methods rely heavily on characteristic string detection, which is based on a set of key strings analyzed and obtained from captured APT samples or traffic, such as domain names, URLs, and specific character sequences. As the detection basis of intrusion detection systems (IDS) and intrusion prevention systems (IPS), these characteristic strings work as IoCs (Indicators of Compromise) and directly recognize and match threats in the traffic. However, in recent years, as more and more APTs use encryption protocols such as TLS for communication, most application-layer data is encrypted during transmission, and this traditional characteristic string detection method is losing effectiveness.
Machine learning methods can recognize traffic statistically from a feature set, and great progress has been made in the field of encrypted traffic identification in recent years. For TLS encrypted traffic, non-ciphertext features such as spatio-temporal features, handshake features, and certificate features have achieved some success in malicious traffic detection. However, much current research, and many products, use a session/flow (hereinafter referred to as a flow) determined by a four-tuple (source IP, source port, destination IP, destination port; hereinafter referred to as a quadruple) as the basic identification unit. This approach has difficulty capturing the multi-session communication features generally seen in APT communication, and its identification performance in many scenarios is therefore limited.
In order to solve the above problems, we propose an APT encrypted traffic detection method based on two parties and multiple sessions. From the multi-session data exchanged between two communicating parties over a certain period of time, this method extracts multiple recognizable features and transforms them into image data. Then, exploiting the strength of deep learning in image recognition, we design a convolutional neural network to classify the images and thereby indirectly classify the flows. We tested the method on the encrypted traffic of an APT group and the encrypted traffic of normal network applications. The experimental results show that the method achieves good accuracy and a low false positive rate.
The remainder of this paper is organized as follows: Section \ref{sec:relatedwork} introduces the related work and explains the origin of our ideas; Section \ref{sec:method} introduces the overall technical roadmap of the work; Section \ref{sec:evaluation} demonstrates the experimental results and analysis; finally, we conclude in Section \ref{sec:conclusion}.
\section{Related Work}\label{sec:relatedwork}
The traditional APT traffic detection method that relies on feature string matching is relatively mature in the industry and is used in many IDS and IPS products. For example, the earlier Snort \cite{c2} and the more recent Suricata \cite{c3} detect threats by matching a set of custom rules. Such products often rely on deep packet inspection (DPI), which uses protocol parsing to extract metadata from network flows and takes the metadata as the detection unit. Finsterbusch et al. \cite{c4} summarized current DPI-based traffic identification methods.
In recent years, there has been considerable research on APT traffic detection based on machine learning. Anderson et al. published relevant research results in 2016 \cite{c5} and 2017 \cite{c6}, using machine learning algorithms such as random forests to detect threats in encrypted traffic. Some commercial products based on this approach have appeared, such as Cisco's StealthWatch \cite{c7} and Huawei's Agile Campus Network \cite{c8}, which achieve practical results in specific application scenarios. The above research results all take a single session as the identification unit. However, an APT attack exhibits features that are only present across multiple sessions and cannot be seen in any single session. For example, in the Command \& Control (C\&C) stage \cite{c9}, there will be many heartbeat sessions or multiple secret-stealing sessions between the two communicating parties within a certain period of time. These sessions have many identifiable features in terms of interaction sequence, data size, upload/download ratio, etc. Current machine learning methods have not made full use of these features.
Based on the above analysis, we propose an APT encrypted traffic detection method based on two parties and multi-session, aiming to make full use of the multi-session features not yet exploited in current research and thus achieve more accurate detection results.
\section{Methodology} \label{sec:method}
Since the proposed traffic detection method has special requirements on training data, we first introduce the specially created data set, then explain the traffic-to-image conversion method, and finally introduce the CNN model architecture used.
\subsection{Data Set}
The detection method in this paper needs multiple sessions between two communicating parties of the same application type. Most public traffic-analysis data sets provide pre-processed feature data, such as the classic KDD CUP 1999 \cite{c10}. Among the few data sets that provide raw traffic, such as USTC-TFC-2016 \cite{c11}, after analysis and comparison we did not find one that meets the requirements of our method. In order to carry out the experimental work, we used internally collected data to construct traffic data sets that meet the conditions. The data sets consist of two parts: the first is the traffic of an APT group, specifically from the command-and-control phase, which generally has more sessions; the second is four types of normal application traffic, covering browser, mail, Microsoft Office, and video. The sessions of each type are grouped according to the two communicating parties. The APT traffic set contains 3500 groups, and each normal traffic set contains 5000 groups. The details are shown in Table 1.
\begin{table}
\centering
\caption{Content of Traffic Data Set}
\begin{tabular}{c c c c}
\hline
Label & Type & \tabincell{c}{Stage or\\Application} & \tabincell{c}{Number of \\Session Groups} \\
\hline
APT Flow & APT Group & C\&C & 3500 \\
\multirow{4}*{Normal Flow} & Browser & Chrome & 5000 \\
~ & Mail & Outlook & 5000 \\
~ & Office & Excel & 5000 \\
~ & Video & Youku & 5000 \\
\hline
\end{tabular}
\end{table}
\subsection{Traffic Image Conversion}
Converting the original traffic data into image data for classification requires four steps: traffic analysis, session grouping, feature extraction, and image conversion.
\subsubsection{Traffic Analysis}
Traffic analysis is the basic step of traffic classification. Traffic is continuous data, which needs to be divided into discrete units according to certain rules before classification can be performed. The mainstream method is to divide the traffic into multiple sessions according to the four-tuple and treat each session as an independent data unit for classification. Following our technical roadmap, we also divide the traffic into multiple sessions first. After this step, the continuous input traffic is converted into a set of discrete data units composed of multiple sessions. Suppose the input traffic is $T$; then the output is the session set $S=\{s_1, s_2, s_3, ..., s_n\}$, where $s_1$ to $s_n$ are the data of each session and $n$ is the total number of sessions.
\subsubsection{Session Grouping}
The session units obtained from the traffic analysis step are further grouped according to the two sides of the communication, where the communicating parties are identified by the triplet of the two IP addresses and the server port. Compared with the four-tuple used in traffic analysis, the only difference is that the client port is ignored. A typical scenario is a user continuously visiting the same HTTPS website with a browser within a fixed period of time, generating multiple TLS sessions. The client ports of these sessions differ because they are randomly selected each time. However, the web server has a constant IP address, the client accessing the website also has a constant IP address, and the server port is the fixed port $443$; therefore, these sessions can be grouped into one set. Intuitively, these sessions should have similar data properties. Likewise, there are similar scenarios in APT encrypted traffic; for example, the heartbeat sessions used to inform the command-and-control server that the victim is alive have similar properties. After this step, the input is the session set $S=\{s_1, s_2, s_3, ..., s_n\}$ and the output is $G=\{g_1, g_2, ..., g_m\}$, where $g_1$ to $g_m$ are session groups, each containing several sessions arranged in chronological order of their first frame, and $m$ is the number of session groups, i.e., the number of communicating pairs.
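As an illustration of this grouping step, a minimal Python sketch is given below; the session record fields (\texttt{client\_ip}, \texttt{server\_ip}, \texttt{server\_port}, \texttt{first\_frame\_time}) are hypothetical names introduced here, not taken from our implementation.
\begin{verbatim}
from collections import defaultdict

def group_sessions(sessions):
    """Group parsed sessions by (client IP, server IP, server port)."""
    groups = defaultdict(list)
    for s in sessions:
        key = (s['client_ip'], s['server_ip'], s['server_port'])
        groups[key].append(s)
    # within each group, order sessions by the timestamp of their first frame
    for key in groups:
        groups[key].sort(key=lambda s: s['first_frame_time'])
    return groups
\end{verbatim}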
\subsubsection{Feature Extraction}
Each session group produced by the grouping step is the basic data unit for subsequent classification. The feature extraction step extracts a set of features for each unit, which are used in the subsequent image conversion step. The extracted features are the session temporal relation, the session time intervals, and the up/down data ratio. The session temporal relation refers to the order of the sessions; its extraction is straightforward, as the first-frame timestamps of all sessions in each group can be used directly, so the output is a set of time-series data. The session time interval refers to the time between each pair of consecutive sessions in a group, specifically the interval between the last-frame timestamp of one session and the first-frame timestamp of the following session. The up/down data ratio is the ratio of the bytes sent from client to server to the bytes sent from server to client. Since in the TLS protocol it is the application-layer data that truly reflects the data exchange, only application-layer data is used for this calculation, that is, the data actually transmitted by the two parties after TLS key negotiation. After this step, the input is the session group data $G=\{g_1, g_2, ..., g_m\}$ and the output is the feature set data $F=\{f_1, f_2, ..., f_m\}$, where each $f_i$ contains the three types of feature data above.
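A companion sketch of the feature extraction for one session group follows; as above, the field names (\texttt{last\_frame\_time}, \texttt{app\_bytes\_up}, \texttt{app\_bytes\_down}) are hypothetical.
\begin{verbatim}
def extract_features(group):
    """Return (start times, inter-session gaps, up/down byte counts)."""
    starts = [s['first_frame_time'] for s in group]       # temporal relation
    gaps = [group[i + 1]['first_frame_time'] - group[i]['last_frame_time']
            for i in range(len(group) - 1)]               # time intervals
    updown = [(s['app_bytes_up'], s['app_bytes_down'])    # application-layer
              for s in group]                             # bytes per direction
    return starts, gaps, updown
\end{verbatim}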
\subsubsection{Image Conversion}
Image conversion converts the output of the feature extraction step into images that visually reflect the three types of feature data described above. An example image is shown in Figure 1.
\begin{figure}
\centering
\includegraphics[width=3.4in]{p1.pdf}
\caption{Example of Two-parties Multi-session Image}\label{fig:processstatus}
\end{figure}
The figure shows the multi-session traffic image of two communicating parties. The two ends represent the two parties: the left side is the client, labeled with the client IP, while the right side is the server, labeled with both the server IP and the server port. The column chart in the middle shows the interaction between the two parties, with each column representing one session. The length of a column above the horizontal axis represents the bytes of application-layer data sent from the client to the server, and the length below the axis represents the bytes sent from the server to the client. The order of the columns is the order of the sessions, and the spacing between columns reflects the session time intervals. In summary, each pair of communicating parties produces one such image, and intuitively, different types of traffic should exhibit distinguishable image features.
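A minimal matplotlib sketch of this rendering is shown below; the figure size, bar width, color, and output resolution are illustrative choices, not the settings used to produce Figure 1.
\begin{verbatim}
import matplotlib.pyplot as plt

def render_group_image(starts, updown, out_path, bar_width=1.0):
    """Draw one two-party multi-session image and save it to a file."""
    up = [u for u, d in updown]
    down = [-d for u, d in updown]        # downlink drawn below the axis
    fig, ax = plt.subplots(figsize=(2.8, 2.8))
    ax.bar(starts, up, width=bar_width, color='black')
    ax.bar(starts, down, width=bar_width, color='black')
    ax.axhline(0.0, linewidth=0.5, color='black')
    ax.axis('off')                        # the CNN needs pixels, not axes
    fig.savefig(out_path, dpi=10)         # small raster suited to CNN input
    plt.close(fig)
\end{verbatim}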
\subsection{Convolution Neural Network Architecture}
A CNN (Convolutional Neural Network) is currently the mainstream deep learning model in the field of image classification and has achieved excellent results in many application scenarios. Considering the complexity of the images and the amount of training data, we adopt the LeNet-5 \cite{c12} convolutional neural network architecture, shown in Figure 2.
\begin{figure}
\centering
\includegraphics[width=3.4in]{p2.pdf}
\caption{CNN Model Architecture}\label{fig:CNN}
\end{figure}
The CNN reads the pixel values from the image file, and these values are normalized from $[0, 255]$ to $[0, 1]$. In the first convolutional layer $C_1$, the input is convolved with a $5\times5$ kernel over $32$ channels, generating $32$ feature maps of size $28\times28$; a $2\times2$ max-pooling operation in layer $P_1$ then produces $32$ feature maps of size $14\times14$. In the second convolutional layer $C_2$, the kernel size is also $5\times5$, but with $64$ channels, generating $64$ feature maps of size $14\times14$; a $2\times2$ max-pooling operation in layer $P_2$ then produces $64$ feature maps of size $7\times7$. Next are two fully connected layers, which reduce the data size to $1024$ and $10$ in turn. Finally, a softmax function outputs the class probabilities. To reduce overfitting, dropout is applied before the output layer.
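For concreteness, a minimal Keras sketch of this architecture follows (Keras and TensorFlow are the platforms we use, see Section \ref{sec:evaluation}); the $32\times32$ grayscale input size and the optimizer are assumptions made here for illustration.
\begin{verbatim}
import tensorflow as tf

def build_model(num_classes=2):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 1)),          # assumed input size
        tf.keras.layers.Conv2D(32, 5, activation='relu'),  # C1: 32 @ 28x28
        tf.keras.layers.MaxPooling2D(2),                   # P1: 32 @ 14x14
        tf.keras.layers.Conv2D(64, 5, padding='same',
                               activation='relu'),         # C2: 64 @ 14x14
        tf.keras.layers.MaxPooling2D(2),                   # P2: 64 @ 7x7
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1024, activation='relu'),
        tf.keras.layers.Dropout(0.5),            # dropout before the output
        tf.keras.layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
\end{verbatim}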
\section{Evaluation}\label{sec:evaluation}
We use Keras and TensorFlow as the training platform, running on the Ubuntu 18.04 64-bit operating system. $2/10$ of the data was randomly selected as test data, and the remaining $8/10$ was used for training. We evaluate the proposed method with the following criteria: Accuracy ($A$), Precision ($P$), Recall ($R$), and $F_1$-score ($F_1$), defined as follows:
\begin{equation}\label{A}
A =\frac{TP + TN}{TP + FP + FN + TN}
\end{equation}
\begin{equation}\label{P}
P =\frac{TP}{TP + FP}
\end{equation}
\begin{equation}\label{R}
R =\frac{TP}{TP + FN}
\end{equation}
\begin{equation}\label{F}
F_1 =\frac{2PR}{P + R}
\end{equation}
Among them, true positive ($TP$) represents the number of correctly identified target flows, true negative ($TN$) the number of correctly identified other flows, false positive ($FP$) the number of other flows wrongly identified as target flows, and false negative ($FN$) the number of target flows missed.
The experimental results are shown in Table 2. The precision, recall, and $F_1$ values refer to the APT-C\&C flows.
\begin{table*}
\centering
\caption{Experimental Result Data (Percentage)}
\begin{tabular}{c c c c c c}
\hline
~ & Data Set & Accuracy & Precision & \tabincell{c}{Recall \\Ratio } & \tabincell{c}{F1 \\Value} \\
\hline
\multirow{2} *{1} & APT-C\&C & \multirow{2} *{95.3} & \multirow{2} *{96.3} & \multirow{2} *{92.9} & \multirow{2} *{94.6} \\
~ & Chrome & ~ & ~ & ~ & ~ \\
\multirow{2} *{2} & APT-C\&C & \multirow{2} *{99.8} & \multirow{2} *{99.6} & \multirow{2} *{99.9} & \multirow{2} *{99.7} \\
~ & Outlook & ~ & ~ & ~ & ~ \\
\multirow{2} *{3} & APT-C\&C & \multirow{2} *{92.0} & \multirow{2} *{89.0} & \multirow{2} *{96.5} & \multirow{2} *{92.6} \\
~ & Excel & ~ & ~ & ~ & ~ \\
\multirow{2} *{4} & APT-C\&C & \multirow{2} *{92.1} & \multirow{2} *{97.1} & \multirow{2} *{91.1} & \multirow{2} *{94.0} \\
~ & Youku & ~ & ~ & ~ & ~ \\
\multirow{2} *{5} & APT-C\&C & \multirow{2} *{96.1} & \multirow{2} *{97.0} & \multirow{2} *{95.5} & \multirow{2} *{96.2} \\
~ & Mixed Normal Flow & ~ & ~ & ~ & ~ \\
\hline
\end{tabular}
\end{table*}
From the results, we can see that the experiment achieved good performance on all five binary classification tasks, with accuracies above 90\%. The highest accuracy, 99.8\%, was achieved in the APT-C\&C vs. Outlook experiment. Notably, in the fifth experiment, the normal flow set is a random mixture of the four normal flow types, which is closer to the actual application scenario, and the accuracy is 96.1\%. These experimental results preliminarily verify the effectiveness of our proposed two-party multi-session APT encrypted traffic identification method.
\section{Conclusion}\label{sec:conclusion}
To address the problem of APT encrypted traffic identification in the network security domain, we proposed an APT encrypted traffic identification method based on two parties and multiple sessions. The method does not need complex feature engineering; it only extracts the multi-session temporal relation, time intervals, and up/down data ratio, converts them into image data, and performs image recognition with a convolutional neural network, thereby achieving flow identification. The results of several binary classification experiments verify the effectiveness of the method. In the next stage of our work, we will use more types of data to extend the verification and expand the experiments to multi-class scenarios to further explore the application potential of this method.
\section*{Acknowledgment}
This work is supported by the Fundamental Research Funds for the Central Universities (2018XNG1815) and MCM20180504.
|
{
"arxiv_id": "2302.13170",
"language": "en",
"timestamp": "2023-02-28T02:11:47",
"url": "https://arxiv.org/abs/2302.13170",
"yymm": "2302"
} | \section{Introduction} \label{sec:intro}
Emotion has been widely recognized as a mental state and a psycho-physiological process that can be associated with various brain or physical activities \cite{dalgleish2004emotional,koelstra2011deap,zhang2021deep}. It can be expressed through various modalities, such as facial expressions, hand gestures, body movement, and voice, with various levels of intensity \cite{peelen2010supramodal}. Emotion plays an important role in our daily life by affecting decision-making and interactions \cite{turner2006sociological}; it is thus vital to enable computers to identify, understand, and respond to human emotions \cite{picard2000affective} for better human-computer interaction and user experience. In recent decades, among non-invasive physiological measurements such as electrocardiography \cite{sarkar2020self}, electromyography \cite{jerritta2011physiological}, and electrodermal activity \cite{bhatti2021attentive}, Electroencephalography (EEG) has been widely used for human emotion recognition due to its direct reflection of brain activities \cite{koelstra2011deap,zhang2020rfnet,zhang2021distilling,liu2021comparing}.
Recently, compared to statistical models and classical machine learning algorithms, deep learning techniques have grown in popularity, resulting in considerable performance improvements in emotion classification from EEG \cite{zhang2020rfnet,lotte2018review}. Many of the existing deep learning based algorithms are \textit{fully supervised}, requiring annotations for a large number of training samples \cite{zhang2022holistic}. Nonetheless, EEG labeling is a time-consuming, expensive, and difficult process since it requires multiple evaluations such as pre-stimulus and post-experiment self-assessments \cite{zheng2015investigating,zheng2018emotionmeter,liu2021comparing}. To tackle the scarcity of EEG labels, \textit{Semi-Supervised Learning (SSL)} has recently been explored for emotion recognition using EEG \cite{zhang2021deep,zhang2022holistic,zhang2022parse}.
Another challenge in EEG labeling is the reliability of self-assessment for emotions \cite{correa2018amigos}. For example, after watching emotion-related video clips, participants may find it easy to distinguish between \textit{dissimilar} emotions or feelings such as `happy' vs. `fear', but have difficulty in distinguishing similar ones, for instance `disgust' vs. `fear'. This uncertainty in providing accurate labels may result in unreliable self-assessment reports \cite{correa2018amigos}.
To tackle the above-stated problems, we propose a framework to allow participants to report multiple possible emotions if they are uncertain about their affective state during the self-assessment stage. This notion has been explored in other domains and is referred to as \textit{\textbf{Partial Label Learning (PLL)}}. However, ambiguous labeling often causes performance degradation in deep learning algorithms as the ground truth is hidden within the candidate labels during the training phase \cite{wang2022pico}, making PLL a challenging area. In this paper, we explore several state-of-the-art PLL techniques \cite{seo2021power,lv2020progressive,zhang2022exploiting,wen2021leveraged,wu2022revisiting,wang2022pico} that have been proposed in the area of computer vision, and adapt them for EEG representation learning for the first time. We use a large-scale emotion EEG dataset, SEED-V \cite{liu2021comparing} and create a new experimental setup to allow for proper testing of PLL techniques with this dataset. We comprehensively compare and analyze these techniques to understand the viability of using PLL for EEG-based emotion recognition. We provide an overview of the core concept behind PLL in Figure \ref{sl_comparison}.
Our contributions in this paper are as follows. (\textbf{1}) For the first time, we address the challenge of ambiguous EEG labeling in emotion recognition tasks. (\textbf{2}) We conduct extensive experiments by re-implementing and adapting six recently developed deep state-of-the-art PLL algorithms for emotion recognition using EEG.
(\textbf{3}) We design experiments to evaluate and compare the performance of PLL methods with different candidate labels generation processes. (\textbf{4}) We make our code publicly available at: \href{https://github.com/guangyizhangbci/PLL-Emotion-EEG}{https://github.com/guangyizhangbci/PLL-Emotion-EEG}.
The rest of this paper is organized as follows. In the next section, we provide a literature survey on EEG learning for affective computing, followed by a summary of PLL methods proposed in the literature. In Section \ref{sec:method}, we provide the problem statement and the details of the six PLL techniques which we explore and adopt for EEG. We then describe the dataset in Section \ref{sec:experiment}, along with implementation details and experiment setup. In Section \ref{sec:result}, the detailed results and analysis are provided. Finally, Section \ref{sec:conclusion} presents the concluding remarks.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{sl_comparison.pdf}
\caption{A comparison of fully supervised learning, semi-supervised learning, and partial label learning for emotion recognition.}
\label{sl_comparison}
\end{center}
\end{figure}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.8\textwidth]{overview.pdf}
\caption{A general EEG-based PLL framework for emotion recognition is presented.}
\label{overview}
\end{center}
\end{figure*}
\section{Background} \label{sec:background}
\subsection{Deep EEG Representation Learning}
Deep learning techniques, such as deep belief networks \cite{zheng2015investigating}, fully connected neural networks \cite{zhang2020rfnet}, Convolutional Neural Networks (CNNs) \cite{lawhern2018eegnet}, capsule networks \cite{zhang2021capsule,zhang2021distilling}, Recurrent Neural Networks (RNNs) \cite{roy2019chrononet}, long short-term memory networks \cite{zhang2019classification}, graph neural networks \cite{song2018eeg,zhong2020eeg}, as well as combinations of CNNs and RNNs \cite{zhang2018cascade}, have been widely used for EEG-based fully supervised tasks, such as motor imagery or movement classification and emotion recognition \cite{craik2019deep,roy2019deep}. These deep learning techniques have outperformed classical statistical algorithms and conventional machine learning methods in most tasks, as they are able to learn non-linear and more complex patterns and to focus on task-relevant features \cite{zhang2019classification,zhang2020rfnet,zhang2021capsule}. CNNs are the most popular deep learning backbones for EEG learning \cite{craik2019deep,roy2019deep,lawhern2018eegnet}.
Deep EEG representation learning frameworks have also shown good performance in semi-supervised tasks, where only a few EEG annotations are available during training \cite{zhang2021deep,zhang2022holistic}. An attention-based recurrent autoencoder was proposed for semi-supervised EEG learning \cite{zhang2021deep}. Furthermore,
state-of-the-art semi-supervised techniques originally developed for computer vision tasks, such as MixMatch \cite{berthelot2019mixmatch}, FixMatch \cite{berthelot2019mixmatch} and AdaMatch \cite{berthelot2021adamatch}, have been adapted for EEG learning and obtained promising results in emotion recognition tasks. A novel pairwise EEG representation alignment method (PARSE), based on a lightweight CNN backbone, was lately proposed and achieved state-of-the-art results in semi-supervised tasks on multiple publicly available large-scale affective datasets \cite{zhang2022parse}. More importantly, PARSE achieved similar performance to fully-supervised models trained on large-scale labeled datasets with substantially fewer labeled samples, demonstrating the superiority of deep semi-supervised EEG learning in the face of a scarcity of labeled data \cite{zhang2022parse}.
\subsection{Partial Label Learning}
PLL algorithms have been lately used to tackle the challenges of label ambiguity and achieved promising performance in a variety of image classification tasks with ambiguous labels. For instance, in \cite{seo2021power}, a deep naive partial label learning model was proposed based on the assumption that distribution of candidate labels should be uniform since ground truth is unknown \cite{jin2002learning}. In addition to this naive approach, a number of other frameworks have been lately developed to rely on a process called `label disambiguation', which refines the distribution of candidate labels by updating it in each training iteration. This process of combining label disambiguation and model classification works as an Expectation-Maximization (EM) algorithm as shown in \cite{jin2002learning}. Specifically, the candidate labels' distribution is initially assumed to be uniform. In the first iteration, a model is trained on the uniformly distributed candidate labels. Then, the candidate labels are disambiguated based on the trained model's predictions. In the next iteration, the model is trained on the disambiguated candidate labels, and the process repeats during the training phase \cite{jin2002learning,lv2020progressive}.
In \cite{zhang2022exploiting}, a method was proposed for label disambiguation that utilized the importance of each output class, rather than relying solely on the model's predictions. To mitigate the possible negative effect of training on false positive labels, in \cite{wen2021leveraged,wu2022revisiting}, a method was proposed to leverage both candidate and \textit{non-candidate} labels with the label disambiguation process. Furthermore, in \cite{wang2022pico}, a prototype-based label disambiguation method was proposed, which was used in combination with supervised contrastive learning \cite{khosla2020supervised}, a technique that relies on contrastive loss to train a Siamese-style network. Specifically, first, prototype-based label disambiguation is used to guess the ground truth and generate accurate positive pairs for contrastive learning. Then, the contrastively learned embeddings, in turn, could better guide the label disambiguation process. Thus, these two components are mutually beneficial during the iterative training, leading the PLL framework to achieve state-of-the-art results in multiple vision tasks \cite{wang2022pico}.
\section{PLL Methods} \label{sec:method}
\subsection{Problem Setup} \label{sec: problem setup}
Let us denote $\mathcal{X}$ and $\mathcal{Y} = \{1,2,...,k\}$ as the input EEG feature space and the emotion label space, respectively, where $k$ is the total number of emotion categories. Accordingly, $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ represent EEG samples and ground-truth emotion labels.
For a classification problem with ambiguous labels, the collection of all non-empty subsets of $\{1,2,...,k\}$, namely $2^{\mathcal{Y}}\setminus\emptyset$, is used instead of $\mathcal{Y}$ as the label space.
The training set $D = \{(x_i, Y_i)\}_{i=1}^N$ consists of normalized EEG inputs $x_i \sim[0,1]$ and candidate label sets $Y_i \in 2^{\mathcal{Y}}\setminus\emptyset$. The ground truth $y_i$ is concealed in the candidate set, i.e., $y_i \in Y_i$.
In the classical PLL literature, candidate labels are assumed to be generated independently, uniformly, and randomly: each incorrect label is included in the candidate set with probability $q$, i.e., $\mathrm{P}(s \in Y_i \mid x_i, y_i) = q, \forall s \neq y_i$, where $q<1$ represents the degree of label ambiguity, while the ground truth is always included, $\mathrm{P}(y_i \in Y_i) = 1$.
The goal of PLL is to construct a robust multi-class classifier by minimizing the divergence between model output and candidate labels. In this study, we aim to evaluate the effectiveness of various PLL algorithms for EEG representation learning when emotion labels with different levels of ambiguity are provided.
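To make the candidate generation process concrete, a minimal PyTorch sketch is given below (our implementations are PyTorch-based); \texttt{generate\_candidates} is a hypothetical helper name.
\begin{verbatim}
import torch

def generate_candidates(y, k, q):
    """y: (N,) ground-truth labels; returns an (N, k) binary candidate mask."""
    n = y.size(0)
    # each incorrect label enters the candidate set independently with prob. q
    mask = (torch.rand(n, k) < q).float()
    mask[torch.arange(n), y] = 1.0      # the ground truth is always included
    return mask
\end{verbatim}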
\subsection{Method Overview} \label{sec: solutions overview}
A general PLL framework for emotion recognition is illustrated in Figure \ref{overview}. In a typical EEG-based emotion recognition experiment setup, EEG recordings are collected from a participant's brain scalp while they are watching an emotion-related video clip. Then, candidate labels are generated based on
the participant's self-assessment. These candidate labels are then used to train a model on the features extracted from the EEG recordings. Label disambiguation is typically based on either the \textit{uniformly distributed} candidate labels ($\hat{Y}$) or the disambiguated labels from the previous training iteration ($\vec{Y}$), together with the model predictions ($p_m(x)$). The disambiguated labels ($\vec{Y}$) are updated in each training iteration, and the model is trained by minimizing the divergence between the model predictions and the disambiguated labels. When label disambiguation is not used, the divergence is instead minimized between the model predictions and the (uniformly distributed) candidate labels throughout the entire training phase. Training is performed in batches, where $x_b$ denotes the input samples of each batch.
We identify six state-of-the-art PLL techniques from the literature, which we re-implement and adapt for emotion recognition from EEG. These methods are Deep Naive Partial Label (DNPL) learning \cite{seo2021power}, PROgressive iDENtification (PRODEN) of true labels for PLL \cite{lv2020progressive}, Class Activation Value Learning (CAVL) for PLL \cite{zhang2022exploiting}, Leveraged Weighted (LW) loss for PLL \cite{wen2021leveraged}, revisiting Consistency Regularization (CR) for deep PLL \cite{wu2022revisiting}, and Partial label learning with COntrastive label disambiguation (PiCO) \cite{wang2022pico}, which we describe in the following sub-sections.
\subsection{DNPL}
In DNPL \cite{seo2021power}, since the ground truth is unknown \cite{jin2002learning}, the candidate labels are assumed to have uniform distribution as follows:
\begin{equation}\label{eq: uniform}
\overline{\hat{Y}_s}= \begin{cases} \frac{1}{|Y|} & \text{if}~~s \in Y,\\ 0 & \text{otherwise}, \end{cases}
\end{equation}
where $\hat{Y}_s=1, \forall s \in Y$.
The DNPL model is simply trained by minimizing the divergence between the model's predictions and the uniformly distributed candidate labels, as:
\begin{equation}\label{eq: DNPl}
\mathcal{L}_{naive} = - \frac{1}{|x_b|}\sum_{i=1}^{|x_b|} \log(\nu(\hat{Y} \phi(p_m(x_i)))),
\end{equation}
where $\phi(.)$ denotes the softmax operation, and $\nu$ is the clamp operator which limits the output in the range of $[0,1]$.
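A minimal PyTorch sketch of this naive loss follows, where \texttt{cand\_mask} denotes the binary candidate indicator $\hat{Y}$:
\begin{verbatim}
import torch

def dnpl_loss(logits, cand_mask):
    """Naive partial-label loss: log of the softmax mass on the candidates."""
    probs = torch.softmax(logits, dim=1)
    # probability mass on the candidate set, clamped to [0, 1]
    cand_prob = (probs * cand_mask).sum(dim=1).clamp(0.0, 1.0)
    return -torch.log(cand_prob + 1e-12).mean()
\end{verbatim}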
\subsection{PRODEN}
PRODEN was proposed to refine candidate labels through a label disambiguation process \cite{lv2020progressive}. Specifically, the process obtains disambiguated labels by multiplying model predictions with candidate labels, as: $\vec{Y} = p_m(x) \hat{Y}$. Following, the model is trained by minimizing cross-entropy between the model's predictions and the disambiguated labels, as:
\begin{equation}\label{eq: PRODEN}
\mathcal{L}_{ce} = -\frac{1}{|x_b|}\sum_{i=1}^{|x_b|} \vec{Y} ~\log (\phi(p_m(x_i))).
\end{equation}
Here, when the label disambiguation process is not used, $\vec{Y}$ will be replaced by $\hat{Y}$.
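One PRODEN training step can be sketched as follows; the disambiguated targets are detached so that gradients flow only through the classification term:
\begin{verbatim}
import torch

def proden_loss(logits, cand_mask):
    """Cross-entropy against prediction-disambiguated candidate labels."""
    probs = torch.softmax(logits, dim=1)
    w = probs * cand_mask                      # restrict mass to candidates
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-12)
    log_probs = torch.log_softmax(logits, dim=1)
    # targets act as fixed pseudo-labels for this iteration
    return -(w.detach() * log_probs).sum(dim=1).mean()
\end{verbatim}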
\subsection{CAVL}
In \cite{zhang2022exploiting}, CAVL was proposed to disambiguate candidate labels by focusing on the importance score of each label in the model's predictions. Specifically, inspired by \cite{selvaraju2017grad}, CAVL uses gradient flow ($v^s$) of the network output's log probability as a measurement of the importance of each label, which is shown as:
\begin{equation}\label{eq: CAVL}
\begin{split}
v^s &= \bigg| \frac{\partial(-\log(\phi^s(p_m(x))))}{\partial p_m(x)} \bigg|\partial \phi^s(p_m(x)) \\
&= \big|\phi^s(p_m(x))-1\big|\phi^s(p_m(x)), \forall s \in [1,k].
\end{split}
\end{equation}
As suggested in \cite{zhang2022exploiting}, $\phi(p_m(x))$ is replaced by $p_m(x)$ since $p_m(x)$ contains more information. Consequently, we have $\hat{v}^s = \big|p_m^{s}(x_i)-1\big|p_m^{s}(x_i)$ and disambiguated label $\vec{Y} = \argmax (\hat{v}^s \overline{\hat{Y}})$. The same cross-entropy loss in (Eq. \ref{eq: PRODEN}) is used for model training. When label disambiguation is not used, same as in PRODEN, $\vec{Y}$ will be replaced by $\hat{Y}$.
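The class-activation-value disambiguation can be sketched as follows, with \texttt{logits} standing for the raw network outputs $p_m(x)$:
\begin{verbatim}
import torch
import torch.nn.functional as F

def cavl_loss(logits, cand_mask):
    """Cross-entropy against activation-value-disambiguated hard labels."""
    v = (logits - 1.0).abs() * logits          # class activation values
    uniform = cand_mask / cand_mask.sum(dim=1, keepdim=True)
    target = (v * uniform).argmax(dim=1)       # disambiguated hard label
    return F.cross_entropy(logits, target)
\end{verbatim}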
\subsection{LW} \label{method: LW}
Most existing PLL algorithms only focus on learning from candidate labels while ignoring non-candidate ones \cite{seo2021power,lv2020progressive,zhang2022exploiting,wang2022pico}. However, a model could be misled by the false positive labels if it only relies on the candidate label set \cite{wu2022revisiting}. To mitigate this possible negative effect, in \cite{wen2021leveraged}, both candidate and non-candidate labels have been used for model training using a LW loss. To do so, first, the sigmoid loss $\sum_{s\notin Y}\varphi(p_m(x))$ ($\varphi(.)$ representing sigmoid operation) or negative log-likelihood loss $-\sum_{s\notin Y} \log(1-\phi(p_m(x)))$ is applied on non-candidate labels, in order to discourage the model predictions to be among the non-candidate labels. Next, the modified sigmoid loss $\sum_{s\in Y} \varphi(-p_m(x))$ or negative log-likelihood loss $-\sum_{s\in Y}\log(\phi(p_m(x)))$ is employed for model training on candidate labels. Finally, the model is trained on both candidate and non-candidate labels. The total loss function using the sigmoid function and cross-entropy loss (negative log-likelihood) is shown in Eq. \ref{eq: LW-sigmoid} and Eq. \ref{eq: LW-CE}, respectively.
\begin{equation}\label{eq: LW-sigmoid}
\mathcal{L}_{sig}^{lw} = \frac{1}{|x_b|}\sum_{i=1}^{|x_b|}\big[ \sum_{s\in Y} w_s \varphi(-p_m(x_i)) + \beta \sum_{s\notin Y} w_s \varphi(p_m(x_i)) \big],
\end{equation}
\begin{equation}\label{eq: LW-CE}
\begin{split}
\mathcal{L}_{ce}^{lw} & = - \frac{1}{|x_b|}\sum_{i=1}^{|x_b|}\big[\sum_{s\in Y} w_s \log(\phi(p_m(x_i)))\\ & + \beta \sum_{s\notin Y} w_s \log(1-\phi(p_m(x_i))) \big].
\end{split}
\end{equation}
A trade-off parameter $\beta=0, 1, 2$ is applied as suggested in \cite{wen2021leveraged}. The weights $w_s$ are the normalized model predictions, $w_s = \phi^s(p_m(x_i))$ for both $s \in Y$ and $s \notin Y$. The weights are updated at every iteration and assign more weight to the candidate labels that are more likely to be the ground truth and to the more confusing non-candidate labels. Note that $w_s=1$ when label disambiguation is not used.
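A sketch of the sigmoid variant of this loss follows; the detached prediction-based weights implement the weighting described above:
\begin{verbatim}
import torch

def lw_sigmoid_loss(logits, cand_mask, beta=1.0):
    """Leveraged weighted loss, sigmoid variant."""
    w = torch.softmax(logits, dim=1).detach()  # prediction-based weights
    pos = torch.sigmoid(-logits) * cand_mask * w         # candidate term
    neg = torch.sigmoid(logits) * (1.0 - cand_mask) * w  # non-candidate term
    return (pos.sum(dim=1) + beta * neg.sum(dim=1)).mean()
\end{verbatim}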
\subsection{CR}
In \cite{wu2022revisiting}, both candidate and non-candidate labels are leveraged in a similar manner to the LW method \cite{wen2021leveraged}, where modified negative log-likelihood loss is applied on non-candidate labels, as $\mathcal{L}_s = - \frac{1}{|x_b|} \sum_{i=1}^{|x_b|} \sum_{s \notin Y} \log(1-\phi(p_m(x_i)))$. In \cite{wu2022revisiting}, instead of using cross-entropy loss,
a consistency regularization method has been proposed for learning candidate labels. Consistency regularization encourages a model to have consistent predictions on the original data or its perturbed versions \cite{samuli2017temporal,berthelot2019mixmatch,zhang2022parse}. As suggested in \cite{wu2022revisiting}, consistency regularization is applied by minimizing the divergence between each augmentation of an instance and its \textit{refined} label. To do so, we first obtain the refined label based on the network predictions of the original data and its different augmentations, as:
\begin{equation}\label{eq: CR disambiguation}
\vec{Y}_{s} = \prod_{i=1}^{3} \phi(p_m(A_i(x_b)))^{1/3}, \forall s \in Y
\end{equation}
where $A_i$ denotes the original, weakly, and strongly augmented data, i.e., $x, A_w, A_s$ for $i=1,2,3$. Additive Gaussian noise is commonly used as an effective data augmentation method in recent EEG studies \cite{li2019channel,luo2020data,zhang2022holistic,zhang2022parse}. Therefore, we produce the augmented data as $\mathcal{A}_{s/w}(x) = x + \mathcal{N}(\mu, \sigma)$, where $\mathcal{N}$ represents a Gaussian distribution. As suggested in \cite{zhang2021capsule,zhang2022parse}, we choose a mean value ($\mu$) of $0.5$, and standard deviation values ($\sigma$) of $0.8$ and $0.2$ for the strong ($\mathcal{A}_{s}$) and weak ($\mathcal{A}_{w}$) augmentations, respectively. Following, we minimize the divergence between the different augmentations of an instance and its refined label,
\begin{equation}\label{eq: CR unsupervised loss}
\mathcal{L}_u = \frac{1}{|x_b|}\sum_{i=1}^{|x_b|} \sum_{j=1}^{3} D_{KL}(\vec{Y}'\,\|\,\phi(p_m(A_j(x_i)))),
\end{equation} where $D_{KL}$ denotes the Kullback-Leibler (KL) divergence and $\vec{Y}'$ denotes the normalized disambiguated labels, $\vec{Y}' = \vec{Y}/ \sum_{s \in Y} \vec{Y}_s$.
Finally, the total loss consists of a supervised loss applied on non-candidate labels and a consistency regularization term applied on candidate labels,
\begin{equation}\label{eq: CR loss}
\mathcal{L}^{cr} = \mathcal{L}_s+ \eta \mathcal{L}_u,
\end{equation}
where $\eta= t/T$ is the warm-up function. $t$ and $T$ represent current epoch and the total training epochs, respectively. We have $\eta=0$ when label disambiguation is not used.
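The augmentation, label refinement, and consistency terms can be sketched as follows; the per-view clamping is a numerical-stability detail added here:
\begin{verbatim}
import torch
import torch.nn.functional as F

def augment(x, sigma):
    # additive Gaussian noise with mean 0.5, as in the cited EEG work
    return x + 0.5 + sigma * torch.randn_like(x)

def cr_loss(model, x, cand_mask, eta):
    views = [x, augment(x, 0.2), augment(x, 0.8)]  # original, weak, strong
    probs = [torch.softmax(model(v), dim=1) for v in views]
    # refined label: geometric mean of the three predictions over candidates
    y = (probs[0] * probs[1] * probs[2]).pow(1.0 / 3.0) * cand_mask
    y = (y / y.sum(dim=1, keepdim=True).clamp_min(1e-12)).detach()
    # supervised term: push probability away from non-candidate labels
    sup = -(((1.0 - probs[0]).clamp_min(1e-12).log())
            * (1.0 - cand_mask)).sum(dim=1).mean()
    # consistency term between every view and the refined label
    cons = sum(F.kl_div(p.clamp_min(1e-12).log(), y, reduction='batchmean')
               for p in probs)
    return sup + eta * cons
\end{verbatim}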
\subsection{PiCO} \label{method: PiCO}
PiCO was proposed to combine contrastive learning with a prototype-based label disambiguation method for effective partial label learning \cite{wang2022pico}. PiCO employs a framework similar to MoCo \cite{he2020momentum}, which includes a query network $p_m$ and a key network $p'_m$ sharing the same encoder architecture, where $p'_m$ is a momentum-updated copy of $p_m$ \cite{he2020momentum}. Below we describe the details of the contrastive learning and the prototype-based label disambiguation technique.
\textbf{Contrastive Learning.} Contrastive learning has achieved promising performance in both supervised and unsupervised learning tasks by maximizing the similarity between learned representations of a positive pair \cite{khosla2020supervised,chen2020simple,he2020momentum}. In the supervised contrastive learning, a positive pair includes samples from the same class, while a negative pair contains examples belonging to different classes. In unsupervised contrastive learning, a positive pair often consists of a sample and its augmentation, while a negative pair consists of a sample and a randomly chosen instance.
Since the ground truth is unknown in partial label learning tasks, contrastive learning could be performed in an unsupervised manner. To do so, we first construct a positive pair using an instance ($x_i$) and its weak augmentation ($A_w(x_i)$). Then, we obtain a pair of L$2$-normalized embeddings from the query and key networks, as $Q_i=||p_{\emb}(x_i)||_2$ and $K_i=||p'_{\emb}(A_w(x_i))||_2$, where $p_{\emb}$ and $p'_{\emb}$ denote the embeddings obtained from the encoders $p_m$ and $p'_m$, respectively, and $N_{\emb}=64$ denotes the dimension of the embeddings. Following, we initialize the negative sample pool as $\queue \in \mathbb{R}^{N_q \times N_{\emb}}$, where $\queue_{i,j} \sim \mathcal{N}(0,1), \forall i \in [1, N_q], \forall j \in [1,N_{\emb}]$, as suggested in \cite{he2020momentum}.
We set the length of the pool, $N_q=1000$, according to the total number of EEG training samples.
Next, we calculate the unsupervised contrastive loss as:
\begin{equation}\label{eq: unsupervised contrastive loss}
\mathcal{L}_{u}^{ct} = - \frac{1}{|x_b|} \sum_{i=1}^{|x_b|} \log \frac{\exp(\bmm(Q_i, K_i)/\tau)}{\sum_{i=1}^{|x_b|}\sum_{1}^{N_q} \exp(Q_i \queue^\top/\tau)},
\end{equation}
where $\bmm$ denotes the batch matrix multiplication operator such that $\bmm(Q_i, K_i) \in \mathbb{R}^{[|x_b|, 1]}$. We choose the temperature hyper-parameter $\tau=0.07$, as suggested in \cite{wang2022pico}.
Unsupervised contrastive learning remains challenging since the false negative samples in the negative samples pool may cause performance degradation \cite{khosla2020supervised,chen2022incremental}. To tackle this challenge, one solution is to construct a positive pair consisting of two samples whose \textit{guessed} labels are the same. To do so, we first estimate the pseudo-label as $\vec{Y}_i = \phi{(p_m(x_i))} \overline{\hat{Y}_i}$.
We then construct the contrastive representation pool as $\queue^{+}= \cat(K, \queue)$, where cat$(.)$ denotes a concatenation operation. We also have the corresponding pseudo-label pool as ${Y^{\queue^{+}} = \cat(\overline{\hat{Y}}, Y^{\queue})}$, where $Y^{\queue}\sim \mathcal{N}(0,1)$.
Next, we construct the set of instances with the same guessed label as $S_{\pos} =\{V'|V' \in \queue^{+}, \argmax(Y^{\queue^+}_j) = \argmax(\vec{Y}_i), \forall i \in [1, |x_b|], \forall j \in [1, N_{q+}]\}$, where $N_{q+}$ represents the length of $\queue+$.
Following, we employ the supervised contrastive loss as:
\begin{equation}\label{eq: supervised contrastive loss}\small
\mathcal{L}_{s}^{ct}=-\frac{1}{|x_b|}\sum_{i=1}^{|x_b|} \{\frac{1}{|S_{\pos}|} \sum_{V' \in {S_{\pos}}}\log \frac{\exp(Q_i V'^\top /\tau)}{\sum_{V \in {\queue^{+}}} \exp(Q_i V^\top/\tau)}\}.
\end{equation}
Both the negative samples pool and the corresponding pseudo-label pool, are randomly initialized with Gaussian distributions. $\queue$ is replaced by the embeddings $K$ obtained in the previous training batch, and $Y^{\queue}$ is replaced by the uniformly distributed candidate labels $\overline{\hat{Y}}$, chronologically.
\textbf{Prototype-based Label Disambiguation.} Prototype is defined as a representative embedding of instances with the same label \cite{li2021prototypical, wang2022pico}. We denote prototype as $\proto \in \mathbb{R}^{k\times N_{\emb}}$, where the instances belonging to each class have their unique embedding vector. From an EM perspective, the E-step uses clustering to estimate the distribution of the prototype, and the M-step is to update the query network parameters through contrastive learning \cite{li2021prototypical, wang2022pico}. The prototype is zero initialized and smoothly updated by the embedding $Q$ as:
\begin{equation}\label{eq: prototype}
\begin{split}
\proto_j &= ||\proto_j* \lambda + (1-\lambda)*Q_i ||_2, \\
&\text{if} ~ j = \argmax( \overline{\hat{Y}}\phi(p_m(x_i))), \forall i \in [1, |x_b|], \forall j \in [1, k],
\end{split}
\end{equation}
where $\lambda=0.99$ is the coefficient used in a moving average strategy. Following, the pseudo-label after prototype-based disambiguation is denoted as:
\begin{equation}\label{eq: pseudo label}
Y^{\proto}_i = \argmax(\phi(Q_i\proto^\top) \overline{\hat{Y_i}}).
\end{equation}
Finally, the total loss consists of the cross-entropy loss and the supervised contrastive loss using prototype-disambiguated labels, as
\begin{equation}\label{eq: PiCO}
\mathcal{L} = \mathcal{L}_{ce} + \xi \mathcal{L}_s^{ct},
\end{equation}
where $\vec{Y}$ refers to $Y^{\proto}$.
We adopt $\xi=0.5$ as the contrastive loss weight, as suggested in \cite{wang2022pico}. When the label disambiguation process is not used, the contrastive loss $\mathcal{L}_s^{ct}$ will be replaced by $\mathcal{L}_u^{ct}$, and $\vec{Y}$ will be replaced by $\hat{Y}$ in Eq. \ref{eq: PRODEN}. Note that when contrastive learning is not used, $\xi=0$.
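A sketch of the prototype update (Eq. \ref{eq: prototype}) and prototype-based pseudo-labeling (Eq. \ref{eq: pseudo label}) follows; the full PiCO pipeline additionally maintains the momentum key network and the contrastive pools described above:
\begin{verbatim}
import torch
import torch.nn.functional as F

def update_prototypes(protos, emb_q, logits, cand_mask, lam=0.99):
    """Moving-average update of the k class prototypes."""
    uniform = cand_mask / cand_mask.sum(dim=1, keepdim=True)
    cls = (uniform * torch.softmax(logits, dim=1)).argmax(dim=1)
    for i in range(emb_q.size(0)):
        j = cls[i]
        protos[j] = F.normalize(lam * protos[j] + (1 - lam) * emb_q[i], dim=0)
    return protos

def prototype_pseudo_labels(protos, emb_q, cand_mask):
    """Pseudo-labels from prototype similarity, masked by the candidates."""
    sims = torch.softmax(emb_q @ protos.t(), dim=1) * cand_mask
    return sims.argmax(dim=1)
\end{verbatim}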
\section{Experiment Setup} \label{sec:experiment}
\subsection{Dataset}
We use the SEED-V dataset\footnote{\href{https://bcmi.sjtu.edu.cn/home/seed/seed-v.html}{https://bcmi.sjtu.edu.cn/home/seed/seed-v.html}} \cite{liu2021comparing} to evaluate and compare the different techniques described earlier. In this dataset, $45$ short films covering five emotions (happy, neutral, sad, fear, and disgust) were used as stimuli. $16$ participants ($10$ females and $6$ males) took part in the study. All participants repeated the experiment three times, with completely new stimuli each time. Each experiment contains $3$ trials for each emotion, yielding $15$ trials in total. Each trial includes three stages: $15$ seconds of preparation prior to the stimulus, $2-4$ minutes of watching the film clip, and $15$ or $30$ seconds of self-assessment of the induction effect of the stimulus. EEG recordings were collected with $62$ channels at a sampling frequency of $1000$\textit{Hz}.
\subsection{Feature Space}
EEG recordings in each experiment were split into continuous and non-overlapping $4$-second segments. We use the same Differential Entropy (DE) features provided by the dataset \cite{liu2021comparing}. DE features were extracted from $5$ EEG bands (delta, theta, alpha, beta, and gamma) and all $62$ EEG channels for each segment, yielding $s=310$ features in total. We normalize the feature vector of each segment ($x_i \in [0,1]^{310}$) to be used as input for training the models.
\subsection{Evaluation Protocols}
We apply the same evaluation protocol originally used in \cite{liu2021comparing}. Specifically, the EEG recordings of each subject are formed into three pre-defined folds. Fold No. 1 includes the first $5$ trials of all three repeated experiments, yielding $15$ trials in total; folds No. $2$ and $3$ are formed analogously. A 3-fold cross-validation scheme is then used for training and testing, as suggested in \cite{liu2021comparing}.
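The fold construction amounts to the following sketch (a hypothetical helper, indexing trials within each repeated experiment):
\begin{verbatim}
def make_folds(n_experiments=3, trials_per_fold=5, n_folds=3):
    # fold f collects trials 5f..5f+4 of every repeated experiment
    return [[(e, t) for e in range(n_experiments)
                    for t in range(trials_per_fold * f,
                                   trials_per_fold * (f + 1))]
            for f in range(n_folds)]
\end{verbatim}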
\subsection{Backbone Model}
We employ the same lightweight CNN used in very recent EEG-based affective computing studies \cite{zhang2022holistic,zhang2022parse} as the backbone model. As shown in Table \ref{model}, the encoder consists of two 1-dimensional (1-D) convolutional blocks. Each block has a 1-D convolutional layer followed by a 1-D batch normalization layer and a LeakyReLU activation function. The encoder is used to transform the input EEG data into a compact representation. The learned embedding is then fed to the classifier, which includes two fully connected layers with a dropout rate of 0.5, for emotion recognition. In the table, $s$ and $k$ denote the total number of EEG features and the number of emotion categories, respectively.
\begin{table}[t]
\caption{Backbone Model Details.}
\scriptsize
\label{model}
\resizebox{\columnwidth}{!}
{
\begin{tabular}{c|c|c}
\hline
\textbf{Module} & \textbf{Layer details} & \textbf{Output shape} \\[.1cm] \hline
\textbf{Input} & - & $(1, s)$ \\[.1cm] \hline
\multirow{4}{*}{\textbf{Encoder}} & \convMatrix{3}{5}{5} & $(5, s-2)$ \\[.1cm]
\cline{2-3} & \convMatrix{3}{10}{10} & $(10, s-4)$ \\[.1cm]
\cline{2-3} \hline \textbf{Embedding} & Flatten & $10\times (s-4)$ \\[.1cm]
\cline{2-3}
\hline
\multirow{2}{*}{\textbf{Classifier}} & \convMatrixc{k} & $(k)$ \\[.1cm]
\hline
\end{tabular}
}
\end{table}
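A PyTorch sketch of Table~\ref{model} is given below; the width of the first fully connected layer is not specified in the table, so the value used here is an assumption for illustration.
\begin{verbatim}
import torch.nn as nn

class Backbone(nn.Module):
    def __init__(self, s=310, k=5, hidden=64):  # hidden width assumed
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 5, 3), nn.BatchNorm1d(5), nn.LeakyReLU(),
            nn.Conv1d(5, 10, 3), nn.BatchNorm1d(10), nn.LeakyReLU())
        self.classifier = nn.Sequential(
            nn.Flatten(),                    # embedding: 10 * (s - 4)
            nn.Linear(10 * (s - 4), hidden), nn.LeakyReLU(),
            nn.Dropout(0.5), nn.Linear(hidden, k))

    def forward(self, x):                    # x: (batch, 1, s)
        return self.classifier(self.encoder(x))
\end{verbatim}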
\subsection{Implementation Details}
In all the experiments, we ran $T=30$ training epochs with a batch size of $|x_b|=8$. We set the learning rate to $0.01$ and used SGD as the optimizer with a default momentum of $0.9$ and weight decay of $0.0001$.
The learning rate scheduler was not used for the naive method \cite{seo2021power} and the fully supervised method as it hurt the performance in these cases. All the PLL methods have been evaluated \textbf{five} times, each time with different random seeds used for candidate label generation.
Our experiments were carried out on two NVIDIA GeForce RTX $2080$ Ti GPUs using PyTorch \cite{paszke2019pytorch}. For the sake of reproducibility, we made the source code of this work publicly available at: \href{https://github.com/guangyizhangbci/PLL-Emotion-EEG}{https://github.com/guangyizhangbci/PLL-Emotion-EEG}.
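In code, the optimization setup amounts to the following sketch (reusing the \texttt{Backbone} class from the previous snippet):
\begin{verbatim}
import torch

model = Backbone(s=310, k=5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=0.0001)
# T = 30 epochs, batch size |x_b| = 8; no learning-rate scheduler
# for DNPL and the fully supervised baseline.
\end{verbatim}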
\begin{table*}[!ht]
\centering
\setlength
\tabcolsep{10pt}
\caption{The accuracy (in \%) of Comparison PLL Methods.}
\begin{tabular}{c|c|c|cccccc}
\toprule
Method &Venue & LD & $q=0.2$ & $q=0.4$ & $q=0.6$ & $q=0.8$ & $q=0.9$ & $q=0.95$ \\
\midrule
\cellcolor{Gray} Fully Supervised &\cellcolor{Gray}-&\cellcolor{Gray}-& \multicolumn{6}{c}{\cellcolor{Gray}63.08 \scriptsize(13.87)} \\
\midrule
\multirow{2}*{PRODEN \cite{lv2020progressive}} & \multirow{2}*{ICML20} &\xmark &58.55\scriptsize(16.63) & 57.69\scriptsize(15.99) & 53.87\scriptsize(15.24) & 43.05\scriptsize(14.76) & 32.37\scriptsize(11.73) & 26.00\scriptsize(10.72) \\
& &\checkmark &59.73\scriptsize(16.81) & 58.83\scriptsize(16.12) & 55.87\scriptsize(16.37) & 47.53\scriptsize(14.06) & \underline{37.58\scriptsize(13.54)} & 26.99\scriptsize(10.32) \\
\midrule
DNPL \cite{seo2021power} & ICASSP21 & \xmark &\underline{60.86\scriptsize(16.64)} & 60.37\scriptsize(15.82) & \underline{59.62\scriptsize(16.45)} & \textbf{59.42\scriptsize(15.65)} & \textbf{57.08\scriptsize(16.46)} & \textbf{49.44\scriptsize(15.85)} \\
\midrule
\multirow{2}*{LW \cite{wen2021leveraged}} & \multirow{2}*{ICML21} &\xmark &60.64\scriptsize(15.83) & 58.86\scriptsize(16.03) & 55.97\scriptsize(15.91) & 48.43\scriptsize(14.00) & 36.09\scriptsize(12.14) & \underline{29.30\scriptsize(11.57)} \\
& &\checkmark &60.48\scriptsize(16.75) & \underline{60.83\scriptsize(16.22)} & \textbf{59.81\scriptsize(16.30)} & \underline{54.61\scriptsize(16.30)} & 34.35\scriptsize(17.91) & 22.12\scriptsize(10.33) \\
\midrule
\multirow{2}*{CAVL \cite{zhang2022exploiting}} & \multirow{2}*{ICLR22} &\xmark &58.51\scriptsize(16.63) & 57.60\scriptsize(15.90) & 53.67\scriptsize(15.24) & 43.03\scriptsize(14.70) & 32.39\scriptsize(11.66) & 26.03\scriptsize(10.68) \\
& &\checkmark &57.58\scriptsize(16.52) & 54.94\scriptsize(16.68) & 48.58\scriptsize(15.64) & 30.75\scriptsize(12.18) & 23.44 $ $\scriptsize(8.19) & 21.72 $ $\scriptsize(7.03) \\
\midrule
\multirow{2}*{CR \cite{wu2022revisiting}} & \multirow{2}*{ICML22} &\xmark &42.42\scriptsize(13.70) & 42.64\scriptsize(14.37) & 41.22\scriptsize(13.56) & 35.60\scriptsize(12.62) & 30.70\scriptsize(10.71) & 26.41 $ $\scriptsize(9.27) \\
& &\checkmark &28.31\scriptsize(10.08) & 28.55 $ $\scriptsize(9.52) & 27.57 $ $\scriptsize(9.57) & 22.68 $ $\scriptsize(8.66) & 22.09 $ $\scriptsize(7.90) & 20.94 $ $\scriptsize(8.56) \\
\midrule
\multirow{2}*{PiCO \cite{wang2022pico}} & \multirow{2}*{ICLR22} &\xmark &57.23\scriptsize(16.44) & 55.28\scriptsize(16.71) & 49.95\scriptsize(15.89) & 38.04\scriptsize(12.82) & 28.34\scriptsize(11.18) & 24.91\scriptsize(10.26) \\
& &\checkmark &\textbf{62.68\scriptsize(15.88)} & \textbf{61.92\scriptsize(15.94)} & 57.54\scriptsize(15.59) & 43.87\scriptsize(14.99) & 31.26\scriptsize(11.52) & 24.69 $ $\scriptsize(9.66) \\
\bottomrule
\end{tabular}
\label{table: comparison}
\end{table*}
\begin{table*}[!ht]
\centering
\setlength
\tabcolsep{10pt}
\caption{The accuracy (in \%) of LW \cite{wen2021leveraged} with different settings.}
\begin{tabular}{c|c|c|cccccc}
\toprule
Method & $\beta$ & LD & $q=0.2$ & $q=0.4$ & $q=0.6$ & $q=0.8$ & $q=0.9$ & $q=0.95$ \\
\midrule
\multirow{6}*{LW-Sigmoid} & \multirow{2}*{0} &\xmark &35.68\scriptsize(12.69) & 31.25\scriptsize(11.01) & 26.79\scriptsize(10.01) & 22.14 $ $\scriptsize(9.22) & 22.47 $ $\scriptsize(9.26) & 20.66 $ $\scriptsize(8.84) \\
& &\checkmark &30.14\scriptsize(10.52) & 27.88\scriptsize(10.03) & 23.03 $ $\scriptsize(8.03) & 20.98 $ $\scriptsize(6.79) & 20.11 $ $\scriptsize(6.52) & 20.00 $ $\scriptsize(6.65) \\
& \multirow{2}*{1} &\xmark &57.44\scriptsize(16.66) & 48.55\scriptsize(15.32) & 34.54\scriptsize(11.24) & 25.77 $ $\scriptsize(9.85) & 24.35 $ $\scriptsize(9.68) & 22.08 $ $\scriptsize(9.62) \\
& &\checkmark &40.99\scriptsize(23.27) & 33.97\scriptsize(19.67) & 24.74\scriptsize(10.24) & 20.75 $ $\scriptsize(7.80) & 20.04 $ $\scriptsize(6.99) & 20.61 $ $\scriptsize(6.72) \\
& \multirow{2}*{2} &\xmark &36.72\scriptsize(15.55) & 48.96\scriptsize(18.35) & 49.51\scriptsize(15.64) & 28.98\scriptsize(11.12) & 24.54 $ $\scriptsize(9.70) & 22.52 $ $\scriptsize(9.40) \\
& &\checkmark &42.44\scriptsize(23.89) & 37.82\scriptsize(22.15) & 32.85\scriptsize(19.30) & 21.56 $ $\scriptsize(9.26) & 20.25 $ $\scriptsize(6.91) & 20.53 $ $\scriptsize(7.35) \\
\hline
\multirow{6}*{LW-Cross Entropy} & \multirow{2}*{0} &\xmark &59.44\scriptsize(16.16) & 57.17\scriptsize(16.03) & 53.93\scriptsize(15.14) & 42.63\scriptsize(14.17) & 31.81\scriptsize(12.40) & 27.29\scriptsize(11.04) \\
& &\checkmark &59.71\scriptsize(16.81) & 58.85\scriptsize(16.08) & 55.73\scriptsize(16.23) & 48.13\scriptsize(13.88) & \textbf{36.83\scriptsize(13.21)} &27.81\scriptsize(10.48) \\
& \multirow{2}*{1} &\xmark &60.14\scriptsize(15.85) & 58.49\scriptsize(16.13) & 55.58\scriptsize(15.96) & 46.38\scriptsize(14.55) & 33.95\scriptsize(12.81) & \underline{28.27\scriptsize(11.14)} \\
& &\checkmark &60.23\scriptsize(16.84) & \underline{60.01\scriptsize(16.36)} & \underline{58.79\scriptsize(16.14)} & \underline{51.48\scriptsize(16.19)} & 30.47\scriptsize(15.87) & 22.05\scriptsize(10.25) \\
& \multirow{2}*{2} &\xmark &\textbf{60.64\scriptsize(15.83)} & 58.86\scriptsize(16.03) & 55.97\scriptsize(15.91) & 48.43\scriptsize(14.00) & \underline{36.09\scriptsize(12.14)} & \textbf{29.30\scriptsize(11.57)} \\
& &\checkmark &\underline{60.48\scriptsize(16.75)} & \textbf{60.83\scriptsize(16.22)} & \textbf{59.81\scriptsize(16.30)} & \textbf{54.61\scriptsize(16.30)} & 34.35\scriptsize(17.91) & 22.12\scriptsize(10.33) \\
\bottomrule
\end{tabular}
\label{table: LW}
\end{table*}
\begin{table*}[!ht]
\centering
\setlength
\tabcolsep{10pt}
\caption{The accuracy (in \%) of PiCO \cite{wang2022pico} with different settings.}
\begin{tabular}{c|c|cccccc}
\toprule
CL & LD & $q=0.2$ & $q=0.4$ & $q=0.6$ & $q=0.8$ & $q=0.9$ & $q=0.95$ \\
\midrule
\xmark & \xmark &59.35\scriptsize(16.09) & 57.75\scriptsize(16.27) & 53.13\scriptsize(15.84) & 43.31\scriptsize(13.41) & \underline{33.30\scriptsize(12.53)} & \textbf{27.12\scriptsize(10.52)} \\
\xmark &\checkmark &\underline{59.48\scriptsize(16.27)} & \underline{58.77\scriptsize(16.65)} & \underline{55.21\scriptsize(16.52)} & \textbf{45.93\scriptsize(16.42)} & \textbf{35.12\scriptsize(13.51)} & \underline{25.81 $ $\scriptsize(9.60)} \\
\checkmark & \xmark &57.23\scriptsize(16.44) & 55.28\scriptsize(16.71) & 49.95\scriptsize(15.89) & 38.04\scriptsize(12.82) & 28.34\scriptsize(11.18) & 24.91\scriptsize(10.26) \\
\checkmark & \checkmark &\textbf{62.68\scriptsize(15.88)} & \textbf{61.92\scriptsize(15.94)} & \textbf{57.54\scriptsize(15.59)} & \underline{43.87\scriptsize(14.99)} & 31.26\scriptsize(11.52) & 24.69 $ $\scriptsize(9.66) \\
\bottomrule
\end{tabular}
\label{table: PiCO}
\end{table*}
\section{Results} \label{sec:result}
\subsection{Performance in Classical Experiments}
We evaluate the performance of each PLL method in all $6$ scenarios with different levels of ambiguity ($q\in \{0.2, 0.4, 0.6, 0.8, 0.9, 0.95 \}$) and present the results in Table \ref{table: comparison}.
We observe that when candidate labels with low ambiguity ($q \in \{0.2, 0.4\}$) are provided, PiCO \cite{wang2022pico} obtains the best results (shown in bold). DNPL \cite{seo2021power} obtains the second-best performance (shown with underline) for less ambiguous labels ($q\in \{0.2, 0.6\}$), while achieving the best performance for more ambiguous labels ($q\in \{0.8, 0.9, 0.95 \}$). The LW method \cite{wen2021leveraged} obtains the best and the second-best results when the label ambiguity is moderate ($q\in \{0.4, 0.6, 0.8 \}$). Overall, across all the scenarios, DNPL \cite{seo2021power} maintains a very stable performance, while the others suffer from major performance drops when candidate labels are provided with very high ambiguity ($q > 0.8$).
All PLL methods use label disambiguation \textit{except} for DNPL \cite{seo2021power}. To analyze the effect of label disambiguation in each PLL method, we evaluate the majority of PLL methods both with and without Label Disambiguation (LD).
As shown in Table \ref{table: comparison}, LD plays a different role in each of these five PLL methods. Specifically, LD consistently improves the model performance in PRODEN \cite{lv2020progressive} and PiCO \cite{wang2022pico} (except when $q=0.95$), while causing a performance decline in CAVL \cite{zhang2022exploiting} and CR \cite{wu2022revisiting}, across all the ambiguously labeled scenarios. In the LW method \cite{wen2021leveraged}, LD is able to improve the model performance when candidate labels with lower ambiguity are provided ($q \in \{0.4, 0.6, 0.8\}$). However, when candidate labels are provided with very high ambiguity ($q \in \{0.9, 0.95 \}$), LD results in performance degradation.
Furthermore, we compare the average performance (across $q\in\{0.2, 0.4, 0.6, 0.8, 0.9, 0.95 \}$) for all the PLL methods. As illustrated in Figure \ref{comprison}, DNPL shows the best performance, followed by LW, when label disambiguation is not used. With label disambiguation, LW outperforms the rest, followed by PRODEN. We also observe that label disambiguation methods do not improve the model performance in CAVL \cite{zhang2022exploiting} and CR \cite{wu2022revisiting}, while in LW \cite{wen2021leveraged}, the average performance stays almost the same with and without label disambiguation.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.45\textwidth]{comparison_all.pdf}
\caption{Average performance of all PLL methods across all $6$ ambiguously labeled scenarios ($q\in \{0.2, 0.4, 0.6, 0.8, 0.9, 0.95 \}$).}
\label{comprison}
\end{center}
\end{figure}
In LW \cite{wen2021leveraged}, we further evaluate the method using both sigmoid and cross-entropy losses with three suggested values for the loss leveraging parameter ($\beta = 0, 1, 2$) (discussed in section \ref{method: LW}). This experiment is performed as these parameters are deemed important in the original LW paper \cite{wen2021leveraged}. As shown in Table \ref{table: LW}, LW with cross-entropy loss outperforms LW with sigmoid loss in all the $\beta$ and LD settings, across all the ambiguously labeled scenarios. LW with cross-entropy loss achieves the best results when more weights are assigned for learning non-candidate labels ($\beta = 2$).
In PiCO \cite{wang2022pico}, we evaluate the impact of two major components, namely Contrastive Learning (CL) and label disambiguation (discussed in section \ref{method: PiCO}). As shown in Table \ref{table: PiCO}, we find that CL and LD are beneficial in cases where the candidate labels are less ambiguous ($q\in\{0.2, 0.4, 0.6\}$). PiCO performs the best without CL when label ambiguity increases ($q\in\{0.8, 0.9\}$), and obtains the top performance without both components when label ambiguity is very high ($q=0.95$).
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{emotion_wheel.pdf}
\caption{Wheel of emotions reproduced from \cite{russell1980circumplex,zhong2019study}.}
\label{emotion_wheel}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{similarity.pdf}
\caption{Estimation of normalized similarity score between two emotions.}
\label{similarity}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{real_world.pdf}
\caption{Performance of all PLL methods in the real-world experiments.}
\label{real_world}
\end{center}
\end{figure}
\subsection{Performance in Real-World Scenarios}
Most existing PLL methods employ independent sampling for candidate label generation, where each class except for the ground truth has the same probability $q$ of becoming a candidate label. However, independent sampling may not be suitable for emotion recognition tasks, since in real-world scenarios closely related emotions (e.g., `sad' and `fear') are more likely to be confused, while dissimilar emotions (e.g., `sad' and `happy') are less likely to be confused. To address this, we design additional experiments that generate the candidate labels based on the similarity between two emotions. To do so, we first estimate the location of each emotion on Russell's circumplex model \cite{russell1980circumplex,zhong2019study}. As shown in Figure \ref{emotion_wheel}, the wheel of emotions is a circumplex that represents the relationships between different emotions on two orthogonal axes, arousal and valence. It was first proposed by Russell \cite{russell1980circumplex} and further developed in studies such as \cite{zhong2019study}. In this model, emotions are arranged in a circular layout, with certain emotions considered more closely related to one another based on their proximity on the wheel. To perform more realistic experiments on label generation, we estimate the polar coordinates of each emotion in the format (radius, angle in degrees). Following \cite{zhong2019study}, we assume that emotions (with the exception of disgust and fear) are uniformly distributed in each quarter of the wheel, with adjacent emotions separated by $18^{\circ}$. `Disgust' is positioned between `upset' and `nervous', and `fear' is positioned between `stressed' and `tense'. Moreover, the `neutral' emotion is always placed at the center of the wheel, at the coordinate $(0, 0^{\circ})$. Based on this distribution, we can determine the polar coordinates of each emotion, with the radius of the wheel set to $1$. We then calculate the distance between two emotions ($i$ and $j$) as $\dist(i,j) =\sqrt{r_i^2 + r_j^2- 2 r_i r_j \cos{(\theta_i -\theta_j)}}$, where $r$ and $\theta$ denote the radius and angle (in radians), respectively. Next, we obtain the normalized similarity score between two emotions as $\gamma(i,j) =1- \dist(i,j)/ \max{(\dist(i,j))}, \forall i,j \in [1,k]$.
The normalized similarity scores among the five emotions are shown in Figure \ref{similarity}. Finally, for candidate label generation, we use the normalized similarity score instead of a pre-defined constant ($q$) as the flipping probability: each label $s \in [1,k]$ enters the candidate set $Y_i$ with probability $\gamma(s, y_i)$. Therefore, emotions that are more closely related to the ground truth have higher probabilities of becoming candidate labels, and vice versa.
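A sketch of the similarity computation and the similarity-based candidate generation is given below; the polar coordinates passed to these functions are those read off Figure~\ref{emotion_wheel} and are not hard-coded here.
\begin{verbatim}
import math, random

def dist(p, q):
    # p, q: polar coordinates (radius, angle in degrees) of two emotions
    (ri, ai), (rj, aj) = p, q
    return math.sqrt(ri ** 2 + rj ** 2
                     - 2 * ri * rj * math.cos(math.radians(ai - aj)))

def similarity(coords):
    # coords: dict {emotion: (radius, angle)}; returns normalized gamma
    d = {(i, j): dist(coords[i], coords[j])
         for i in coords for j in coords}
    d_max = max(d.values())
    return {key: 1 - value / d_max for key, value in d.items()}

def candidate_labels(y_true, gamma, emotions):
    # every incorrect label joins the candidate set with probability
    # equal to its similarity to the ground-truth emotion
    return {y_true} | {e for e in emotions if e != y_true
                       and random.random() < gamma[(e, y_true)]}
\end{verbatim}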
We evaluate the performance of the existing PLL methods with the candidate labels generated based on the similarity scores between two emotions. As shown in Figure \ref{real_world}, in contrast to the role of label disambiguation in PLL methods under classical settings (uniform distribution of candidate labels), label disambiguation improves model performance for the majority of methods under real-world settings. In particular, the LW method with label disambiguation closely approaches the best result. The only exception is the CR method \cite{wu2022revisiting}, where label disambiguation has a negative impact on model performance. We believe that, compared to the classical experiments, label disambiguation is less likely to be misled by false positive labels in the real-world experiments, allowing the model to focus on the candidate labels that are closer to the ground truth. The label disambiguation process is therefore helpful in identifying the ground truth among the candidate labels, thus improving model performance. Furthermore, we find that DNPL \cite{seo2021power}, as well as LW \cite{wen2021leveraged} with the label disambiguation process, are able to approach the \textit{fully supervised learning} method ($63.08\%$), addressing the challenge of ambiguous EEG labels in real-world affective computing. We also observe that DNPL \cite{seo2021power} performs the best in both classical and real-world experiments, demonstrating its effectiveness in handling candidate labels with various ambiguity levels.
\section{Conclusion} \label{sec:conclusion}
In this study, we tackle the challenge of ambiguous EEG labeling in emotion recognition tasks by adapting and implementing state-of-the-art \textit{partial label learning} methods, originally developed for computer vision applications. We conduct extensive experiments with six PLL frameworks across $6$ scenarios with different label ambiguities ($q\in \{0.2, 0.4, 0.6, 0.8, 0.9, 0.95 \}$). We also design a real-world scenario where candidate labels are generated based on similarities among emotions instead of a uniform distribution assumption. We evaluate the performance of all the PLL methods and investigate the importance of the label disambiguation process in both the classical and real-world experiments on a large publicly available dataset, SEED-V, with $5$ emotion categories. The results show that, in the majority of cases, the label disambiguation process improves the model performance in the real-world experiments while causing performance degradation in the classical experiments. We believe that false positive labels generated with a uniform distribution mislead the label disambiguation process, whereas the process receives better guidance when the candidate labels are generated based on their similarities with the ground truth. These results indicate the potential of using label disambiguation-based PLL frameworks for real-world emotion recognition experiments. Overall, DNPL achieves the best performance and approaches \textit{fully supervised} learning in both the classical and real-world experiments, addressing the challenge of ambiguous EEG labeling in emotion classification tasks.
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.13174",
"language": "en",
"timestamp": "2023-02-28T02:11:48",
"url": "https://arxiv.org/abs/2302.13174",
"yymm": "2302"
} | \section{What is the permutohedron?}
Symmetric objects have attracted human curiosity since ancient times. People often associated them with mystical and magical properties. However, they also motivated serious investigations and the birth of new mathematical and scientific disciplines and multidisciplinary studies. Early examples are the five regular solids, also known as the Platonic solids, see Figure~\ref{figurepla}.
\begin{figure}[h!h!h!]
\centerline{\includegraphics[width=0.85\textwidth]{platonicsolids.pdf}}
\caption{Platonic solids} \label{figurepla}
\end{figure}
Why are such objects so special, and why do they appear in so many fascinating places in nature? Philosophically speaking, because they are perfect geometric objects, and natural shapes tend toward perfection. Mathematics is the language in which we try to explain this perfection. While geometry captures their forms and obvious exterior beauty, other mathematical disciplines such as group theory, number theory, and algebra describe their hidden, deeper properties.
The solid we present here is one of the essential objects in modern mathematics. It is obtained by truncating the regular octahedron at all its vertices, see Figure~\ref{Figuretru}. Its facets are regular polygons: six squares and eight regular hexagons. The solid has 24 vertices and 36 edges, with three edges meeting at each vertex. Its first recorded appearance in the mathematical literature \cite{Schoute} dates back to 1911 and the Dutch mathematician Pieter Hendrik Schoute, known for his work on regular polyhedra and their higher-dimensional analogues, \textit{regular polytopes}. Georges Th. Guilbaud and Pierre Rosenstiehl~\cite{Rosen} named it \textit{the permutohedron} in 1963.
\begin{figure}[h!h!h!]
\centerline{\includegraphics[width=\textwidth]{truncatedoctahedron.pdf}}
\caption{Truncated octahedra} \label{Figuretru}
\end{figure}
Its name is coined from two words, permute and hedra (the face of a solid). There are 24 permutations of the elements of the set $\{1, 2, 3, 4\}$ and 24 vertices of the permutohedron, but can we give a more satisfactory explanation to justify its name? Of course, the answer is yes! The simplest way is to introduce it as a member of a family of convex polytopes. A convex polytope in an $n$-dimensional Euclidean space $\mathbb{R}^n$ is the convex hull of a finite set of points in this space. This definition carries the basic features of convex polygons and polyhedra over to higher dimensions. Indeed, geometric analogies with polygons and polyhedra in dimensions two and three emerge here. For example, a convex $4$-polytope has, besides vertices and edges, polygons as $2$-faces and polyhedra as its $3$-faces. More generally, a convex $n$-polytope has faces of dimension $k$ for all $0\leq k\leq n-1$, and its faces of dimension $n-1$ are called facets. The reader interested in polytopes is kindly referred to the outstanding book \textit{Lectures on Polytopes} by G\"{u}nter Ziegler \cite{Zieg}.
\begin{figure}[h!h!h!]
\centerline{\includegraphics[width=0.85\textwidth]{percon.pdf}}
\caption{Permutohedron as the convex hull of its vertices} \label{figurehex}
\end{figure}
The $n$-permutohedron is the convex hull of $(n+1)!$ points in $\mathbb{R}^{n+1}$, whose coordinates are the permutations of $(1, 2, 3, \dots, n+1)$. A regular hexagon is therefore a 2-dimensional permutohedron. As the sum of the vertex coordinates is always $\frac{(n+1)(n+2)}{2}$, the $n$-permutohedron lies in the hyperplane $$x_1+x_2+\cdots+x_{n+1}=\frac{(n+1)(n+2)}{2}$$ in $\mathbb{R}^{n+1}$, see Figure~\ref{figurehex}. It has $\frac{n(n+1)!}{2}$ edges and $2^{n+1}-2$ facets, where each facet corresponds to a proper non-empty subset of $\{1, 2, \dots, n+1\}$. The last fact can be seen geometrically because the $n$-permutohedron can be obtained alternatively by truncating all faces of the $n$-simplex, see Figure~\ref{figuretrun}. An $n$-simplex is the convex hull of $n+1$ points in general position in $\mathbb{R}^n$, an $n$-dimensional analogue of a triangle and a tetrahedron. The combinatorics of the $k$-faces of the $n$-permutohedron is exciting. The total number of faces of dimension $k$ is equal to $(n-k+1)! \cdot S (n+1, n+1-k)$, where $S (n, k)$ stands for the Stirling number of the second kind, i.e., the number of ways to partition a set of $n$ elements into $k$ non-empty subsets.
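These counts are easy to check by computer; the short Python sketch below verifies them for the 3-permutohedron (24 vertices, 36 edges, 14 facets).
\begin{verbatim}
from math import comb, factorial

def stirling2(n, k):
    # Stirling number of the second kind, via inclusion-exclusion
    return sum((-1) ** j * comb(k, j) * (k - j) ** n
               for j in range(k + 1)) // factorial(k)

def num_faces(n, k):
    # number of k-dimensional faces of the n-permutohedron
    return factorial(n - k + 1) * stirling2(n + 1, n + 1 - k)

assert [num_faces(3, k) for k in range(3)] == [24, 36, 14]
\end{verbatim}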
\begin{figure}[h!h!h!]
\centerline{\includegraphics[width=\textwidth]{trunc.pdf}}
\caption{Total truncation of tetrahedra} \label{figuretrun}
\end{figure}
The permutation group $S_{n+1}$ acts on the vertices of the $n$-permutohedron by permuting the coordinates, and this is the principal reason this object has such a fancy look. Indeed, the permutohedron helps us understand permutations and the symmetric group $S_{n+1}$. To any group $G$ and a set of its generators $S$, we can assign an edge-colored directed graph $\Gamma (G, S)$ whose vertices are the elements of $G$, with a directed edge colored by $s\in S$ from $g\in G$ to $h\in G$ exactly when $h=g s$ in $G$. Such a graph is called a Cayley graph of $G$. It turns out that the vertices of the $n$-permutohedron can be labeled by permutations so that its edges form the Cayley graph of $S_{n+1}$ with respect to the generating set of transpositions $$(1,2), (2, 3), \dots, (n, n+1), $$ those that swap two consecutive elements, see Figure~\ref{fig}.
\begin{figure}[h!h!h!]
\centerline{\includegraphics[width=0.85\textwidth]{cayley.pdf}}
\caption{The permutohedron as a Cayley graph of $S_4$} \label{fig}
\end{figure}
\section{Magic figures and solids}
The magic square is one of the earliest recreational mathematical objects, known since at least 190 BCE. People assigned them occult or mythical significance in the past, so they often appeared as symbols in works of art. A magic square of order $n$ is an arrangement of $n^2$ numbers in an $n\times n$ square so that the sums of the numbers in each row, each column, and both diagonals are the same, see Figure~\ref{magic}. The common sum is also known as the \textit{magic constant}. Mathematicians have also studied magic squares with extra constraints, such as ultra magic squares, pandiagonal magic squares, symmetric magic squares, etc.
\begin{figure}[h!h!h!]
\centerline{\includegraphics[width=\textwidth]{kvadrati.pdf}}
\caption{Magic squares} \label{magic}
\end{figure}
Depending on the remainder of $n$ modulo 4, there are various methods for constructing higher-order magic squares. If $n$ is odd, we say that the magic square is odd. It is called doubly even if four divides $n$ and singly even when $n \equiv 2 \pmod 4$. Singly even magic squares are the most challenging to generate. Some methods allow us to create more than one magic square for a given $n$, while others, usually simpler, provide just one magic square of a given order. Certain transformations and symmetries allow us to construct a few more solutions. However, classifying all magic squares of a given order has been an open question for a long time. The problem has been studied by many famous mathematicians, including John Conway, Simon de la Loub\`{e}re, and Claude Bachet, who wrote \textit{Probl\`{e}mes plaisans et delectables qui se font par les nombres}, one of the first books on recreational mathematics, in the early 17th century.
The idea of magic squares has been implemented in other plane figures and even in higher-dimensional figures. Here we consider a similar problem, where the goal is to arrange the numbers from the set $\{1, 2, \dots, m\}$ at the vertices of an $m$-vertex regular solid so that the sum of the numbers on each face is the same. We can immediately rule out such placements for solids with triangular faces, such as the tetrahedron and the icosahedron, since two triangles sharing an edge would have to carry the same number at the vertices opposite to the common edge. For the cube, finding such a placement is straightforward; see Figure~\ref{magiccube} for an example. It is easy to see that in any such configuration, the sum of the vertices on each face of the cube has to be 18. A small analysis shows six distinct options up to the rotations of the cube.
\begin{figure}[h!h!h!]
\centerline{\includegraphics[width=0.45\textwidth]{magiccube.pdf}}
\caption{Magic cube} \label{magiccube}
\end{figure}
Let us now think about the dodecahedron. If we denote by $S$ the sum of the numbers on each pentagon, then, since each of the 20 vertices lies on three of the 12 pentagonal faces, $$12 S=3 \cdot (1+2+\cdots+20)=630,$$ which is impossible since $12\nmid 630$. Thus, apart from the cube, the Platonic solids admit no such magic configurations of numbers at their vertices. One can ask what happens with other polyhedra and whether we can relax the constraint so that we still have some magical properties.
The next example, a regular $n$-gon prism, is illustrative. Let us try to arrange $1$, $2, \dots, 2n$ at the vertices of this prism so that the sum of the numbers in each base is $A$ and the sum of the numbers in each rectangular face of the prism is $B$. Summing over the two bases and over the $n$ rectangles (each vertex lies in exactly two rectangles), we obtain the following conditions:
\begin{equation*}
2 A= \frac{2n (2n+1)}{2}=n(2n+1) \quad \text{and} \quad n B= 2n (2 n+1).
\end{equation*}
The second condition yields $B=4n+2$, while the first condition requires $n$ to be even (otherwise $A=\frac{n(2n+1)}{2}$ is not an integer). It is easy to produce such a configuration for $n=2k$; one possibility is depicted in Figure~\ref{prism}.
\begin{figure}[h!h!h!]
\centerline{\includegraphics[width=0.45\textwidth]{prism.pdf}}
\caption{Magic $2k$-gon prism} \label{prism}
\end{figure}
\section{Magic permutohedron}
The example of the magic $2n$-gon prism inspires us to consider the following question:
\textit{Is it possible to place the numbers $1, 2, \dots, 24$ at the vertices of the permutohedron so that the sum of the numbers on each square face is the same and the sum of the numbers on each hexagonal face is also the same?}
Since each vertex lies on exactly one square and two hexagons, and $1+2+\cdots+24=300$, the sum of the numbers on a square face must be $300/6=50$ and on a hexagonal face $2\cdot 300/8=75$. However, if we try to find an arrangement explicitly by hand, we are most likely to fail, as we have too many degrees of freedom at the beginning. On the other hand, the authors' attempts to impose some more natural constraints failed, so we wrote a program to do it for us!
The results were more than exciting. The computer found $3900064$ distinct solutions (up to the rotations of the permutohedron). However, this number of solutions makes up less than $4\times 10^{-14}\,\%$ of the total of 12926008369442488320000 possibilities. One solution found by the computer is presented in Figure~\ref{mp}.
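The search itself can be organized as a straightforward backtracking over the 24 vertices. The sketch below is illustrative rather than our original program; it assumes that the lists of square and hexagonal faces, given as tuples of vertex indices, are supplied from the face lattice of the permutohedron, which we do not reproduce here.
\begin{verbatim}
def magic_labelings(squares, hexagons, n=24, targets={4: 50, 6: 75}):
    # squares, hexagons: vertex-index tuples of the 6 + 8 faces
    faces = list(squares) + list(hexagons)
    assign, used = [0] * n, [False] * (n + 1)

    def consistent():
        # every fully labeled face must hit its target sum
        return all(sum(assign[u] for u in f) == targets[len(f)]
                   for f in faces if all(assign[u] for u in f))

    def extend(v):
        if v == n:
            yield tuple(assign)
            return
        for x in range(1, n + 1):
            if not used[x]:
                assign[v], used[x] = x, True
                if consistent():
                    yield from extend(v + 1)
                assign[v], used[x] = 0, False

    return extend(0)
\end{verbatim}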
\begin{figure}[h!h!h!]
\centerline{\includegraphics[width=0.65\textwidth]{magicni-permutoedar.pdf}}
\caption{A magic permutohedron} \label{mp}
\end{figure}
We wonder if a similar property holds for the $n$-permutohedron for $n\geq 4$. The condition can be formulated on the 2-faces (hexagons and squares) or on the facets, which are products of lower-dimensional permutohedra.
\section*{Acknowledgements}
The first author was supported by the Serbian Ministry of Science, Innovations and Technological Development through the Mathematical Institute of the Serbian Academy of Sciences and Arts.
|
{
"arxiv_id": "2302.13241",
"language": "en",
"timestamp": "2023-02-28T02:14:01",
"url": "https://arxiv.org/abs/2302.13241",
"yymm": "2302"
} | \section{Introduction}
Large-scale knowledge bases (KBs) such as Freebase \cite{bollacker2008freebase} and DBpedia \cite{auer2007dbpedia} store huge amounts of structured knowledge.
These KBs support a variety of natural language processing tasks, including question answering over knowledge base (KBQA), where models exploit the knowledge related to the questions and precisely identify the answers by reasoning through various KB relations.
Although most large-scale KBs claim to contain multilingual information, they could not completely support non-English languages as expected.
For example, Freebase has no translation for the KB relations/attributes in any non-English languages.
More than half of the entities in Freebase have no Chinese translations, despite the fact that Chinese is the most spoken non-English language in the world.
Therefore, these KBs could not directly support question answering in non-English languages, bringing up the problem of answering non-English questions over the KBs constructed in English.
\begin{figure}
\centering
\includegraphics[scale=0.3]{fig/xkbqa_example_arr.pdf}
\caption{An example of answering questions in non-English languages over an English knowledge base.}
\label{fig:xkbqa_example}
\end{figure}
In this work, we focus on cross-lingual KBQA (xKBQA), which aims to answer questions over a KB in another language.
Figure~\ref{fig:xkbqa_example} shows a KB subgraph and several factoid questions in non-English languages, which can be answered by a node in the KB subgraph.
Despite considerable progress in monolingual KBQA, xKBQA receives little attention.
A significant challenge in xKBQA is the lack of large-scale xKBQA datasets.
Such datasets are quite expensive to annotate since the annotators are expected to be multilingual and have background knowledge about KBs.
As a result, even the largest xKBQA dataset so far contains only a few hundred questions~\cite{ngomo20189th}.
Another challenge is that, compared to other cross-lingual tasks, the expression difference between structured KB schemas and natural language questions further hinders the learning of cross-lingual mapping.
To address these challenges, we propose to convert the KB subgraphs into natural language texts and leverage the progress in cross-lingual machine reading comprehension (xMRC) to solve the xKBQA task.
Recently, there has been a series of large-scale xMRC datasets, such as MLQA \cite{lewis2020mlqa}, MKQA \cite{longpre-etal-2021-mkqa} and XQuAD \cite{artetxe2020cross}.
Multilingual pre-trained language models (MPLMs), such as mBERT \cite{devlin2019bert} and XLM-R \cite{Conneau2020UnsupervisedCR}, achieve competitive performance on these xMRC benchmarks.
As for xKBQA, by converting KB subgraphs into natural language texts, we narrow the gap between KB schemes and natural language expressions.
We then utilize the PLM-based xMRC models finetuned on xMRC datasets to learn the cross-lingual mapping efficiently, even with limited xKBQA annotations.
Specifically, we first identify the topic entity from the given question, link it to the KB, and extract its $n$-order neighbors to construct a KB subgraph, following traditional monolingual KBQA methods \cite{saxena2020improving,he2021improving}.
We then convert the subgraph into a question-specific passage with KB-to-text generation models, incorporating the KB triples with contextual expressions.
Given the converted cross-lingual question-passage pairs, we adopt MPLMs to rank answer candidates in the passages.
As a general framework, our approach can be easily applied to different languages or KBs without specialized modifications.
We empirically investigate the effectiveness of our method on two xKBQA datasets, QALD-M~\cite{ngomo20189th} and WebQSP-zh.
QALD-M is a collection of a few hundred questions in 11 non-English languages, from a series of xKBQA evaluation campaigns.
Considering its small size, we also construct a new dataset WebQSP-zh with 4,737 Chinese questions translated from WebQSP~\cite{yih2016value} by native speakers.
WebQSP-zh is much larger in size and involves more natural expressions as the annotators take into account commonsense knowledge and realistic vocabulary choices during manual translation.
Experimental results demonstrate that our method outperforms a variety of English-as-pivot baselines based on monolingual KBQA models, reaching 74.37\% hits@1 on WebQSP-zh.
Moreover, our method achieves strong few-shot and zero-shot performance.
Using only 10\% of the training data, our method performs comparably to several competitive English-as-pivot baselines trained with full training data.
For the zero-shot evaluation on QALD-M, our method achieves 51.20\% hits@1 on average across 11 languages.
Our main contributions are summarized as:
\vspace{-\topsep}
\begin{itemize}
\item
We formulate xKBQA as answering questions by reading passages converted from KB subgraphs, bridging the gap between KB schemas and natural language expressions. Existing high-quality xMRC resources are further utilized to alleviate the data scarcity issue.
\vspace{-\topsep}
\item
We collect a large xKBQA dataset with native expressions in Chinese, i.e., WebQSP-zh. It, along with its original version, i.e., WebQSP, can be used for analyzing the gap between monolingual and cross-lingual KBQA.
\vspace{-\topsep}
\item We conduct extensive experiments on two datasets with 12 languages. Our method outperforms various baselines and achieves strong few-shot and zero-shot performance.
\end{itemize}
\begin{figure*}[ht]
\centering
\includegraphics[scale=0.23]{fig/pipeline_new_wider_wider.pdf}
\caption{An illustration of our method.
Given a large-scale KB in a rich-resource language such as English,
to answer a question in a language with relatively fewer resources, we first extract a subgraph from KB according to entity linking results.
We then convert the subgraph into a question-specific passage in natural language with KB-to-text generation models, complementing the KB triples with contextual expressions. The question and converted passage are fed into a PLM-based xMRC model, which ranks all candidate answer spans to obtain the final answer.
}
\label{fig:illustration}
\end{figure*}
\section{Related Works}
\paragraph{KBQA}
Recent efforts in KBQA generally fall into two main paradigms, either the information extraction style~\cite{miller-etal-2016-key,sun2018open,xu-etal-2019-enhancing,saxena2020improving,he2021improving,shi-etal-2021-transfernet}
or the semantic parsing style~\cite{yih-etal-2015-semantic,lan-jiang-2020-query,ye-etal-2022-rng,gu-su-2022-arcaneqa}.
The former retrieves a set of candidate answers from KB, which are then compared with the questions in a condensed feature space.
The latter manages to distill the symbolic representations or structured queries from the questions.
\paragraph{xKBQA}
Both styles of KBQA methods can be applied to xKBQA.
Previous xKBQA efforts generally fall in the semantic parsing style.
They rely on online translation tools \cite{hakimov2017amuse} or embedding-based word-to-word translation \cite{zhou2021improving} to obtain synthetic training data.
In contrast, the information extraction based xKBQA approach is less explored. An advantage of this style of xKBQA methods is that it requires no annotation of structured queries, which is expensive to obtain for non-English languages.
In this paper, we attempt to explore xKBQA approaches of the information extraction style with less reliance on machine translation tools and investigate their performance in the few-shot and zero-shot settings.
\paragraph{xMRC}
xMRC is a cross-lingual QA task receiving extensive attention recently, with considerable progress in datasets and models. There has been a stream of high-quality datasets in a wide range of languages, including MLQA \cite{lewis2020mlqa}, MKQA \cite{longpre-etal-2021-mkqa}, XQuAD \cite{artetxe2020cross} and TyDi QA \cite{Clark2020TyDiQA}.
Several works for xMRC adopt machine translation tools \cite{Asai2018MultilingualER,Cui2019CrossLingualMR,Lee2019LearningWL}
or question generation systems \cite{riabi-etal-2021-synthetic} to obtain more cross-lingual training data, while other works attempt to learn better cross-lingual mapping with MPLMs \cite{Yuan2020EnhancingAB,wu-etal-2022-learning}.
\paragraph{KB-to-text in QA}
To benefit xKBQA with the progress in xMRC, we propose to convert the xKBQA task into reading comprehension.
Previous works in other QA tasks attempt to convert KB triples into texts by simple concatenation heuristics \cite{oguz2021unik} or by manually-designed rules \cite{bian2021benchmarking}.
\citet{ma-etal-2022-open} resort to PLM-based generation models and argue that data-to-text can serve as a universal interface for open domain QA.
To the best of our knowledge, our work is the first to introduce data-to-text methods into KBQA and cross-lingual QA.
Compared with \citet{ma-etal-2022-open},
we further address the real-world problems of complex KB structures, cross-lingual semantic gap, and data scarcity when applying data-to-text to xKBQA.
\section{Methodology}
We propose a novel approach to tackle xKBQA as reading comprehension.
As illustrated in Figure \ref{fig:illustration}, we first convert KB triples into sentences using generation models and obtain question-specific passages for reading comprehension.
We then adopt MPLMs finetuned on xMRC datasets to answer cross-lingual questions according to the converted passages.
\subsection{Task Formulation}
In xKBQA, given a knowledge base $G$ in language $A$ and a question $q$ in another language $B$, the model is expected to answer $q$ by entities or literal values in $G$.
In practice, $A$ is often a rich-resource language such as English, and $B$ is a language with relatively fewer resources.
A knowledge base $G$ consists of a set of knowledge triples.
In a triple $(h, r, t)$, $h \in E$ is a head entity, $t \in E \cup L $ is a tail entity or a literal value, and $r \in R$ is the relation/predicate between $h$ and $t$, where $E$ denotes the set of all entities, $L$ denotes the set of all literal values, and $R$ denotes the set of all relations.
\subsection{KB-to-Text Conversion}
In a typical monolingual KBQA framework, one first identifies the topic entity in the question and links it to the given KB. This can be achieved by surface-level matching \cite{sun2018open} or supervised entity linkers \cite{yang2015s}.
In the cross-lingual setting, one can directly adopt multilingual entity linkers such as mGENRE~\cite{de-cao-etal-2022-multilingual} or translate questions and KB entities into the same language for monolingual linking.
After entity linking, a KB subgraph is constructed by the neighbors within several hops around the topic entities.
Based on the given question, all candidates in the subgraph are ranked to arrive at the final answers.
To successfully identify from the subgraph the KB predicates leading to the answer, the KBQA models are expected to learn a mapping between KB predicates and natural language expressions in the questions.
In addition to the language gap as in most cross-lingual tasks, the models have to deal with the difference in expression styles used in the KB schemas and questions.
To narrow down the gap of mapping, we propose to convert KB subgraphs to natural language passages, formulating xKBQA as an xMRC task, so that we can benefit from recent advances in xMRC.
Converting KB subgraphs into natural sentences brings plausible context for candidate KB answers, facilitating the matching between questions and answers.
Furthermore, with the natural language expressions of the KB subgraphs, current xMRC models can be directly adopted to solve the questions.
We believe that xMRC models could benefit the xKBQA task for their strong capabilities of mapping between cross-lingual expressions.
Even without annotated xKBQA data, they are able to answer a portion of xKBQA questions, utilizing their prior knowledge of the cross-lingual mapping learned from pre-training and fine-tuning on xMRC datasets.
To convert KB subgraphs into readable passages, we utilize PLM-based KB-to-text models, such as JointGT~\cite{chen2021jointly}.
A KB-to-text model converts a structured KB subgraph to natural language texts, complementing the given entities and relations with potential contextual expressions.
Compared with simply concatenating the head entity, relation and tail entity of a triple, a KB-to-text model can generate more natural and coherent sentences.
It also alleviates the onerous manual design of conversion rules.
Moreover, the KB-to-text model can handle not only single-relation triples but also more complex KB structures, such as CVT nodes, a node type in Freebase that represents an event with multiple fields.
Figure \ref{fig:kb2text} shows examples of KB-to-text conversion for a single-relation triple and a CVT node.
After conversion, we identify the candidate answer spans from the pieces of text with fuzzy string matching tools.
To form a passage, we concatenate the pieces of text, sorted by their semantic similarities to the questions.\footnote{Previous work shows that PLM-based MRC models are not sensitive to the order of sentences in the passage \cite{sugawara2020assessing}. We do not observe significant performance change after we shuffle the sentence order in the passage, which conforms to the finding by \citet{sugawara2020assessing}.}
We observe that the subgraphs around a topic entity can be very large, especially for the \textit{hub} entities like \textit{the USA}.
Consequently, the converted passages can be very long, even up to 20k words in length.
Current xMRC models struggle with such long passages.
To shorten the converted passages, we fix the maximum length of the passage and discard the remaining redundant sentences.
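A minimal sketch of this conversion step follows. Here \texttt{verbalize} stands for a KB-to-text model such as JointGT, the sentence encoder name is an illustrative choice of a multilingual SentenceTransformers model, and the word budget matches the limit described above.
\begin{verbatim}
from sentence_transformers import SentenceTransformer, util

def build_passage(question, triples, verbalize, max_words=750):
    # verbalize: callable mapping a KB fact to a natural sentence
    sentences = [verbalize(t) for t in triples]
    enc = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    scores = util.cos_sim(enc.encode(question, convert_to_tensor=True),
                          enc.encode(sentences, convert_to_tensor=True))[0]
    ranked = [s for _, s in sorted(zip(scores.tolist(), sentences),
                                   reverse=True)]
    passage, length = [], 0
    for s in ranked:                  # keep the most relevant sentences
        if length + len(s.split()) > max_words:
            break
        passage.append(s)
        length += len(s.split())
    return " ".join(passage)
\end{verbatim}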
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{fig/conversion_arr.pdf}
\caption{Examples of KB-to-text conversion for a single relation (upper) and a complex event-like fact, such as CVT nodes in Freebase (lower).}
\label{fig:kb2text}
\end{figure}
\subsection{Cross-Lingual Reading Comprehension}
MPLMs are widely adopted in xMRC for their strong capabilities of understanding cross-lingual texts.
They can encode different languages in a unified semantic space, relieving the reliance on translation tools.
We thus use MPLMs to solve the xMRC instances converted from xKBQA.
Specifically, we concatenate the question and the converted passage as the input to the MPLMs and predict the boundary of the answer span.
In the KB-to-text step, we have identified the corresponding span in the passage for each candidate KB entity or literal value.
Thus, during inference, we only need to rank the candidate answer spans.
The corresponding KB entity or value for the top-ranked candidate span is selected as the final answer.
To address the data scarcity in xKBQA, we further propose to finetune the models on MRC data in multiple stages before on xKBQA data.
Compared to KBQA, it is easier to acquire annotated MRC data for its straightforward annotation process without the requirement of background knowledge in KBs.
Apart from large-scale English MRC datasets such as SQuAD~\cite{Rajpurkar2016SQuAD1Q}, there are a series of high-quality xMRC datasets, including MLQA, MKQA and XQuAD, covering a wide range of non-English languages such as Russian, Hindi, and Dutch.
In the first stage, we use large-scale English MRC datasets, e.g., SQuAD, to help MPLMs learn the language-agnostic ability to find answers from the passages.
In the second stage, we finetune the models on high-quality xMRC datasets in the target language, strengthening the reading comprehension ability for the target language.
In this way, the two-stage finetuning before training on xKBQA data benefits models with the rich resources in MRC and mitigate the data scarcity problem in xKBQA.
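At inference time, the ranking over pre-identified candidate spans can be sketched as follows with an off-the-shelf extractive-QA head from HuggingFace Transformers; in our pipeline the model would first go through the multi-stage finetuning described above, so the snippet is illustrative rather than a faithful reproduction of our setup.
\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

def rank_spans(question, passage, spans, name="xlm-roberta-large"):
    # spans: candidate character ranges (start, end) in the passage
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForQuestionAnswering.from_pretrained(name)
    enc = tok(question, passage, return_tensors="pt",
              return_offsets_mapping=True, truncation=True)
    seq_ids = enc.sequence_ids(0)          # 1 marks passage tokens
    offsets = enc.pop("offset_mapping")[0].tolist()
    with torch.no_grad():
        out = model(**enc)
    scored = []
    for cs, ce in spans:
        toks = [i for i, (a, b) in enumerate(offsets)
                if seq_ids[i] == 1 and a < ce and b > cs]
        if toks:                           # start logit + end logit
            s = out.start_logits[0, toks[0]] + out.end_logits[0, toks[-1]]
            scored.append((s.item(), (cs, ce)))
    return sorted(scored, reverse=True)
\end{verbatim}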
\section{Experimental Setup}
\begin{table}
\small
\centering
\begin{tabular}{p{0.94\columnwidth}}
\toprule
\textbf{WebQSP-zh:}
\begin{CJK*}{UTF8}{gbsn}\small安娜肯德里克出演过什么?\end{CJK*}/ \textit{What did Anna Kendrick star in?}\\
\textbf{WebQSP-MT:}
\begin{CJK*}{UTF8}{gbsn}\small安娜肯德里克在干什么?\end{CJK*}/ \textit{What is Anna Kendrick doing?} \\
\textbf{WebQSP:} What has Anna Kendrick been in? \\
\textbf{Freebase Predicate:} film.actor.film~~film.performance.film \\
\midrule
\textbf{WebQSP-zh:} \begin{CJK*}{UTF8}{gbsn}\small1945年前苏联的领导人是谁?\end{CJK*}/ \textit{Who was the leader of the former Soviet Union in 1945?} \\
\textbf{WebQSP-MT: } \begin{CJK*}{UTF8}{gbsn}\small1945年苏联的领导人是谁?\end{CJK*}/ \textit{Who was the leader of the Soviet Union in 1945?} \\
\textbf{WebQSP:} Who was the leader of the Soviet Union in 1945? \\
\textbf{Freebase Predicate:} \\
government.governmental\_jurisdiction.governing\_officials\\
government.government\_position\_held.office\_holder \\
\bottomrule
\end{tabular}
\caption{Examples from WebQSP-zh and their corresponding questions in WebQSP.
WebQSP-MT is the Chinese translation of WebQSP by Baidu Translate, a machine translation tool. The italic English texts are the literal meaning of the Chinese questions.
}
\label{tab:webqspzh}
\end{table}
\subsection{Datasets}
We evaluate our method on two datasets, \textbf{QALD-M}, a small evaluation dataset in 11 languages, and \textbf{WebQSP-zh}, a new dataset with a larger size and more realistic expressions.
\paragraph{QALD-M} QALD-M is drawn from a series of evaluation campaigns on question answering over linked data.
We use the version provided by \citet{zhou2021improving} and filter the out-of-scope ones.
It consists of testing questions for 11 non-English languages (fa, de, ro, it, ru, fr, nl, es, hi, pt, pt\_BR) over DBpedia.
The numbers of used questions for each language range from 66 to 363. We use QALD-M mainly for zero-shot evaluation.
See Appendix \ref{app:qald-m} for more details.
\paragraph{WebQSP-zh}
Considering that the size of QALD-M is small and its multilingual questions are mostly literal translations without language-dependent paraphrasing, we collect a new xKBQA dataset WebQSP-zh, with 3,098 questions for training and 1,639 questions for test.
To collect WebQSP-zh, we employ two Chinese native speakers proficient in English to manually translate all the questions in WebQSP \cite{yih2016value}, a widely-used English KBQA dataset, together with another annotator responsible for checking translation quality.
To provide a more realistic benchmark for cross-lingual evaluation, the annotators are instructed to pay much attention to commonsense knowledge and natural vocabulary choices during translation.
For example, in the upper example of Table~\ref{tab:webqspzh}, the phrase \textit{be in} in the WebQSP question has multiple translations in Chinese. Based on the commonsense knowledge that \textit{Anna Kendrick} is an actress, it is translated as \begin{CJK*}{UTF8}{gbsn}\small出演\end{CJK*}/\textit{star in} instead of its literal meaning \begin{CJK*}{UTF8}{gbsn}\small在做\end{CJK*}/\textit{be doing}.
In the lower example of Table~\ref{tab:webqspzh}, the annotator chooses the Chinese word \begin{CJK*}{UTF8}{gbsn}\small前苏联\end{CJK*}/\textit{former Soviet Union} for translation instead of \begin{CJK*}{UTF8}{gbsn}\small苏联\end{CJK*}/\textit{Soviet Union} because the former is more often used by native Chinese speakers.
See Appendix \ref{app:weqsp-zh} for more statistics, annotation details, and examples.
\subsection{Baselines}
\paragraph{Supervised}
A widely-adopted baseline method in cross-lingual QA tasks is translating data in non-English languages into English with machine translation tools and utilizing mono-lingual models~\cite{Asai2018MultilingualER,Cui2019CrossLingualMR}, which we call \textbf{English-as-pivot}.
For supervised experiments on WebQSP-zh, we select several competitive monolingual KBQA models for English-as-pivot evaluation.
For information extraction style, we select \textbf{EmbedKGQA} \cite{saxena2020improving}, \textbf{GraftNet} \cite{sun2018open}, \textbf{NSM} (with its teacher-student variant, \citealp{he2021improving}), all of which require no annotation of structured KB queries, as our method does.
For semantic parsing style, we select \textbf{QGG}~\cite{lan-jiang-2020-query}.
\footnote{We did not include the recent semantic-parsing-style models based on Seq2Seq generation, including RnG-KBQA~\cite{ye-etal-2022-rng} and ArcaneQA~\cite{gu-su-2022-arcaneqa}, both of which outperform QGG by 1.6\% F1 on WebQSP. However, setting up an environment for them requires up to 300G memory, far exceeding our computational budgets.
So we think that QGG is a suitable baseline that strikes a good balance between performance and computational resources.}
We also provide a \textbf{Closed-book QA} baseline~\cite{roberts-etal-2020-much} with generation-based MPLMs, e.g., mT5~\cite{xue-etal-2021-mt5}.
We feed the question directly into the model and expect it to output the answer based on its knowledge learned in pre-training.
This method requires no external knowledge, such as KBs, and can coarsely evaluate how much parametric knowledge an MPLM may have.
\paragraph{Zero-shot}
Since the above supervised baselines are unable to answer any questions without training data, we further implement two baselines inspired from \citet{zhou2021improving} for zero-shot evaluation.
One is \textbf{Multilingual Semantic Matching}, which measures the similarity between questions and inferential chains with an MPLM finetuned on LC-QuAD \cite{trivedi2017lc}, an English KBQA dataset.
The other, based on the previous baseline, uses \textbf{Bilingual Lexicon Induction} (BLI, \citealp{lample2018word}) to obtain word-to-word translation in the target languages as data augmentation.
\subsection{Metrics}
Following previous works \cite{saxena2020improving,he2021improving}, we use hits@1 as the evaluation metric.
It is the ratio of questions whose top~1 predicted answer is in the set of golden answers.
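A minimal sketch of the metric:
\begin{verbatim}
def hits_at_1(predictions, gold):
    # predictions: top-1 answer per question; gold: set of golden answers
    hit = sum(p in g for p, g in zip(predictions, gold))
    return 100.0 * hit / len(predictions)
\end{verbatim}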
\subsection{Implementation Details}
Following previous works~\cite{sun2018open,saxena2020improving,he2021improving}, we use the golden topic entities for a fair comparison with the baselines.
We also discuss the effects of entity linking in Section \ref{sec:el}.
For KB-to-text generation, we use JointGT \cite{chen2021jointly} finetuned on WebNLG \cite{gardent2017creating}, a KB-to-text dataset.
We use TheFuzz\footnote{\url{https://github.com/seatgeek/thefuzz}} to identify candidate answer spans.
We fix the maximum passage length to 750 words and discard the sentences with lower semantic similarity to the questions, measured by the multilingual model of SentenceTransformers \cite{Reimers2020MakingMS}.
For xMRC, we experiment with mBERT and XLM-R.
Before finetuning on the xMRC instances converted from xKBQA datasets, we first finetune models on SQuAD~1.1, and then on three xMRC datasets, MLQA, MKQA and XQuAD.
We do not search hyperparameters for the xMRC models and adopt the default configuration used by SQuAD.
For English-as-pivot baselines, we use Baidu Translate API\footnote{\url{https://fanyi.baidu.com/}} to obtain English translations.
See Appendix \ref{app:implementation} for more details.
\section{Results and Analyses}
\subsection{Supervised Setting}
\begin{table}
\centering
\small
\begin{tabular}{lll}
\toprule
\textbf{Model} & \textbf{WebQSP} & \textbf{WebQSP-zh} \\
\midrule
\multicolumn{1}{l}{\textit{English-as-pivot}} \\
EmbedKGQA~\citeyearpar{saxena2020improving} & 66.18 & 63.15 (-3.03) \\
GraftNet~\citeyearpar{sun2018open} & 67.79 & 65.61 (-2.18)\\
NSM~\citeyearpar{he2021improving} & 68.70 & 67.30 (-1.40) \\
NSM-student~\citeyearpar{he2021improving} & 74.30 & 72.54 (-1.76)\\
QGG~\citeyearpar{lan-jiang-2020-query} & 73.70 & 72.36 (-1.34) \\
\midrule
\multicolumn{1}{l}{\textit{Closed-book QA}} \\
mT5-base & & 7.02 \\
mT5-large & & 12.87 \\
\midrule
\multicolumn{1}{l}{\textit{xKBQA-as-MRC (Ours)}} \\
mBERT-base & & 70.53 \\
XLM-R-base & & 69.92\\
XLM-R-large & & \textbf{74.37} \\
\bottomrule
\end{tabular}
\caption{Hits@1 (\%) of baselines and our method on the test set of WebQSP-zh using the full training data. The ``WebQSP'' column shows the model performance on the test set of WebQSP after training on the original English WebQSP data. The numbers in the brackets denote the performance drop of English-as-pivot models compared to their corresponding English KBQA models on WebQSP.
All models except GraftNet use golden topic entities.
}
\label{tab:main_results}
\end{table}
\begin{table*}[t]
\small
\centering
\begin{tabular}{lccccccccccc|c}
\toprule
\textbf{Model} & \textbf{fa} & \textbf{de} & \textbf{ro} & \textbf{it} & \textbf{ru} & \textbf{fr} & \textbf{nl} & \textbf{es} & \textbf{hi} & \textbf{pt} & \textbf{pt\_BR} & \textbf{Avg.} \\
\midrule
\multicolumn{13}{l}{\textit{Multilingual Semantic Matching}} \\
LC-QuAD & 43.41 & 44.90 & 48.55 & 47.93 & 36.84 & 47.38 & 43.93 & 46.53 & 41.60 & 37.43 & 48.48 & 44.27 \\
+ Sing. BLI & 46.41 & 50.41 & 50.87 & 51.24 & 40.35 & 48.76 & 48.55 & 49.42 & 34.73 & 40.35 & 54.54 & 46.88 \\
+ All BLI & 46.41 & 49.31 & 50.58 & 49.04 & 41.52 & 49.59 & 47.40 & 48.55 & 41.98 & 40.94 & 51.51 & 46.98\\
\midrule
\multicolumn{13}{l}{\textit{xKBQA-as-MRC (Ours)}} \\
SQuAD & 39.22 & 48.21 & 44.48 & 45.45 & 33.33 & 45.17 & 48.27 & 47.11 & 43.89 & 35.67 & 51.51 & 43.85 \\
+ Sing. xMRC & 39.22 & 52.07 & \textbf{52.91} & \textbf{56.20} & \textbf{45.61} & 51.24 & 52.02 & \textbf{54.62} & \textbf{50.76} & \textbf{42.69} & 59.09 & 50.59 \\
+ All xMRC & \textbf{48.50} & \textbf{55.10} & 52.03 & 54.27 & 44.44 & \textbf{53.44} & \textbf{52.89} & 53.47 & 46.95 & 41.52 & \textbf{60.61} & \textbf{51.20} \\
\bottomrule
\end{tabular}
\caption{Hits@1 (\%) of the baseline and our method with XLM-R-large on QALD-M under the zero-shot setting. ``LC-QuAD'' and ``SQuAD'' mean using LC-QuAD and SQuAD for finetuning, respectively. ``BLI'' and ``xMRC'' mean using BLI translation and xMRC datasets for finetuning, respectively. ``Sing.'' means using the data in the target language only, while ``All'' means combining the data in all the languages. We do not find available xMRC datasets for Persian (fa), so the performance of ``+ Sing. xMRC'' on Persian is the same as that of ``SQuAD''.}
\label{tab:QALD-M}
\end{table*}
As shown in Table \ref{tab:main_results}, we first compare our method with English-as-pivot baselines using full training data of WebQSP-zh.
These baselines can benefit from the development of monolingual KBQA models and achieve over 63\% hits@1 on WebQSP-zh.
If we had perfect translation results, the English-as-pivot baselines on WebQSP-zh should reach the performance of the corresponding monolingual models on the original English WebQSP.
However, these baselines drop 1.4-3.0\% hits@1 compared to their monolingual performance on the original WebQSP.
This is because the English-as-pivot baselines are highly dependent on machine translation tools, whose outputs may contain unnatural expressions or even errors.
As for the closed-book QA baselines, mT5-large correctly outputs the English answers for as many as 12.9\% of the WebQSP-zh questions, without resorting to any external knowledge.
This proves that MPLMs have learned a large amount of factual knowledge and strong cross-lingual capabilities, which can be properly utilized for xKBQA, as our method does.
All our models reach over 69\% hits@1 on WebQSP-zh.
Our two base-size models outperform EmbedKGQA, an English-as-pivot baseline that utilizes RoBERTa-base and the KB embedding ComplEx~\cite{trouillon2016complex}, by approximately 6\% hits@1.
Our model with XLM-R-large outperforms all baselines, achieving 74.37\% hits@1 thanks to the strong cross-lingual capability from MPLMs and rich resources in xMRC.
Moreover, these results demonstrate another merit of our approach that it can directly answer non-English questions over KBs in English, reducing the reliance on machine translation systems.
Although NSM-student, which does not use PLMs itself, performs better than our two base-size models,
the translation system it relies on introduces far more parameters and computational complexity than the MPLM used in our method.
Furthermore, our approach demonstrates its advantage with fewer or even no training data, as we will discuss next.
\subsection{Few-Shot and Zero-Shot Settings} \label{sec:few-shot}
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{fig/webqsp_few-shot_arr_no100.pdf}
\caption{Few-shot and zero-shot performance of our method and NSM-student on the test set of WebQSP-zh.}
\label{fig:few-shot}
\end{figure}
Considering the high cost of annotating high-quality xKBQA data,
we investigate the capabilities of our method under few-shot and zero-shot settings.
Figure \ref{fig:few-shot} shows the performance of our method and NSM-student on WebQSP-zh under few-shot and zero-shot settings.
The performance of NSM-student drops drastically as the training data decreases,
and it is totally incapable of zero-shot xKBQA.
By contrast, when trained with half of the training data, our method
still performs well, with less than 3\% decrease in hits@1 compared with those trained with full data.
With only 10\% of the training data, i.e., 310 instances, our models reach over 62\% hits@1, comparable with EmbedKGQA trained with full training data.
Even under the zero-shot setting, our method can achieve 53-61\% hits@1.
The high performance of our method with limited training data is attributed to the KB-to-text conversion, which in turn makes it possible to benefit from the rich resources in xMRC.
The MPLMs for xMRC have learned to encode different languages in the same semantic space during pre-training.
After finetuning on xMRC datasets, the models learn to seek information from passages in a different language.
By combining the prior knowledge of cross-lingual mapping and reading comprehension abilities, our models can successfully answer a large portion of the xMRC-like questions converted from xKBQA.
To demonstrate that our method can generalize to different languages without specialized modifications, we test our approach on QALD-M in 11 typologically-diverse languages under the zero-shot setting.
We evaluate the model on QALD-M after finetuning (1) on SQuAD only, (2) on SQuAD and xMRC datasets of a single language, and (3) on SQuAD and xMRC datasets of all the languages.
As shown in Table \ref{tab:QALD-M}, after finetuning XLM-R-large with SQuAD, our models achieve 43.9\% hits@1 on average across 11 non-English languages,
demonstrating our method's strong generalization ability from English MRC datasets.
After further finetuning on xMRC datasets for each language, we observe a 6.7\% hits@1 boost in the average performance, showing the benefit of xMRC datasets in the absence of xKBQA data.
If we combine the xMRC of all the languages for finetuning, the average hits@1 further increases slightly by 0.6\%, probably due to the potential complementary effects between data in different languages.
Our best model outperforms the semantic matching baseline finetuned with LC-QuAD and BLI-based translations by 4.2\% hits@1 on average. This is because the KB-to-text process of our method provides richer context than single inferential chains, and the xMRC data are of higher quality than the BLI-based word-to-word translation.
\subsection{Ablation Study}
To evaluate the effectiveness of the designs in our approach, we conduct experiments in several ablated settings on WebQSP-zh with full xKBQA training data.
We additionally conduct an ablation study with only 10\% of the training data to investigate what is behind the promising few-shot performance.
The results are shown in Table \ref{tab:ablation}.
With full training data, after we replace the PLM-based KB-to-text model with the simple heuristic of concatenating the head, predicate, and tail (w/o KB to text), the performance drops by 2.13\% hits@1.
Although the xMRC models can to some extent learn the mapping between questions and sentences converted by heuristics, the coherence and readability of KB-to-text generation results contribute to the final performance.
Skipping the finetuning on either SQuAD (w/o SQuAD) or xMRC datasets (w/o xMRC data) leads to a performance drop, showing the importance of high-quality data augmentation in the absence of large-scale xKBQA data.
In the setting with 10\% of the training data, both KB-to-text generation and finetuning on the MRC data contribute to the high few-shot performance, similar to the full training data setting.
We observe a drastic drop of 12.81\% hits@1 if the model is not finetuned on any MRC data (w/o xMRC data, SQuAD).
This indicates that MRC data, no matter monolingual or cross-lingual, can greatly relieve the problem of data scarcity in xKBQA.
\begin{table}
\centering
\small
\begin{tabular}{lll}
\toprule
\textbf{Model} & \textbf{100\%} & \textbf{10\%} \\
\midrule
XLM-R-large (Ours) & 74.37 & 67.60 \\
- w/o KB to text & 72.24 \scriptsize{(-2.13)} & 65.58 \scriptsize{(-2.02)}\\
- w/o xMRC data & 71.81 \scriptsize{(-2.56)} & 65.53 \scriptsize{(-2.07)} \\
- w/o SQuAD & 71.02 \scriptsize{(-3.35)} & 65.10 \scriptsize{(-2.50)}\\
- w/o xMRC data, SQuAD & 66.69 \scriptsize{(-7.68)} & 54.79 \scriptsize{(-12.81)}\\
\bottomrule
\end{tabular}
\caption{Ablation study of our method with XLM-R-large on WebQSP-zh, using 100\% or 10\% of the training data (Hits@1 in percent). }
\label{tab:ablation}
\end{table}
\begin{table*}[ht]
\small
\centering
\begin{tabular}{p{0.25\columnwidth}p{0.95\columnwidth}p{0.6\columnwidth}r}
\toprule
\textbf{Source} & \textbf{Example} & \textbf{Explanation} & \textbf{\%} \\
\midrule
Answer
Annotation
&
\textbf{Question:} \begin{CJK*}{UTF8}{gbsn}\small沃尔玛经营什么产业?\end{CJK*}/ \textit{What industry does Walmart operate in?}
\textbf{Passage:} ... The industry of Walmart is \underline{Retail-Store}, \underline{Variety Stores} and \underline{Department Stores}. ...
\textbf{Answer:} Variety Stores
\textbf{Prediction:} Retail-Store
& The annotated answers in the original WebQSP dataset are incomplete or incorrect. In the left case, the annotated answer set fails to include two correct answers, \textit{Retail-Store} and \textit{Department Stores}.
& 34 \\
\midrule
KB-to-text
Generation
& \textbf{Question:} \begin{CJK*}{UTF8}{gbsn}\small凯南·鲁兹在灯红酒绿杀人夜中扮演谁?\end{CJK*}/ \textit{Who does Kellan Lutz play in Prom Night?}
\textbf{Passage:} ... Kellan Lutz, a character in the film ``Prom Night'', played with \underline{Rick Leland}. ... Kellan Lutz, a character in Twilight, played the role of \underline{Emmett Cullen}. ...
\textbf{Answer:} Rick Leland
\textbf{Prediction:} Emmett Cullen
& The KB-to-text model converts a KB schema
to a wrong natural language expression or omits the entities in the given triple. In the left case, the model incorrectly converts the KB schema \textit{character} to the expression \textit{play with}.
& 12 \\
\midrule
Sentence
Filtering
& \textbf{Question:} \begin{CJK*}{UTF8}{gbsn}\small爱德华多·包洛奇在他的工作中使用了什么材料?\end{CJK*}/ \textit{What Materials did Eduardo Paolozzi use in his work?}
\textbf{Passage:} ... The art forms of Eduardo Paolozzi are \underline{Sculpture}. ...
\textbf{Answer:} Bronze
\textbf{Prediction:} Sculpture
& The answers are missing in the passages because the model for sentence similarity calculation incorrectly filters out the sentences containing answers. In the left case, the sentence containing the answer \textit{Bronze} is mistakenly filtered out.
& 20\\
\midrule
Reading
Comprehension
& \textbf{Question:} \begin{CJK*}{UTF8}{gbsn}\small谁是杰拉尔德福特的副总裁?\end{CJK*}/ \textit{Who was the vice president of Gerald Ford?}
\textbf{Passage:} ... \underline{David Gergen} was appointed as the White House Communications Director by President Gerald Ford. ... The vice president of Gerald Ford was \underline{Nelson Rockefeller}. ...
\textbf{Answer:} Nelson Rockefeller
\textbf{Prediction:} Staff Dick Cheney
& The xMRC model fails to select the correct answer span. In the left case, the xMRC model incorrectly maps the word \begin{CJK*}{UTF8}{gbsn}\small副总裁\end{CJK*}/\textit{vice president} to the expression \textit{White House Communications Director} in the passage.
& 34 \\
\bottomrule
\end{tabular}
\caption{Examples, explanations and percentages of different sources of error in the 50 sampled WebQSP-zh questions that XLM-R-large fails to answer. The underlined spans in passages are answer candidates.}
\label{tab:err_examples}
\end{table*}
\subsection{Error Analysis}
We sample 50 error cases in WebQSP-zh and analyze their sources of error, as shown in Table \ref{tab:err_examples}.
34\% of the errors result from the annotation of the original WebQSP dataset, where the annotated answer sets may be incomplete or incorrect.
Another common source of error is the MRC model, which incorrectly answers 34\% of the sampled questions.
Among them, many are complex questions involving constraints or multiple relations.
In the future, multi-hop MRC models can be adopted to address them.
Besides, there are also several error cases resulting from KB-to-text generation and sentence filtering.
We believe that our model will achieve better performance if each module in our framework is carefully optimized for the datasets.
\subsection{Effect of Entity Linking} \label{sec:el}
Entity linking (EL), which links the entity mentions in questions to the entities in a KB, is a crucial issue in KBQA.
It becomes even more difficult in the cross-lingual setting.
In the experiments above, we use golden entity linking results following previous works.
To further investigate the effect of entity linking in xKBQA, we conduct pilot experiments with two EL methods. One is surface-level matching after translating the questions, and the other is mGENRE~\cite{de-cao-etal-2022-multilingual}, a cross-lingual EL tool that does not rely on machine translation tools.
On the test set of WebQSP-zh, the two EL methods achieve 89.1\% and 76.8\% recall@5, respectively.
With the results from the two EL methods, our xMRC model with XLM-R-large achieves 65.9\% and 56.5\% hits@1, respectively.
The large gap compared to the results with golden topic entities indicates that more future research on cross-lingual EL is desired.
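For reference, the surface-level matching method can be pictured as the following sketch, which fuzzy-matches a machine-translated question against KB entity labels and keeps the top five candidates; the scoring scheme here is a simplified stand-in for our actual pilot setup:
\begin{verbatim}
from thefuzz import fuzz

def link_entities(question_en, kb_labels, k=5):
    # question_en: English machine translation of the question
    # kb_labels:   {entity_id: English entity label}
    scored = [(eid, fuzz.partial_ratio(label.lower(),
                                       question_en.lower()))
              for eid, label in kb_labels.items()]
    scored.sort(key=lambda x: -x[1])
    return [eid for eid, _ in scored[:k]]  # candidates for recall@5
\end{verbatim}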
\section{Conclusion}
In this paper, we propose to formulate xKBQA as answering questions by reading passages, benefiting from recent advances in xMRC.
By converting KB subgraphs into passages, we narrow the gap between KB schemas and natural questions under cross-lingual settings.
The cross-lingual knowledge in MPLMs and the rich resources in xMRC alleviate the problem of data scarcity in xKBQA.
To facilitate the evaluation of xKBQA, we collect WebQSP-zh, a new large-scale xKBQA dataset with more natural expressions.
Extensive experiments on two datasets with 12 languages show the strong performance of our method under both supervised and zero-shot settings.
We hope that our work will inspire more efforts into xKBQA.
Several promising research directions under our framework include generating better passages for KB subgraphs, supporting more types of KBQA questions, and exploring better EL strategies for xKBQA.
\section*{Limitations}
We discuss the limitations of our work from the following four aspects:
First, our work mainly focuses on single-relation questions and CVT questions in KBQA.
We construct a new dataset WebQSP-zh based on WebQSP, which lacks complex questions with multiple constraints or relations.
Since we use a vanilla BERT-based MRC model in our framework, it has a limited capacity for solving complex KBQA questions.
As future work, multi-hop MRC models can be adopted to address complex questions in cross-lingual KBQA.
Second, our method is mainly designed for entity-centric QA.
It can handle answers that are KB entities or attribute values well.
Yet its capability on other types of answers is currently unknown.
We will consider extending our method with more diverse answer types in the future.
Third, the size of retrieved KB subgraphs is constrained by the maximum input length of PLMs.
This could, to some extent, lower the answer coverage of the converted passages and hurt the overall performance.
In the future, Longformer-based encoders or text summarization techniques could be explored to address this limitation.
Fourth, although using existing xMRC datasets can alleviate the data scarcity problem in xKBQA, it cannot fundamentally solve the problem of insufficient and expensive cross-lingual datasets.
With more powerful cross-lingual PLMs, we may reduce the reliance on xMRC data.
We will explore more strategies for tackling the data scarcity problem in future work.
\section*{Acknowledgments}
This work is supported by NSFC (62161160339, 62206070).
We would like to thank the anonymous reviewers for their valuable suggestions.
Also, we would like to thank Xiao Liu and Quzhe Huang for their great help in this work.
For any correspondence, please contact Yansong Feng.
\section{Introduction}
\label{sec:introduction}
There have been significant advancements in field robotics in the past decade in terms of maturity of commercially available platforms, advanced sensor technology, navigation capability, power systems and compute systems. However, fully autonomous deployment of robots in real-world field applications is still not commonplace. This is especially so in unknown, dangerous and difficult environments. Technology gaps remain that prevent widespread use of robotic systems in such environments--an application area that could deliver significant benefit by removing humans from dull, dirty and dangerous work.
Subterranean environments such as mining tunnels, underground urban infrastructure and natural caves are especially challenging for robots due to lack of GPS for localisation, degraded wireless communication, poor perception due to obscurants, and locomotion and navigation difficulties due to narrow passages and doorways, clutter, uneven ground, slippery and loose terrain, stairs, ledges, overhangs, dripping water, and dynamic obstacles that can move to block paths. The DARPA Subterranean Challenge (SubT Challenge) was motivated by this gap in technology that was preventing effective deployment of autonomous robots into such environments. As described in \cite{Orekhov_2022}, the primary scenario centred around providing advanced situational awareness to rescuers following a collapsed mine, earthquake or personnel lost or injured in a cave. The challenge was designed with extensive input from and in collaboration with first responders. The aims included spurring innovation, building communities of practice, setting new benchmarks for state-of-the-art, and creating societal impact. Point scoring in competition events was achieved by detecting, identifying and locating an artefact to an accuracy of within 5~m. Artefact classes included survivors (thermal mannequins), backpacks, ropes, helmets, fire extinguishers, power drills, vents, \ensuremath{\mathrm{CO_2}}\xspace concentrations, cell phones and LED-illuminated cubes. To make the scenarios as realistic as possible, the event courses included rough terrain, steep inclines, narrow openings, stairs, water, obscurants such as dust and smoke, and dynamic obstacles (e.g., representing further cave-ins during the mission). Due to the underground environment, GPS is unavailable and communications are severely restricted.
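A report therefore scored a point only when the artefact class matched and the reported position fell within the 5\,m tolerance, as in the following sketch (ours, not DARPA's scoring code):
\begin{verbatim}
import math

def report_scores(report_type, report_xyz, artefacts, tol=5.0):
    # artefacts: list of (type, (x, y, z)) surveyed ground-truth
    # entries in the DARPA-defined course frame.
    return any(report_type == a_type
               and math.dist(report_xyz, a_xyz) <= tol
               for a_type, a_xyz in artefacts)
\end{verbatim}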
Challenge events commenced with the SubT Integration Exercise (STIX) in the Edgar Experimental Mine, Idaho Springs, CO, USA in April 2019. The first competitive event was the Tunnel Circuit at the NIOSH Safety Research Coal Mine and Experimental Mine in Pittsburgh, PA, USA in August 2019. The Urban Circuit event was held in February 2020 at Satsop Business Park, Elma, WA, USA, in an un-commissioned nuclear power plant. The COVID-19 pandemic led to the cancellation of the Cave Circuit event that was scheduled to take place in August 2020. The teams were instead encouraged to conduct their own testing in local cave environments that they could access. Each year of the challenge represented a Phase, with Phase I including the Tunnel Circuit, Phase II the Urban and Cave Circuits, and Phase III concluding with the Final Event. After successfully competing in circuit events in Phase I and II of the DARPA SubT Challenge \citep{hudson2021heterogeneous}, Team CSIRO Data61 was selected as one of the eight finalists to compete in the Final Event that was held in September 2021 at the Louisville Mega Cavern, KY, USA (\figref{fig:CSIRO_Data61_team}). After winning the preliminary round at the Final Event, Team CSIRO Data61 tied for the top score of 23 points with Team CERBERUS in the final prize run. After tie-breaker rules were invoked, Team CSIRO Data61 won the US\$1\,Million second place prize, finishing only 1 minute, 1\,cm, or 1 artefact report away from first place~\citep{chung_into_2023}.
This paper summarises the heterogeneous robot system deployed by Team CSIRO Data61, utilising unified multi-agent mapping and autonomy. We also describe how the team of robots overcame various challenges in the Final Event course, along with results and lessons learned from the program. The overall system of systems demonstrated remarkable resilience in the harsh environment, even in the face of attrition of individual agents. We focus on the advancements and changes implemented on our systems for Phase III of the competition, in comparison to the system presented in our prior work \cite{hudson2021heterogeneous}, which covered Phases I and II.
\begin{figure}[ht]
\centering
\includegraphics[width=120mm]{figures/CSIRO_Data61_Team_SubT_Finals.jpg}
\caption{Team CSIRO Data61 members at the DARPA Subterranean Challenge Final Event along with the robot fleet.}
\label{fig:CSIRO_Data61_team}
\end{figure}
\subsection{Related Work}
\label{sec:related_work}
System reviews from the various SubT teams from Phase I and II each contain detailed literature reviews, and can be found in \cite{agha_nebula_2021, tranzatto_cerberus_autonomous_2022, scherer_resilient_2022, ohradzansky_multi-agent_2021, roucek_system_2021, lu_heterogeneous_2022, isaacs_teleoperation_2022, hudson2021heterogeneous}. Here, we describe the systems employed by SubT teams as described in these papers, and subsequent examinations of particular system components.
\cite{tranzatto_cerberus_2022} describes the system employed by Team CERBERUS (University of Nevada, Reno, ETH Z\"{u}rich, NTNU, University of California Berkeley, Oxford Robotics Institute, Flyability and Sierra Nevada Corporation) that won first place in the final prize round, with \cite{tranzatto_cerberus_autonomous_2022} outlining the system from the Tunnel and Urban Circuits. The robot roster for Team CERBERUS consisted of aerial scout Uncrewed Aerial Vehicles (UAVs, including small, medium and large platforms), a tethered wheeled ground vehicle to extend communications, and two variations of the ANYmal C quadruped: Carrier for deploying communication-extender modules, and Explorer for proceeding deep into the environment. Graph-based path planning was used with a bifurcated architecture for efficient local and global exploration \citep{dang_graph_2020}. Localisation and mapping was performed with complementary multimodal sensor fusion (CompSLAM) described by \cite{khattak_complementary_2020}. \cite{miki_learning_2022} details the learning-based perceptive locomotion utilised by the quadrupeds that balances the information from multi-modal perception sources to traverse complex terrain (including stairs) in the presence of sensor noise.
Team CoSTAR (NASA Jet Propulsion Laboratory, California Institute of Technology, MIT, KAIST, Lulea University of Technology) used a combination of Boston Dynamics Spot robots, UAVs and wheeled robots; the system that was deployed in the Tunnel and Urban Circuits is described in \cite{agha_nebula_2021}. The focus of their development was NeBula (networked belief aware perceptual autonomy), which aimed to address challenging, degraded-sensing environments through a combination of sensor fusion and uncertainty-aware planning. By incorporating map and location uncertainty into the planning problem, risk-aware plans were constructed that account for difficult phenomena such as dust and smoke. Planning under uncertainty is a very challenging problem; this was approached through a hierarchical system described in \cite{kim_plgrim_2021}. The primary modality for SLAM was lidar/inertial, but alternative solutions were maintained based on visual, thermal, radar and wheel odometry, fused based on the estimated confidence \citep{santamaria_hero_2022}, and used as hints for the lidar/inertial solution \citep{paleri_locus_2021}. A description of the integration of NeBula onto the Spot legged platforms to enable autonomous legged exploration is provided in \cite{bouman_autonomous_2020}. A semantic object mapping pipeline provided the operator with enhanced object detections, as described in \cite{lei_early_recall_2022}.
Team CTU-CRAS-NORLAB's (Czech Technical University in Prague, Université Laval) solution for the Tunnel and Urban Circuits utilised wheeled Husky UGVs, tracked Absolem UGVs, PhantomX hexapods and DJI quadrotors \citep{roucek_system_2021}. Platforms used various models of lidar, visual cameras and depth cameras: the Husky used a multi-beam lidar and six cameras; the Absolem used a rotating line lidar and a single omnicamera; the PhantomX used RGBD cameras; and the quadrotor used multi-beam lidar and cameras. Multi-band communications provided high-rate WiFi connection at short ranges (5\,GHz and 2.3\,GHz), and low-bandwidth control information at longer ranges (900\,MHz). Object detection used YOLOv3 trained on 20,000 images, alongside \ensuremath{\mathrm{CO_2}}\xspace and WiFi detection capability. Wheeled and tracked robots used an extended Kalman filter (EKF) to provide an initialisation for an iterative closest point (ICP)-based odometry method, based on lidar odometry and mapping (LOAM), with no loop closure enabled (since the required accuracy could be met without it). Exploration utilised frontier methods~\citep{Bayer2019OnAS}, with coordination provided by the human supervisor.
Team Explorer's (Carnegie Mellon University, Oregon State University) system from the Tunnel and Urban Circuit events is described in \cite{scherer_resilient_2022}. Custom wheeled robots enabled coverage of difficult terrain, and marsupial launch of a UAV. Communications were built around Rajant DX2 nodes\footnote{\url{https://rajant.com/}}, and ground robots carried a total of 24 nodes, which were dropped autonomously based on line of sight and signal strength. Data was selectively shared between robots based on a ledger system. Rather than using a tightly coupled approach, super odometry \citep{zhao_super_2021} uses a mixed (loosely and tightly coupled fusion) scheme where visual and lidar inertial odometry (VIO and LIO) estimates are fused with IMU measurements asynchronously to estimate the robot trajectory at a fast (200\,Hz) rate. Object detection used CNN-based detection pipelines trained on RGB and thermal images, using data sets augmented with synthetic data. WiFi and gas localisation relied on human interpretation of the noisy signal strengths. The reference coordinate system (i.e., the ``gate'') was localised using a Total Station-based calibration. Exploration utilised a hierarchical approach \citep{cao_tare_2021}, where a global planner maintains a coarse tour, and a local planner maintains a detailed path within a local region. Exploration was driven by camera coverage of observed surfaces rather than mapping of 3D space.
Team MARBLE's (University of Colorado Boulder, University of Colorado Denver, Scientific Systems Company, University of California Santa Cruz) solution for the Tunnel, Urban and Cave Circuit (held virtually) events utilised map and goal-point sharing among agents \citep{riley_assessment_2021}, a metric-topological graph-based planner and a continuous frontier-based planner \citep{ohradzansky_multi-agent_2021}. In this system, the base station did not act as a central agent, but instead could relay information as any other agent. Additionally, the base station merged artefact reports from all robots, providing the human supervisor with a single submission for detections with a similar position and type. The robot fleet consisted of wheeled Husky and tracked Superdroid UGVs, and Lumenier UAVs. Ground platforms used Ouster multi-beam lidars and RGBD cameras with a high-power GPU workstation providing computation, whereas the UAVs used RGBD and ToF cameras. Mapping was performed using Google Cartographer, and map sharing was achieved by extending Octomap to permit transmission of map differences. Communications were built on custom beacons using a custom transport layer solution named \textit{udp\_mesh}.
Team Coordinated Robotics used a teleoperation strategy for the Urban Circuit event \citep{isaacs_teleoperation_2022}. Due to time constraints in preparing for the event (four months), the team made the decision to focus on integrating multiple platforms with minimal autonomy over a single platform with semi-autonomy.
The SLAM algorithm LeGO-LOAM \citep{shan_lego_2018} was used, with its output fused with IMU data from two Intel RealSense D435i cameras using the ROS Extended Kalman Filter package `robot\_localization' \citep{moore_generalized_2016} to estimate the robot pose and artefact positions.
As the robots were teleoperated via a video feed, only one robot could be controlled at a time. This led to the team's strategy of using the robots as communication nodes: a robot was teleoperated into the course until the communication bandwidth was unable to sustain the video feed, then driven back into full communications range using the map data. The next robot was then driven deeper into the course, past the previous one, resulting in a wireless communication backbone and data flow to the operator.
Team NCTU (National Chiao Tung University) used a heterogeneous team of ground robots (Husky) and blimps to navigate the complex environments of the Tunnel and Urban Circuit events. An overview of the approach is provided in \cite{lu_heterogeneous_2022}.
Navigation was achieved through deep reinforcement learning using a cross-modal contrastive learning of representations (CM-CLR) method, where mmWave radar and lidar data were used for training \citep{huang_cross-modal_2021}. Through this approach, the ground robots were able to navigate in smoke-filled environments using only mmWave radar and XBee for communications. The unique aspect of the team's approach was the use of lighter-than-air platforms. The motivation for a blimp was twofold: to enable collision-tolerant navigation and long-term autonomous flight \citep{huang_duckiefloat_2019}. With lift achieved by helium, the motors are only required for propulsion, enabling longer flight times. Although the blimp could collide and bounce off the environment, due to the size required to generate adequate lift, it had difficulties passing through narrow passageways and was susceptible to airflow within the tunnel. The team attempted to learn a navigation policy for the blimp, but were unable to learn a robust policy due to the inability to model the complex blimp dynamics in the Gazebo simulator.
In-depth analysis of the results from the SubT Challenge Finals event, along with insights, lessons learned and future work recommendations, was provided in \cite{chung_into_2023}. The performance of the different teams was compared using the competition's singular scoring objective (artefacts detected), as well as by describing hypothetical scenarios where the artefact error threshold (set at 5\,m from ground truth for the competition) was adjusted. Alternate relevant evaluation metrics, such as lowest map deviation, greatest map coverage, largest map (points) and fastest successful report, were also introduced to compare the teams' solutions. Additionally, technical and operational insights based on the competition results and evolution of technology during the Challenge period were provided. The most successful solutions had a heterogeneous team (different platform mobility and function) of robots built upon reliable commercial-off-the-shelf (COTS) products. This allowed the solution to adapt to different scenarios and reduce the impact when attrition of robots occurred. Another insight provided was on the crucial role the human supervisor performs. While the human supervisor was a mission enabler, providing high-level commands and coordination, they were also the weak link, where the cognitive load of the human supervisor could limit performance.
\subsection{Contributions}
\label{sec:contributions}
The key differentiator in our solution is the homogeneous sensing capability, which enables shared maps between all agents. UGVs and UAVs both utilise spinning lidars which power both mapping and autonomy, with cameras providing object detection capability. The highly modular UGV solution centred around a common sensing pack and navigation stack has enabled rapid adaption to a wide range of platforms. All agents utilise the same SLAM system, which develops complete shared maps on each agent. These were exploited to provide shared global maps between all ground agents, and multi-robot task allocation.
This paper includes a brief description of our full system approach, with further detail on aspects that changed from our Phase I and II system; further details of the earlier work can be found in \cite{hudson2021heterogeneous}. Particular differences from this approach include:
\begin{itemize}
\item The UGV exploration system was revised to calculate traversability frontiers, utilising multi-agent global mapping data.
\item A new planner was developed that specifically targeted passing through narrow gaps.
\item The multi-robot task allocation reward function was reformulated to incorporate elements that consider the positioning of all agents in the tunnel network.
\item The tools available to the human supervisor for prioritising tasks and altering assignments were greatly improved.
\item The object tracker was redeveloped to associate new detections to full multi-robot histories (as opposed to a recent window on the local agent).
\item The Boston Dynamics Spot was incorporated in our robot fleet, and integrated with our common sensing and autonomy stacks.
\item The drop nodes and dropping mechanisms were redesigned based on lessons in prior events.
\item A new UAV platform was adopted to overcome the limitations of the previous platform, and hardware was integrated to permit use of the same object detection pipeline as ground agents.
\item SLAM was modified to selectively share frames, avoiding repeated sharing of identical frames when the robot is not moving significantly.
\end{itemize}
Additionally, we describe in detail, the results of the approach at the Final Event of the DARPA Subterranean Challenge. In particular:
\begin{itemize}
\item For each run, details of operator's intentions, complicating events and results.
\item Communications performance, use of autonomy and human intervention.
\item Object detection performance and lessons learned.
\item Analysis of the mapping performance achieved in comparison with the ground truth scans provided by DARPA.
\item Experiences providing remote support due to severe limitations on the deployed team.
\end{itemize}
\section{System Description}
\label{sec:system_description}
SubT involves a fleet of robots autonomously exploring an underground environment under the control of a single human supervisor. The Final Event combines the challenging elements from tunnel (e.g., large scale), urban (e.g., stairs) and cave (e.g., extreme terrain) environments. The solution fielded by Team CSIRO Data61 at the Final Event involved two Boston Dynamics Spot robots\footnote{\url{https://www.bostondynamics.com/products/spot}}, two BIA5 OzBot All Terrain Robots (ATRs)\footnote{\url{https://bia5.com/}}, and two Emesent UAVs carried by the ATRs. Communications nodes based on the Rajant Breadcrumb ES1 were carried on the ATRs and deployed through the course. The fleet employed is illustrated in \figref{fig:annotated_robot_platforms}.
In this section, we describe the hardware components of the system. We begin in \secref{sec:UGV_platforms} describing the UGV platforms developed for the Final Event, and subsequently the UAV system in \secref{sec:UAV_platform}. Finally, \secref{sec:comms_nodes} describes the design of the communications system.
\begin{figure}[ht]
\centering
\includegraphics[width=120mm]{figures/Deployed_systems_SubT_Finals.png}
\caption{Team CSIRO Data61's robot platforms deployed during the final event.}
\label{fig:annotated_robot_platforms}
\end{figure}
\subsection{UGV Platforms}
\label{sec:UGV_platforms}
The hardware systems architecture for the UGV platforms is summarised in \figref{fig:ugv_hardware}. A number of minor changes were made to the architecture deployed in the earlier phases of the SubT Challenge. The primary change to \figref{fig:ugv_hardware} is the addition of the Boston Dynamics Spot and the removal of a USB hub connecting external sensors. These changes and modifications are described in the following subsections.
\begin{figure}[ht]
\centering
\includegraphics[width=120mm]{figures/DARPA_Electronic_System_Overview_General.png}
\caption{Hardware systems architecture for the Spot, DTR and ATR platforms.}
\label{fig:ugv_hardware}
\end{figure}
\subsubsection{BIA5 OzBot All Terrain Robot}
\label{sss:atr}
The BIA5 OzBot All-Terrain Robot (ATR) was first utilised by Team CSIRO Data61 in the Tunnel Circuit event in August 2019. Subsequently, a light-weight (90\,kg vs 300\,kg) version was custom-built for CSIRO by BIA5 and employed in the Urban Circuit event and cave testing performed in lieu of the Cave Circuit event.
The same robots were utilised in the Final Event, using the same LiFePO$_4$ batteries, power system and Cincoze DX-1100 ruggedised workstation (with Intel i7-8700T CPU). An evolved version of the ``CatPack'' perception pack was utilised, providing lidar, IMU and cameras with integrated compute performing SLAM and object detection. The new CatPack version had a full machined aluminium housing instead of the previous mixed aluminium and printed plastic housing, providing better dust and moisture protection, more effective cooling and better camera lens mounting giving improved image focus. A WiFi and Bluetooth module was also integrated into the CatPack and could be used for artefact detection instead of USB connected modules.
Extensive improvements were made to the robot's motors and motor controllers. The motivation for this was two-fold: firstly, to overcome issues with burn-out of motors in autonomous operation; and secondly, to provide finer control over the paths that the robot executed. The changes made are representative of those that have been found to be necessary to support autonomous operation on other platforms designed for teleoperation. The platform speed is 1.2\,ms$^{-1}$.
In relation to robustness, motors were burned out on a number of occasions. The first instance of this occurred during the Urban Circuit event, where a track became fixed on a large hook concreted into the ground. Subsequently, additional burn-outs occurred on a number of occasions where one track lost traction and spun repeatedly as autonomy attempted to recover from the condition. This was addressed firstly by introducing higher torque motors (a custom 108\,mm out-runner motor configuration coupled with BIA5's gearboxes) that were capable of handling more heat, and secondly by upgrading the motor control system and introducing a thermal model of the motor. The upgraded motor controller used an Elmo Motion Control (MC) system, providing feedback on current, temperature, velocity and acceleration at 200\,Hz.
The thermal modelling introduced to this system was used to estimate the core temperature of the motor from an externally mounted thermistor on the motor casing. The thermal transfer from the motor core to the casing was based on a model provided by the motor manufacturer \cite{maxon_motor}. On each update of the motor communications loop, the motor current and thermistor readings would be fed into the model to obtain an internal temperature estimate. This allowed the system to apply significantly more power to the tracks in short bursts without risking a burnout, consequently increasing the overall agility of the platform.
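A minimal sketch of such a lumped-parameter update is given below; the parameter values are placeholders, with the real model using the manufacturer's winding-to-housing thermal resistance and time constant:
\begin{verbatim}
class MotorThermalModel:
    # One-node winding model driven by the measured phase current
    # and the casing thermistor reading (placeholder parameters).
    def __init__(self, r_winding=0.3, r_th=1.5, tau=30.0):
        self.r_winding = r_winding  # winding resistance [ohm]
        self.r_th = r_th            # winding-to-case resistance [K/W]
        self.tau = tau              # winding thermal time constant [s]
        self.t_core = None

    def update(self, current, t_case, dt=0.005):  # 200 Hz loop
        if self.t_core is None:
            self.t_core = t_case    # cold start: core ~ casing
        p_loss = current ** 2 * self.r_winding    # copper losses
        t_ss = t_case + p_loss * self.r_th        # steady-state core
        # First-order relaxation toward the steady-state temperature.
        self.t_core += (dt / self.tau) * (t_ss - self.t_core)
        return self.t_core
\end{verbatim}
The controller can then throttle commanded torque whenever the estimated core temperature approaches the winding limit, which is what permits short high-power bursts without risking a burnout.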
In relation to fine motor control, the stock motors and controllers had a range of difficulties, such as low-rate feedback (10\,Hz), simple PID-based control, fixed acceleration limits and an opaque interface. Most significant among these was the PID-based control, which resulted in significant overshoot due to a large integral wind-up to overcome stiction. This presented a significant barrier in tight environments such as when navigating through a narrow doorway. In contrast, the Elmo MC's built-in system identification was utilised to obtain precise velocity control. The higher-bandwidth 200\,Hz feedback also enabled improvements to the outer control loop, which was executed over a dedicated Ethercat interface. This higher rate feedback was utilised to implement a differential drive controller which balanced the desired level of aggressive control with smoothing when excessive velocity changes were commanded (e.g., when switching to a time-critical recovery behaviour).
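The balance between aggressive control and smoothing can be pictured as an acceleration-limited differential-drive mapping like the sketch below; the gains, limits and track width are illustrative, not the ATR's actual values:
\begin{verbatim}
def diff_drive_command(v_des, w_des, v_prev, w_prev, dt=0.005,
                       a_max=2.0, alpha_max=4.0, track_width=0.6):
    # Slew-limit the commanded twist, then map it to track speeds.
    def slew(target, prev, rate):
        step = min(max(target - prev, -rate * dt), rate * dt)
        return prev + step

    v = slew(v_des, v_prev, a_max)      # linear velocity [m/s]
    w = slew(w_des, w_prev, alpha_max)  # angular velocity [rad/s]
    v_left = v - w * track_width / 2.0
    v_right = v + w * track_width / 2.0
    return v_left, v_right, v, w
\end{verbatim}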
Our efforts provided us with a highly robust platform capable of aggressive navigation, yet with exceptional fine control. This platform demonstrated robust control in adversarial terrain conditions in testing such as large rock piles and stairs, as well as challenging deformable cave environments in the self-led Cave Circuit. Around the time of the Final Event, further motor replacements were required, but these appeared to be related to deterioration over time rather than particular traumatic events. We plan to address this in future design iterations through an integrated gearbox, motor and controller solution.
\subsubsection{CSIRO Dynamic Tracked Robot}
\label{sss:dtr}
The CSIRO Dynamic Tracked Robot (DTR) was developed to combine the strengths of the ATR with those of the smaller SuperDroid LT2-F, particularly around stairs and narrow doorways. Development of this platform continued from its initial design used at the self-led Cave Circuit up to the Final Event. Mechanically, the track tension mechanism was improved by coupling the position of the tensioner wheel and suspension arm. Their coupled motion maintains a constant tension in the track throughout the motion range of the suspension arm. This maintained soft suspension, but achieved the high track tension needed to provide robustness to small rocks in the tracks. The final design of the platform is shown in \figref{fig:dtr}; the platform is capable of operating at a speed of 3\,ms$^{-1}$, but is configured to use a similar speed to the ATR due to the tuning of autonomy.
\begin{figure}[ht]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=55mm]{figures/DTR-CAD-side.png}
\caption{}
\label{fig:dtr_cad}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=55mm]{figures/DTR_iso_nobkg.jpg}
\caption{}
\label{fig:dtr_iso}
\end{subfigure}
\caption{The new DTR design showing its improved obstacle clearance ability next to a 180\,mm high step in a CAD diagram (a), and the real robot carrying an earlier version of the CatPack perception pack (b).}
\label{fig:dtr}
\end{figure}
The design maximised commonality with the ATR components, and served as a prototype of the Elmo MC system described in \secref{sss:atr}. This commonality was an explicit design goal, to permit either platform to provide spare parts for the other. The same CatPack perception pack as the ATR was utilised, while the navigation computer was based on an Intel NUC NUC8i7BEH.
Tuning of the autonomy stack for the DTR was de-prioritised when the Spot was integrated into the robot team, but continued at a low rate of effort. It was intended that the platform would feature in the team at the Final Event, but this needed to be dropped because COVID travel restrictions limited the size of the team that could be sent. An accurate simulation model of the platform was developed, and was utilised by Team CTU-CRAS in the DARPA SubT Challenge Virtual track. The platform is seeing continued use, particularly in natural environments where it has less impact than the larger and heavier ATR platform.
\subsubsection{Boston Dynamics Spot Quadruped Robot}
\label{sss:spot}
The Boston Dynamics Spot platform is a COTS quadrupedal legged platform. It features a top speed of 1.6\,ms$^{-1}$, a maximum payload of 14\,kg and a typical operation time of 90\,min unloaded (typical operation times were 40-45\,min with a payload). It has stereo cameras positioned around the body which allow it to generate a 360\ensuremath{^{\circ}}\xspace map of the local terrain. This map, combined with state-of-the-art locomotion software, allows it to traverse a wide array of terrain. The Spot platform was used to navigate into areas that were unsuitable for the tracked ground platforms, including stairs and narrow passageways \cite{tam2021deploying}.
Previously, the Ghost Robotics Vision60 platform had fulfilled this role within the fleet. The decision to switch to the Spot platform was made based on testing its performance in these specific areas. The Spot's advantage is understood to derive largely from its terrain sensing: unlike the Spot, the version 4.2 Vision60 platform available at the time did not provide any local terrain mapping or footfall planning, which made it unsuitable for traversing stairs, a key area the legged platform was aimed at addressing. The Spot platform also provided a payload interface that was more robust, better documented, and better supported. This not only allowed the team to rapidly integrate Spot into the fleet, but also allowed the onboard sensor data to be integrated into the team's autonomy stack, as described in \secref{sss:spot_local_nav}.
As illustrated in \figref{fig:annotated_robot_platforms}, the Spot was fitted with the same CatPack perception pack utilised in the other platforms, along with the same navigation computer utilised with the DTR and the earlier Ghost platform. Communications were provided by a Rajant ES1 Breadcrumb node as detailed in \secref{sec:comms_nodes}, however the platform was not equipped to drop additional nodes.
\subsection{UAV Platform}
\label{sec:UAV_platform}
The SubTerra ``Navi''\footnote{\url{https://aeronavics.com/models-of-drones/navi/}} UAV used in the SubT Finals is the first UAV specifically designed to work with the Emesent Hovermap payload (\figref{fig:navi_drone}). This platform was commissioned from the New Zealand-based company, Aeronavics, by Emesent.
\begin{figure}[t]
\centering
\begin{subfigure}{.48\linewidth}
\includegraphics[height=40mm]{figures/Navi_Drone_nobkg.jpg}
\caption{}
\label{fig:titan_cave_rocks}
\end{subfigure}
\begin{subfigure}{.25\linewidth}
\includegraphics[height=40mm]{figures/Navi-drone-launch.jpg}
\caption{}
\label{fig:titan_tunnel_node_dropped}
\end{subfigure}
\begin{subfigure}{.25\linewidth}
\includegraphics[height=40mm]{figures/Navi-drone-finals.jpg}
\caption{}
\label{fig:titan_tunnel_drone}
\end{subfigure}
\caption{Emesent/Aeronavics SubTerra (``Navi'') UAV (a), being launched off the back of an ATR UGV (b), and in flight inside the DARPA SubT finals course (c).}
\label{fig:navi_drone}
\end{figure}
Throughout all previous circuits, our UAV platform of choice was the DJI M210. Whilst this platform was robust and met our size and payload requirements, it presented some challenges when pushed beyond its manufacturer's expected use cases. Most critical were random events in which the UAV would refuse take-off commands due to perceived magnetic interference: the act of moving an ATR through a circuit with an M210 on the back could cause sufficient interference with the UAV's in-built sensors to raise an error. A further problem was that, once fully loaded with the Hovermap, communications node and vision system, it only provided around 8\,min of flight time. With these problems in mind, a fully configurable platform with a 20\,min battery life and high payload capacity was required, to allow full control over every aspect of the system and achieve our desired coverage goals.
With no commercially available UAVs on the market at the time meeting these requirements, Emesent commissioned a custom solution from Aeronavics based on their existing Navi UAV. The Navi provided an Ardupilot-based system that met our transparency, flight time and lift capacity requirements. Unfortunately, the development of this platform was problematic, requiring significant effort to improve platform reliability. Reliability issues included electronic speed controller (ESC) burnout, debris ingress through open motor housings, battery failure and short life-span, and communications issues with the ground station RC. Ultimately, most of these problems were solved or mitigated before the final circuit and the platform proved itself. In the second preliminary round, one Navi platform was downed by a sheet of foam in mid-air; however, the system sustained no major damage to the air-frame, Hovermap or vision system, requiring only propeller replacement to be back in the air within an hour of returning to the pit.
\begin{table}[b!]
\begin{center}
\caption{UAV Specifications - Loaded with Hovermap and Vision System.}
\begin{tabular}{lr}
\toprule
\textbf{Property} & \textbf{Value}\\
\midrule
Length & 740\,mm\\
Length (With propeller guards) & 900\,mm\\
Width & 740\,mm\\
Width (With propeller guards) & 900\,mm\\
Height & 350\,mm\\
Weight & 5.8\,kg\\
Nominal clearance from robot center & 850\,mm \\
Minimum passage width & 2000\,mm\\
Minimum take-off clearance & 1000\,mm\\
Flight time (Fully Loaded) & 20\,min\\
Nominal movement speed & 1.5\,ms$^{-1}$\\
Maximum movement speed & 2.0\,ms$^{-1}$\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
A feature of this platform was its propeller guards. These were installed when needed, trading increased survivability in the air for a reduced ability to enter narrow passageways while riding the parent vehicle. Whilst on the ATR, the propeller guards extended beyond the UGV's 780\,mm width, preventing the ATR from entering passages it could otherwise fit through. The guards also increased the UAV's width in flight, but the UAV could still navigate the same size tunnel, as light contact with the wall would no longer guarantee a crash. In the final circuit, propeller guards were installed for the first preliminary run to ensure that the UAVs would return intact. For the second and final runs, a riskier approach was taken and guards were removed on all UAV systems.
\subsubsection{UAV Vision System}
Accompanying the Navi platform was a new vision system called the ``Tick'' (\figref{fig:annotated_tick_sensors}), which was developed in parallel with improvements to the previous gimbal-based approach before being chosen as the UAV vision system for the final circuit.
\begin{figure}[t]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=40mm]{figures/Tick_Compute_Interfacing_Updated.png}
\caption{}
\label{fig:annotated_tick_compute}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=40mm]{figures/Tick_Sensor_Arrangement_Updated.png}
\caption{}
\label{fig:sub2}
\end{subfigure}
\caption{Anatomy of the vision system (the Tick) on an Emesent Navi UAV showing compute and interfacing (a) and sensor arrangement (b).}
\label{fig:annotated_tick_sensors}
\end{figure}
Prior to the Tick, a gimbal-based vision system was employed by our UAVs. This was an in-house design utilising off-the-shelf electronics to create a fully controllable and stabilized gimbal. A single FLIR BFS USB 3.0 camera would feed frames through the Hovermap to an Intel Neural Compute Stick 2 for image classification. This secondary compute unit was required as on-board image classification would compete with SLAM, trajectory planning and other core processes for CPU resources, leading to higher image classification latency and a lower output rate.
The gimbaled camera system, along with our visual coverage tracking software, could scan the course independently from UAV orientation. Compared to a static system, this provided enhanced vertical coverage of the course and a higher chance to detect objects when the platform was stationary. In theory, the single moving camera with a narrow-moderate field of view (FOV) allowed for both highly detailed images at long ranges and large field coverage given enough time in one region. However, due to the limited flight time of small UAV platforms, it was not practical to loiter in one region for extended periods, thus in prior SubT circuits it suffered from limited coverage and missed target objects.
Due to these coverage limitations, we developed a static, multi-camera system with on-board processing utilising CSIRO Data61's perception stack. As the Hovermap host system already ran the Wildcat SLAM solution, this package could run on smaller NVIDIA Jetson hardware than the CatPack used on the UGVs. This system was called the Tick due to its parasitic nature on the Hovermap.
At its core, the Tick ran an NVIDIA Jetson Xavier-NX on a Connect Tech Quark carrier board, allowing for dual MIPI-CSI cameras. Whilst the Xavier-NX had the connectivity and processing power required to run more cameras, carrier board and weight restrictions limited this to two. The specific camera arrangement and lens combination was the single largest factor in the performance of the Tick. Initially, the Tick was configured with dual downward-angled, forward/side-facing, wide-FOV (160-180\ensuremath{^{\circ}}\xspace) cameras to maximise coverage, with particular focus on the space directly in front of the UAV and on unexplored pathways/nooks to either side. Testing showed that this configuration suffered from problems including:
\begin{itemize}
\item A short range of only 5-6\,m due to the large FOV of each camera.
\item High power consumption due to the lighting requirements for such a large FOV.
\item Over-exposure of the image due to reflections from UAV legs.
\item After transitioning from the M210 to the Navi, the new leg setup with four legs placed at the UAV corners blocked a significant part of the camera FOV.
\end{itemize}
Further testing led to the final configuration, consisting of one front-facing, 70\ensuremath{^{\circ}}\xspace FOV camera and one rear, downward-facing 140\ensuremath{^{\circ}}\xspace FOV camera. This configuration maximised both the system's coverage and range by relying on the tendency of the exploration code to stop and spin in place when passing an unexplored passageway. The rotation of the platform allowed the fixed cameras to cover a larger FOV, with the broad-FOV (140\ensuremath{^{\circ}}\xspace) sensor specialising in targets close to the UAV ($\sim$5-8\,m), while the narrow-FOV camera could search for targets at ranges of up to 15\,m. Continued improvements to the system included aspects of the lighting configuration. Specifically, increasing the vertical separation between the rear lights and camera significantly reduced image blow-out on close objects and increased the accuracy of the Tick.
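The FOV/range trade-off follows directly from pixel density: for a fixed sensor resolution, halving the FOV roughly doubles the pixels landing on a target at a given range. A back-of-the-envelope check (the target size and pixel threshold here are assumptions for illustration):
\begin{verbatim}
import math

def pixels_on_target(fov_deg, range_m, h_res=2592, target_m=0.3):
    # Horizontal pixels across a target of size target_m at range_m,
    # for a sensor h_res pixels wide (2592 px for the 5.1 MP AR0521).
    ang = 2.0 * math.atan(target_m / (2.0 * range_m))
    return h_res * ang / math.radians(fov_deg)

# pixels_on_target(70, 15)  -> ~42 px
# pixels_on_target(140, 15) -> ~21 px; ~40 px only inside ~8 m.
# If roughly 40 px are needed for reliable detection, the wide
# camera's useful range is about half the narrow camera's,
# consistent with the 6 m and 15 m figures above.
\end{verbatim}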
\begin{table}[h!]
\begin{center}
\caption{Tick Specifications.}
\begin{tabular}{lr}
\toprule
\textbf{Property} & \textbf{Value}\\
\midrule
Camera Sensor & AR0521\\
Camera Resolution & 5.1\,MP\\
Camera Frame-rate & 3\,FPS\\
Front Camera FOV & 70\ensuremath{^{\circ}}\xspace\\
Front Camera Maximum Artefact Range & 15\,m\\
Rear Camera FOV & 140\ensuremath{^{\circ}}\xspace\\
Rear Camera Maximum Artefact Range & 6\,m\\
LED Power & 12\,W + 12\,W\\
LED Luminous Flux & 1800\,lm\\
Total System Power Consumption & 45\,W\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Communications System}
\label{sec:comms_nodes}
The communications system used in the Final Event closely matched the solution that was deployed at the Urban and Cave Events. COTS Rajant Breadcrumb nodes were used to provide a layer 2 (i.e., the data link layer in the OSI communications model~\cite{osi_zim_1980}) mesh network using wireless, dual-band (2.4\,GHz and 5.8\,GHz) 802.11 radio links. The base station and robot autonomy computers all used a standard Linux TCP/IP stack to communicate over this network. Layer 4 (transport layer) traffic was handled exclusively by a software component named Mule, described in \secref{sec:mule}.
\subsubsection{Base Station Communications}
The base station computer was connected to two Rajant Breadcrumb nodes: an ES1 and a Peregrine. Each node was mounted with antennae on dedicated masts that provided approximately 1.5\,m of ground clearance. The ES1 was connected to 4\,dBi (2.4\,GHz and 5.8\,GHz), multi-polarized, omni-directional, goose-neck style antennae whilst the Peregrine was connected to 9\,dBi (2.4\,GHz) and 12\,dBi (5.8\,GHz), directional sector antennae. The decision to use multiple node masts at the base station was made during the lead-up to the event and was motivated by the idea that the course entrance may have consisted of two or more narrow tunnels which would not be well handled by a single directional antenna. Though the actual course used a single entrance tunnel, the team decided to continue deploying both mast nodes in each run due to early results indicating a non-negligible degree of RF transparency through some of the course walls.
\subsubsection{Communications Node and Dropping Mechanism}
Whereas the earlier communications node design was a minor repackaging that incorporated a battery into the Rajant Breadcrumb ES1, the node utilised at the Final Event underwent a major redesign. The core ES1 node was supplemented by a Raspberry Pi 4, which served two functions. Firstly, it controlled deployment of the node. Rather than being statically dropped, the revised node design actively unfurls two side panels, self-righting in the process, powered by Dynamixel motors. Subsequently, the antennae unfurl to a configuration with sufficient height to provide improved communications quality, and finally the ES1 power is activated. The deployment process and design are shown in \figref{fig:comms_node}.
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=0.55\textwidth]{figures/comms_node_stages.jpg}}~
\subfloat[]{\includegraphics[width=0.4\textwidth]{figures/finals_drop_node.pdf}}
\caption{(a) Communications node packing, showing stages of deployment: (left) as dropped from the robot, (middle) after side panels unfurl, self-righting the node, and (right) after antenna deployment. (b) Block diagram of communications node design.}
\label{fig:comms_node}
\end{figure}
The Raspberry Pi 4 also provided the secondary capability of running an instance of the Mule communications software on the node. Due to re-prioritisation of tasks in the lead-up to the Final Event (again, owing to COVID-19-related travel restrictions for the team), insufficient experimentation had been conducted with this concept prior to the Final Event, so the capability was not employed. However, it has the potential to allow nodes at the boundary of communications to retain data from a robot as it moves beyond communications range.
The node dropper design also evolved for the Final Event. Lessons from testing in the cave environment showed that carrying node droppers which extruded beyond the vehicle footprint was highly undesirable. It was also found that dropping to the side of the platform was preferable over dropping in front or behind, as it inherently tended to result in node positions which did not block traversal of a passage. Accordingly, as shown in \figref{fig:annotated_robot_platforms}, four node droppers were installed on the track guards of each ATR (two on each side).
\subsubsection{UGV and UAV Communication Systems}
Each UGV was equipped with a Rajant ES1 node connected to 4\,dBi (2.4\,GHz and 5.8\,GHz), multi-polarised, omni-directional, goose-neck style antennae. Each UAV was equipped with the smaller Rajant DX2 node connected to 2.6\,dBi (5.8\,GHz only), omni-directional, lollipop style antennae due to size and weight constraints.
\subsubsection{Mule}
\label{sec:mule}
Mule was the layer 4 software component which bridged ROS topic messages between the independent ROS systems running at the base station and on each robot. Mule provided best-effort, end-to-end transport for ephemeral data such as robot status and teleoperation video, as well as disruption-tolerant, hop-by-hop transport for mission-critical data such as Wildcat frames and object detection reports. These features enabled the operator and autonomy software to use data muling as a strategy for improving exploration efficiency and overcoming robot attrition.
Improvements were made to the design of Mule for deployment at the Final Event to expose more information to the human supervisor, such as more detailed reporting of exactly what data was yet to be downloaded from a given peer, and a separation of ``synchronisation lag'' into upload and download metrics.
The capability provided by Mule was critical to the outcome achieved at the Final Event. For example, as described in \secref{sec:discussion}, in the Final Prize Run, data from a fallen Spot robot was relayed to another platform that passed nearby, and later transmitted to the base. These data-muling efforts produced the last successful artefact report in the critical final seconds of the run.
\subsubsection{Communications System Improvements}
\label{sec:comms_improvements}
Despite the core design of the communications system remaining relatively unchanged between the Urban and Final Events, significant performance improvements were attained due to the resolution of previously undiscovered integration issues.
The first of these issues was related to the active Bluetooth scanning being used for detection of the mobile phone artefact. At the Urban Event, UGVs were equipped with a UD100 Bluetooth USB adapter and software that would repeatedly perform active Bluetooth ``inquiry'' scans to elicit responses from nearby Bluetooth devices. Each of these scans involved brief transmissions on a sequence of 1\,MHz Bluetooth channels spread across the 2.4\,GHz ISM band. These transmissions were unmanaged with respect to the 20\,MHz 802.11 channel that was used by Rajant Breadcrumbs in the same 2.4\,GHz ISM band. Integration testing conducted after the Urban Event revealed that these Bluetooth transmissions were significantly degrading the performance of the Rajant mesh network. With consideration of overall system performance and time constraints, it was decided that the Bluetooth detection capability would be removed from UGVs altogether, relying on (passive) WiFi detection (for cell phones) and the visual signature of the cube artefact. A similar 2.4\,GHz interference problem was discovered with the remote control transceiver; however, this scenario was restricted to times when the handheld remote control unit was switched on and in close range of the robot (a situation that could not arise during actual competition).
The second of these issues was related to unintentional mesh network traffic originating from devices other than the base station and autonomy computers. The hardware designs of the UGVs and UAVs both contained a single Ethernet network that was used to connect the robot autonomy computer with on-board sensors, controllers, and the Rajant Breadcrumb node. Early on in the development process, the Rajant Breadcrumb nodes were configured to use a bridging mode where layer 2 packets received via the local Ethernet connection could be transported via the mesh network regardless of the source address. This configuration simplified certain aspects of system development by allowing access to network-enabled robot hardware from any computer connected to a Rajant Breadcrumb. However, this configuration left the performance of the communications system vulnerable to unexpected sources of network traffic. Such sources were inadvertently introduced on multiple occasions during development. On each occasion, a new piece of hardware had been introduced that defaulted to a communication mode that generated a high-frequency stream of packets with a layer 2 broadcast destination address. This would result in a subtle but significant degradation of the communications system performance during subsequent system tests, requiring manual traffic analysis to detect and identify the offending hardware. Prior to the Final Event, it was decided that the Rajant Breadcrumbs would be configured to use a MAC address whitelist so that only layer 2 packets originating from the base station and robot autonomy computers could be bridged over the mesh network.
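Identifying such offending hosts required manual traffic analysis. A diagnostic of the kind involved can be sketched in Python with the Scapy library (our assumption; any packet-capture tool would serve), counting layer 2 broadcast frames per source MAC over a short window:
\begin{verbatim}
# Sketch: flag hosts emitting high-rate layer 2 broadcast traffic.
# Assumes the Scapy library and capture privileges on interface eth0.
from collections import Counter
from scapy.all import sniff, Ether

BROADCAST = "ff:ff:ff:ff:ff:ff"
counts = Counter()

def tally(pkt):
    if Ether in pkt and pkt[Ether].dst == BROADCAST:
        counts[pkt[Ether].src] += 1

# Capture for 10 seconds, then report the noisiest sources.
sniff(iface="eth0", prn=tally, store=False, timeout=10)
for mac, n in counts.most_common(5):
    print(f"{mac}: {n / 10:.1f} broadcast frames/s")
\end{verbatim}
A MAC address whitelist of the kind described above then prevents any such host, known or newly introduced, from having its broadcast traffic bridged over the mesh.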
\section{Localisation and Mapping}
\label{sec:localisation_mapping}
Robust localisation and mapping are critical to downstream robotics tasks. For example, as described in the next section, robot autonomy requires localisation information and an environment map for path planning, multi-agent coordination requires knowledge of each robot's position on a shared map, and in SubT, detected artefacts must be reported with accurate locations. This section briefly describes Wildcat, our multi-agent lidar-inertial SLAM system used in the DARPA Subterranean Challenge. We refer the reader to
\cite{ramezani2022wildcat} for additional technical and implementation details. A consolidated report on the experiences of the various SubT teams with localisation and mapping can be found in \cite{ebadi_present_2022}.
\subsection{Wildcat SLAM}
\label{sec:wildcat}
A diagram of Wildcat is shown in Figure~\ref{fig:wildcat_diagram}. Wildcat has two major modules: (i) lidar-inertial odometry, and (ii) pose-graph optimisation. In the following, we briefly describe each module and present our SLAM results in the prize run of the DARPA Subterranean Challenge Final Event.
\begin{figure}[t]
\centering
\includegraphics[width=120mm]{figures/flowchart.png}
\caption{A diagram of Wildcat, our multi-agent lidar-inertial SLAM system. This diagram shows the components of Wildcat that ran on each robot (left) and those that ran on the base station (right).}
\label{fig:wildcat_diagram}
\end{figure}
Wildcat odometry is a real-time sliding-window optimisation method that fuses IMU and lidar measurements collected within a three-second window to estimate the robot trajectory at a high rate. The odometry module runs on each robot independently.
First, Wildcat generates surfels from lidar points by voxelising them and fitting an ellipsoid to the points residing in each voxel. Only those surfels that are sufficiently planar are kept.
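This surfel-extraction step can be sketched as follows (our illustrative reconstruction, not the production implementation; the voxel size, planarity threshold, and minimum point count are placeholder values):
\begin{verbatim}
# Sketch of surfel extraction: voxelise lidar points, fit an
# ellipsoid per voxel via the covariance eigendecomposition, and
# keep only sufficiently planar surfels.
import numpy as np
from collections import defaultdict

def extract_surfels(points, voxel=0.5, planarity=0.1, min_pts=6):
    buckets = defaultdict(list)
    for p in points:  # points: iterable of 3D points
        buckets[tuple(np.floor(p / voxel).astype(int))].append(p)
    surfels = []
    for pts in buckets.values():
        if len(pts) < min_pts:
            continue
        pts = np.asarray(pts)
        mean = pts.mean(axis=0)
        cov = np.cov(pts.T)
        evals, evecs = np.linalg.eigh(cov)  # ascending eigenvalues
        # Planar if the smallest axis is thin relative to the next.
        if evals[0] < planarity * evals[1]:
            normal = evecs[:, 0]  # normal of the fitted plane
            surfels.append((mean, normal, cov))
    return surfels
\end{verbatim}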
After establishing an initial set of correspondences between the surfels, Wildcat then estimates robot trajectory by minimising the point-to-plane distance between the matched surfel pairs and also the error between the predicted and collected IMU measurements (angular velocity and linear acceleration). Wildcat odometry alternates between matching surfels and estimating robot trajectory for a fixed number of iterations. Our method uses cubic B-spline interpolation to remove distortion from lidar points (caused by the motion of robot and lidar) and to efficiently fuse asynchronous measurements from lidar and IMU in the previously mentioned optimisation problem; see \cite{ramezani2022wildcat}.
Every five seconds, the locally optimised robot trajectory is used to create a local surfel map. We call each rigid local map (together with additional information such as the corresponding segment of trajectory estimate) a \emph{frame}. Frames remain rigid after creation and thus their state can be represented by one of the underlying poses. Each robot stores its own frames, as well as frames received from other robots, in a database. Frame databases are synchronised between the agents (robots and the base station) using Mule whenever peer-to-peer communication is possible (see \secref{sec:mule}); frames are suppressed from being shared if the overlap with the previously shared frame exceeds a threshold. Wildcat's pose-graph optimisation (PGO) module (also referred to as Atlas) runs separately on each agent and aims to produce an independent, globally consistent estimate of the team's map and trajectories. This module detects intra-robot and inter-robot loop closures using all available frames. Each agent then independently optimises the team's collective pose graph, whose nodes correspond to the (unknown) representative poses of frames and whose edges correspond to odometry and loop-closure measurements.
In the prize run, the average total size of frames generated by our four UGVs (introduced in~\secref{sec:UGV_platforms}) was about 21.5\,MB per robot.
Figure~\ref{fig:prizerun_multiagent_offline} shows the map created collaboratively by our robots by the end of the prize run. According to DARPA, this map has ``0\% deviation'' from the surveyed ground truth where ``deviation percentage'' is defined as the percentage of points that are further than one meter from the surveyed point cloud.\footnote{\url{https://www.youtube.com/watch?v=SyjeIGCHnrU&t=1932s}} \figref{fig:prizerun_mapping_comparison} shows the maps created by all teams by the end of the prize run. Green (resp., orange) points correspond to map points whose distance from the surveyed point cloud is less (resp., more) than one meter. Our team produced the most accurate map at the Final Event, while also having ``91\% coverage'' according to DARPA. We conducted our own quantitative analysis using the surveyed point cloud map provided by DARPA in \cite{ramezani2022wildcat}; the results show that the average distance between our map points and the nearest point in the reference map (after aligning the two maps) is about 3\,cm.
Moreover,~\figref{fig:ma_pose_graph} depicts the collective pose graph held by one of the agents during the Prize Run. In total, 3950 frames (grey nodes) were shared between the four agents, of which only 49 were root nodes (green nodes) whose poses were estimated through the pose-graph optimisation. Frames that sufficiently overlapped with a root node were treated as child nodes, held rigid relative to their root. This strategy decreases the number of root nodes and, consequently, the number of parameters in the pose-graph optimisation, allowing Wildcat to scale efficiently.
\begin{figure}[t!]
\centering
\includegraphics[width=120mm]{figures/prizerun_render_annotated2.jpg}
\caption{Multi-agent globally optimised Wildcat SLAM map from the robots deployed by team CSIRO Data61 during the 60\,min prize run. Point clouds collected by different robots are shown by colour, while white lines delineate the three course environments (i.e., urban, tunnel and cave).}
\label{fig:prizerun_multiagent_offline}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=120mm]{figures/SubT_finals_mapping_results.png}
\caption{Visual comparison of online mapping data reported to DARPA during the competition runs for the eight finalist teams during the Final Prize Run,$^*$ with points matching ground truth shown in green, and non-matching points shown in orange. \\ $^*$\url{https://youtu.be/SyjeIGCHnrU?t=1676}}
\label{fig:prizerun_mapping_comparison}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=120mm]{figures/bluey.png}
\caption{Pose graph collected from four agents, as held by one of the agents (whose location is indicated by the red and green coordinate frame) during frame sharing. The white lines represent the connectivity between child frames, whereas the green lines indicate the connectivity between root nodes.}
\label{fig:ma_pose_graph}
\end{figure}
\section{Autonomy}
\label{sec:autonomy}
Autonomy is critical in SubT due to the dual limitations of a single human supervisor and the communications challenges of underground environments. This section describes our autonomy solution. For UGVs, we first describe the local autonomy in \secref{sec:ugv_local_autonomy}, then the global autonomy in \secref{sec:ugv_global_autonomy}, and finally the multi-robot task allocation in \secref{sec:MRTA}. Subsequently, we describe the UAV autonomy in \secref{sec:uav_autonomy}.
Block diagrams of the respective UGV and UAV autonomy systems are shown in \figref{fig:autonomy_block_diagrams}. The key component in common between the two is the Wildcat SLAM system, described in \secref{sec:wildcat}.
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=0.9\textwidth]{figures/ugv_autonomy_block_diagram.pdf}}
\hfill
\subfloat[]{\includegraphics[width=0.55\textwidth]{figures/UAV_autonomy_al2.pdf}}
\caption{Block diagram of autonomy system, (a) for the UGV, and (b) for the UAV.}
\label{fig:autonomy_block_diagrams}
\end{figure}
\subsection{UGV Local Autonomy}
\label{sec:ugv_local_autonomy}
Local navigation primarily utilises the point cloud provided by the Wildcat odometry process (\secref{sec:wildcat}). This is integrated into a 3D GPU-based occupancy map through the GPU-based Occupancy Homogeneous Map (OHM) system, detailed in \cite{stepanas_ohm_2022}. The occupancy grid is populated to a minimum range of 10\,m (with discrete jumps due to region-based addressing) at a resolution of 0.1\,m. Height maps are extracted from the OHM grid. Support was included for multiple vertical layers in height maps, e.g., to support navigation up a staircase in an area where the region beneath the staircase is also visible, but this capability was not yet fully utilised by the downstream parts of the stack at the time of the competition. The height maps include awareness of the clearance height required by the respective platform, and so will not output a ground level beneath an overhang with insufficient clearance.
As described in \cite{hines_2020}, an important feature of the height map generation is identification of \textit{virtual surfaces}. These represent horizontal frontiers, where the space above has been identified as free space, but the space below is unknown. Consequently, they are surfaces that have not been directly observed, but observed data implies the potential presence of a surface beneath. The navigation stack utilises this by initially treating them as traversable, so that the agent will move toward them and either observe the surface itself, or identify the fatal incline descending from the top edge, as illustrated in \figref{fig:virtual_surface_railway}.
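In height map terms, a virtual surface arises in a column of voxels where free space has been observed directly above unknown space, with no observed surface below. A minimal sketch of this per-column classification (our simplification; the OHM implementation in \cite{stepanas_ohm_2022} operates on its GPU occupancy representation):
\begin{verbatim}
# Sketch: classify a height map cell from its voxel column.
# Each column entry is 'free', 'occupied', or 'unknown', ordered
# bottom-to-top. Simplified from the OHM height map processing.
def classify_cell(column):
    # Highest observed-occupied voxel: a directly observed surface.
    for i in reversed(range(len(column))):
        if column[i] == "occupied":
            return ("observed", i)  # surface at height index i
    # No occupied voxel: observed free space directly above unknown
    # space implies a potential surface beneath -- a virtual surface.
    for i in reversed(range(len(column))):
        if column[i] == "free" and i > 0 and column[i - 1] == "unknown":
            return ("virtual", i - 1)
    return ("unknown", None)
\end{verbatim}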
\begin{figure}[!tb]
\centering
\subfloat[]{\includegraphics[height=40mm]{figures/virtual_surface_1.png}}~
\subfloat[]{\includegraphics[height=40mm]{figures/virtual_surface_2.png}}\\
\subfloat[]{\includegraphics[height=40mm]{figures/virtual_surface_3.png}}~
\subfloat[]{\includegraphics[height=40mm]{figures/virtual_surface_4.png}}\\
\subfloat[]{\includegraphics[height=40mm]{figures/virtual_surface_5.png}}
\caption{Virtual surface processing during final circuit event: (a) A Spot robot autonomously traverses the subway platform, showing a virtual surface falling off the edge of the platform down to the tunnel. (b) As the edge is approached, the virtual surface becomes steeper. (c) This progresses until the steepness reveals the fatal cost at the edge of the obstacle. (d) The robot moves towards the front part of the platform, identifying a virtual surface above the stairs leading to the base of the tunnel. (e) The stairs are observed, revealing traversable terrain leading to the tunnel base.}
\label{fig:virtual_surface_railway}
\end{figure}
Traversability is assessed using the height map at two different ranges and rates. Local traversability analyses terrain at a shorter range (4\,m) but at a higher rate (5\,Hz), whereas global traversability analyses terrain at a longer range (6\,m) but at a lower rate (1\,Hz). The traversability analysis consists of tests on slope and steps. The slope analysis considers hypothetical (circular) robot footprints at each location in the map, and evaluates the resulting attitude. The cell is considered fatal if the slope exceeds a threshold. The step analysis searches for discrete steps exceeding a threshold in small local regions around each cell. The result of the traversability analysis is a classification of unknown (unobserved height), traversable, or fatal, along with the classification of observed or virtual provided with the height map.
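A condensed sketch of the slope and step tests follows (our illustration; the thresholds are placeholders rather than the tuned parameters, bounds checking is omitted, and the production analysis additionally propagates the unknown and virtual labels):
\begin{verbatim}
# Sketch: classify a height map cell by fitting a plane to a
# circular robot footprint (slope test) and checking local height
# discontinuities (step test). Assumes indices are in bounds.
import numpy as np

def classify(heights, cx, cy, footprint=4, step_win=1,
             max_slope_deg=30.0, max_step=0.15, cell=0.1):
    ys, xs = np.mgrid[-footprint:footprint + 1,
                      -footprint:footprint + 1]
    mask = xs**2 + ys**2 <= footprint**2       # circular footprint
    patch = heights[cy + ys[mask], cx + xs[mask]]
    # Slope test: least-squares plane fit over the footprint.
    A = np.c_[xs[mask] * cell, ys[mask] * cell, np.ones(mask.sum())]
    (a, b, _), *_ = np.linalg.lstsq(A, patch, rcond=None)
    if np.degrees(np.arctan(np.hypot(a, b))) > max_slope_deg:
        return "fatal"
    # Step test: discrete height jumps in a small local region.
    local = heights[cy - step_win:cy + step_win + 1,
                    cx - step_win:cx + step_win + 1]
    if local.max() - local.min() > max_step:
        return "fatal"
    return "traversable"
\end{verbatim}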
The main planner utilised is hybrid A*, as detailed in \cite{hines_2020}. Planning is conducted on a graph where nodes are positioned on a 3D grid with the same 0.1\,m spacing utilised in the height map, and 30\ensuremath{^{\circ}}\xspace in yaw. Edges are derived from motion primitives and costed dynamically as they are visited by A*. Costs were tuned to provide the desired balance between longer paths with gentler slopes and shorter paths with steeper slopes.
\subsubsection{Path follow}
\label{sss:path_follow}
In order to improve robot navigation performance and efficiency, and prolong the lifespan of robot hardware, improvements to the velocity command generation were deemed necessary, leading to the development of a new trajectory generation behaviour.
The new ``Path Follow'' behaviour was developed with three key requirements in mind:
\begin{itemize}
\item Generated trajectories must strictly adhere to set velocity and acceleration limits
\item Generated velocity commands must be continuous and smooth
\item Overall performance must match or exceed previous systems
\end{itemize}
The new Path Follow behaviour meets these requirements through the use of actively updated 10th-order B\'{e}zier curves to generate trajectories from the robot base link to a local goal pose. Trajectory generation is optimised such that the form of the trajectory fits the input path as closely as possible whilst also adhering to the kinematic constraints of the robot and minimising trajectory completion time.
The Path Follow behaviour makes use of a dynamic short-horizon envelope which extends from the base link of the robot platform to the nearest obstacle, defined as at least one fatal cell in the cost map. The horizon defines the pose of the local goal, used as the target for all trajectory generation. The local goal is located on the local path at its intersection with the horizon. The horizon shrinks and grows as the robot moves closer to or further from obstacles, and as it does, Path Follow dynamically adjusts the scaling of the robot kinematic limits, thus slowing the robot as it approaches obstacles. This slowing allows for more precise path tracking around obstacles, and is essential for narrow-gap navigation.
A key component of Path Follow is the generation of smooth and continuous trajectories and velocity commands. New trajectories are constructed with control nodes set such that the initial state of the trajectory matches the last robot command. This ensures a smooth, continuous kinematic command profile is sent to the robot platform whilst trajectories are constantly regenerated, and is essential for accurate path tracking.
Active trajectory regeneration is attempted at a rate of 25\,Hz. If a new active trajectory is generated, robot commands are set as $R=T(\Delta t)$, where $R$ is the generated set of robot velocity commands in a twist format and $T$ is the trajectory function with time $t$ as input. If a new trajectory cannot be generated which fits the current kinematic constraints, the existing active trajectory is iterated and robot commands are set as $R=T(t+\Delta t)$. In the rare case that trajectory regeneration fails repeatedly and a full trajectory is executed, a safety mechanism built into every generated trajectory ensures that a safe stop is commanded at its end.
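The regeneration logic can be summarised as follows (an illustrative sketch; \texttt{fit\_bezier} and the \texttt{state} object are hypothetical stand-ins for the constrained B\'{e}zier fit and behaviour state described above):
\begin{verbatim}
# Sketch of the 25 Hz regeneration loop. fit_bezier is a
# hypothetical stand-in for the constrained 10th-order Bezier fit;
# it returns a callable trajectory T(t), or None if no trajectory
# satisfying the kinematic limits can be found.
DT = 1.0 / 25.0  # control period

def control_step(state, fit_bezier):
    new_traj = fit_bezier(state.path, state.last_command,
                          state.limits)
    if new_traj is not None:
        state.traj = new_traj
        command = state.traj(DT)             # R = T(dt)
        state.t = DT
    else:
        # Could not regenerate: iterate the existing trajectory.
        command = state.traj(state.t + DT)   # R = T(t + dt)
        state.t += DT
    # Every trajectory ends in a commanded stop, so repeated
    # regeneration failure still brings the robot to a safe halt.
    state.last_command = command
    return command
\end{verbatim}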
Path Follow works in conjunction with the rest of the behaviour stack and as such has been tuned to be quite aggressive. It will attempt to follow any given path as best it can, regardless of whether the route may result in collision with obstacles. Instead of refusing such paths, Path Follow merely slows the robot to a minimum speed, continues to follow the path, and relies on other recovery behaviours, such as Decollide and Orientation Correction (described in \secref{sss:other_behaviours}), to recover the robot from any potentially dangerous collision state.
\subsubsection{Gaps planner}
\label{sss:gaps_planner}
The 0.1\,m discretisation resolution utilised in the hybrid A* planner is a significant limitation when seeking to pass through 800\,mm doors with an ATR that is 780\,mm wide.\footnote{The smaller width of the Spot robots was such that difficulties were not encountered and this behaviour was not required for that platform.} The gaps planner was developed to handle this type of case.
Initially, the development focused on reinforcement learning (RL) approaches. The method in \cite{tidd_passing_2021} was successful 93\% of the time in simulation, and 73\% of the time in on-robot tests. Despite these promising results, in the context of SubT, where the a priori unseen environment is a large part of the challenge, a conventional planning approach was preferred. The finals course demonstrated the motivation for this decision: the course contained many tunnels that were far narrower than anticipated or previously encountered. The hand-engineered gaps planner adapted well to this type of environment, whereas the RL-based method, not having been trained on similar data, would not have been expected to generalise well.
The gaps planner can be seen as an extension of hybrid A*, which integrates a continuous optimisation step into the search to allow fine repositioning of search nodes within their respective discretisation cells. The search proceeds similarly to regular A*, but when cells are visited, the optimisation step seeks to improve the continuous position and yaw of the node based on the predecessor node and the nearby obstacles. Critically, this allows search nodes to be reliably found when passing through tunnels and doorways that were within one cell's dimensions of the vehicle size (as with hybrid A*, the position discretisation was 0.1\,m, and the yaw discretisation was 30\ensuremath{^{\circ}}\xspace).
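The continuous refinement step can be pictured as a small constrained optimisation per visited node (a sketch under our assumptions; the actual objective also optimises yaw and accounts for the predecessor node):
\begin{verbatim}
# Sketch: refine a search node's continuous 2D position within its
# 0.1 m discretisation cell by nudging it away from nearby
# obstacles. Illustrative only; step sizes are placeholders.
import numpy as np

def refine_node(pos, cell_centre, obstacles, cell=0.1, steps=10):
    pos = np.asarray(pos, dtype=float)
    cell_centre = np.asarray(cell_centre, dtype=float)
    for _ in range(steps):
        grad = np.zeros(2)
        for ob in obstacles:  # obstacles: iterable of 2D points
            d = pos - ob
            r = np.linalg.norm(d)
            grad += d / (r**3 + 1e-9)  # push away from obstacle
        pos += 0.01 * grad
        # Constrain the node to remain inside its cell.
        pos = np.clip(pos, cell_centre - cell / 2,
                      cell_centre + cell / 2)
    return pos
\end{verbatim}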
In addition to the improved planner, it was also critical to raise the platform's capability to accurately execute the plans. The combination of the hardware upgrades described in \secref{sss:atr} and
the development in \secref{sss:path_follow} provided the required enhancement. Particularly important was Path Follow's adaptive speed control, which reduces velocity based on the distance to an obstacle, allowing for the most accurate control in critical circumstances.
\subsubsection{Other behaviours}
\label{sss:other_behaviours}
Specialised recovery behaviours were developed that take priority in scenarios that may place the robot in an unrecoverable state. The Orientation Correction behaviour activates if the pitch or roll of the robot may lead to the agent tipping over. The Decollide behaviour moves the robot to a nearby non-fatal region, recovering from the case where fatal cost appears within the footprint of the first search node, preventing a valid path from being generated.
For the Urban Circuit, a dedicated stair climb behaviour was developed for the Superdroid tracked platforms. This behaviour utilised extension of the robot flipper arms to extend the functional base of the robot, lengthening the lever arm required to tip the agent and preventing toppling down the stairs during ascent or descent. This behaviour could be manually activated by the operator or set to activate automatically when the agent orientation exceeded a pitch threshold. This allowed the operator to teleoperate the agent onto stairs, at which point the behaviour would take over and complete the rest of the traversal in a safe manner (demonstrated during the Urban Circuit). Work on autonomous stair detection was deprioritised, along with the Superdroids, after the addition of the Boston Dynamics Spot robots to the team. Not only were the Spots much more agile over a wider range of terrain conditions, but their ability to handle stairs was also superior to that of the tracked platforms, which are unable to perform complex manoeuvres on stairs (for example, stopping and turning). Details on stair climbing with Spot are discussed in the Spot integration section below.
\subsubsection{Spot integration}
\label{sss:spot_local_nav}
The Spot platform was a late inclusion in the team, with integration work commencing in April 2021. The platform quickly evolved into a highly capable team member, providing unique capabilities in traversing stairs and tight passages.
We found it critical to integrate Spot's internal cameras in order to address stairs and negative obstacles, since the location of our pack (see \figref{fig:annotated_robot_platforms}) provided limited visibility at steep elevation angles in front of the robot, and no visibility behind (note that the Spot robot must reverse down stairs due to its leg design). Spot's internal navigation capabilities presented many choices for integration with the autonomy stack. Spot internally generates a high resolution (0.03\,m) height map to a range of 1.9\,m based on its five depth cameras. Initially, this height map was blended with the lidar height map generated by OHM, and the plan produced by hybrid A* was truncated at the point where it departed the local map. However, this resulted in two problems, firstly relating to the plans themselves, and secondly relating to the quality of the height map.
The difficulty encountered with the first approach for path planning was that Spot's internal planner appeared to generate plans that approximated a slerp (Spherical Linear Interpolation) of the current pose and the provided goal pose, rather than constructing plans that are logically consistent with the environment. This was acceptable in more open environments, but sometimes prevented navigation in narrow corridors. Consequently, the local plan produced by hybrid A* was passed to Spot as a timed trajectory; this improved performance significantly and the trajectory was generally well-followed.
The difficulty with the height map was found when developing the capability to autonomously navigate stairs. Specifically, it occurred when the Spot commenced from the top of a platform from which stairs descended, positioned such that the stairs themselves were not visible but part of the ground plane at their base was. In this circumstance, rather than marking the unobserved region as unknown, the processing hallucinated a continuation of the ground plane up to the edge of the top platform, without stairs. This was indistinguishable from an instance of a genuine negative obstacle, and so would prevent the platform from moving toward the edge to look for stairs.
In contrast, the OHM-based virtual surface processing would continue to approach the edge until either the stairs become visible, or the free space observed implies a non-traversable slope. To exploit this capability with the Spot, the native depth cameras were integrated into OHM. The front-left, front-right and rear cameras were integrated into OHM at 5\,Hz; the lidar provided adequate visibility on the sides and the side cameras were not found to be required. Due to the focus on supplementing lidar coverage at close ranges, camera rays were truncated to 2\,m, and the resolution was decimated to match the height map resolution at that range. Online processing of this data stream and generation of virtual surfaces was possible with the GPU-based OHM implementation on the Intel NUC's integrated graphics.
Spot provides gaits for walking, crawling and stairs. The stair gait is documented as slowing speed and pitching the body to observe stairs; testing appeared to indicate that it also conditions the robot to expect flat foot holds. Difficulties with autonomous identification of stairs, due to the poor visibility from common viewing angles, led to an approach which utilised the walk gait exclusively. This was partly motivated by testing which showed that stair traversal in the walk gait was generally acceptable, whereas traversal of slopes or rough terrain of similar pitch in the stair gait was usually catastrophic. Because the Spot leg design necessitates backward descent of stairs, the path cost was tuned to penalise forward motion down slopes steeper than a given angle, producing the desired behaviour. As discussed in \secref{ss:FinalPrizeRun}, this approach was successful, and stairs were autonomously ascended and descended by Spot robots out of communications range during the Final Event. It is possible that improved stability could be obtained by switching to the crawl gait on rough terrain; experimentation with this concept is the subject of future work.
Typical battery life for the Spot was 40-45\,min, which is insufficient to last an entire run. For this reason, a ``battery return'' behaviour was incorporated, which forced the robot to navigate back to the base area when the battery percentage hit a critical threshold.
\subsection{UGV Global Autonomy}
\label{sec:ugv_global_autonomy}
The role of the global navigation system is to build an expanding map of the traversable terrain observed by all UGV agents, allowing any agent to navigate to any point observed by itself or any other agent. This is made possible by the PGO-based SLAM system described in \secref{sec:wildcat}. Separate traversability maps (submaps) are generated for each root node in the SLAM graph, and like the SLAM frames themselves, traversability submaps are shared between agents.
As described previously, global autonomy utilises the same traversability analysis as local navigation, but with maps generated with a longer range and at a lower rate. Subsequent observations which fall within the time range of the same SLAM root node are merged into the same image representation. Data with mismatching heights are handled by incorporating additional layers in the submap. The maps incorporate data for observed traversable and lethal surfaces; virtual surfaces are not incorporated into the map. Height maps and cost maps are also shared between agents; the data rate involved with these representations was found to be an order of magnitude lower than the SLAM frames. Compact graph representations of submaps are obtained by applying superpixel methods to the images, incorporating channels for the fatal traversability signal and height. Connections between submaps and submap layers are identified by finding superpixels which overlap between them. Global path planning is conducted utilising A* on the graph with nodes corresponding to superpixels. Edge costs are obtained through distances, with additional penalties based on slope and roughness (averaged over the superpixel for each node), which are designed to match the penalties used in local path planning.
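A hedged sketch of the resulting edge cost between adjacent superpixel nodes follows (the attribute names and penalty weights are illustrative, chosen to mirror the local planner's tuning rather than taken from the implementation):
\begin{verbatim}
# Sketch: cost of an edge between adjacent superpixel nodes in the
# global graph. Slope and roughness are per-superpixel averages.
import math

def edge_cost(a, b, w_slope=2.0, w_rough=1.5):
    dist = math.dist(a["centroid"], b["centroid"])
    slope = 0.5 * (a["mean_slope"] + b["mean_slope"])
    rough = 0.5 * (a["mean_roughness"] + b["mean_roughness"])
    return dist * (1.0 + w_slope * slope + w_rough * rough)
\end{verbatim}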
Dynamic obstacles present a particular challenge as paths previously observed as traversable must be updated to reflect the path that has been closed. The local navigation stack was capable of traversing extremely difficult terrain, but this sometimes took several attempts. It was important for the global map update process to be tuned accordingly. It was also made difficult because both local and global navigation were based on height maps, and thus were unable to distinguish between a changed environment and difference in observation perspective of the same, unchanged 3D structure. For example, from one perspective, we may observe the underside of a ramp crossing to an upper level, and declare an obstacle only where the clearance becomes insufficient. From another perspective, we may observe the top of the ramp, continuing up to the next level.
Because of this difficulty, the approach operated directly on the global superpixel graph. Specifically, whenever navigation failures occurred (i.e., timeouts without significant progress towards the goal), the source and destination node locations in the global graph were stored, and edges between superpixels containing those two positions were subsequently suppressed. The exception was edges which had previously been traversed by an agent. Because the cost of falsely severing the path home was high (e.g., preventing robots from returning to synchronise data), these edges were not suppressed altogether; instead, a high traversal cost was applied to them. Accordingly, any path not utilising such an edge would be preferred. In the case where the edge is still utilised, traversal failures will trigger task failures as described in \secref{sec:MRTA}, which will in turn trigger selection of exploration tasks that have potential to discover the necessary alternative routes. Overall, this approach was effective, but could take some time to resolve the correct map.
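This update rule can be summarised as follows (a sketch; the edge representation is illustrative, and the penalty value is a placeholder):
\begin{verbatim}
# Sketch of the dynamic-obstacle update on the global graph. Edges
# are keyed by superpixel id pairs; 'traversed' marks edges an agent
# has previously driven over.
TRAVERSED_PENALTY = 50.0

def on_navigation_failure(edges, u, v):
    # u, v: superpixels containing the failed plan's source and goal.
    key = frozenset((u, v))
    edge = edges.get(key)
    if edge is None:
        return
    if edge["traversed"]:
        # The path home must remain available: keep the edge, but
        # make any alternative route preferable.
        edge["cost"] *= TRAVERSED_PENALTY
    else:
        del edges[key]  # suppress the edge entirely
\end{verbatim}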
Whereas UGV exploration in earlier stages of the program was based on the 3D point cloud visibility work in \cite{williams2020exploration}, at the Final Event we utilised traversability frontiers, exploring to the boundary between observed traversable and unknown space. Frontiers were thus attached to superpixels which bordered unknown space, providing a natural clustering of frontier pixels. Again, due to the multi-agent global navigation representation, a region is only marked as a frontier if it has not been observed by any UGV.
Connections are only made between submaps that are within a local neighbourhood in the SLAM graph. Accordingly, if a region is revisited but loop closure has not occurred, the previously visited area will be explored as if it is being observed for the first time. This is a desirable behaviour, as this additional exploration provides the data required for loop closure to occur. The process could likely be made more rapid by explicitly reasoning over the exact data necessary for loop closure (e.g., active SLAM, \cite{placed2022survey}, executing actions aimed at collecting the data necessary for loop closure to occur).
Conceptually, the UAV could contribute to the UGV maps in the same way, but this was not exploited since the UAV does not run the same occupancy mapping, height mapping and traversability analysis pipeline, and the raw data is prohibitively large. %
Part of the global navigation graph from the Final Prize Run is shown in \figref{fig:topo_graph_finals}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figures/topo_graph_finals.pdf}
\caption{Portion of the topometric (global) graph constructed during the Final Prize Run. The railway platform and stair portions correspond to the traversability maps illustrated in \figref{fig:virtual_surface_railway}.}
\label{fig:topo_graph_finals}
\end{figure}
\subsection{Multi-Robot Task Allocation}
\label{sec:MRTA}
The multi-robot task allocation system provides the methodology for agents to autonomously and collaboratively agree on assignments of tasks to robots, based on a decentralised market-based process. This allows collaborative assignment of tasks to continue as the communications topology changes. For example, if two robots are deep in the course and have communication with each other but not with the base station, they can seamlessly continue negotiating task assignments.
Each agent is allocated a bundle of tasks. The metric which task allocation seeks to optimise is the total reward of the bundles, where rewards have an exponential time discount based on the expected completion time of the task. For example, if we denote the bundle for agent $a$ as $\vec{p}_a=(j_{a,1},...,j_{a,{n_a}})$, the reward of task $j$ as $c_j$, the traversal cost from task $j'$ to $j$ as $t(j',j)$, and the duration of task $j$ as $T(j)$, then the total reward is:
\begin{align}
S(\vec{p}) &= \sum_{a=1}^A\sum_{i=1}^{n_a} \lambda^{\tau^{a,i}(\vec{p}_a)}c_{j_{a,i}}
\intertext{where}
\tau^{a,i}(\vec{p}_a) &= \sum_{k=1}^i \left[ t(j_{a,k-1},j_{a,k}) + T(j_{a,k}) \right]
\end{align}
$\tau^{a,i}$ denotes the time when execution of the $i$-th task $j_{a,i}$ completes, and $t(j_{a,0},j_{a,1})$ denotes the cost of navigating from the agent's current position to the first task. The Consensus-Based Bundle Algorithm (CBBA) operates by building the bundle incrementally, bidding on the task which produces the largest increase in the bundle reward, where the reward increase is evaluated by inserting the task into the best position in the bundle.
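A minimal sketch of this bundle-building step follows (our illustration, assuming reward, traversal-cost, and duration callables for $c_j$, $t$, and $T$; the full CBBA additionally runs a consensus phase in which agents exchange and resolve bids, and enforces the duration cap discussed below):
\begin{verbatim}
# Sketch: greedy bundle construction with time-discounted rewards,
# as in CBBA's bundle-building phase. t(j1, j2) is the traversal
# cost between tasks (j1=None is the agent's current position),
# T(j) the task duration, c[j] the reward, lam < 1 the discount.
def bundle_reward(bundle, c, t, T, lam):
    total, clock, prev = 0.0, 0.0, None
    for j in bundle:
        clock += t(prev, j) + T(j)
        total += (lam ** clock) * c[j]
        prev = j
    return total

def build_bundle(tasks, c, t, T, lam, max_len):
    bundle = []
    while len(bundle) < max_len:
        base = bundle_reward(bundle, c, t, T, lam)
        best = None
        for j in tasks:
            if j in bundle:
                continue
            for i in range(len(bundle) + 1):  # best insertion slot
                trial = bundle[:i] + [j] + bundle[i:]
                gain = bundle_reward(trial, c, t, T, lam) - base
                if best is None or gain > best[0]:
                    best = (gain, trial)
        if best is None or best[0] <= 0:
            break
        bundle = best[1]
    return bundle
\end{verbatim}
Because $\lambda < 1$, the exponential discount favours completing high-reward tasks earlier in the bundle.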
The primary task utilised by the task allocator is exploration. For the exploration task, frontier nodes identified by global navigation are clustered to provide tasks consisting of frontiers that are nearby in terms of global path distance. Each agent independently generates its own set of frontiers and tasks based on traversability data received from all agents. This avoids the solution of a decentralised clustering problem (which would be greatly complicated by regular periods of disrupted communication). Duplicate assignments are avoided by treating nearby tasks belonging to different agents as equivalent for the purposes of bidding, so that assignment of an agent to a task requires outbidding another agent assigned to nearby tasks.
Tasks can be bid upon at any time, and new tasks continually arrive as agents move through the region. Each agent maintains its task bundle, which is limited to a maximum number of tasks, and a maximum expected duration. Because of the open-ended nature of exploration tasks, their duration is set such that two exploration tasks will not be added to the bundle at the same time. In order to allow the bundle to be adapted once it is full, we consider bidding actions which drop the final element from the bundle sequence and add a new element (into an optimised position, as is standard in the bidding process).
Methods were developed which estimated the reward of a frontier through bounds on the new volume that might be observed from the candidate pose. However, this was found to be counter-productive, since features such as narrow tunnels and tight doorways were of critical importance. Thus, without cognisance of this higher level semantic information, the standard reward estimates were found to be unhelpful, frequently triggering undesirable behaviour (e.g., stopping exploration of a region of interest in favour of a more open area). For this reason, exploration tasks have a fixed reward, such that selection is based purely on the path cost.
Motivated by \cite{brass2011multirobot} and by the tree-like structure of many of the environments of interest, we encourage the agents to separate by penalising the shared component of the path from each robot back to the base. In \cite{obrien_2022}, this is shown to improve performance in environments that are well approximated as trees.
As well as exploration tasks, the task allocator also supports ``return to synchronise'' tasks (i.e., return toward the base until all data is uploaded to and downloaded from the base), and ``drop node'' tasks. Synchronisation tasks benefit from multi-agent implementation so that if another agent is in a location that allows it to return more quickly and it has all of the data from the agent that needs to synchronise, it can bid on the task. Drop node tasks are manually generated, and can be executed by any agent that has a communications node available. Logic was developed to automatically deploy communications nodes based on signal strength, but it was not deployed in competition due to the risk of rendering a narrow tunnel impassable. Further details of the task allocation method can be found in \cite{obrien_2022}.
\subsection{UAV Autonomy}
\label{sec:uav_autonomy}
The UAV autonomy used in the final competition broadly follows the autonomy used in previous competitions, with improvements focused on exploration. The UAV utilises the navigation functionality commercially offered by Emesent as Autonomy Level 2, providing both local and global navigation solutions.\footnote{Emesent AL 2 (2021). https://www.emesent.io/autonomy-level-2.} The UAV features a manager node to coordinate local and global planning, and implement core behavioural primitives. The manager receives higher level tasks from the operator, such as move to these waypoints, explore, etc. It coordinates activities to achieve these tasks, and interfaces with the lower level autonomy. This modular architecture has simplified the process of continual improvement to our higher level autonomy functionality.
To ease operator load, the UAV supports four major control modes: exploration, 3D waypoints, 2D waypoints, and planar waypoints. 3D waypoints specify an exact position in space to which the UAV must move, which is useful for guiding the UAV into narrow openings where the other modes do not suffice (e.g., moving into a shed). 2D waypoints specify a location in the horizontal plane but leave height free, which is useful for general-purpose commands. Planar waypoints simply require that the UAV reach any point on a user-specified plane, without specifying where. The plane is often selected to be vertical, providing a direction of travel without the need for a precise goal. This is useful for sending the UAV to a general location (rather than just basic exploration) in a space whose approximate layout is roughly known, e.g., go 100\,m down a tunnel and then turn left at the branch. It can also serve as another form of more directed exploration.
Frontier generation and selection broadly follows that described in \cite{williams2020exploration}, with frontier selection based on a scoring function balancing frontier size, proximity, and alignment with previous exploration. Improvements focused on dealing with invalid waypoints, motivated by experiments at an indoor paintball course. The paintball course featured many small windows, narrow doorways, and other non-traversable apertures which generated a large number of frontiers that were not reachable by the UAV, causing exploration to become stuck for significant periods of time. Mitigation efforts focused on utilising traversability information provided by the planner. The planner published its search tree after every planning iteration, which the exploration code used in two ways. The first was checking whether the planner was able to find a path sufficiently deep into a given frontier. If so, the frontier was marked as reachable, and its score for selection purposes was increased, strongly biasing exploration towards known-reachable frontiers. Furthermore, the closest point to the centre of a reachable frontier was retained as a witness point. When a frontier was cleared, other frontiers that contained the witness point of the cleared frontier would be marked as reachable. This provided additional robustness when frontiers were modified at distances that exceeded the planning horizon, which was particularly important for large open spaces. An example exploration trajectory of the indoor paintball course is shown in \figref{fig:paintball_exploration}.
Previously, a frontier was only marked as unreachable if the UAV had failed to make substantial progress towards it for 20\,s. The delays caused by unreachable frontiers were reduced by identifying when the UAV was moving away from the target frontier, indicating that the frontier was unreachable. To do so, the exploration manager would compute the minimum distance from the search tree to the target frontier, and if this was below a threshold (3.2\,m), indicating that the UAV was in close proximity to the target frontier, it would compare the search tree distance to the distance from the end of the actual plan associated with the search tree to the frontier. The path end distance being significantly larger than the search tree distance (by more than 2\,m) indicated that the path directly to the frontier was blocked, and that the planner was attempting to find an alternate route. In this case, the exploration manager would mark the frontier as unreachable and exclude it, along with any other frontiers with centres inside the unreachable frontier, from consideration.
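The blocked-frontier test can be expressed compactly (a sketch using the thresholds quoted above; the point representations are illustrative):
\begin{verbatim}
# Sketch of the blocked-frontier test. search_tree and plan are
# sequences of (x, y, z) points from the planner; frontier_centre
# is the frontier's centre point.
import math

PROXIMITY = 3.2       # m: tree close enough to the frontier to judge
BLOCKED_MARGIN = 2.0  # m: plan end much further than the tree reaches

def frontier_unreachable(search_tree, plan, frontier_centre):
    d_tree = min(math.dist(n, frontier_centre) for n in search_tree)
    if d_tree > PROXIMITY:
        return False  # not close enough to the frontier to judge
    d_plan_end = math.dist(plan[-1], frontier_centre)
    # If the plan ends much further from the frontier than the
    # search tree reaches, the direct route is blocked.
    return d_plan_end - d_tree > BLOCKED_MARGIN
\end{verbatim}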
The previous iterations of the UAV exploration code ignored the global structure of the environment, using only the direct Euclidean distance between the UAV and a frontier when selecting frontiers, with obvious downsides in larger or more complex environments. For the final competition, the UAV exploration manager was enhanced to use the SLAM pose graph to reason about the global environment. When a frontier was created, it was associated with a frame in the pose graph, and the UAV was continuously associated with the currently active frame used by the SLAM module. Unfortunately, global SLAM optimisation had significant latency, often exceeding 10\,s, leading to poor associations. As a result, frontiers and the UAV were reassociated with the closest of their original frame and neighbouring frames in the pose graph. When selecting a frontier, the UAV was limited to considering frontiers that were, after reassociation, associated with frames adjacent to the UAV's current frame in the pose graph. This limited consideration to frontiers that could reasonably be expected to be reachable by the local planner. If there were no frontiers in the adjacent frames, if the UAV timed out making progress towards a frontier, or if it marked its current frontier as unreachable, the exploration manager would check whether relocation was required. Relocation was required if there were no frontiers marked as reachable associated with the current UAV frame or adjacent frames. If relocation was required, the UAV chose the frame associated with the most reachable frontiers, tie-breaking in favour of frames with more total associated frontiers. The exploration manager then used the SLAM pose graph to compute a path to the target frame, relocated there, and then restarted exploration.
\begin{figure}[t]
\centering
\includegraphics[width=100mm]{figures/paintball_exploration.png}
\caption{Trajectory of UAV (red line) performing exploration at an indoor paintball course prior to the final competition. Background colour shows height, illustrating the UAV's path through the maze-like course.}
\label{fig:paintball_exploration}
\end{figure}
\section{Perception}
\label{sec:perception}
The camera-based perception framework used in the Final Event was similar to that used in the earlier circuit events, wherein artefacts detected in the camera stream by the DeNet object detector \citep{Tychsen_Smith_2017,Tychsen_Smith_2018} were localised in 3D using lidar depth measurements. Two major additions to the framework for the Final Event were the introduction of multi-agent artefact tracking and the sending of updated information for previously detected artefacts.
The main purpose of introducing multi-agent artefact tracking was to reduce the number of artefact reports seen by the operator. For the Urban Circuit, each agent tracked artefact locations in the odometry coordinate system, and thus any drift in the odometry trajectory could result in new detections being erroneously associated with known artefacts. These errors were mitigated at the Urban Circuit by removing known artefacts from the tracker memory when they had not been detected for 30\,s or the agent was more than 30\,m away from the artefact. Unfortunately, this meant that an agent would re-detect artefacts when visiting previously explored space. Maintaining the position of artefacts in the global map alleviates the issue of drift, and thus artefacts can be remembered indefinitely. The similar problem of one or more agents visiting the same area and detecting the same artefacts can also be mitigated by using Wildcat's loop closure capability to establish correspondence between the agents' global maps (see \secref{sec:wildcat} for more details).
The updating of information for previously detected artefacts addressed a common problem arising when the artefact tracker was used to eliminate duplicate detections. In some instances, a true positive would be correctly identified by the object detector, but the operator was unable to confidently confirm the detection in the image because the artefact was either poorly illuminated or too small. For the Final Event, we introduced a policy whereby the operator GUI was updated with subsequent detections if the detection was at least 1\,m closer than a previously sent detection. The operator could then scroll through all of the artefact observations in order to more confidently confirm the detection. This functionality was easily implemented by assigning every tracked artefact a globally unique identifier and including the identifier in every sent artefact report.
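This policy reduces to a small amount of per-artefact state keyed by the unique identifier (a sketch; the report plumbing is omitted and the callback name is illustrative):
\begin{verbatim}
# Sketch: resend an artefact report only when a new detection is at
# least 1 m closer than the closest detection already sent for that
# artefact, keyed by its globally unique identifier.
MIN_IMPROVEMENT = 1.0  # metres

best_sent_range = {}   # artefact id -> range of closest sent report

def maybe_update_operator(artefact_id, detection_range, send_report):
    prev = best_sent_range.get(artefact_id)
    if prev is None or detection_range <= prev - MIN_IMPROVEMENT:
        best_sent_range[artefact_id] = detection_range
        send_report()  # callback that updates the operator GUI
\end{verbatim}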
The training dataset continued to expand after the Urban Circuit. The dataset used to train the DeNet detector model for the Final Event consisted of over 30,000 images of the nine artefacts at twelve different locations (21,000 annotated and 9,000 negative). The images were captured using a variety of mobile phones as well as the cameras used by the UGVs and UAVs.
WiFi and gas detections were presented to the user as a marker located at the position of the respective agent when detection occurred, and coloured by the strength of the detection (i.e., RSSI for WiFi, and detected density for gas). This provided the necessary information for the human supervisor to either detect and locate the respective artefact, or command a robot to collect additional information. As discussed in \secref{sec:comms_improvements}, the active scanning required for Bluetooth detection was found to cause significant communications disruption, so Bluetooth detection was disabled. Accordingly, we were reliant on WiFi detection for cell phone artefacts (a visual detector was trained, but in practice due to the small size and indistinct appearance, it was rarely detected), and visual detection for the cube artefact.
\section{Human Robot Teaming}
\label{sec:human_robot_teaming}
The role of the human supervisor was to provide high level guidance to the robots while assessing incoming object detections and sending reports to the DARPA server. As the number of platforms increases, the bandwidth for a single human supervisor to manage individual agents becomes limited.
Previous experience showed the value of providing the human supervisor with the full range of control over the robots. While the dominant mode of operation was autonomous, fallback modes including waypoint navigation (or, more generally, missions consisting of scripted sequences of operations) and teleoperation proved useful in unexpected situations. With the focus on autonomous operation, interfaces were provided to permit entry of high-level guidance in the form of prioritisation regions. These were specified geometrically, altering task priorities either within a region, or for any task downstream of the region in the shortest path tree commencing from the base location; examples are illustrated in \figref{fig:gui}. The latter graph-based priority regions were found to be a particularly valuable improvement, as they allowed prioritisation of a region of unknown shape and extent lying beyond a junction.
A multimodal task-based graphical user interface (GUI) enabled the human supervisor to efficiently assess the status of each platform at a glance and provide mixed-level commands as needed (from teleoperation to fully autonomous exploration). \cite{chen_multimodal_2022} describes details of the user interface employed at the Final Event.
The operator interface was separated into two distinct windows: an artefact review window for efficiently assessing RGB images from detected objects and their localisation, and an operations window for interfacing with the robots. The example in \figref{fig:gui} shows the operations window for interfacing with three robots in autonomous exploration, illustrating prioritisation regions, and interactive task markers, which allow the operator to manually assign or cancel individual tasks. The human supervisor can assess several components of the robot health by glancing at the coloured octagonal ring around the robot markers, including the communication rate, percentage of data missing from the ground station, mission state, and any errors. The robot markers are a depiction of the robot type, and for the ATRs, display the number of remaining communications nodes and the UAV launch state.
The artefact review window presents the operator with both a list of objects and a map view showing their locations. The operator can quickly scan through new detections and either save or reject them. Gas and WiFi detections are illustrated by markers showing the detection location, with opacity indicating the concentration and signal strength respectively. In this case, the operator infers and indicates the source location on the map.
\begin{figure}
\includegraphics[trim=15cm 0cm 0cm 0cm, clip,width=\textwidth]{figures/gui_overview_sim_corrected_color.png}
\caption{Example of the map-based GUI in the operations user interface. Interactive ``lollipop'' markers enable the operator to manually assign or cancel tasks. The box in the lower left of the image defines a geometric prioritisation region ``Task: r3/3'' for robot \textbf{r3}, and ``Task: r1/2'' defines a graph-based priority region for robot \textbf{r1} (where the affected part of the graph is shown with purple shading). The octagon indicator surrounding the robot shows various aspects of status including SLAM, navigation, communications and tasking.}
\label{fig:gui}
\end{figure}
\section{Results at the Final Circuit Event}
\label{sec:discussion}
The Final Event was held at the Louisville Mega Cavern in Kentucky, USA, on 21-24 September 2021, and consisted of two preliminary rounds (30\,min runs conducted on 21 and 22 September) and a final prize round (a 60\,min run conducted on 23 September). Team CSIRO Data61's objectives for the preliminary runs were to maximise information gained from the course and ensure each platform was fielded before the prize run while minimising risk to hardware. Without a full set of either human or robot team members (due to COVID travel restrictions), the robot roster for the three circuits faced in Louisville was carefully deliberated. Any post-run repairs could only be performed by a skeleton crew with over-the-shoulder support from experts in Australia using telepresence robots.
Based on extensive testing, the ATR robots (Rat and Bear) were trusted as exceptionally robust in a wide range of unstructured conditions, while the Spot platforms (Bluey and Bingo) had greater strengths in terrain coverage, and were particularly critical in urban areas (for example with narrow stairs). The aerial vehicles (H1 and H2) were reserved for open locations in the course with high ceilings. These preconceptions guided strategic planning for the preliminary runs. For each preliminary run, both ATRs would enter the course carrying deployable communications nodes, alongside a single Spot. For the first preliminary run, a single UAV was carried by one of the ATRs; in the second preliminary run, a UAV was attached to each ATR. This roster allowed for redundancy: in the event of misfortune with a Spot or UAV, a set of platforms would remain available for the final prize run.
The Spot robots were generally more agile over a wider range of terrain conditions than the large ATRs. The strategy that evolved through testing was to send in a Spot robot first, followed by an ATR to bridge communications. Spot robots were able to autonomously traverse stairs, favouring urban settings, whereas the ATRs proved their robustness in challenging cave conditions. The confidence of the human supervisor in the abilities and limitations of each platform was cemented through extensive testing (based primarily on weekly test sessions at the CSIRO QCAT site, which incorporated urban industrial regions, a terrain park, and a synthetic tunnel environment shown in \figref{fig:QCAT_tunnel}).
\begin{figure}[!b]
\centering
\subfloat[]{\includegraphics[height=3cm]{figures/QCAT_Tunnel.jpg}}~
\subfloat[]{\includegraphics[height=3cm]{figures/Tunnel_branch.jpg}}~
\subfloat[]{\includegraphics[height=3cm]{figures/Tunnel_cathedral.jpg}}
\caption{An aerial view of the shade-cloth tunnel with an overall length of over 300\,m built at CSIRO's QCAT site in Pullenvale, QLD, Australia (a), an inside view (b) and stairs and mezzanines built inside the tunnel (c).}
\label{fig:QCAT_tunnel}
\end{figure}
\subsection{Preliminary Run 1}
\label{ss:Prelim1}
The agents sent into the first preliminary run were Rat (ATR) with marsupial UAV platform H2, Bear (ATR), and Bluey (Spot). Bluey was the first robot sent into the course, and upon confirming three distinct environment types (urban, tunnel, and cave, see \figref{fig:prizerun_multiagent_offline}), the supervisor elected to send Bluey to explore the urban section. Rat was sent into the tunnel environment, and Bear into the cave section. \figref{fig:prelim1_results} shows the course traversal of each robot (\figref{fig:prelim1_results}a), and the object class and location of the correct reports (\figref{fig:prelim1_results}b).
Communication nodes were placed at the entrances to the three sections; however, Bluey quickly lost communication with the ground station while exploring the urban section. Connection was maintained between the ground station and the ATRs throughout the run. A connection was established between Rat and Bluey late in the run, enabling map information to be recovered at the ground station. The recovered information revealed that Bluey had fallen while traversing train tracks in the urban environment (within the first 10\,min of the run).
A total of seven objects were correctly detected and reported in the 30\,min run; five were detected visually and two were WiFi detections of cellphones. Another three were successfully detected but either did not make their way back to the ground station or were not displayed to the human supervisor in time before the end of the run. One artefact was detected but not reported, as its probability was below the reporting threshold. Post-run analysis revealed a fault on the 2.4\,GHz channel of one of the communication nodes at the ground station; this node was replaced before preliminary run 2.
The UGVs attempted to send a total of 188 visual artefact reports to the operator (6.26 reports per minute across all UGVs). Post-run analysis of the sent reports showed that 13 were true positives and the remaining 175 were false positives. Approximately one quarter of the false positives were due to artefacts being detected on the robot itself; this image mask configuration error was easily corrected in time for the second preliminary run.
\figref{fig:prelim1_results} highlights that two or more robots explored the area near the start of the course as well as a passage which connects the cave and tunnel circuits. Analysis of the new artefact sharing capability of the artefact tracker showed that 13 artefacts (true and false positives) were detected by more than one robot. This feature reduced the number of artefact reports sent to the operator for inspection by 17 reports.
\begin{figure}[h!tb]
\centering
\subfloat[Robot Path]{\includegraphics[width=0.6\columnwidth]{figures/prelim1_coverage.png}}
\subfloat[Object Reports]{\includegraphics[width=0.4\columnwidth]{figures/prelim1_objects.png}}
\caption{Course coverage and successful object reports for preliminary run 1. The starting area is located at the top left of each image. (a) shows robot paths by colour for Spot robot Bluey, ATRs Rat and Bear, and UAV H2 (which was not launched). (b) shows the final map and object reports, based on information from the base station. The various artefact detections are shown as small dots in corresponding colours, while successfully scored artefacts are shown as large spheres, marked with the artefact time and scoring time (in seconds).
}
\label{fig:prelim1_results}
\end{figure}
\subsection{Preliminary Run 2}
\label{ss:Prelim2}
The agents sent into the second preliminary run were Rat and Bear (ATRs), both with marsupial UAVs (H1 and H2), and Bingo (Spot). Bingo entered first, and was sent into the urban environment. The human supervisor was unable to prevent Bingo from repeating Bluey's fate from the first run: the robot quickly lost communications with the ground station and slipped on the train tracks early in the run ($\sim$8\,min). Data from this robot was not recovered during the run.
Rat was tasked to operate in the cave environment, and a small cavern was identified for launching a UAV. The UAV (H2) was launched successfully and thoroughly explored the cavern. Rat became immobilised soon after launch with a suspected motor fault and remained in the cavern for the remainder of the run.
Bear was sent after Bingo into the urban section in an attempt to improve the communications backbone; however, a dynamic obstacle had activated as Bingo entered a warehouse area, preventing Bear from pursuing Bingo directly. Bear dropped a communications node at a junction expected to help with communications to Bingo, but this was not successful.
Bear explored several small office rooms before discovering an alternative entrance to the warehouse area. A second UAV (H1) was successfully launched in this area; however, the UAV crashed spectacularly after ingesting debris soon after launch. Bear was the last active robot, and the final minutes were spent attempting to recover data from the lost Spot via the tunnel environment, as in the previous run. Time ran out before this was achieved.
A total of eight artefacts were successfully reported during the 30\,min run. Seven of the artefacts were detected visually and the remaining artefact was detected using WiFi. Importantly, every artefact the robots observed was detected. The UGVs sent a total of 106 reports for the run (an average of 3.53 reports per minute). Post-run analysis of the sent reports showed that 11 were true positives and the remaining 95 were false positives. Sharing artefact reports with all agents reduced the number of artefact reports sent to the operator for inspection by 5 reports.
\figref{fig:prelim2_results} shows the course traversal and correct object reports from the second preliminary run. \figref{fig:uav_detections} shows examples of object detections from the UAVs. After the run, several repairs were conducted, including replacing an ATR motor and fixing antennas on damaged communication nodes. Repairs were performed with long-distance support from Australia through ``Double 3'' telepresence robots from Double Robotics.
\begin{figure}[ht]
\centering
\subfloat[Robot Paths]{\includegraphics[width=0.65\columnwidth]{figures/prelim2_coverage2.png}}
\subfloat[Object Reports]{\includegraphics[width=0.35\columnwidth]{figures/prelim2_objects.png}}
\caption{Course coverage and successful object reports for preliminary run 2. The starting area is located at the top left of each image. (a) shows robot paths by colour for Spot robot Bingo, ATRs Rat and Bear, and UAVs H1 and H2. (b) shows the final map and object reports, based on information from the base station. The various artefact detections are shown as small dots in corresponding colours, while successfully scored artefacts are shown as large spheres, marked with the artefact time and scoring time (in seconds).}
\label{fig:prelim2_results}
\end{figure}
\begin{figure}
\captionsetup[subfloat]{labelformat=empty}
\centering
\subfloat[]{\includegraphics[height=6cm]{figures/drone_detection1.png}}
\hfill
\subfloat[]{\includegraphics[height=6cm]{figures/drone_detection2.png}}
\caption{Example object detections from the UAVs.}
\label{fig:uav_detections}
\end{figure}
\begin{figure}
\captionsetup[subfloat]{labelformat=empty}
\centering
\subfloat[]{\includegraphics[trim=0cm 1.1cm 3cm 6cm, clip, height=2.5cm]{figures/detection_backpack.png}}
\subfloat[]{\includegraphics[trim=0cm 1.1cm 3cm 6cm, clip, height=2.5cm]{figures/detection_rope.png}}
\subfloat[]{\includegraphics[trim=0cm 1.1cm 3cm 6cm, clip, height=2.5cm]{figures/detection_helmet.png}}
\hfill
\subfloat[]{\includegraphics[trim=0cm 1.1cm 3cm 6cm, clip, height=2.5cm]{figures/detection_drill.png}}
\subfloat[]{\includegraphics[trim=0cm 1.1cm 3cm 6cm, clip, height=2.5cm]{figures/detection_fire_extinguisher.png}}
\subfloat[]{\includegraphics[trim=0cm 1.1cm 3cm 6cm, clip, height=2.5cm]{figures/detection_vent.png}}
\caption{Example object detections from the UGVs.}
\label{fig:ugv_detections}
\end{figure}
\subsection{Final Prize Run}
\label{ss:FinalPrizeRun}
All available robots were taken into the final prize run, including two ATRs (Rat and Bear), two Spot robots (Bluey and Bingo), and two UAVs (H1 and H2). Both Spot robots were sent in first, one into the urban environment (Bluey), and one into the cave environment (Bingo). Both ATRs placed communication nodes at the intersection of the three environments, then Rat was sent into the tunnel section, and Bear into the cave.
Bingo rapidly progressed through the cave environment, and quickly disappeared from communications. The human supervisor carefully de-prioritised the region leading to the train tracks in the urban section, preventing Bluey from repeating the mistake of the two preliminary runs. Bluey lost communications as it headed towards the train platform at the end of the urban section.
Rat explored the tunnel section, making slow progress over tangled fire hose. The robot dropped a communications node at a junction, and made an unsafe turn over the rail of a mine track, resulting in the de-tracking and immobilisation of the robot ($\sim$28\,min into the run). The tunnel was narrow and the ceiling height too low to launch the UAV without risking catastrophic failure. Bear made slow progress through the cave environment, successfully detecting an object in the small cavern (previously explored in preliminary run 2), then returning to the main channel of the cave to follow Bingo's path.
Approximately 30\,min into the run, communications with Bluey were re-established. While out of communications, Bluey had climbed stairs to reach the subway platform, explored the top of the platform, descended stairs to the subway tunnel, and exited towards the tunnel course. In the process, Bluey briefly reconnected with Bingo, and was subsequently able to mule part of Bingo's data back to the ground station. Bingo had located a large cavern at the end of the cave section, and appeared to have fallen (post-run analysis confirmed Bingo fell at 22\,min). The human supervisor directed Bluey towards the large cavern, and prioritised getting Bear to this location with a UAV.
Bear was unable to make progress through the cave environment with the UAV mounted on its back due to the low clearance of the tunnel. There appeared to be a path leading to the large cavern through the tunnel section, and the human supervisor directed Bear to backtrack to this location. However, a dynamic obstacle had closed the passage behind Rat, and the only alternative route was blocked by the immobilised Rat. The human supervisor therefore had to remove the UAV to enable traversal of the narrow cave and recover data from the Spots. With time running out, the human supervisor relied heavily on teleoperation for faster traversal, ensuring communications were maintained (by dropping communication nodes along the way). Bear was teleoperated to the smaller cavern to launch the UAV, then made quick progress through the cave section towards the large cavern. In the final minutes, Bear established a connection with Bluey (which had fallen at $\sim$40\,min), and the final scored object detection came through to the ground station (reported with 25\,s remaining).
A total of 23 objects were successfully detected. Another four objects were detected but not reported. Of these, one detection did not make it back to the ground station in time, and three non-visual detections (gas and two cellphones) were difficult to localise given the high load on the human supervisor at the end of the run (Table~\ref{tab:PrizeRunDetection}). \figref{fig:prize_run_results} shows the course traversal for each ground platform and the scored object reports. Table~\ref{tab:PrizeRunDetectionByPlatform} shows the percentage of object reports by platform type (ATR, Spot, UAV). These results indicate the Spot robots were slightly better at detecting objects than the ATRs (more detections for each metre of terrain covered). This can be explained by the increased field of view from an elevated perception pack and reduced occlusion compared with the UAV-carrying ATRs. The UAVs successfully detected objects in the preliminary runs but were not utilised for object detection in the final prize run, as the ideal launch locations could not be reached by the UAV-carrying ATRs. The UAV that was launched in the prize run did not move from its hover position and did not report any detections; as previously described, it was launched in an already-explored cavern for the purpose of providing additional clearance for the ATR.
\begin{figure}[b!]
\centering
\subfloat[Robot Paths]{\includegraphics[width=0.85\columnwidth]{figures/prize_run_coverage2.png}}
\hfill
\subfloat[Object Reports]{\includegraphics[width=0.85\columnwidth]{figures/prize_run_objects.png}}
\caption{Course coverage and successful object reports for the final prize run. The starting area is located at the top left of each image. (a) shows robot paths by colour for Spot robots Bluey and Bingo, ATRs Rat and Bear, and UAVs H1 and H2 (the latter of which was not launched). (b) shows the final map and object reports, based on information from the base station. The various artefact detections are shown as small dots in corresponding colours, while successfully scored artefacts are shown as large spheres, marked with the artefact time and scoring time (in seconds).}
\label{fig:prize_run_results}
\end{figure}
The mode of operation of each robot as a function of time is shown in \figref{fig:autonomy_mode_prize_run}, illustrating the use of the directed autonomy functionality to achieve the results described above. A common pattern is the use of waypoints to position a robot in the desired area, followed by autonomous operation, either in the default mode (without prioritisation) or using prioritisation to ensure continued progress in the desired direction. For example, Bingo (r5) shows extensive use of prioritisation regions to achieve the desired result, while Bluey (r2) shows both task prioritisation regions and manual task assignment to direct it to the region of interest. Once Bear (r3) was the only functional agent within communications range, it was controlled using waypoints and later, in the time-critical period, using teleoperation. The percentage of each robot's run under each mode of operation is provided in Table \ref{table:final_run_modes}, showing the degree of operator intervention for each agent as a percentage of that robot's total operational time.
\begin{figure}[tb]
\centering
\includegraphics[width=0.75\columnwidth]{figures/autonomy_mode_prize_run.pdf}
\caption{Mode of operation of the autonomy system by time for each UGV during the prize run. Teleoperation denotes direct joystick control, whereas Waypoint Command denotes navigation to a specified waypoint. Drop Comm Node Command denotes an operator command to navigate to a specified location and drop a communications node. Default Task denotes the regular autonomous mode of task allocation (without prioritisation), whereas Manual Task denotes an operator override to execute a specific task, and Prioritized Task denotes task allocation where the selected task was in a region that had been prioritised by the operator.}
\label{fig:autonomy_mode_prize_run}
\end{figure}
\begin{table}[!ht]
\centering
\caption{Percentage of the prize run in each mode of operation for each platform, summarising the data in \figref{fig:autonomy_mode_prize_run}; see that figure's caption for a description of the robot modes.}
\label{table:final_run_modes}
\begin{tabular}{l c c c c}
\toprule
& Rat-r1 & Bear-r3 & Bluey-r2 & Bingo-r5 \\
& (ATR) & (ATR) & (Spot) & (Spot) \\
\midrule
Waypoint Command & 15.8\% & 54.1\% & 11.5\% & 15.3\%\\
Drop Comm Node Command & 1.7\% & 1.3\% & 0.0\% & 0.0\%\\
Teleoperation & 12.5\% & 16.7\% & 1.6\% & 1.3\%\\
Prioritized Task & 0.0\% & 0.2\% & 31.3\% & 72.9\%\\
Manual Task & 4.1\% & 15.2\% & 41.5\% & 0.0\%\\
Default Task & 65.9\% & 12.5\% & 14.1\% & 10.5\%\\
\bottomrule
\end{tabular}
\end{table}
The distribution of time that the ATRs spent using each planner, i.e., the default hybrid A* planner or the gaps planner of Section \ref{sss:gaps_planner}, is shown in Table \ref{table:final_run_planner}. The Spot robots are not shown as they only used hybrid A* (with their smaller agent dimensions, the gaps planner was not found to be necessary). The table shows that the ATRs utilised the gaps planner for 21.3\% of the time, or 22.9\% of the time where a plan was active. This was much larger than expected, and is due to the extensive narrow tunnels in the course. In most test environments, the gaps planner only activated in order to pass through narrow doorways; in this course, it enabled (slow) progress across parts of the course that would otherwise have been impassable.
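For clarity, the active-plan figure follows directly from the tabulated values:
\begin{equation*}
\frac{21.3\%}{100\% - 7.1\%} \approx 22.9\% .
\end{equation*}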
\begin{table}[!ht]
\centering
\caption{Percentage of the prize run where ATR robots utilized each planner (out of time when path follow behaviour was active). ``None'' indicates that no planner was active, i.e., either the planner(s) were still generating a plan or no feasible plan was found.}
\label{table:final_run_planner}
\begin{tabular}{l c c c}
\toprule
& Rat-r1 (ATR) & Bear-r3 (ATR) & Overall ATR\\ [0.5ex]
\midrule
Hybrid A* & 78.0\% & 68.2\% & 71.6\% \\
Gaps & 15.1\% & 24.6\% & 21.3\% \\
None & 6.9\% & 7.2\% & 7.1\% \\
\bottomrule
\end{tabular}
\end{table}
The distribution of time that the robots spent using each behaviour is shown in Table \ref{table:final_run_behaviour}, focusing on autonomous motion-based behaviours, i.e., excluding teleoperation and stopped behaviours. As expected, the decollide behaviour was utilized for a small proportion of the time, though its use re-enables the path planning behaviour. Orientation correction exists to prevent robot tipping, and saw a single activation for a fraction of a second.
\begin{table}[!ht]
\centering
\caption{Percentage of the prize run where ATR robots utilized each behaviour (focusing on autonomous motion behaviours, excluding stopped and teleoperation-related behaviours). *Path following is achieved through different behaviours on ATR and Spot, utilizing the method of Section \ref{sss:path_follow} for ATR, Section \ref{sss:spot_local_nav} for Spot.}
\label{table:final_run_behaviour}
\begin{tabular}{lccccc}
\toprule
& Rat-r1 & Bear-r3 & Bluey-r2 & Bingo-r5 & Overall \\
& (ATR) & (ATR) & (Spot) & (Spot) & \\
\midrule
Orient correction & 0\% & 0.002\% & 0\% & 0\% & 0.0008\% \\
Path follow* & 96.0\% & 93.3\% & 98.2\% & 97.4\% & 95.7\% \\
Decollide & 4.0\% & 6.7\% & 1.8\% & 2.6\% & 4.3\% \\
\bottomrule
\end{tabular}
\end{table}
The final pose graph for one agent (Bluey-r2) is shown in \figref{fig:ma_pose_graph}. The total number of loop closure edges in this graph is three. \figref{fig:bluey_loop_closure_time} shows the number of loop closure edges in the pose graph as a function of time (i.e., the number of edges that are not implied by odometry). When a loop closure occurs, there are often additional transient edges, which subsequently disappear as the graph is simplified, demoting some of the root nodes to become child nodes. The figure also shows that additional loop closures occurred after the end of the run (i.e., after the 60\,min mark); these occurred as data from Bear (r3) continued to be relayed through a slow (due to low SNR) communications link.
\begin{figure}[tb]
\centering
\includegraphics[width=0.75\columnwidth]{figures/bluey_num_loop_closure.pdf}
\caption{Number of loop closure edges (i.e., edges not implied by odometry) as a function of time during the final prize run. Note that the run ends at 60\,min; loop closures after that point are due to the additional data subsequently relayed.}
\label{fig:bluey_loop_closure_time}
\end{figure}
The agents sent a total of 203 artefact reports to the operator during the one hour mission (an average of 3.38 reports per minute). Of the 203 reports, 29 were true positives and the remaining 174 were false positives. An analysis of the new multi-agent artefact tracking system showed that multiple agents had detected the same artefact (true positive or false positive) at 16 different locations, and had reduced the number of duplicate artefact observations seen by the operator by 27 reports. \figref{fig:ugv_detections} shows examples of object detections from the UGVs.
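As a quick arithmetic cross-check of these figures (a throwaway sketch using only the numbers quoted above):
\begin{verbatim}
# Cross-check of the prize-run reporting statistics.
reports, minutes = 203, 60
true_positives, dedup_saved = 29, 27

rate = reports / minutes                 # 3.38 reports per minute
precision = true_positives / reports     # ~14.3% of sent reports were real
without_sharing = reports + dedup_saved  # 230 reports absent artefact sharing
print(f"{rate:.2f} reports/min, precision {precision:.1%}, "
      f"{without_sharing} reports without multi-agent tracking")
\end{verbatim}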
\begin{table}[th]
\centering
\caption{Summary of detection results in final prize run. *Cube counted as visual, cell phone as non-visual. $\dagger$Data not communicated to base in time. $\ddagger$Non-visual detection but too sparse/non-specific.}
\begin{tabular}{lcc}
\toprule
& Visual & Non-visual* \\
\midrule
Detected and scored & 20 & 3 \\
Detected but not reported/scored & 1$\dagger$ & 4$\ddagger$ \\
Passed but not detected & 3 & 0 \\
Did not pass & 9 & 0\\
\bottomrule
\end{tabular}
\label{tab:PrizeRunDetection}
\end{table}
\begin{table}[th]
\centering
\caption{Percentage of object detections and terrain coverage by robot type in the final prize run.}
\begin{tabular}{lccc}
\toprule
& Total Reports & True Positive & Terrain Coverage\\
\midrule
ATR & 43\% & 44\% & 49\% \\
Spot & 57\% & 55\% & 51\%\\
UAV & 0\% & 0\% & 0\%\\
\bottomrule
\end{tabular}
\label{tab:PrizeRunDetectionByPlatform}
\end{table}
\figref{fig:mule_load} shows the cumulative data generated by each of the autonomy processes of each ground agent during the final prize run. The figure shows that by far the largest contributor is the SLAM odometry frames required to permit each agent to build a unified map. Cost map bundles are the second largest contributor; these allow building of the unified traversability map on top of the SLAM solution. Object detections utilise a similar amount of data, enabled by the tracking methods described in Section \ref{sec:perception}. Task definitions and bundles, together with sync auctions, represent data used by the task allocator to achieve a common understanding of the task set and robot-task assignments. Note that Bingo (r5) fell at around 20\,min, but due to its resting position, odometry frames did not appear sufficiently similar to be suppressed, and thus the agent continued to generate significant odometry data. Conversely, Bluey (r2) fell at around 40\,min and generated little odometry data thereafter.
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\columnwidth]{figures/mule_load.png}
\caption{Cumulative data load (megabytes) for each of the ground robots during the 60 minute final prize run.}
\label{fig:mule_load}
\end{figure}
Table~\ref{tab:PrizeRunAvgCPUPack} shows the average CPU usage for each of the processes on the perception pack, while Table~\ref{tab:PrizeRunAvgCPUAutonomy} shows the average CPU usage for processes on the autonomy computer. SLAM, image processing, and object detection use similar CPU resources, although the latter two also use GPU resources. The heaviest CPU usage on the autonomy computer is OHM, which utilises the GPU for ray tracing and the CPU for data pre-processing and height map generation. Differences between ATRs and Spots are caused by Spot's different set of behaviours (e.g., passing a trajectory for Spot to follow rather than using path follow), higher speed, greater use of autonomous exploration (and hence task allocation), and processor differences.
As reported by~\cite{chung_into_2023}, Team CSIRO Data61 excelled in the final prize run on several other relevant evaluation metrics, achieving the lowest map deviation, the greatest map coverage, the highest report success rate (scored/submitted), and the most accurate report with the smallest detection error (metres from ground truth). We were also the fastest team to enter the course with a robot (measured in seconds from run start).
\begin{table}[th]
\centering
\caption{Average percentage of CPU usage on the perception pack in the final prize run (where 100\% denotes utilization of a full virtual core). Image processing encompasses image acquisition, rectification, and recording, and likewise lidar/IMU encompasses the respective signal acquisition and recording.}
\begin{tabular}{lcc}
\toprule
& CPU usage (ATR) & CPU usage (Spot) \\
\midrule
SLAM & 136.5\% & 158.5\% \\
Image processing & 135.5\% & 140.5\% \\
Object detection & 127\% & 133.5\% \\
Lidar/IMU & 73\% & 72.5\% \\
WiFi detection & 13\% & 14\%\\
\bottomrule
\end{tabular}
\label{tab:PrizeRunAvgCPUPack}
\end{table}
\begin{table}[th]
\centering
\caption{Average percentage of CPU usage on the autonomy computer in the final prize run (where 100\% denotes utilization of a full virtual core).}
\begin{tabular}{lcc}
\toprule
& CPU usage (ATR) & CPU usage (Spot) \\
\midrule
OHM (occupancy mapping) & 117.0\% & 112.4\% \\
Cost map generation & 24.3\% & 27.2\% \\
Behaviours & 14.4\% & 7.4\% \\
Hybrid A* planner & 6.2\% & 12.6\% \\
Gaps planner & 22.0\% & NA \\
Global mapping and planning & 26.3\% & 29.0\% \\
Exploration & 6.0\% & 14.9\% \\
Task allocation & 5.1\% & 10.8\% \\
Communications & 2.0\% & 1.8\% \\
External interfaces & 40.5\% & 64.0\% \\
Recording & 47.4\% & 20.2\% \\
\bottomrule
\end{tabular}
\label{tab:PrizeRunAvgCPUAutonomy}
\end{table}
\section{Lessons Learned}
\label{sec:Lessons}
The intensive development over the SubT program has provided us with a number of useful insights about how to conduct an activity of such scale. Some of these reflect things that our team did well, while others are hard-learned lessons where in hindsight we see the need to do things differently. In this section, we aim to chronicle some of these.
\subsection{Development and Testing}
Prior to the tunnel circuit, a synthetic tunnel environment was constructed on site, as illustrated in \figref{fig:QCAT_tunnel}. Over time, this was expanded to include stairs, mezzanines, and a terrain park. Testing initially focused on this environment, but grew to incorporate as many elements as we could recreate on site, for example, incorporating a long traverse from the tunnel to industrial warehouse regions. In the end, the team's strengths and weaknesses reflected the environments to which we had regular access. We did not have regular access to a representative underground communication environment, which made features such as autonomous communications-node dropping difficult to develop and validate (thus this task remained manual).
Throughout the program, we maintained a regular cadence with weekly integration testing. These test activities served several functions: providing the team with a holistic view of where each person's work fit into the capability as a whole, enabling clear prioritisation of work by regularly demonstrating the significance of different issues, and quickly revealing problems that arise when integrating work from multiple developers. While this process was invaluable in the lead-up to challenge events, during other development periods some team members found it limiting due to the time occupied by the test itself, as well as post-test analysis of results. Subsequently, in follow-on work, we have dropped back to fortnightly tests.
The weekly test regime greatly clarified the robustness requirements for agents. Platforms were run for many hundreds of hours, and the need to address issues arising from intermittent failures was highlighted by the impact they had on the overall test conduct and consequent team efficiency.
The team benefited from high standards in software development, including the use of continuous integration servers and peer review through enforced pull requests. The high-quality Gazebo-based simulation environment was critical to development, and productivity was noticeably lower on features that were not adequately modelled in simulation and instead required extensive on-robot testing.
Due to the aggressive development schedule, it was regularly the case that hardware was not complete across the full robot fleet until shortly before each event. This last-minute scale-up of the robot fleet led to a range of issues. Again, this can be viewed as being related to limitations in simulation. For example, communications were not well-modelled in simulation, so extensive difficulties were experienced when the fleet was scaled up towards the end of the campaign, producing higher traffic and more complications unique to each platform class. Similarly, the computational limitations of the simulation environment did not support testing of the full fleet size on available hardware; work has since been conducted to enable the use of parallel computing environments in simulation, permitting greater scaling.
\subsection{Platforms}
Reflecting on the progression of our team's platforms, a key strength of our approach was a willingness to pivot rapidly, embracing opportunities to leverage developments in commercial offerings. Our original concept of operations centred around a bespoke hexapod design \citep{steindl_2020}, with the goal of providing the ability to navigate extreme terrain. This concept was adapted based on two major lessons. Firstly, the surprising capability of the tracked BIA5 OzBot ATR platform on rough terrain (e.g., slopes up to 60\ensuremath{^{\circ}}\xspace) significantly changed our view of the trade-off between platform types. Secondly, the engineering effort involved in developing a platform to the point where it has sufficient robustness to be a viable candidate in the challenge context was difficult to sustain under the resource constraints. As new, commercially available platforms emerged with the benefit of far greater engineering investments, the cost/benefit of bespoke development became less compelling.
Another aspect of our team's experience with platforms was the significant engineering effort required to adapt commercial platforms intended for teleoperation to robust autonomous operation. As described in \secref{sss:atr}, autonomous systems sometimes exerted control outside the designers' expectations, which led to outcomes such as motor burnout, and additionally, stock control systems made precise motion difficult. This was experienced with a range of wheeled and tracked platforms.
Through the duration of the program, the capability of commercial quadruped platforms has also increased greatly. As discussed above, our initial concept centred on hexapod platforms, based on the intuition that the additional legs would provide valuable improvements to stability on rough terrain. Again, the tremendous commercial investment in platforms such as the Boston Dynamics Spot and ANYbotics ANYmal shows that the additional maturity of these quadruped platforms overcomes any advantage that an early prototype hexapod may hold. While these have come a long way, our own experience with Spot shows that falls on rough terrain are still an issue, and our original hypothesis regarding hexapod platforms may still stand, though the engineering investment necessary to test it would be large.
Finally, our approach of a common sensing pack and navigation stack paid large dividends throughout the program. All ground platforms utilised the same sensing solution, and the autonomy stack varied only through minor configuration parameter changes and by utilising outputs at different levels (e.g., sending trajectories to Spot vs. low-level control of tracked platforms).
\subsection{SLAM}
Our SLAM solution evolved significantly during the course of the program, especially in aspects relating to multi-agent systems. Our solution was quite sufficient in the environments tested, and rarely presented a limitation to overall performance. However, there are a variety of qualifications on that outcome:
\begin{itemize}
\item The sensor payload with the spinning lidar is both expensive and heavy. In many applications, it is desirable or essential to use smaller, lighter and cheaper sensor configurations. Understanding whether these sensor configurations can provide similarly adequate performance is a topic of further study.
\item SLAM performance appeared sufficient in the dust and smoke (e.g., fog machine) obscurants tested in SubT. This is specific to the obscurants encountered; each obscurant may react differently with the lidar signals.
\item Issues with place recognition for robot wake-up were addressed procedurally, in accordance with the competition rules. Robust and reliable wake-up location in more general problems remains a topic of interest.
\item Due to the environment scale and excellent odometry performance, place recognition was not found to be necessary for loop closure. This problem is still open for larger scale (spatial or temporal) missions, or systems with poor odometry performance.
\item The regime of sharing frames between robots and solving the SLAM problem independently on each agent was shown to be highly effective, but there are limits to its scalability. True distributed computation that accommodates larger scale but maintains the ability to address problems with poor communication remains an interesting, open problem.
\end{itemize}
\subsection{Autonomy}
One key point of realisation for our team occurred at our local Cave Circuit event in September 2020. As described in \cite{hines_2020,hudson2021heterogeneous}, the ATR robots covered extreme terrain exceptionally well. They rolled on a number of occasions, each of which was the subject of close investigation. Some rolls were due to subsidence of the terrain under the robot, a difficult problem that was unmodelled in the terrain analysis approach. Some were identified as unexpected conditions in the behaviour stack, which were easily addressed. Most, however, occurred under teleoperation when the operator intervened to force the robot to navigate to areas where autonomy was refusing to go. The conclusion was that the local navigation capability had reached the point where, subject to the situation awareness constraints and latency experienced by the operator, autonomous navigation performed better on this terrain than teleoperation.
The conclusion on global navigation was somewhat different. Although performance has steadily improved throughout the development, it remains the case that the global maps have imperfections that benefit considerably from operator input. Most difficult is the trade-off between falsely clearing frontiers in narrow doorways and failing to correctly clear frontiers when visited, leading to revisits of the same space. This is exacerbated by the fact that it is often difficult to distinguish traversable and non-traversable openings without attempting them.
A related lesson concerned the scoring used for selecting frontiers. As discussed in \secref{sec:ugv_global_autonomy}, it was found that size-based scoring, as is common in next-best-view planners, often led to undesirable behaviour such as declining to enter a small opening. In the SubT context, where such traversals are of prime importance, our conclusion was that, in the absence of semantic analysis covering cases such as doors, size-based scoring was unhelpful.
Finally, the human/robot interface concept underwent an interesting journey, starting from a highly manual waypoint-based interface at the tunnel circuit, moving to fully autonomous exploration without the option for human input at the urban circuit, and finally arriving at a system which permits directed autonomy, with a complete set of tools for operator prioritisation at the Final Event. The emphasis on the human/robot team in this concept contributed significantly to our result.
\subsection{Perception}
The object detection capability provided interesting lessons related to the generalisation error imposed by the competition structure. Although we collected test data in as wide a range of environments as possible, the unique and unpredictable nature of the environments presented at the challenge events inevitably led to significant model mismatch. Consequently, false detections were a persistent challenge, for which the only effective mitigation was temporal analysis (i.e., object tracking).
Another approach attempted but not deployed was to improve the operator's overall situational awareness via the use of a persistent coloured point cloud. The goal was to augment the 3D structure information computed by Wildcat from lidar data with colour information obtained from each agent's onboard cameras. The displayed point cloud would dynamically update as the agents explored the unknown environment and would remain visible for the entire duration of the mission. It was hoped that the persistence would allow the operator to virtually teleport to any point in the explored environment in order to look for artefacts and/or make more informed decisions on each agent's progress or current task \citep{Vechersky_2018}.
The feature was implemented to run in real-time on the agents with a resolution of one point per 30\,mm$^3$ voxel and a rate of 4\,Hz per camera, limiting bandwidth by compressing and sending only new points to the base station for visualisation. The work required to reconstruct the point cloud on the base station could not be completed in time for the Final Event due to competing priorities. A complete study of this functionality (including bandwidth impacts) will be pursued in the future; an example of the coloured point cloud is shown in \figref{fig:coloured_point_cloud_subt_urban}.
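A minimal sketch of the send-only-new-points idea is given below (ours, for exposition; the deployed pipeline and its parameters differ, and the cubic 30\,mm voxel is our reading of the resolution quoted above):
\begin{verbatim}
# Sketch: transmit only coloured points falling in previously unseen voxels.
import numpy as np

VOXEL = 0.03  # assumed 30 mm cubic voxel edge

class ColouredCloudStreamer:
    def __init__(self):
        self.sent = set()  # voxel indices already sent to the base station

    def new_points(self, points_xyz, colours_rgb):
        """Return only points whose voxel has not been transmitted before."""
        keys = np.floor(points_xyz / VOXEL).astype(np.int64)
        keep = []
        for i, k in enumerate(map(tuple, keys)):
            if k not in self.sent:
                self.sent.add(k)
                keep.append(i)
        return points_xyz[keep], colours_rgb[keep]
\end{verbatim}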
\begin{figure}[t]
\centering
\includegraphics[width=120mm]{figures/coloured_point_cloud_subt_urban.png}
\caption{The raw coloured point cloud showing a mural in the circuit, calculated online by Team CSIRO Data61's Bluey platform during the Final Event.}
\label{fig:coloured_point_cloud_subt_urban}
\end{figure}
\begin{figure}[!b]
\centering
\subfloat[]{\includegraphics[height=3.5cm]{figures/Remote_support.jpg}}~
\subfloat[]{\includegraphics[height=3.5cm]{figures/Remote_debrief.jpg}}~
\subfloat[]{\includegraphics[height=3.5cm]{figures/Remote_support_2.jpg}}
\caption{An engineer from the Australia-based team providing remote advice via telepresence robot to a US-based team member performing emergency electrical repairs on the Boston Dynamics Spot quadruped's autonomy payload (a), a US-based team member providing a run debrief to the Australia-based team after a preliminary run (b), and an engineer from the Australia-based team working US East coast hours from Brisbane to provide support during the Final Event (c).}
\label{fig:Remote_support}
\end{figure}
\subsection{Remote Support}
Finally, COVID-19-induced limitations on the composition of our deployed team for the Final Event led to a very challenging experience, where much of the development team provided remote support from the opposite side of the world. The Australian development team switched to the US East coast time zone for the period of the deployment (i.e., both the lead-up and the actual event), and provided support through telepresence robots and video conferencing (\figref{fig:Remote_support}). Sending robot recordings back to Australia was challenging due to the lack of high-bandwidth connections at the deployed location, and was generally only achievable after stripping out all but the most critical data from the log files.
Hardware work performed by the deployed team included replacing an ATR motor, fixing wiring faults with an autonomy computer on a Spot robot, and replacing broken drop node compute modules. In each case, with the help of duplicate hardware held back in Australia, the development team performed the procedure first, capturing detailed step-by-step instructions and photographs, which were sent to the deployed team, who then executed the procedure under remote supervision.
It was also necessary to be realistic about the robot team composition that could be supported. For example, a decision was made to not ship the DTR as it was not considered feasible for the small deployed team to support a fourth platform type.
\section{Conclusions}
\label{sec:conclusion}
We have presented the system Team CSIRO Data61 deployed at the DARPA SubT Challenge finals. Special emphasis was given to the improvements and changes made to our systems and approach since Phase I (Tunnel Circuit) and Phase II (Urban and Cave Circuits) of the challenge, and the reasons for these changes were explained. Results from the Final Event were presented and analysed, and we provided insights and lessons learned over the overall campaign. The paradigm of using the same sensing and autonomy payloads on different robot platforms allowed us to effectively scale our fleet. It also allowed us to pivot to new platform types with minimal lead time, as demonstrated by our full integration of the Spot platform into our fleet just a few months before the Final Event. In the final prize run, all but one of our robots were immobilised by various challenge elements in the course by the final minutes of the run. Despite this attrition of agents, the overall system provided resilience as designed: we created the most accurate maps of the environment and tied for the top score. We have thus demonstrated how our paradigm of heterogeneous robot teams with unified perception and autonomy allowed Team CSIRO Data61 to achieve a remarkable outcome at the SubT Challenge finals, even without being able to send the full development team from Australia to the event in the US.
\subsubsection*{Acknowledgments}
The authors would like to thank Nicolas Hudson, Erin McColl, Nicola Cowen, William Docherty, Megan Croker, Julie Noblitt, Dawn Lillington, Margaret Donoghue and Alison Donnellan for their support leading up to and during the Final Event of the SubT Challenge. The authors also thank Tim Chung, Viktor Orekov and the DARPA team for the extraordinary efforts and support during the SubT program.
\bibliographystyle{apalike}
\section{Introduction}
Unlike the \emph{bare} photon, which is the structureless gauge boson of QED, the \emph{resolved} photon can be endowed with a structure owing to the energy-time uncertainty principle, which allows it to fluctuate into a charged fermion-antifermion pair. These quantum fluctuations allow the possibility of photon-photon interactions. These interactions between two photons are experimentally accessed by colliding energetic $e^{\pm}$ and $e^-$ beams~\cite{walsh}. The photon structure functions are probed in experiments when one of the photons has high virtuality ($Q^2 \gg 0$) and the other is almost real ($P^2 \sim 0$). These scattering processes are analogous to deep inelastic scattering (DIS) experiments on a hadron, but with a photon as the probed system. The point-like contribution to the photon structure function can be calculated perturbatively~\cite{Nisius:1999cv}. On the other hand, the hadron-like part of the photon structure function involves nonperturbative contributions, which are usually parameterized using the vector meson dominance (VMD) model~\cite{Peterson:1982tt}. A seminal paper by Witten~\cite{Witten:1977ju} investigated the photon structure function in an asymptotically free gauge theory and studied its scale dependence. The photon structure function has been studied experimentally over a wide kinematic range, starting from the first measurement by the PLUTO Collaboration~\cite{Berger:1981bh}. The calculability of photon observables provides a very good opportunity to use the photon as a testing ground for a light-front Fock-space based approach aimed at calculating detailed properties of resolved hadrons. In this work, we investigate a few key observables and compare our results with estimates made in perturbative QED as well as with available experimental data.
For context, consider a hadron, where generalized parton distributions (GPDs) were introduced in relation to deeply virtual Compton scattering (DVCS)~\cite{Ji:1996nm,Radyushkin:1996nd,Mueller:1998fv}; they are rich in information since they combine information about the form factors and the parton distribution functions (PDFs). GPDs reduce to ordinary PDFs in the forward limit of zero momentum transfer and, when integrated over the longitudinal momentum fraction $x$, they give the corresponding hadron form factor. These GPDs have been studied extensively, both theoretically and experimentally, from DVCS data on hadron targets~\cite{gpd_all}.
In a similar vein, the GPDs of the photon were introduced in Ref.~\cite{Friot:2006mm}, where the authors considered DVCS ($\gamma^* \gamma \rightarrow \gamma \gamma$), taking the nearly real photon as the photon target. Their calculation was performed at leading order in the electromagnetic coupling $\alpha_{\mathrm{em}}$ and zeroth order in the strong coupling $\alpha_{\mathrm{s}}$ by considering all the amplitudes corresponding to the Born-order diagrams for $\gamma^* \gamma \rightarrow \gamma \gamma$.
The first calculation for the photon GPDs with non-zero transverse momentum transfer was presented in Ref.~\cite{Mukherjee:2011bn}. This perturbative calculation was based on a light-front Hamiltonian framework, where the Fock space expansion of the photon state was truncated at the two-particle ($q\bar{q}$) sector. Both the polarized and the unpolarized photon GPDs were obtained using overlaps of photon light-front wave functions (LFWFs). The photon GPDs for non-zero skewness using the same approach were reported in Refs.~\cite{Mukherjee:2011an,Mukherjee:2013yf}.
Somewhat less explored are the transverse momentum dependent distribution functions (TMDs) of the photon. For the hadron, these are objects of substantial interest; they give the distribution of quarks and gluons in the hadron in three-dimensional momentum space~\cite{Angeles-Martinez:2015sea,Barone:2001sp,Accardi:2012qut}. Experimentally, the TMDs are accessed via semi-inclusive reactions like semi-inclusive deep inelastic scattering (SIDIS)~\cite{Brodsky:2002cx,Bacchetta:2017gcc} and Drell-Yan processes~\cite{Ralston:1979ys,Donohue:1980tn,Tangerman:1994eh}. At leading twist for a spin-half hadron, like the nucleon, there are eight quark TMDs, of which six are time-reversal even (T-even) and two are time-reversal odd (T-odd). For spin-one hadrons, like the deuteron or vector mesons, there are additional TMDs reflecting the additional spin degrees of freedom~\cite{Hoodbhoy:1988am,Hino:1999qi,Bacchetta:2000jk}. A model calculation with a covariant formalism for the leading-twist T-even TMDs of a spin-one target was presented in Ref.~\cite{Ninomiya:2017ggn}.
In this work, we utilize the Basis Light-Front Quantization (BLFQ) approach for calculating the observables for the photon and comparing them with the results from the perturbative approach. BLFQ is a nonperturbative approach for solving bound state problems \cite{blfq1}, which uses the \emph{front-form} of relativistic dynamics.
BLFQ employs the light-front Hamiltonian~\cite{brodsky1}, which is diagonalized in a truncated Fock space to obtain the stationary states. So far, BLFQ has been successfully applied to problems within QED such as the electron anomalous magnetic moment~\cite{maris,zhao}, the strong coupling bound-state positronium problem~\cite{li}, and the GPDs~\cite{zhao2} and the TMDs~\cite{Hu:2020arv} of the physical electron and the real photon~\cite{Nair:2022evk}. More recently, BLFQ has been employed to solve light mesons~\cite{Jia:2018ary,Lan:2019vui,Lan:2019rba,Adhikari:2021jrh,Lan:2021wok,Mondal:2021czk}, heavy quarkonia~\cite{Li:2015zda,Li:2017mlw,Li:2018uif,Lan:2019img}, heavy-light mesons~\cite{Tang:2018myz,Tang:2019gvn}, nucleon
\cite{Xu:2019xhk,Xu:2021wwj,Liu:2022fvl,Hu:2022ctr}, heavy baryons~\cite{Peng:2022lte}, and the
all-charm tetraquark~\cite{Kuang:2022vdy} as QCD bound states.
One of the primary goals of this work is to compare the results of the BLFQ approach with the perturbative results for the photon, in order to determine how well this approach can describe a relativistic composite system and to verify the renormalization procedure. We utilize the sector-dependent renormalization technique \cite{Karmanov:2008br,Karmanov:2012aj} and the wavefunction scaling approach to mitigate the effects of the artifacts originating with Fock space truncation. Thus our work serves as a test for the methodology employed in solving bound state problems within the BLFQ approach. In addition, we anticipate that our approach will be useful for comparing theory with future experiments such as those at the Electron-Ion Collider (EIC).
We organize the paper by first introducing in Sec.~\ref{secblfq} the basic methodology involved in solving any stationary state problem in BLFQ. Then in Sec.~\ref{secrenorm}, we discuss briefly the strategy used for the renormalization and wavefunction rescaling. In Sec.~\ref{secobs}, we show the formulation of various observables that we calculate within BLFQ and compare them with the corresponding perturbative formulation. In Sec.~\ref{secnum}, we present our numerical results for various observables of the real and the virtual photons and show the comparisons with the perturbative results and with experimental data. Finally, we summarize our work in Sec.~\ref{con}.
\section{Basis Light-Front Quantization}
\label{secblfq}
BLFQ aims at solving the following eigenvalue equation to obtain the mass spectrum and the LFWFs:
\begin{eqnarray}
\mathrm{H_{LC}} \mid \psi \rangle &=& M^2 \mid \psi \rangle ,
\label{eveq}
\end{eqnarray}
\noindent
where $\mathrm{H_{LC}} = P^+ P^- - \left(P^{\perp}\right)^2 $ is the light-front Hamiltonian and the operators $P^+ $, $P^-$, and $P^{\perp}$ are the longitudinal momentum, the light-front quantized Hamiltonian, and the transverse momentum, respectively.
The diagonalization of Eq.~(\ref{eveq}) using a suitable matrix representation for the Hamiltonian generates the invariant-mass spectrum ($M$) and the
light-front state vectors $\left(\mid \psi \rangle \right)$. The basis states for BLFQ are expanded in terms of Fock state sectors, where each basis state has longitudinal, transverse, and spin degrees of freedom. The longitudinal degrees of freedom are discretized by imposing antiperiodic (periodic) boundary conditions for the fermions (bosons) within a box of length $2L$ such that $-L \le x^- \le +L$. The total longitudinal momentum is parameterized by $K = \sum_i \kappa_{i}$ such that $P^+ = 2 \pi K/L$, where the sum runs over all partons in each many-parton Fock-space basis state. The longitudinal momentum fraction of the $i^{\mathrm{th}}$ parton is then defined as $x_i = k_i^+/P^+ = \kappa_{i}/K$.
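As a concrete illustration, the antiperiodic boundary conditions imply half-integer $\kappa_i$ for the fermions; for the $e^+e^-$ sector at $K=3$, the allowed configurations are $(\kappa_{e^+},\kappa_{e^-}) \in \{(\tfrac{1}{2},\tfrac{5}{2}),\,(\tfrac{3}{2},\tfrac{3}{2}),\,(\tfrac{5}{2},\tfrac{1}{2})\}$, corresponding to momentum fractions $x \in \{\tfrac{1}{6},\tfrac{1}{2},\tfrac{5}{6}\}$.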
The longitudinal momenta are discretized by fixing the value of $K$. The parameter $K$ can be seen as the ``resolution'' in the longitudinal direction, and hence sets the resolution of the PDFs. When $K \rightarrow \infty$, we approach the continuum limit in the longitudinal direction. Two-dimensional (2D) harmonic oscillator (HO) modes are chosen as the basis states for the transverse degrees of freedom.
The HO states are specified by $n$, $m$, and $\Omega$, where $n$ and $m$ are the principal and orbital angular quantum numbers, respectively, and $\Omega$ represents the HO energy. The HO wavefunctions are conveniently represented in terms of a dimensionless parameter $\rho = |p^{\perp}| /b_0$, where the scale parameter $b_0\equiv\sqrt{M_0 \Omega}$ and $M_0$ has mass dimension. We use an $x$-dependent scale parameter $b$ such that $b=b_0\sqrt{x(1-x)}$, consistent with the coordinates adopted in Refs.~\cite{li,maris}, where $x$ is taken to be the electron's light-front momentum fraction.
When expressed using polar coordinates $(\rho,\phi)$, the HO wavefunctions can be written in terms of generalized Laguerre polynomials $L_n^{|m|}\left(\rho^2\right)$,
\begin{eqnarray}
\phi_{nm}\left( p^{\perp} \right) = \sqrt{\frac{2\pi}{M_0\Omega}}\sqrt{\frac{2n!}{(|m|+n)!}}e^{im\phi} \rho^{|m|}e^{-\rho^2/2}~L_n^{|m|}\left(\rho^2\right) .
\label{wavefn}\end{eqnarray}
The continuum limit in the transverse direction is approached as $N_{\rm max} \rightarrow \infty$, where each retained basis state satisfies $N_{\mathrm{max}} \ge \sum_i \left(2n_i + |m_i|+1 \right)$. The ultraviolet (UV) and infrared (IR) cutoffs in the transverse direction are also determined by $N_{\rm max}$. In this work, we study the photon by truncating the Fock-space expansion at the two-particle sector. At fixed light-front time, the photon
state can be expressed schematically as follows:
\begin{eqnarray}
\mid \gamma_{\mathrm{phy}} ~\rangle = \mid \gamma ~\rangle + \mid e^+ e^- ~\rangle .
\label{fock}
\end{eqnarray}
%
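As an illustration of how the truncations shape the $e^+e^-$ sector, the following sketch (ours; not the production BLFQ code) enumerates the allowed two-particle basis states and evaluates the HO mode of Eq.~(\ref{wavefn}) using SciPy:
\begin{verbatim}
# Illustrative enumeration of e+e- basis states under the N_max and K
# truncations, and evaluation of the 2D HO mode of Eq. (wavefn).
import math
import numpy as np
from scipy.special import genlaguerre

def ho_mode(n, m, p_perp, phi, b):
    """phi_{nm}(p_perp) with scale b = sqrt(M0*Omega)."""
    rho = p_perp / b
    norm = (math.sqrt(2.0 * math.pi) / b
            * math.sqrt(2.0 * math.factorial(n) / math.factorial(n + abs(m))))
    radial = rho**abs(m) * math.exp(-rho**2 / 2) * genlaguerre(n, abs(m))(rho**2)
    return norm * radial * np.exp(1j * m * phi)

def two_body_basis(n_max, K):
    """All (kappa1, n1, m1, kappa2, n2, m2) with half-integer kappa_i summing
    to K and sum_i (2 n_i + |m_i| + 1) <= n_max.  Spin labels (and the fixed
    total angular-momentum projection imposed in practice) are omitted."""
    states = []
    kappas = [k + 0.5 for k in range(int(K))]   # 1/2, 3/2, ..., K - 1/2
    for k1 in kappas:
        k2 = K - k1
        for n1 in range(n_max):
            for m1 in range(-n_max, n_max + 1):
                e1 = 2 * n1 + abs(m1) + 1
                if e1 >= n_max:
                    continue        # no room left for the second particle
                for n2 in range(n_max):
                    for m2 in range(-n_max, n_max + 1):
                        if e1 + 2 * n2 + abs(m2) + 1 <= n_max:
                            states.append((k1, n1, m1, k2, n2, m2))
    return states

print(len(two_body_basis(n_max=8, K=8)))  # basis size grows rapidly with N_max
\end{verbatim}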
The full light-front QED Hamiltonian that we diagonalize can be written as
%
\begin{eqnarray}
\hat{H} = \hat{H}_{\mathrm{QED}} + \hat{H'},
\label{fullhami}
\end{eqnarray}
%
where $\hat{H}_{\mathrm{QED}} = P^+ \hat{P}^{-}_{\mathrm{QED}} - \left(\hat{P}^{\perp}\right)^2$. The light-front QED Hamiltonian $\hat{P}^{-}_{\mathrm{QED}}$ relevant to the photon's leading two Fock sectors in the light-cone gauge $A^+ = 0$ is~\cite{Nair:2022evk,Zhao:2014xaa}
\begin{eqnarray}
\hat{P}^{-}_{\mathrm{QED}} =
\int dx^- d^2x^{\perp}\left[
\frac{1}{2}\bar{\Psi}(x)
\gamma^+ \frac{m_e^2 + \left(i\partial^{\perp}\right)^2}{i\partial^+}
\Psi(x) + \frac{1}{2} A^k \left( i\partial^{\perp}\right)^2 A^k
\right] +
V_{\mathrm{int}},
\label{eff}\end{eqnarray}
where $m_e$ is the electron mass, and $\Psi$ and $A_{\mu}$ are the fermion and gauge boson fields, respectively. The first two terms correspond to the kinetic energies of the electron and the photon, respectively, while the last term represents the interaction vertex ($V_{\mathrm{int}}$) at which a photon creates a fermion anti-fermion pair. Note that with the leading two Fock sectors, the instantaneous-fermion interaction does not contribute. The instantaneous-photon interaction (IPI) contributes only to the overall renormalization factor, so the intrinsic structure of the physical photon remains unaffected. The IPI must be accompanied by the explicit photon-exchange contribution from higher Fock sectors (which are not present in this work) in order to cancel the small-$x$ divergences \cite{Zhao:2014xaa}. Therefore, we choose to exclude the IPI term from the Hamiltonian.
The interaction term $V_{\mathrm{int}}$ in Eq.~(\ref{eff}) is given by
\begin{eqnarray}
V_{\mathrm{int}} = e \int dx^- d^2x^{\perp} \bar{\Psi}(x) \gamma^{\mu} \Psi(x)A_{\mu}(x)\Big{|}_{x^+=0},
\end{eqnarray}
where $e$ is the physical electromagnetic coupling constant. Lastly, we need to introduce a Lagrange multiplier ($\lambda$) term in the Hamiltonian to separate the center-of-mass (CM) motion from the intrinsic motion. The CM motion is entangled with the intrinsic motion because of the use of single-particle coordinates; the HO basis states, coupled with the $N_{\mathrm{max}}$ truncation, allow for the factorization of the CM and intrinsic motions. The factorization is achieved numerically by adding a Lagrange multiplier term proportional to the HO Hamiltonian that acts only on the CM, $H_{\rm CM}$, with positive coefficient $\lambda$~\cite{zhao},
\begin{eqnarray}
\hat{H'} = \lambda \left(\hat{H}_{\mathrm{CM}} - 2b^2 I\right).
\label{hamifinal}
\end{eqnarray}
The CM excitations are shifted up by the Lagrange multiplier term. The value of $\lambda$ is chosen large enough such that the low-lying spectra of interest are not affected by the CM excitation~\cite{li}. More details on the CM factorization in BLFQ can be found in Ref.~\cite{Wiecki:2014ola}.
\section{Photon Mass Renormalization and Rescaling}
\label{secrenorm} The photon mass needs to be renormalized, which we achieve by employing a sector-dependent renormalization procedure~\cite{Karmanov:2008br,Karmanov:2012aj}. The renormalization is performed numerically by implementing a root-finding algorithm to obtain the value of the mass counter term (${\mathrm{m_{ct}}}$), which is added to the bare photon mass ($m_0$) such that the ground-state eigenvalue of the full Hamiltonian becomes equal to the physical photon mass ($m_\gamma$). This mass counter term, ${\mathrm{m_{ct}}} = m_\gamma - m_0$, represents the numerical mass correction required to obtain the mass squared, $m_\gamma^2$, of the physical photon system. For the real photon $m^2_\gamma=0$, while for the time-like (space-like) virtual photon $m^2_\gamma > 0$ ($m^2_\gamma<0$).
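The root-finding step can be sketched as follows; here \texttt{ground\_state\_eigenvalue} is a hypothetical placeholder for the expensive build-and-diagonalize step, and the bracket is purely illustrative.

```python
from scipy.optimize import brentq

def find_mass_counterterm(ground_state_eigenvalue, m_gamma2, lo=-1.0, hi=1.0):
    """Sketch of the sector-dependent renormalization: find m_ct such that
    the lowest eigenvalue of the truncated Hamiltonian matches the target
    physical photon mass squared m_gamma2.

    ground_state_eigenvalue(m_ct): hypothetical callable that builds and
    diagonalizes the Hamiltonian for a given counter term and returns its
    lowest eigenvalue; the bracket (lo, hi) is illustrative only.
    """
    return brentq(lambda m_ct: ground_state_eigenvalue(m_ct) - m_gamma2, lo, hi)
```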
The mass counter term is a function of the truncation parameters $N_{\mathrm{max}}$ and $K$. It is expected to increase \cite{brodsky1} with increasing values of the truncation parameters, as can be seen in Fig.~\ref{fig1}(a).
Truncating the Fock sector expansion violates the Ward identity; hence, we introduce a rescaling factor to repair this violation, following a procedure similar to that employed for the physical electron~\cite{Zhao:2014hpa,Zhao:2014xaa,Chakrabarti:2014cwa,Brodsky:2004cx}.
We rescale the naive photon observables $\mathcal{O}$ by the rescaling factor ($Z_2$) to obtain the rescaled observables $\mathcal{O_\mathrm{rs}}$ such that
%
\begin{eqnarray}
\mathcal{O_\mathrm{rs}} = \frac{\mathcal{O}}{Z_2},
\label{rs}
\end{eqnarray}
%
where
%
\begin{eqnarray}
Z_2 = \sum_{|\gamma\rangle} \Big{|} \Big{\langle} \gamma \Big{|} \gamma_{\mathrm{phys}} \Big{\rangle} \Big{|}^2 .
\label{z2}
\end{eqnarray}
%
The sum in Eq.~(\ref{z2}) runs over all basis states in the photon Fock sector $\left(|\gamma\rangle \right)$. The photon rescaling factor $Z_2$ can be interpreted as the probability of finding a bare photon inside a physical photon. The behavior of $Z_2$ with increasing basis truncation parameters is shown in Fig.~\ref{fig1}(b).
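In practice, once the lowest eigenvector is available, $Z_2$ amounts to a sum of squared amplitudes over the bare-photon basis states; a minimal sketch is given below, with the index bookkeeping of the photon sector assumed done elsewhere.

```python
import numpy as np

def rescaling_factor(eigenvector, photon_indices):
    """Z_2 of Eq. (z2): summed probability of the bare-photon basis states
    in the lowest eigenvector; photon_indices marks the |gamma> sector
    (an assumption about how the basis is organized)."""
    return float(np.sum(np.abs(eigenvector[photon_indices])**2))
```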
\begin{figure}[htp!]
\centering
\includegraphics[width=8.4cm,height=6.5cm,clip]{mass_ct.pdf}
\includegraphics[width=8.4cm,height=6.5cm,clip]{z2.pdf}
\caption{\label{fig1} Plot (a) is for the mass counter term $ {\mathrm{m_{ct}}}$ vs. the basis truncation parameter $N_{\mathrm{max}}$ and plot (b) is for the wave function renormalization factor $Z_2$ vs. $N_{\mathrm{max}}$. Both results are calculated with $N_{\mathrm{max}} = K$ and $b_0=m_e=0.511 ~\mathrm{MeV}$.}
\end{figure}
The $N_{\mathrm{max}}$ values used in the fitting shown in Fig.~\ref{fig1} lie on an unequally spaced grid: from $N_{\mathrm{max}} = 10$ to $N_{\mathrm{max}} = 100$ in steps of one, from $N_{\mathrm{max}} = 100$ to $N_{\mathrm{max}} = 200$ in steps of twenty, with a final point at $N_{\mathrm{max}} = 250$.
All points on the grid were given equal weight in the fit.
A power-law function of the form $f_0\left(N_{\mathrm{max}}\right) = a_0 \left(N_{\mathrm{max}}\right)^{\beta_0}$ was chosen as the fitting function for the mass counter term in Fig.~\ref{fig1}(a). The wave-function renormalization factor $Z_2$ was fitted with two functions of the form $f_1\left(N_{\mathrm{max}}\right) = a_1 \left(N_{\mathrm{max}}\right)^{\beta_1}$ and $f_2\left(N_{\mathrm{max}}\right) = a_2 \left(N_{\mathrm{max}}\right)^{\beta_2} + c_2$. The best-fit values of these parameters are $a_0 = 0.045$, $\beta_0 = 0.4772$, $a_1 = 1.002$, $\beta_1 = -0.0027$, $a_2 = -0.0296$, $\beta_2 = 0.0685$, and $c_2 = 1.0293$.
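These fits can be reproduced with a standard least-squares routine; a minimal sketch of the grid and fitting functions is given below (the computed data arrays are not reproduced here).

```python
import numpy as np
from scipy.optimize import curve_fit

# The unequally spaced N_max grid described above.
nmax = np.concatenate([np.arange(10, 101, 1), np.arange(120, 201, 20), [250]])

def f0(N, a0, beta0):            # power law for the mass counter term
    return a0 * N**beta0

def f2(N, a2, beta2, c2):        # power law with offset for Z_2
    return a2 * N**beta2 + c2

# m_ct_data and z2_data would hold the computed points (not reproduced here):
# popt0, _ = curve_fit(f0, nmax, m_ct_data)
# popt2, _ = curve_fit(f2, nmax, z2_data, p0=(-0.03, 0.07, 1.03))
```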
\section{Photon Observables}
\label{secobs}
We consider the physical (dressed) photon as a spin-one composite particle, whose partons are a bare photon and an electron-positron pair arising from quantum fluctuations. We compute the TMDs and the GPDs of the electron (positron) inside the physical photon, as well as the structure function of the photon. We further investigate the effect of a nonzero photon mass on these observables. The unpolarized ($f^1_{\gamma}$) and the polarized ($g^{1L}_{\gamma}$) TMDs for a spin-one target, under the approximation that the gauge link is the identity operator, are given by the following correlators at equal light-front time $y^+=0$:
%
\begin{eqnarray}
f^1_{\gamma}\left(x,\left(k^{\perp}\right)^2\right) &=& \int \frac{dy^- d^2y^{\perp}}{(2\pi)^3}e^{ik\cdot y} \Big{\langle} \gamma\left(P,\Lambda\right) \Big{|} \bar{\psi}(0)\gamma^+ \psi(y^-) \Big{|} \gamma\left(P,\Lambda\right) \Big{\rangle}\Big|_{y^+=0},
\nonumber \\
g^{1L}_{\gamma}\left(x,\left(k^{\perp}\right)^2\right) &=& \int \frac{dy^- d^2y^{\perp}}{(2\pi)^3}e^{ik\cdot y} \Big{\langle} \gamma\left(P,\Lambda\right) \Big{|} \bar{\psi}(0)\gamma^+ \gamma_5 \psi(y^-) \Big{|} \gamma\left(P,\Lambda\right) \Big{\rangle}\Big|_{y^+=0}.
\label{tmdeqg1l}
\end{eqnarray}
%
The physical photon state $\mid \gamma(P,\Lambda) \rangle$ with momentum $P$ and light-front helicity $\Lambda$ can be expressed by considering all
Fock components as~\cite{Brodsky:2000xy}
\begin{eqnarray}
\Big{|} \gamma(P,\Lambda) \Big{\rangle} &=& \sum_n \sum_{\lambda_1...\lambda_n} \int
\prod_{i=1}^n \left[ \frac{dx_i d^2k_i^{\perp}}{\sqrt{x_i}16\pi^3}\right] 16\pi^3 \delta \left(
1- \sum_{i=1}^n x_i
\right) \nonumber \\ && \delta^2 \left( \sum_{i=1}^n k_i^{\perp}\right) \psi^{\Lambda}_{\lambda_1...\lambda_n}\left(
x_i,k_i^{\perp}
\right) \Big{|} n,x_i P^+, x_i P^{\perp} + k_i^{\perp}, \lambda_i\Big{\rangle} ,
\label{multi}\end{eqnarray}
where $x_i = \frac{k_i^+}{P^+}$ and $k_i^{\perp}$ are the longitudinal momentum fraction and the relative transverse momentum of the $i^{\mathrm{th}}$ parton, respectively. $\lambda_i$ is the light-front helicity of the parton and $n$ denotes the number of particles in a Fock state. The physical transverse momentum of the parton is given by $p_i^{\perp} = x_i P^{\perp} + k_i^{\perp}$ and the physical longitudinal momentum is $p_i^+ = k_i^+ = x_iP^+$. The LFWFs $\psi^{\Lambda}_{\lambda_1...\lambda_n}$ are boost-invariant and depend only on $x_i$ and $k_i^{\perp}$.
The BLFQ LFWFs are generated by solving the eigenvalue problem of the Hamiltonian defined in Eq.~(\ref{fullhami}) using the plane wave and the harmonic oscillator basis functions with the truncated Fock sectors mentioned in Eq.~(\ref{fock}). For the second Fock sector, the corresponding component of the LFWF in the momentum space can then be expressed as follows:
%
\begin{eqnarray}
\Psi_{\lambda_e,\lambda_{\bar{e}}}^{\Lambda}\left(x_e,p_e^{\perp},x_{\bar{e}},p_{\bar{e}}^{\perp}\right) =
\sum_{\substack{n_e,m_e \\ n_{\bar{e}},m_{\bar{e}}}} \Big[\Psi\left( \alpha_e,\alpha_{\bar{e}}\right) \phi_{n_e,m_e }
\left( p_e^{\perp} \right)\phi_{n_{\bar{e}},m_{\bar{e}}}\left( p_{\bar{e}}^{\perp} \right)\Big],
\label{single}\end{eqnarray}
%
where $\alpha_i = \left( \kappa_i,\,n_i,\,m_i,\,\lambda_i\right)$ is a compact notation encompassing the four quantum numbers associated with a single-particle basis state, namely the longitudinal degree of freedom ($\kappa_i$), the two transverse degrees of freedom ($n_i, m_i$), and the spin degree of freedom ($\lambda_i$). $\Psi\left( \alpha_e,\alpha_{\bar{e}}\right) = \langle \alpha_e,\alpha_{\bar{e}} \mid P,\Lambda \rangle$ are the components of the eigenvectors generated from the diagonalization of the Hamiltonian. These single-particle momentum-dependent wavefunctions, Eq.~(\ref{single}), contain CM excitations, which need to be factored out. After factoring out the CM excitations, we obtain the corresponding components of the LFWFs in the relative coordinates, $\Psi_{\lambda_e,\lambda_{\bar{e}}}^{\Lambda}\left(x_e,k_e^{\perp},x_{\bar{e}},k_{\bar{e}}^{\perp}\right)$, relevant for Eq.~(\ref{multi}). The unpolarized and the polarized TMDs can be expressed as overlaps of the relative momentum-dependent wavefunctions as
\begin{eqnarray}
f^1_{\gamma}(x,(k^{\perp})^2) &=& \frac{1}{2}\int [de\bar{e}]\sum_{\Lambda,\lambda_e,\lambda_{\bar{e}}}
\Psi_{\lambda_e,\lambda_{\bar{e}}}^{*\Lambda}\left(x_e,k_e^{\perp},x_{\bar{e}},k_{\bar{e}}^{\perp}\right)
\Psi_{\lambda_e,\lambda_{\bar{e}}}^{\Lambda}\left(x_e,k_e^{\perp},x_{\bar{e}},k_{\bar{e}}^{\perp}\right), \\ \nonumber
g^{1L}_{\gamma}(x,(k^{\perp})^2) &=& \frac{1}{2}\int [de\bar{e}]\sum_{\Lambda,\lambda_e,\lambda_{\bar{e}}} \lambda_e
\Psi_{\lambda_e,\lambda_{\bar{e}}}^{*\Lambda}\left(x_e,k_e^{\perp},x_{\bar{e}},k_{\bar{e}}^{\perp}\right)
\Psi_{\lambda_e,\lambda_{\bar{e}}}^{\Lambda}\left(x_e,k_e^{\perp},x_{\bar{e}},k_{\bar{e}}^{\perp}\right),
\label{tmdeq}
\end{eqnarray}
where
\begin{eqnarray}
[de\bar{e}] = \frac{dx_{e}dx_{\bar{e}} d^2k_{e}^{\perp} d^2k_{\bar{e}}^{\perp}}{2(2\pi)^3} \delta\left(x_{e} + x_{\bar{e}}-1\right)
\delta^2\left(k_{e}^{\perp} + k_{\bar{e}}^{\perp}\right) \delta \left(x - x_{e}\right) \delta^2\left( k^{\perp} - k^{\perp}_{e}\right).
\end{eqnarray}
On the other hand, the unpolarized and the polarized GPDs for the dressed photon can be defined through the following nonforward matrix elements~\cite{Friot:2006mm}:
\begin{eqnarray}
F_{\gamma}(x,t) &=& \int \frac{dy^-}{8\pi}e^{\frac{-iP^+ y^-}{2}} \Big{\langle} \gamma(P',\Lambda) \Big{|} \bar{\psi}(0)\gamma^+ \psi(y^-) \Big{|} \gamma(P,\Lambda) \Big{\rangle}\Big|_{y^+=0,y^{\perp} =0} ,\\ \nonumber
\tilde{F}_{\gamma}(x,t) &=& \int \frac{dy^-}{8\pi}e^{\frac{-iP^+ y^-}{2}} \Big{\langle} \gamma(P',\Lambda) \Big{|} \bar{\psi}(0)\gamma^+ \gamma_5 \psi(y^-) \Big{|} \gamma(P,\Lambda) \Big{\rangle}\Big|_{y^+=0,y^{\perp} =0}.
\label{gpdeqmain}\end{eqnarray}
We choose a frame, where the initial and final four momenta of the photon are given by
\begin{eqnarray}
P &=& \left( P^+ , 0^{\perp} , 0 \right) ,\nonumber \\
P' &=& \left( P^+ , -\Delta^{\perp} , \frac{\left(\Delta^{\perp}\right)^2}{P^+} \right).
\end{eqnarray}
Therefore, the momentum transferred to the photon is
\begin{eqnarray}
\Delta = P - P' = \left( 0, \Delta^{\perp} , \frac{t}{P^+} \right),
\end{eqnarray}
with $t = -\left(\Delta^{\perp}\right)^2$ being the square of the momentum transfer in the transverse direction.
The overlap representation for the photon GPDs reads,
\begin{eqnarray}
F_{\gamma}(x,t) &=& \frac{1}{2}\int \{de\bar{e}\} \sum_{\lambda_e,\lambda_{\bar{e}}}
\Psi_{\lambda_e,\lambda_{\bar{e}}}^{*\uparrow}\left(x_e,k_e^{\perp}-(1-x)\Delta^{\perp},x_{\bar{e}},k_{\bar{e}}^{\perp}+x\Delta^{\perp}\right)
\Psi_{\lambda_e,\lambda_{\bar{e}}}^{\uparrow}\left(x_e,k_e^{\perp},x_{\bar{e}},k_{\bar{e}}^{\perp}\right), \nonumber \\
\tilde{F}_{\gamma}(x,t) &=& \frac{1}{2}\int \{de\bar{e}\} \sum_{\lambda_e,\lambda_{\bar{e}}}
\lambda_e\Psi_{\lambda_e,\lambda_{\bar{e}}}^{*\uparrow}\left(x_e,k_e^{\perp}-(1-x)\Delta^{\perp},x_{\bar{e}},k_{\bar{e}}^{\perp}+x\Delta^{\perp}\right)
\Psi_{\lambda_e,\lambda_{\bar{e}}}^{\uparrow}\left(x_e,k_e^{\perp},x_{\bar{e}},k_{\bar{e}}^{\perp}\right)
\label{gpdeq},
\end{eqnarray}
where
\begin{eqnarray}
\{de\bar{e}\} &=& \frac{dx_{e}dx_{\bar{e}} d^2k_{e}^{\perp} d^2k_{\bar{e}}^{\perp}}{2(2\pi)^3} \delta(x_{e} + x_{\bar{e}}-1) \delta^2\left(k_{e}^{\perp} + k_{\bar{e}}^{\perp}\right) \delta (x - x_{e})
.\end{eqnarray}
We now briefly describe how the perturbative results are obtained. The results in perturbation theory are calculated using the two-component form of light-front field theory~\cite{Zhang:1993dd}, wherein the component $A^-$
of the photon field is eliminated by choosing the $A^+ =0$ gauge. Thus, only the transverse components of the photon field $A^{\perp}$ survive. The fermionic field $\psi^-$ is constrained and written in terms of $\psi^+$~\cite{Zhang:1993dd}. The photon state can be written in terms of its Fock components~\cite{Friot:2006mm}. We truncate the Fock space expansion at the two-particle sector:
\begin{eqnarray}
\label{fockpert}
\Big{|} \gamma (P,\Lambda) \Big{\rangle}&=& a^{\dagger}\left(P,\Lambda \right) \Big{|} 0 \Big{\rangle} \\ \nonumber &+&
\sum_{\lambda_e,\lambda_{\bar{e}}} \int {dp_{e}} \int {dp_{\bar{e}}}
\sqrt{2(2\pi)^3 P^+} \delta^3\left(P-p_{e}-p_{\bar{e}}\right) \Phi^{\Lambda}_{\lambda_e,\lambda_{\bar{e}}}\left(p_{e},p_{\bar{e}}\right) b^{\dagger}\left(p_{e},\lambda_e\right)d^{\dagger}\left(p_{\bar{e}},\lambda_{\bar{e}}\right) \Big{|} 0\Big{\rangle} ,
\end{eqnarray}
where we use the abbreviation ${dp_j} = \frac{dp_j^+ d^2p_j^{\perp}}{\sqrt{2(2\pi)^3p_j^+}}$ with $j \in \{ e, \bar{e} \}$. The two-particle LFWF written in terms of the physical momenta, $\Phi^{\Lambda}_{\lambda_e,\lambda_{\bar{e}}}\left(p_{e},p_{\bar{e}}\right)$, represents the Fock component containing an electron and a positron; it can be expressed in terms of the Jacobi variables $x_j = \frac{k_j^+}{P^+}$ and $k_j^{\perp} = p_j^{\perp} - x_j P^{\perp}$ as follows \cite{kundu1}:
\begin{eqnarray}
\Psi_{\lambda_e,\lambda_{\bar{e}}}^{\Lambda}\left(
x_e,k^{\perp}_e
\right) &=& \frac{1}{m_{\gamma}^2+ \frac{m_e^2+\left(k^{\perp}_e\right)^2}{x_e(1-x_e)}}\frac{e}{\sqrt{2(2\pi)^3}}\chi_{\lambda_e}^{\dagger}\Bigg[
\frac{\left(\sigma^{\perp}\cdot k^{\perp}_e\right)}{x_e}\sigma^{\perp} -
\sigma^{\perp} \frac{\left(\sigma^{\perp}\cdot k^{\perp}_e\right)}{1-x_e} -i \frac{m_e}{x_e(1-x_e)}\sigma^{\perp} \Bigg]\chi_{-\lambda_{\bar{e}}}\epsilon_{\Lambda}^{\perp*},
\label{tpwf}\end{eqnarray}
such that $x_e + x_{\bar{e}} = 1$ and $k_e^{\perp} + k_{\bar{e}}^{\perp} = 0$. The two-particle LFWF with physical momenta in Eq.~(\ref{fockpert}) is related to the LFWF with relative Jacobi momenta in Eq.~(\ref{tpwf}) by the Jacobian of the transformation, $\sqrt{P^+}$, such that $\sqrt{P^+}\Phi^{\Lambda}_{\lambda_e,\lambda_{\bar{e}}}\left(p_{e},p_{\bar{e}}\right) =\Psi_{\lambda_e,\lambda_{\bar{e}}}^{\Lambda}\left(x_e,k^{\perp}_e \right)$. The wavefunction in Eq.~(\ref{tpwf}) is written in the two-component formalism~\cite{Zhang:1993dd,kundu1}. The two-component fermionic spinor is denoted by $\chi_{\lambda_j}$, with $\chi_{+1} =\frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\chi_{-1} =\frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. $\epsilon_{\pm1}^{\perp} =\frac{1}{\sqrt{2}}\left( \mp 1 ,- i \right)$ is the polarization vector of the photon, and $\sigma^{\perp}$ are the usual $2 \times 2$ Pauli matrices.
The explicit analytical expressions for the photon observables using the perturbative wavefunction are given by
\begin{eqnarray}
\label{gpd_pert}
F_{\gamma}(x,t) &=& \frac{e^2}{8\pi^3} \Big[
\left(
(1-x)^2+x^2
\right)(I_1+I_2+LI_3) +2 m_e^2 I_3
\Big], \nonumber\\
\tilde{F}_{\gamma}(x,t) &=& \frac{e^2}{8\pi^3} \Big[
\left(
x^2-(1-x)^2
\right)(I_1+I_2+LI_3) +2 m_e^2 I_3
\Big],
\end{eqnarray}
and
\begin{eqnarray}
\label{perttmd}
f^1_{\gamma}\left(x,\left(k^{\perp}\right)^2\right) &=& \frac{e^2}{8\pi^3} \frac{\left(m_e^2 + (k^{\perp})^2 (2x^2-2x+1)\right)}{\left( (k^{\perp})^2 +m_{\gamma}^2 x(1-x) +m_e^2\right)^2}, \nonumber \\
g^{1L}_{\gamma}\left(x,\left(k^{\perp}\right)^2\right) &=& \frac{e^2}{8\pi^3} \frac{\left(m_e^2 + (k^{\perp})^2 (2x-1)\right)}{\left( (k^{\perp})^2 +m_{\gamma}^2 x(1-x) +m_e^2\right)^2},
\end{eqnarray}
with the integrals
\begin{eqnarray}
I_1 &=& I_2 = \int \frac{d^2 k^{\perp}}{D} = \pi \ln \Bigg[
\frac{\Lambda_{\mathrm{pert}}^2 +m_e^2 - m_{\gamma}^2x(1-x)}{\mu^2 +m_e^2- m_{\gamma}^2x(1-x)}\Bigg] ,\nonumber \\
I_3 &=& \int \frac{d^2 k^{\perp}}{DD'} = \int_0^1 d\beta \frac{\pi}{P(x,\beta,t) },
\label{int123}\end{eqnarray}
where $L = -2m_e^2 + 2m_{\gamma}^2x(1-x) + t(1-x)^2$; $D = (k^{\perp})^2 - m_{\gamma}^2x(1-x) +m_e^2$; $D'=(k^{\perp})^2 -t (1-x)^2 -2k^{\perp}\cdot\Delta^{\perp}(1-x) -m_{\gamma}^2x(1-x)+m_e^2$;
and $P(x,\beta,t) = -m_{\gamma}^2x(1-x)+m_e^2 -t\beta(1-\beta)(1-x)^2$. $\Lambda_{\mathrm{pert}}$ and $\mu$ are the ultraviolet (UV) and infrared (IR) cutoffs of the transverse momentum integral, respectively. Here $x$ and $k^{\perp}$ denote the electron momenta, such that $x = x_e$ and $k^{\perp} = k^{\perp}_e$.
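As a cross-check, the closed-form TMDs of Eq.~(\ref{perttmd}) are straightforward to evaluate numerically; a minimal Python sketch, with masses in MeV and $e = 0.3$ as used in Sec.~\ref{secnum}, is given below (function names are ours).

```python
import numpy as np

PREF = 0.3**2 / (8.0 * np.pi**3)  # e^2 / (8*pi^3) with e = 0.3

def f1_pert(x, kp2, me=0.511, mg2=0.0):
    """Perturbative unpolarized photon TMD of Eq. (perttmd); MeV units."""
    D = kp2 + mg2 * x * (1.0 - x) + me**2
    return PREF * (me**2 + kp2 * (2.0 * x**2 - 2.0 * x + 1.0)) / D**2

def g1L_pert(x, kp2, me=0.511, mg2=0.0):
    """Perturbative photon helicity TMD of Eq. (perttmd); MeV units."""
    D = kp2 + mg2 * x * (1.0 - x) + me**2
    return PREF * (me**2 + kp2 * (2.0 * x - 1.0)) / D**2
```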
The QED component of the photon structure function \cite{Berger:2014rva}, which is related to the GPD at the forward limit, is defined as
\begin{eqnarray}
F_{2,\mathrm{QED}}^{\gamma}(x) = 2 x F_{\gamma}(x,0).
\label{psfeq}
\end{eqnarray}
\section{Numerical Results}
\label{secnum}
\begin{figure}[htp!]
\centering
\includegraphics[width=9.5cm,height=7.5cm,clip]{sfcompared_combined}
\caption{\label{fig2} The photon structure function $F_{2,\mathrm{QED}}^{\gamma}(x)$ for three different values of the basis truncation parameter $K=N_{\mathrm{max}} = (50,\,100,\,150)$.
The BLFQ results are shown with lines, while the different symbols represent the corresponding perturbative results.}
\end{figure}
For the numerical calculations, the values of the parameters common to both the perturbative and the BLFQ computations are kept the same. These common parameters are the fermion mass in the two-particle Fock sector, $m_e = 0.511 ~\mathrm{MeV}$, the vertex coupling constant, $e = 0.3$, and the photon mass, $m_{\gamma} =0$. The energy scale parameter of the BLFQ basis is set to $b_0 = m_e = 0.511 ~\mathrm{MeV}$. The numerical codes for BLFQ were written in Fortran.
Single precision (32 bits) is inadequate to calculate the GPDs for large values of $N_{\mathrm{max}}$ and $K$; hence, we perform the numerical integration required to calculate the GPDs using the FM package, a Fortran package for floating-point multiple-precision arithmetic~\cite{fm}.
Figure~\ref{fig2} shows the photon structure function as defined in Eq.~(\ref{psfeq}). $F_{2,\mathrm{QED}}^{\gamma}(x)$ is plotted as a function of $x$, and we compare our BLFQ computations with the perturbative results for three different values of the basis truncation parameter $N_{\mathrm{max}}$. Unless specified otherwise, we show all results with the longitudinal truncation parameter
$K = N_{\mathrm{max}}$.
The perturbative UV cutoff $\Lambda_{\mathrm{pert}}$ in Eq.~(\ref{int123}) is related to the transverse basis truncation parameter $N_{\mathrm{max}}$ in BLFQ. Our choice of the $x$-dependent scale parameter $b$ translates to an $x$-dependent UV cutoff, $\mathrm{UV_{co}} = b_0 \sqrt{x(1-x)2N_{\mathrm{max}}}$~\cite{Chakrabarti:2014cwa}, which is adopted as the UV cutoff in the perturbative calculations. This $x$-dependent UV cutoff convention is used throughout our calculation. Note that the perturbative results for the photon GPDs calculated previously in Refs.~\cite{Mukherjee:2011bn} and \cite{Friot:2006mm} are for an $x$-independent UV cutoff. We observe that the peak of the structure function increases with increasing UV cutoff. We find excellent agreement between our BLFQ computations and the perturbative results.
\begin{figure}[htp!]
\centering
\includegraphics[width=8.4cm,height=6.5cm,clip]{sfcompared_exp_opal_blfq}
\includegraphics[width=8.4cm,height=6.5cm,clip]{sfcompared_exp_pluto_blfq}
\caption{\label{fig3} We compare our BLFQ results for the photon structure function $F_{2,\mathrm{QED}}^{\gamma}(x)$ with the experimental data for QED photon structure function reported in Ref.~\cite{Nisius:1999cv}. Plot (a) shows the comparison with the results from the OPAL Collaboration~\cite{OPAL:1999rcd} and plot (b)
shows the comparison with the results from the L3 Collaboration~\cite{L3:1998ijt}, PLUTO Collaboration~\cite{PLUTO:1984gmq}, CELLO Collaboration~\cite{CELLO:1983crq} and DELPHI Collaboration~\cite{DELPHI:1995fid}. The experimental data are shown with symbols and the numbers inside the parentheses indicate the experimental scale in $\mathrm{GeV}^2$. The experimental data are scaled by a factor of $\alpha =1/137$. The lines show BLFQ results for different values of $N_{\mathrm{max}}$ which are related to the experimental scales as discussed in the text. The horizontal error bars indicate the $x$ bin boundaries.}
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=8cm,height=7cm,clip]{sf_global_exp}
\includegraphics[width=8cm,height=7cm,clip]{sf_global_blfq}
\caption{\label{fig4} Plot (a) shows the $F_{2,\mathrm{QED}}^{\gamma}(x)$ data from different experiments as a function of $Q^2$ for different values of $x$ as shown in Ref.~\cite{Nisius:1999cv}. The black solid lines in plot (a) correspond to the structure function in the leading logarithmic approximation as shown in Ref.~\cite{Nisius:1999cv}. An increasing integer value of $N$ is added to each fixed $x$ result of the structure function in order to separate them from each other. The top two data points for PLUTO at $Q^2= 40 ~\mathrm{GeV}^2$ belong to $N =8$ and $N =10$. Plot (b) shows our BLFQ results for $F_{2,\mathrm{QED}}^{\gamma}(x)$ as a function of $N_{\mathrm{max}}$ for similar values of $x$ where the numbers in the legend denote $x$ values.}
\end{figure}
In Fig.~\ref{fig3}, we compare the BLFQ results with the experimental data for the QED photon structure function reported in Ref.~\cite{Nisius:1999cv}.
The experimental data are from the OPAL Collaboration~\cite{OPAL:1999rcd},
L3 Collaboration~\cite{L3:1998ijt}, PLUTO Collaboration~\cite{PLUTO:1984gmq}, CELLO Collaboration~\cite{CELLO:1983crq} and DELPHI Collaboration~\cite{DELPHI:1995fid}. The error bars on the data points show the total error, with the statistical and systematic errors combined in quadrature. The
horizontal error bars indicate the bin boundaries for $x$. The experimental data reported in Ref.~\cite{Nisius:1999cv} are for $F_{2,\mathrm{QED}}^{\gamma}(x)/\alpha$, so we rescale them by a factor of $\alpha = 1/137$ to compare with our BLFQ results for $F_{2,\mathrm{QED}}^{\gamma}(x)$. We show our results for increasing values of $N_{\mathrm{max}}$ to study the cutoff dependence. The $N_{\mathrm{max}}$ values are chosen to provide a reasonable comparison over the wide range of the experimental scale. Our intention here is to show the qualitative behavior of the UV cutoff determined by $N_{\mathrm{max}}$. We further compare our $N_{\mathrm{max}}$ dependence with the data by analyzing them in a different way, as shown in Fig.~\ref{fig4}.
%
In Fig.~\ref{fig4}(a), we show the experimental data on the QED photon structure function from Ref.~\cite{Nisius:1999cv}, where the data are plotted as functions of the experimental scale $Q^2$ for different central $x$ values. An integer $N$ corresponding to the $x$ bin number is added to each measurement to provide sufficient vertical separation. If the experimental data do not have the exact central $x$ value indicated, the nearest $x$ bin central value is used. In Fig.~\ref{fig4}(b), we show our BLFQ results for $F_{2,\mathrm{QED}}^{\gamma}(x)$ as functions of $N_{\mathrm{max}}$ for $x$ values similar to those in Fig.~\ref{fig4}(a). We observe from this comparison that, as $x$ increases, the slope also increases for both the data and our BLFQ results. The approximate relation between the experimental scale $Q^2$ and the BLFQ transverse cutoff $N_{\rm max}$ is identified as $Q^2 \approx b_0^2\, x(1-x)\,2N_{\mathrm{max}}$.
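A minimal sketch of this scale identification is given below; it is meant only as the qualitative relation used in the text, with $b_0$ in MeV so that $Q^2$ comes out in $\mathrm{MeV}^2$.

```python
def q2_from_nmax(nmax, x, b0=0.511):
    """Approximate scale identification Q^2 ~ b0^2 * x(1-x) * 2*N_max,
    qualitative per the text; b0 in MeV gives Q^2 in MeV^2."""
    return 2.0 * b0**2 * x * (1.0 - x) * nmax
```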
\begin{figure}[htp!]
\centering
\includegraphics[width=8.4cm,height=6.5cm,clip]{tmd_vs_kp2_unpol}
\includegraphics[width=8.4cm,height=6.4cm,clip]{tmd_vs_kp2_pol}\\
\includegraphics[width=8.4cm,height=6.5cm,clip]{tmd_vs_x_unpol}
\includegraphics[width=8.4cm,height=6.5cm,clip]{tmd_vs_x_pol}
\caption{\label{fig5} Plots (a) and (b) show the photon unpolarized TMD $f^1_{\gamma}(x,(k^{\perp})^2)$ and the photon helicity TMD $g^{1L}_{\gamma}(x,(k^{\perp})^2)$, respectively as functions of $(k^{\perp})^2$ for fixed $x$, whereas plots (c) and (d) present them as functions of $x$ for fixed $(k^{\perp})^2$. We compare our BLFQ results (lines) with the perturbative results (symbols). The BLFQ results are obtained by averaging over the BLFQ computations at $N_{\mathrm{max}} = \{100,102,104\}$ for $K = 100$. Both the BLFQ and the perturbative results for $(k^{\perp})^2 = 0.053 ~\mathrm{MeV}^2$ in plots (c) and (d) are scaled by a factor of $0.5$ to enhance visualization. }
\end{figure}
In Fig.~\ref{fig5}, we show the two-dimensional structure of the real photon's TMDs. The TMDs calculated in BLFQ exhibit oscillations in the transverse direction, resulting from the oscillatory nature of the HO basis functions employed in the transverse plane. In order to reduce these oscillations, we adopt an averaging scheme~\cite{Hu:2020arv}, wherein we average three results at different $N_{\mathrm{max}}$ with fixed $K$. The scheme takes an average of averages: we first average the results obtained at $N_{\mathrm{max}} = n$ and $N_{\mathrm{max}} = n + 2$, then average the results obtained at $N_{\mathrm{max}} = n+2$ and $N_{\mathrm{max}} = n + 4$, and finally average these two averages. We denote this averaging over three values as $N_{\mathrm{max}} = \{n, n + 2, n+4\}$. We set $K = n+4$ while calculating all the averages~\cite{Hu:2020arv}.
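The scheme reduces to a simple weighted average; a minimal sketch is:

```python
def blfq_average(f_n, f_n2, f_n4):
    """Average-of-averages over N_max = {n, n+2, n+4} used to damp the
    HO-basis oscillations; equivalent to (f_n + 2*f_n2 + f_n4) / 4."""
    return 0.5 * (0.5 * (f_n + f_n2) + 0.5 * (f_n2 + f_n4))
```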
Figures~\ref{fig5}(a) and (b) show our results for the unpolarized TMD $f^1_{\gamma}$ and the photon helicity TMD $g^{1L}_{\gamma}$ as functions of $(k^{\perp})^2$ for three fixed values of $x = (0.055,\, 0.505,\, 0.805)$. We observe that the TMDs reach their maximum at zero transverse momentum of the electron (positron); exactly at $(k^{\perp})^2 = 0$, the TMDs become independent of $x$. The unpolarized TMD $f^1_{\gamma}$ remains positive over the chosen range of $(k^{\perp})^2$, whereas the helicity TMD $g^{1L}_{\gamma}$ becomes negative in certain regions for particular values of $x$.
Figures~\ref{fig5}(c) and (d) show our results for the unpolarized TMD $f^1_{\gamma}$ and the photon helicity TMD $g^{1L}_{\gamma}$ as functions of $x$ for three fixed values of $(k^{\perp})^2 = (0.053,\,0.504,\,1.0)~ \mathrm{MeV^2}$. For a fixed value of $(k^{\perp})^2$, the unpolarized TMD $f^1_{\gamma}$ approaches its maximum when either the electron or the positron carries most of the longitudinal momentum, i.e., at the endpoints of $x$. Meanwhile, $f^1_{\gamma}$ approaches its minimum at $x =0.5$, i.e., when the electron-positron pair shares the longitudinal momentum equally. The polarized TMD $g^{1L}_{\gamma}$ breaks this symmetry in $x$, as can be seen in Fig.~\ref{fig5}(d), and becomes negative for certain values of $x$ and $(k^{\perp})^2$. Overall, both TMDs calculated in our BLFQ approach are in excellent agreement with the corresponding perturbative results. The averaging strategy used to curb the oscillatory behavior of the BLFQ computations has its limitations due to finite-basis artifacts, which become prominent at the endpoints of the $x$ range.
\begin{figure}[htp!]
\centering
\includegraphics[width=8.4cm,height=7.5cm,clip]{tmd3D_blfq}
\includegraphics[width=8.4cm,height=7.5cm,clip]{tmd3D_pert}
\includegraphics[width=8.4cm,height=7.5cm,clip]{tmd3D_g1l_blfq}
\includegraphics[width=8.4cm,height=7.5cm,clip]{tmd3D_g1l_pert}
\caption{\label{fig6}
3D plots for photon unpolarized TMD $f^1_{\gamma}(x,(k^{\perp})^2)$ and the photon helicity TMD $g^{1L}_{\gamma}(x,(k^{\perp})^2)$. Plots (a) and (c) correspond to the BLFQ results and plots (b) and (d) represent the perturbative results. The BLFQ results are obtained by averaging over the BLFQ computations at $N_{\mathrm{max}} = \{100,102,104\}$ for $K = 100$.}
\end{figure}
Figure~\ref{fig6} compares the 3D plots of the unpolarized TMD $f^1_{\gamma}$ and the polarized TMD $g^{1L}_{\gamma}$ with the corresponding plots from perturbation theory. $f^1_{\gamma}$ is symmetric in $x$, while this symmetry is broken for $g^{1L}_{\gamma}$. The peaks observed in the transverse direction fall off as $(k^{\perp})^2$ increases, and for $g^{1L}_{\gamma}$ the distribution falls to negative values in a certain $x$ range. We observe good consistency between the BLFQ and the perturbative results. The oscillations seen near the endpoints of the $x$ range reflect the finite-basis artifacts.
\begin{figure}[htp!]
\centering
\includegraphics[width=8.4cm,height=6.5cm,clip]{tmd_vs_kp2_mgpos}
\includegraphics[width=8.4cm,height=6.5cm,clip]{tmd_vs_kp2_mgneg}
\includegraphics[width=8.4cm,height=6.5cm,clip]{tmd_vs_x_mgpos}
\includegraphics[width=8.4cm,height=6.5cm,clip]{tmd_vs_x_mgneg}
\caption{\label{fig7} Plots for the virtual photon unpolarized TMD $f^1_{\gamma^*}(x,(k^{\perp})^2)$: plots (a) and (b) show $f^1_{\gamma^*}$ vs. $(k^{\perp})^2$ at fixed $x$, while plots (c) and (d) show $f^1_{\gamma^*}$ vs. $x$ at fixed $(k^{\perp})^2$.
Plots (a) and (c) are for the time-like virtual photon with $m_{\gamma}^2 = 0.1 ~\mathrm{MeV}^2$, whereas
plots (b) and (d) are for the space-like virtual photon with $m_{\gamma}^2 = - 0.1 ~\mathrm{MeV}^2$.
We compare our results (lines) with the perturbative results (symbols). The BLFQ results are obtained by averaging over the BLFQ computations at $N_{\mathrm{max}} = \{46,48,50\}$ for $K = 50$. }
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=8.4cm,height=7.5cm,clip]{tmd3D_blfq_pos_mg20point1}
\includegraphics[width=8.4cm,height=7.5cm,clip]{tmd3D_pert_pos_mg20point1}
\includegraphics[width=8.4cm,height=7.5cm,clip]{tmd3D_blfq_neg_mg20point1}
\includegraphics[width=8.4cm,height=7.5cm,clip]{tmd3D_pert_neg_mg20point1}
\caption{\label{fig8} 3D plots for virtual photon unpolarized TMD $f^1_{\gamma}(x,(k^{\perp})^2)$. Plots (a) and (b) are for $m_{\gamma}^2 = 0.1 ~\mathrm{MeV}^2$. Plots (c) and (d) are for $m_{\gamma}^2 = - 0.1 ~\mathrm{MeV}^2$. We compare our BLFQ computations with the perturbative results. The BLFQ results are obtained by averaging over the BLFQ computations at $N_{\mathrm{max}} = \{46,48,50\}$ for $K = 50$. }
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=8.4cm,height=7.5cm,clip]{plot3d_blfq_mg2_fixed_kp_0point01}
\includegraphics[width=8.4cm,height=7.5cm,clip]{plot3d_pert_mg2_fixed_kp_0point01}
\includegraphics[width=8.4cm,height=7.5cm,clip]{plot3d_blfq_mg2_fixed_x_0point49}
\includegraphics[width=8.4cm,height=7.5cm,clip]{plot3d_pert_mg2_fixed_x_0point49}
\caption{\label{fig9} 3D plots for virtual photon unpolarized TMD $f^1_{\gamma^{*}}(x,(k^{\perp})^2)$, where plots (a) and (b) are performed as functions of photon mass squared $m_{\gamma}^2$ and $x$ at $(k^{\perp})^2 = 0.01 ~\mathrm{MeV}^2$. Plots (c) and (d) are performed as functions of $m_{\gamma}^2$ and $(k^{\perp})^2$ at $x = 0.49$. We compare our BLFQ computations with the perturbative results. The BLFQ results are obtained by averaging over the BLFQ computations at $N_{\mathrm{max}} = \{46,48,50\}$ for $K = 50$. }
\end{figure}
It is interesting to employ our approach in order to obtain TMDs for both space-like and time-like virtual photons. Following the discussion in Sec.~\ref{secrenorm}, we therefore obtain BLFQ results for the virtual photon TMDs by renormalizing the bare photon mass squared to have a nonzero value.
Figures~\ref{fig7}(a) and \ref{fig7}(b) present the results for virtual photon unpolarized TMD $f^1_{\gamma}$ versus $(k^{\perp})^2$ for three fixed values of $x = (0.05,\, 0.51,\, 0.81)$. Figures~\ref{fig7}(c) and \ref{fig7}(d) are for $f^1_{\gamma}$ versus $x$ for three fixed values of $(k^{\perp})^2 = (0.01,\,0.1,\,1.0)~ \mathrm{MeV^2}$. Plots (a) and (c) are for $m_{\gamma}^2 = 0.1 ~\mathrm{MeV}^2$, whereas plots (b) and (d) are for $m_{\gamma}^2 = - 0.1 ~\mathrm{MeV}^2$.
Unlike the real photon TMDs (Fig.~\ref{fig5}), the TMDs of the virtual photon at $(k^{\perp})^2 = 0$ are not independent of $x$. For a fixed value of $(k^{\perp})^2$, we observe that the TMDs show a maximum (minimum) around $x=0.5$ when $m_{\gamma}^2 > 0$ ($m_{\gamma}^2 < 0$).
We show the 3D plots of the virtual photon unpolarized TMD for $m_{\gamma}^2 = 0.1 ~\mathrm{MeV}^2$ in Figs.~\ref{fig8}(a) and \ref{fig8}(b), whereas Figs.~\ref{fig8}(c) and \ref{fig8}(d) are for $m_{\gamma}^2 = - 0.1 ~\mathrm{MeV}^2$. Our BLFQ results are compared with the corresponding perturbative results. The effect of the nonzero mass squared is mostly concentrated in the small-$(k^{\perp})^2$ region. When compared with the real photon unpolarized TMD (Fig.~\ref{fig6}), we observe that the $x$ dependence of the virtual photon TMDs behaves differently near $(k^{\perp})^2 \sim 0$. This is because the photon mass squared term is accompanied by an $x$-dependent factor, as can be seen in the analytic expressions of the perturbative results in Eq.~(\ref{perttmd}), and this term becomes dominant near $(k^{\perp})^2 \sim 0$. The oscillations observed in the BLFQ results are again indicators of the size of our finite-basis artifacts.
In Figs.~\ref{fig9}(a) and \ref{fig9}(b), we compare the BLFQ result with the perturbative result for the $f^1_{\gamma}$, where the TMDs are plotted as functions of $m_{\gamma}^2$ and $x$ at fixed value of $(k^{\perp})^2 = 0.01 ~\mathrm{MeV}^2$. We observe a minimum at $x=0.5$ for negative $m_{\gamma}^2$, which transforms to a maximum for positive $m_{\gamma}^2$. The perturbative results exhibit similar behavior. In Figs.~\ref{fig9}(c) and \ref{fig9}(d), we compare the BLFQ result with the perturbative result, where the TMDs are plotted as functions of $m_{\gamma}^2$ and $(k^{\perp})^2$ at a fixed value of $x = 0.49$. Here, we observe that the effect of $m_{\gamma}^2$ is prominent around $(k^{\perp})^2 \approx 0 $. The maximum value of the TMDs decreases as the photon mass squared changes from positive to negative. Again, the perturbative results show excellent agreement with our BLFQ results.
\begin{figure}[htp!]
\centering
\includegraphics[width=8.4cm,height=6.5cm,clip]{gpd_vs_x_unpol.pdf}
\includegraphics[width=8.4cm,height=6.5cm,clip]{gpd_vs_x_pol.pdf}
\includegraphics[width=8.4cm,height=6.5cm,clip]{gpd_vs_t_unpol.pdf}
\includegraphics[width=8.4cm,height=6.5cm,clip]{gpd_vs_t_pol.pdf}
\caption{\label{fig10}
Plot (a) for the photon unpolarized GPD and plot (b) for the polarized GPD vs $x$ for three fixed values of $-t = (0.00,1.00,5.00)~\mathrm{MeV}^2$. Plot (c) for the photon unpolarized GPD and plot (d) for the polarized GPD vs $-t ~(\mathrm{MeV}^2)$ for three fixed values of $x = (0.155,0.605,0.905)$. We compare our results (lines) with the perturbative results (symbols). The BLFQ results are obtained at $N_{\mathrm{max}} = K = 200$.}
\end{figure}
In Figs.~\ref{fig10}(a) and \ref{fig10}(b), we show the results for the real photon unpolarized GPD $F_{\gamma}(x,t)$ and polarized GPD $\tilde{F}_{\gamma}(x,t)$ as functions of $x$ for three values of the momentum transfer $-t = (0.00,\, 1.00,\, 5.00)~\mathrm{MeV^2}$. The BLFQ computations are compared with the corresponding perturbative results. We observe that with increasing transverse momentum transfer $-t$, the BLFQ results in the low-$x$ region deviate from the perturbative results. For $-t = 0 ~\mathrm{MeV^2}$, the unpolarized photon GPD is symmetric in $x$, since the electron and positron have equal masses. For nonzero values of $-t$, this symmetry is broken, and the GPD becomes more asymmetric with increasing $-t$. The polarized GPD changes sign in the region $x < 0.4$. The location of this sign change depends on the transverse momentum transfer $-t$ and the electron mass $m_e$, as can be seen from the analytic expression of the perturbative result in Eq.~(\ref{gpd_pert}). Both the unpolarized and the polarized GPDs become independent of $-t$ when the momentum fraction carried by the electron approaches unity.
In Figs.~\ref{fig10}(c) and \ref{fig10}(d), we present the results for the real photon unpolarized GPD $F_{\gamma}(x,t)$ and polarized GPD $\tilde{F}_{\gamma}(x,t)$ as functions of $-t~(\mathrm{MeV^2})$ for three values of $x = (0.155,\, 0.605,\, 0.905)$. The BLFQ computations for the unpolarized GPD $F_{\gamma}(x,t)$ deviate from the perturbative results as $x$ decreases and $-t$ increases. This deviation is less prominent for the polarized GPD $\tilde{F}_{\gamma}(x,t)$.
Figure~\ref{fig11} illustrates the 3D plots of the photon unpolarized and polarized GPDs calculated at $N_{\mathrm{max}} = K = 200$. The photon GPDs are nearly independent of $-t$ for large $x$, and the $-t$ dependence becomes visible as $x$ decreases. This behavior can be understood from the analytic expression for the perturbative results shown in Eq.~(\ref{int123}), where the $-t$ dependence comes with the factor $(1-x)^2$. We observe that the qualitative behavior of the BLFQ computations for the photon GPDs is in agreement with the perturbative results.
\begin{figure}[h]
\centering
\includegraphics[width=8.4cm,height=7.5cm,clip]{gpd_unpol_3d_blfq}
\includegraphics[width=8.4cm,height=7.5cm,clip]{gpd_unpol_3d_pert}\\
\includegraphics[width=8.4cm,height=7.5cm,clip]{gpd_pol_3d_blfq}
\includegraphics[width=8.4cm,height=7.5cm,clip]{gpd_pol_3d_pert}
\caption{\label{fig11}
3D plots for the photon unpolarized and polarized GPDs. Plots (a) and (b) are the unpolarized GPD from BLFQ and the perturbation theory, respectively. Plots (c) and (d) are the polarized GPD from BLFQ and the perturbation theory, respectively. Here $N_{\mathrm{max}} = K = 200$. }
\end{figure}
The differences observed in Fig.~\ref{fig11} between the BLFQ and perturbative results for the GPDs are quantified in Fig.~\ref{fig12}, where we plot the relative percentage difference between $F_{\gamma_{\mathrm{BLFQ}}}(x,t)$ and $F_{\gamma_{\mathrm{pert}}}(x,t)$ as a function of $N_{\mathrm{max}}$. We choose three values for the pair $(x,t)$ such that the deviation is clearly visible at the scale of the plots shown in Fig.~\ref{fig11}.
We observe that as $N_{\mathrm{max}}$ increases this deviation decreases and we expect that the two results will converge as $N_{\mathrm{max}} \rightarrow \infty$. At $N_{\mathrm{max}} = 200$, the deviation is less than $5 \%$ for the chosen values of the pair $(x,t)$.
\begin{figure}[h]
\centering
\includegraphics[width=7.5cm,height=6cm,clip]{gpd_unpol_diff}
\caption{\label{fig12}
Plot for the relative percentage difference between the GPDs calculated in BLFQ ($F_{\gamma_{\mathrm{BLFQ}}}(x,t)$) and with perturbation theory ($F_{\gamma_{\mathrm{pert}}}(x,t)$) as a function of the basis truncation parameter $N_{\mathrm{max}} = K$.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=7.5cm,height=5.0cm,clip]{gpd_virtual_unpol_vs_x}
\includegraphics[width=7.5cm,height=5.0cm,clip]{gpd_virtual_pol_vs_x}\\
\includegraphics[width=7.5cm,height=5.0cm,clip]{gpd_virtual_unpol_vs_t}
\includegraphics[width=7.5cm,height=5.0cm,clip]{gpd_virtual_pol_vs_t}
\caption{\label{fig13}
Comparison between the real and the virtual photon GPDs. Plots (a) and (b) are for the GPDs vs $x$ for a fixed value of $-t = 0.1~\mathrm{MeV}^2$ whereas plots (c) and (d) are vs $-t$ for a fixed value of $x = 0.51$.
All plots are shown for three values of the photon mass $m_{\gamma}^2 = \left(-0.1,0.0,0.1\right) \mathrm{MeV}^2$. Here $N_{\mathrm{max}} = K = 50$.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=8.4cm,height=7.5cm,clip]{gpd_virtual_pos_unpol_blfq_3d}
\includegraphics[width=8.4cm,height=7.5cm,clip]{gpd_virtual_pos_unpol_pert_3d}\\
\includegraphics[width=8.4cm,height=7.5cm,clip]{gpd_virtual_pos_pol_blfq_3d}
\includegraphics[width=8.4cm,height=7.5cm,clip]{gpd_virtual_pos_pol_pert_3d}
\caption{\label{fig14}
3D plots for the virtual photon GPDs. Plots (a) and (b) are the unpolarized GPD results from BLFQ and the
perturbation theory, respectively. Plots (c) and (d) are the polarized GPD results from BLFQ and the
perturbation theory, respectively. The photon mass is set to $m_{\gamma}^2 = 0.1 ~\mathrm{MeV}^2$ and
$N_{\mathrm{max}} = K = 50$.}
\end{figure}
Figure~\ref{fig13} compares the BLFQ results for the virtual photon GPDs with those of the real photon. In plots \ref{fig13}(a) and \ref{fig13}(b), we present the photon GPDs as functions of $x$ for a fixed value of $-t = 0.1~\mathrm{MeV}^2$, and in plots \ref{fig13}(c) and \ref{fig13}(d), we show them as functions of $-t$ for a fixed value of $x = 0.51$. Plots \ref{fig13}(a) and \ref{fig13}(c) are the results for the unpolarized GPDs, whereas plots \ref{fig13}(b) and \ref{fig13}(d) are the results for the polarized GPDs. We compute the virtual photon GPDs for two values of the photon mass squared, $m_{\gamma}^2 = 0.1 ~\mathrm{MeV}^2$ and $m_{\gamma}^2 = -0.1 ~\mathrm{MeV}^2$.
We observe that the GPDs of the time-like virtual photon, i.e., with positive $m_{\gamma}^2$, have a larger magnitude than those of the real photon, whereas the GPDs of the space-like virtual photon, with negative $m_{\gamma}^2$, have a smaller magnitude. As observed for the TMDs (see Figs.~\ref{fig6} and \ref{fig9}), the difference between the real and the virtual photon TMDs is mostly localized in the low-$k^{\perp}$ region. Since the $k^{\perp}$ direction is integrated out in the GPDs, this difference translates into a change in magnitude, as observed in Fig.~\ref{fig13}.
In Fig.~\ref{fig14}, we illustrate the 3D structure for the virtual photon unpolarized and polarized GPDs. The BLFQ computations are compared with the corresponding results from perturbation theory. As observed for the real photon GPDs in Fig.~\ref{fig11}, the difference between BLFQ results and the perturbative results lies in regions close to $x=0$. The qualitative behavior of the virtual photon GPDs is similar to those of the real photon.
\section{Conclusion}
\label{con}
In this work, we obtained the real and virtual photon LFWFs as eigenvectors of the light-front QED Hamiltonian in the light-cone gauge by solving for its mass eigenstates within the theoretical framework of Basis Light-Front Quantization (BLFQ). We investigated the structure of the photons by considering them within the constituent bare photon and electron-positron Fock sectors. We employed the resulting LFWFs to compute various observables of the photon, such as the structure function, TMDs, and GPDs. The BLFQ results were compared with those from leading-order perturbation theory. Our calculations also employed sector-dependent renormalization and the rescaling of the wavefunction, which is required to compensate for the artifacts arising from the Fock sector truncation. We found good consistency between the BLFQ results and the perturbative calculations. Thus, this work supports the reliability of the BLFQ approach in solving such stationary-state problems. The virtual photon LFWFs can be further utilized to study exclusive vector meson production in virtual photon-proton or photon-nucleus scattering.
\section{ACKNOWLEDGMENTS}
S. N. and C. M. thank the Chinese Academy of Sciences President's International Fellowship Initiative for support via Grants No. 2021PM0021 and No. 2021PM0023, respectively. C. M. is supported by new faculty start-up funding from the Institute of Modern Physics, Chinese Academy of Sciences, Grant No. E129952YR0. X. Z. is supported by new faculty start-up funding from the Institute of Modern Physics, Chinese Academy of Sciences, by the Key Research Program of Frontier Sciences, Chinese Academy of Sciences, Grant No. ZDB-SLY-7020, by the Natural Science Foundation of Gansu Province, China, Grant No. 20JR10RA067, by the Central Funds Guiding the Local Science and Technology Development of Gansu Province, Grant No. 22ZY1QA006, and by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB34000000. J. P. V. is supported in part by the Department of Energy under Grants No. DE-FG02-87ER40371, No. DE-SC0018223 (SciDAC4/NUCLEI), and No. DE-SC0023495 (SciDAC5/NUCLEI).
\section{Supplementary material}
\paragraph{Skyrme-HFB calculations}
We used the MOCCa code of Ref.~\cite{Ryssens16} to solve the self-consistent
Skyrme-HFB equations, representing the nucleonic single-particle wavefunctions
on a three-dimensional coordinate grid, resulting in an easily controlled
numerical accuracy that is independent of the nuclear shape considered~\cite{Ryssens15b}.
The numerical conditions were
identical to those employed in Ref.~\cite{Scamps21}: both $^{197}$Au and $^{238}$U
were represented on a Cartesian mesh with $N_X = N_Y = N_Z = 36$ points in each
direction, spaced equidistantly with $dx=0.8$ fm. Since a complete single-particle
basis on this mesh would require monstrous amounts of memory, we iterated
for each nucleus only the $(N+Z+340)$ single-particle states with lowest
single-particle energy.
The parametrizations of the BSkG and SV families were used as originally published. For all other parametrizations, we modified the pairing terms of the EDF. In the case of the SLy family, no such terms were part of the parameter adjustment at all. The UNEDF family did include such terms but relied on
a numerical representation in terms of harmonic oscillator basis functions,
resulting in different pairing properties that cannot be reproduced by our
coordinate-space representation. In these cases, we employed the following
simple form to introduce `surface-peaked' pairing terms in the EDF:
\begin{align}
E_{\rm pair} &= \sum_{q=p,n} \frac{V_{q}}{4} \int d^3\bold{r}
\, \left[1 - \left(\frac{\rho_0(\bold{r})}{\rho_{\rm sat}}\right) \right]
\tilde{\rho}_q^*(\bold{r}) \tilde{\rho}_q(\bold{r}) \, ,
\end{align}
where $\rho_{\rm sat} = 0.16 $ fm$^{-3}$ and $\rho_0(\bold{r})$ is the isoscalar
density. The definition of the pairing densities $\tilde{\rho}_{q}(\bold{r})$ is
standard in the literature (see for example Ref.~\cite{Ryssens21}): we calculate
them with cutoffs at 5 MeV both above and below the Fermi energy as in Ref.~\cite{Krieger90}. $V_n$ and $V_p$ are
parameters that characterize the overall strength of neutron and proton pairing,
respectively. All SLy-family parametrizations we employ here have similar
effective mass and so we use $V_n = V_p = -1250 $ MeV fm$^{-3}$ for all of
them, following Ref.~\cite{Rigollet99}. For the UNEDF-family, we adjusted the pairing
strengths to roughly reproduce the experimental three-point mass-staggering for
protons and neutrons for $^{238}$U. This resulted in values
of $(V_n, V_p)$ of $(-850,-1250)$, $(-920, -1250)$ and $(-950,-1350)$ MeV fm$^{-3}$
for UNEDF0, UNEDF1 and UNEDF2 respectively. None of the results we report
here depend strongly on the values of these parameters.
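For concreteness, the pairing energy above reduces to a sum over mesh points; the following minimal Python sketch (array-based, with our own function name and the mesh spacing of these calculations) illustrates it for one nucleon species.

```python
import numpy as np

RHO_SAT = 0.16  # saturation density, fm^-3

def pairing_energy(V_q, rho0, rho_tilde_q, dx=0.8):
    """Surface-peaked pairing energy for one nucleon species q, evaluated on
    a cubic mesh with spacing dx in fm; rho0 and rho_tilde_q are 3D arrays
    holding the isoscalar and pairing densities at the mesh points."""
    integrand = (1.0 - rho0 / RHO_SAT) * np.abs(rho_tilde_q)**2
    return (V_q / 4.0) * np.sum(integrand) * dx**3
```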
To save on computational resources, we restricted our simulations to nuclear
configurations invariant under three plane reflections. Imposing these
self-consistent spatial symmetries allowed us to limit the explicit
numerical representation to only one-eighth of all mesh points. We also assumed the
conservation of time-reversal symmetry in nearly all calculations, allowing us
to reduce the computational effort by another factor of two. The sole exception
was the BSkG2 calculation for $^{197}$Au, where we accounted for the full
effect of the odd neutron: the breaking of time-reversal symmetry through the
self-consistent blocking procedure and all so-called `time-odd' terms of the
EDF~\cite{Ryssens22}. The latter are not well-defined for the other
parametrizations, such that we relied on the equal filling approximation to
perform self-consistent blocking calculations for $^{197}$Au
without breaking time-reversal symmetry in all other cases~\cite{Martin08}.
Irrespective of symmetry choices, we used a strategy based on the gradient
algorithm of Ref.~\cite{Bertsch11} to construct the blocked state with minimum total
energy after convergence.
\paragraph{Woods-Saxon fits and supplementary files}
We adjusted the parameters of the WS form to
reproduce the values of the total density $\rho_0(\bold{r})$ at the mesh
points of the EDF calculation. We limited ourselves to five deformation
parameters: $(\beta^{\rm WS}_{20}, \beta^{\rm WS}_{22}, \beta^{\rm WS}_{40},
\beta^{\rm WS}_{42}, \beta^{\rm WS}_{44})$, which led to good fits for both
$^{238}$U and $^{197}$Au. Allowing for the polarisation of the surface
diffuseness as in Ref.~\cite{Scamps13} does not meaningfully change the
extracted deformation parameters but does allow for a better fit. We omitted
this possibility as these degrees of freedom have so far not been studied in
hydrodynamic simulations of heavy ion collisions. A more modest improvement
of the fit for $^{238}$U can be achieved by including the $\ell = 6$ deformation
parameters, but these do not impact the quadrupole and hexadecapole deformations
much for this nucleus.
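As an illustration of the fitted form, a generic deformed WS profile can be written as below. This is a sketch under the assumption that the nuclear surface is deformed by real spherical harmonics; it is not a reproduction of the exact parametrization used in the fits, and the overall normalization is omitted.

```python
import numpy as np
from scipy.special import sph_harm

def ws_density(x, y, z, R_d, a, betas):
    """Generic deformed Woods-Saxon profile (normalization omitted).

    betas maps (l, m) -> beta^WS_{lm}; the surface is assumed to be
    R(theta, phi) = R_d * (1 + sum_lm beta_lm * Re Y_lm)."""
    r = np.sqrt(x**2 + y**2 + z**2)
    polar = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
    azimuth = np.arctan2(y, x)
    # scipy's sph_harm takes (m, l, azimuthal, polar)
    surface = R_d * (1.0 + sum(b * np.real(sph_harm(m, l, azimuth, polar))
                               for (l, m), b in betas.items()))
    return 1.0 / (1.0 + np.exp((r - surface) / a))
```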
A complete set of the multipole moments $\beta_{\ell m}$ and fitted WS deformation
parameters $\beta^{\rm WS}_{\ell m}$ for all Skyrme parametrizations is included in the supplementary
files \texttt{Au197.dat} and \texttt{U238.dat}. The structure of these files
is clarified by Tab.~\ref{tab:parameters}, where we also include as examples
the values obtained for both nuclei with the BSkG2 parametrization~\cite{Ryssens22}.
For convenience, we also report the quadrupole deformation in terms of its
total size $\beta_2$ and the triaxiality angle $\gamma$. These are linked
to the $\beta_{20}$ and $\beta_{22}$ moments
through~\cite{Scamps21}:
\begin{align}
\beta_{2} &= \sqrt{\beta_{20}^2 + 2 \beta_{22}^2} \, , \\
\gamma &= \text{atan} \left( \sqrt{2}\beta_{22}/ \beta_{20}\right)\, .
\end{align}
Analogous conversion relations apply to the WS deformation
parameters, $\beta_2^{\rm WS}$ and $\gamma^{\rm WS}$.
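These relations are easily evaluated; the short sketch below reproduces the BSkG2 values for $^{197}$Au in Tab.~\ref{tab:parameters}.

```python
import numpy as np

def quad_deformation(beta20, beta22):
    """Total quadrupole deformation beta_2 and triaxiality angle gamma (deg)."""
    beta2 = np.sqrt(beta20**2 + 2.0 * beta22**2)
    gamma = np.degrees(np.arctan2(np.sqrt(2.0) * beta22, beta20))
    return beta2, gamma

# BSkG2 values for 197Au from Tab. 1: quad_deformation(0.089, 0.065)
# returns (0.128, 45.9), matching the tabulated beta_2 and gamma.
```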
\begin{table}
\begin{tabular}{lr@{\quad}d{4.3}@{\quad}d{3.2}@{\quad}}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
& Column & \multicolumn{1}{c}{$^{197}$Au} & \multicolumn{1}{c}{$^{238}$U}\\
\hline\noalign{\smallskip}
$R_d$ (fm) & 1 & 6.620 & 7.068 \\
a (fm) & 2 & 0.519 & 0.538 \\
\hline\noalign{\smallskip}
$\beta_{20}$ & 3 & +0.089 &+0.280 \\
$\beta_{22}$ & 4 & +0.065 & 0.000 \\
$\beta_{2}$ & 5 & +0.128 &+0.280 \\
$\gamma$ (deg) & 6 & 45.9 & 0 \\
$\beta_{40}$ & 7 & -0.017 &+0.153 \\
$\beta_{42}$ & 8 & -0.011 & 0.000 \\
$\beta_{44}$ & 9 & -0.010 & 0.000 \\
\hline\noalign{\smallskip}
$\beta^{\rm WS}_{20}$ & 10 & +0.098 &+0.247 \\
$\beta^{\rm WS}_{22}$ & 11 & +0.076 & 0.000 \\
$\beta^{\rm WS}_{2}$ & 12 & 0.145 & 0.247 \\
$\gamma^{\rm WS}$ (deg) & 13 & 47.6 & 0 \\
$\beta^{\rm WS}_{40}$ & 14 & -0.025 &+0.081 \\
$\beta^{\rm WS}_{42}$ & 15 & -0.018 & 0.000 \\
$\beta^{\rm WS}_{44}$ & 16 & -0.018 & 0.000 \\
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\end{tabular}
\caption{ Multipole moments and best-fit WS parameters
$R_d, a$ and $\beta^{\rm WS}_{\ell m}$ for the one-body
densities of $^{197}$Au and $^{238}$U, as obtained with the BSkG2
Skyrme parametrization. The corresponding column numbers in the
supplementary files \texttt{Au197.dat} and \texttt{U238.dat} are
indicated.
}
\label{tab:parameters}
\end{table}
\section{Introduction}
Privacy is the right of individuals to keep personal information to themselves \cite{PP1}. While many systems are built with configurations that enable users to exercise this right, managing privacy is still a difficult problem. On one hand, collaborative systems, such as Online Social Networks and the Internet of Things, contain a vast amount of content that pertains to a single individual, making it difficult, if not impossible, for individuals to attend to each piece of content separately~\cite{AS2}. Recent research on privacy agents shows promising results on how agents can help with privacy, such as detecting privacy violations~\cite{kokciyan2016p}, recommending sharing behavior~\cite{fogues2017sosharp, squicciarini2017tag}, and learning privacy preferences~\cite{kurtan2021assisting,tonge2020image}. An important aspect to consider is \textbf{co-owned} content, which does not belong to a single individual (e.g., medical information) but pertains to multiple people (e.g., a group photo or co-edited document \cite{PT5}). These co-owners of the content can and do have conflicting desires about the usage of the content, leading to what is termed \textbf{multiuser privacy conflicts (MPCs)} \cite{CS10, PT11}.
Various decision-making techniques, such as auctions, negotiation, and argumentation, have been employed to build systems that resolve MPCs. Simply put, each user that participates in these systems is represented by a privacy agent that knows its user's privacy requirements. The agent participates in the decision-making system on behalf of its user. For auction-based systems, this means bidding on its user's behalf; for argumentation-based systems, this corresponds to generating arguments on behalf of its user. Through participation in this system, the agents decide if and how to share co-owned content by resolving conflicts.
Experimental evaluations of these systems yield good performance results. However, it is also known that users have concerns when it comes to using software tools for managing various elements of their privacy~\cite{story2021awareness,jin2022exploring}. Many existing studies of collaborative systems indicate the importance of {\it trust} in making systems usable by individuals~\cite{IOT2,colnago2020informing}. We argue that to realize trust, the privacy agent of a user should satisfy the following properties:
\mypara{Concealment:} The privacy agent will know the privacy constraints of the user, either through elicitation or learning over time. When the agent is interacting with others to resolve conflicts, it should reveal as little as possible about these privacy constraints, since the privacy constraints themselves are private information. Therefore, users would know that their privacy is safe with the agent~\cite{IOT2,IOT8}.
\mypara{Equity:} Different users have different privacy stances, in terms of their motivation and knowledge. While some users would fight not to share a piece of content, others will be indifferent. Contrary to some of the existing work in AI that favors users with certain properties~\cite{bias1,bias2}, we do not want any user to be left behind. Ideally, the privacy agent should take the privacy stance of the user into account and be able to help different types of users as equally as possible; thereby creating equity~\cite{PP1,PP10}.
\mypara{Collaboration:} It is possible that a number of agents that participate in the same conflict resolution have similar privacy concerns or complementary information to support a particular privacy decision~\cite{AS11}. Their agents should be able to collaborate in groups.
\mypara{Explainability:} It is well documented that users often do not trust privacy tools because of misconceptions~\cite{story2021awareness}. One solution for this is to make the tools' workings explicit to users. But, more importantly, if the agent itself can provide explanations as to why it has taken certain actions, then its user can understand the agent and even configure it better for future interactions~\cite{PT2, miller2019explanation}.
Accordingly, this paper proposes a new Privacy Agent for Content Concealment in Argumentation to Reinforce Trust (PACCART). PACCART can conceal its user's privacy requirements at different levels, while still resolving conflicts. By adapting to different privacy understandings of users, PACCART will provide equitable treatment. At the same time, PACCART will enable agents to work together towards a shared desired outcome. Finally, it will help users understand the actions it is taking. To the best of our knowledge, this is the first privacy agent that brings these desirable properties together. We made PACCART openly available\textsuperscript{\ref{footnote1}}.
The rest of this paper is organized as follows: Section \ref{section2} explains the necessary background theory on argumentation-based agreement systems. Section \ref{section3} formalizes the PACCART model. Section \ref{section4} discusses our realization of the model\footnote{\label{footnote1}\href{https://github.com/PACCART/PACCARTpaper}{https://github.com/PACCART/PACCARTpaper}} and our experimental results. Section \ref{section5} discusses the user study and its results. Finally, Section \ref{section6} systematically compares our approach with related work and gives pointers for future directions.
\section{Background} \label{section2}
We advocate that for an agent to exhibit these four properties, it is useful to be able to express the relations between privacy preferences in a semantic manner. Thus, as an underlying agreement system, we opt for argumentation, as opposed to other decision-making mechanisms such as auctions or negotiation. Below, we review how a privacy agent would use argumentation theory and how, by using a dialogical argumentation system, it can resolve privacy disputes.
\subsection{Argumentation Theory}\label{argtheory}
Our agent model makes use of argumentation theory for its reasoning.
We follow the \textbf{structured argumentation} formalism of ASPIC+~\cite{AM12}, but we differentiate between premises and preferences and rename preferences as biases. Thus, an ASPIC+ \textbf{argumentation} or \textbf{dispute} $d=\langle P,R,B,C \rangle$ consists of \textbf{premises} $P = P_o \cup P_n$ (ordinary premises $P_o$ and necessary premises $P_n$), \textbf{rules} $R = R_s \cup R_d$ (strict rules $R_s$ and defeasible rules $R_d$), \textbf{biases} $B = B_p \cup B_r$ (premise biases $B_p$ and rule biases $B_r$) and \textbf{contraries} $C$.
A dispute is held between two opposing agents, \textbf{proponent} $a_p$ and \textbf{opponent} $a_o$. Agents have access to their \textbf{knowledge base} \textit{KB}, which contains premises, rules and contraries. With this content, agents can form \textbf{arguments}. Agents are able to \textbf{attack} each other’s arguments and can \textbf{support} (or \textbf{defend}) their own arguments with subarguments in order to try to win the dispute \cite{AM15}. In some cases an agent is also able to \textbf{forfeit}, giving up on winning the dispute. Arguments can be attacked on their \textbf{weak points}, i.e., any subargument that is either a consequent of a defeasible rule or an ordinary premise. \textbf{Useful} arguments are arguments that, when added to the dispute, successfully attack any of the opponent's current arguments. Acceptability conditions for winning or losing depend on the chosen \textbf{semantics}. Baroni et al. \cite{AM16} offer an overview of different semantics and their meaning, including \textbf{grounded, preferred, complete} and \textbf{stable} semantics.
\subsection{Dispute Protocol}\label{background}
In order for an argumentation agent to be able to hold a dispute with other agents about a \textbf{subject}, it follows a communication protocol. The protocol allows agents to \textbf{extend} the dispute, meaning that they take turns adding arguments from their knowledge base to the dispute in order to either defend or attack the dispute subject.
\begin{algorithm}
\caption{Agent Dispute Extension Protocol}\label{alg:protocol}
\begin{algorithmic}[1]
\Require Agents $A = \{a_p,a_o\}$, each with $KB=\{P, R, C\}$
\Ensure Determine winner of dispute $d$
\State $a \gets a_p$
\While{$d$ is not forfeited}
\If{$a$ can extend $d$}
\State $a$ extends $d$
\If{$a$ is $a_p$}
\State $a \gets a_o$
\Else
\State $a \gets a_p$
\EndIf
\Else
\State $a$ forfeits $d$
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
Argumentation systems like PriArg~\cite{AS1} utilize this kind of extension protocol, as seen in Algorithm \ref{alg:protocol}. According to the extension protocol, if an agent is able to extend the dispute, it does so, by adding any sufficient argument from its knowledge base. Conversely, as soon as an agent is unable to extend the dispute any further, it forfeits the dispute.
The winner of a dispute is determined by evaluating the outcome according to grounded semantics. This way the \textbf{burden of proof} initially lies on the proponent of the dispute, after which agents take turns by extending the dispute until one of them wins. This is done because the agent that initializes the dispute has something to gain by defending the subject.
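To make the turn-taking concrete, the following is a minimal Python sketch of Algorithm \ref{alg:protocol} (PACCART itself is implemented in C\#; the \texttt{select\_extension} and \texttt{extend} interfaces are hypothetical, and we simplify by declaring the non-forfeiting agent the winner, whereas the full protocol evaluates the outcome under grounded semantics):
\begin{verbatim}
# Minimal sketch of the dispute extension protocol.
# select_extension returns a useful argument from the agent's
# knowledge base, or None when the dispute cannot be extended.
def run_dispute(proponent, opponent, dispute):
    current, other = proponent, opponent
    while True:
        argument = current.select_extension(dispute)
        if argument is None:
            return other              # current agent forfeits
        dispute.extend(argument)
        current, other = other, current   # turn passes
\end{verbatim}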
\section{Model} \label{section3}
The PACCART agent consists of a base component, which works similarly to agents in the PriArg system: it communicates with other agents through a dialogical argumentation framework that follows the Dispute Extension Protocol defined in Section \ref{background}. On top of this base component, we introduce four components.
\subsection{Concealment Component}
In the case of argumentation over privacy issues, the information to be concealed consists of all information that a user’s agent can hold in its knowledge base, including those that pertain to the user's privacy preferences. At any time step, we make a distinction between content that is revealed during a dispute and content that is not (yet) revealed.
\begin{definition}[\textbf{Concealed Rules and Premises}] \label{Concealed Rules}
Concealment content sets of Agent $A$'s knowledge base $KB$ in Dispute $d$:
\begin{itemize}
\item A set of Concealed Rules ${R}_c \subseteq {R}$ with properties:
\subitem When Dispute $d$ starts: ${R}_c = {R}$
\subitem When Agent $A$ uses rule $r$ to extend $d$: $R_c\gets R_c \setminus \{r\}$
\item A set of Concealed Premises ${P}_c \subseteq {P}$ with properties:
\subitem When Dispute $d$ starts: ${P}_c = {P}$
\subitem When Agent $A$ uses premise $p$ to extend $d$: $P_c\gets P_c \setminus \{p\}$
\end{itemize}
\end{definition}
We make a distinction between content that is concealed and content that is not, by keeping track of different sets throughout the dispute. At the initialization stage of the dispute, agents have not yet shared any content with each other, which means that all content is still concealed. While the dispute develops, each time an agent shares content with another agent to extend the dispute, that content is revealed and therefore removed from the set of concealed content.
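As an illustration, a minimal Python sketch of Definition \ref{Concealed Rules} is given below. The data layout is assumed, and the final method reflects our reading that the concealment metric used later is the fraction of content still concealed at the end of a dispute:
\begin{verbatim}
# Tracking concealed content: everything starts concealed;
# revealed content is removed from the concealed sets.
class ConcealmentTracker:
    def __init__(self, rules, premises):
        self.total = len(rules) + len(premises)
        self.concealed_rules = set(rules)        # R_c = R
        self.concealed_premises = set(premises)  # P_c = P

    def reveal_rule(self, r):
        self.concealed_rules.discard(r)          # R_c <- R_c \ {r}

    def reveal_premise(self, p):
        self.concealed_premises.discard(p)       # P_c <- P_c \ {p}

    def concealment(self):
        left = (len(self.concealed_rules)
                + len(self.concealed_premises))
        return left / self.total
\end{verbatim}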
We formalize PACCART's concealment component by providing it the ability to adopt a \textbf{privacy behavior}, consisting of three concealing aspects: \textbf{Scope, Division} and \textbf{Dedication}.
\mypara{Scope:} At each point in the dispute, if possible, an agent extends the dispute by adding one or more arguments (Algorithm~\ref{alg:protocol}, Step 4). The number of useful arguments (as defined in Section \ref{argtheory}) that an agent considers adding to the dispute at any point in time is called its scope. An agent without any focused scope adds all available useful arguments at once. An agent with a focused scope carefully selects a smaller set of arguments, and thus gains local control over the amount of added (and therefore revealed) content. The larger the scope of an agent, the more content is added at each step in the dispute.
\mypara{Division:} Not all information is equally important. To capture this, we split the sets of content into \textbf{set-families} \cite{brualdi1977introductory}, i.e., subgroups of conceal-worthy content that can be ordered according to the agent's preferences. Therefore, based on the original knowledge base $KB = \{P, R, C\}$, we propose an \textbf{ordered subdivided knowledge base ({\textit{OSKB}})}, which includes the following ordered tuples of set-families:
\begin{itemize}
\item An ordered tuple of premises $O_P = \langle P_1,\dots,P_n\rangle$
\item An ordered tuple of rules $O_R = \langle R_1,\dots,R_n\rangle$
\end{itemize}
The relations between these set-families $F_X$ (the family of sets underlying the tuple $O_X$) and the sets $X$, with $X \in \{P, R\}$, satisfy the following properties:
\begin{itemize}
\item $\bigcup F_X = X$
\item $\bigcap F_X = \emptyset$
\item $\forall Y, Z \in F_X\ \forall y\in Y\ \forall z \in Z\ (y = z \rightarrow Y = Z)$
\end{itemize}
With the introduced \textit{OSKB}, an agent can order its content based on its concealment preferences. We can therefore treat these two ordered tuples together as one totally ordered knowledge base, subdivided into what we call \textbf{dedication levels}, as follows:
$L = \langle\{O_{P_1}, O_{R_1}\}, \ldots, \{O_{P_n}, O_{R_n}\}\rangle$. Each level contains one or more premises and rules. The first level $L_1$ contains the content at the top of the ordering of each of the \textit{OSKB} tuples, i.e., the content that the agent is least concerned about revealing. The last level $L_n$ contains the content at the bottom of the ordering, i.e., the content that the agent considers most important to conceal and is only willing to reveal when fully committed to winning the dispute. The \textbf{exhaustion} of an agent's division aspect indicates the number of ordered subdivisions the agent makes: the more exhaustive the subdivision, the more levels the agent splits its \textit{OSKB} into.
An example of four different \textit{OSKB} divisions is shown in Figure \ref{dividing}. An agent can make no subdivision of its \textit{OSKB} (\ref{l1}), adding all its arguments to Level 1; it can divide its \textit{OSKB} in half, with two levels (\ref{l2}); or it can divide its \textit{OSKB} into all separate arguments, which yields four levels in this case (\ref{l3}). Note that with this approach, Figure \ref{dividing} shows an example of a level with just one premise ('j'), as a single premise can suffice to form an argument. A final approach is for an agent to divide its \textit{OSKB} by subdividing all of its content (all rules and premises) over different levels, yielding ten levels in this case (\ref{l4}).
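These four strategies can be sketched in Python, assuming an \textit{OSKB} represented simply as a list of arguments, each of which is a list of content pieces (the ordering within and between levels is left out for brevity):
\begin{verbatim}
# Sketch of the four division strategies of Figure 1.
def divide(arguments, strategy):
    flat = [c for arg in arguments for c in arg]
    if strategy == "None":        # a single level
        return [flat]
    if strategy == "HalfArgs":    # two levels of arguments
        mid = len(arguments) // 2
        return [[c for arg in arguments[:mid] for c in arg],
                [c for arg in arguments[mid:] for c in arg]]
    if strategy == "AllArgs":     # one level per argument
        return [list(arg) for arg in arguments]
    if strategy == "AllContent":  # one level per content piece
        return [[c] for c in flat]
    raise ValueError(strategy)
\end{verbatim}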
\begin{figure}
\centering
\subfloat[\n{None}\label{l1}]{\includegraphics[width=.225\textwidth]{PaperGraphs/levels1.jpg}}
\quad
\subfloat[\n{Half Arguments} \label{l2}]{\includegraphics[width=.225\textwidth]{PaperGraphs/levels2.jpg}}
\medskip
\subfloat[\n{All Arguments} \label{l3}]{\includegraphics[width=.225\textwidth]{PaperGraphs/levels3.jpg}}
\quad
\subfloat[\n{All Content} \label{l4}]{\includegraphics[width=.225\textwidth]{PaperGraphs/levels4.jpg}}
\caption{Examples of different approaches of the PACCART agent's division aspect. Four arguments consisting of ten pieces of content are divided up into different levels. Solid and dashed lines are for strict rules $R_s$ and defeasible rules $R_d$, respectively.}
\label{dividing}
\Description{This figure shows four examples of different possible divisions of a knowledge base consisting of four arguments of various lengths. In total the arguments consist of ten pieces of content. Pieces of content are marked with letters a through j. In the first example the knowledge base is undivided and therefore it is shown that all content belongs to Level 1. The second example called Half Arguments is shown to be split into two Levels. The third example called All Arguments divides the knowledge base up into four sections, each for one argument. Finally, the example called All Content divides all pieces up into their own level. Note that the levels are assigned randomly, which means that the first piece of content is in Level six and the second in Level three and so on.}
\end{figure}
{\noindent \textbf{Dedication:}} Agents that are able to divide their content into levels can use this to their advantage. Such agents will initially only provide arguments from the first level of their knowledge base. When all arguments in the first level have been depleted, the agent can either drop to a new level, thereby making a further privacy concession, or forfeit the dispute. This gives agents the ability to weigh their decision to further commit to the argumentation. The agent's \textbf{willingness} to drop determines its dedication to continue the dispute. The more willing an agent is to drop dedication levels, the more it will use, and therefore reveal, the contents of its \textit{OSKB}. Whether to commit further to the dispute is decided by a willingness threshold $\theta X$ with $X \in [0,100]$: an agent with $\theta$75 has a 75\% chance of dropping at each level. The example agent of Figure \ref{l3} thus has a $0.75^3=42.2\%$ chance to use the content of its final level (as it must drop three times to reach its fourth and final level of content), whereas the example agent of Figure \ref{l4} has a $0.75^9=7.5\%$ chance to fully commit its \textit{OSKB}.
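A short sketch of the dedication decision and of the resulting commitment probabilities, matching the numbers above:
\begin{verbatim}
import random

# An agent with threshold theta drops to the next level with
# probability theta/100 whenever its current level is depleted.
def willing_to_drop(theta):
    return random.uniform(0, 100) < theta

# Probability of reaching level n starting from level 1.
def full_commit_probability(theta, num_levels):
    return (theta / 100.0) ** (num_levels - 1)

# full_commit_probability(75, 4)  -> 0.75**3 ~ 0.422
# full_commit_probability(75, 10) -> 0.75**9 ~ 0.075
\end{verbatim}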
Any specific combination of all three concealing aspects is called an agent's \textbf{privacy type}. These privacy behaviors are in place for agents to gain additional control over their content concealment during disputes, as well as influence their win rate.
\subsection{Equity Component}
Recall that we want our PACCART agent to be able to help different types of users, to deliver on the equity aspect. For users' privacy stances, we follow Dupree et al. \cite{PP2}, who categorize privacy stances along two dimensions. We define a user $u$ with \textbf{knowledge} $k \in \{low, medium, high\}$ and \textbf{motivation} $m \in \{low, medium, high\}$. The degree of knowledge indicates how aware a user is of their privacy and how much general knowledge they have of privacy matters. The degree of motivation indicates the effort a user expends to protect their privacy and their willingness to act on privacy matters. Each system user falls into one of five categories, also known as \textbf{privacy types}:
\begin{itemize}
\item \textbf{Fundamentalists}: high knowledge, high motivation
\item\textbf{Lazy Experts}: high knowledge, low motivation
\item\textbf{Technicians}: medium knowledge, high motivation
\item\textbf{Amateurs}: medium knowledge, medium motivation
\item\textbf{Marginally Concerned}: low knowledge, low motivation
\end{itemize}
Dupree et al. determine the rates at which users fall into these categories: 3\% of users are Fundamentalists, 22\% Lazy Experts, 18\% Technicians, 34\% Amateurs and 23\% Marginally Concerned. This is comparable to the categorical distributions of privacy types in earlier studies \cite{ackerman1999privacy,taylor2003most,sheehan2002toward,consolvo2005location,PP8,PP2}.
\textbf{Indifferent} agents are agents that are not personalized and therefore have an unfocused scope and make no distinction between the importance of content in their \textit{KB}.
In order for PACCART to be an equitable agent, it should adhere to the following equity properties, which are based on earlier research on equity \cite{PP1,PP10}:
\begin{itemize}
\item[\textbf{EP1:}]
The knowledge and motivation of a user is considered and utilized to the fullest extent by their personalized agent.
\item[\textbf{EP2:}] A personalized agent outperforms an indifferent agent.
\item[\textbf{EP3:}] There are no performance outliers between personalized agents; no personalized agent heavily overperforms or underperforms compared to others.
\end{itemize}
EP1 is important because the strengths of the user should be taken into account by their agent. The privacy stance of a user should not be ignored, as this would be unfair towards users that are heavily engaged in protecting their privacy. Along the same lines, EP2 is important because agents that are tailored towards a user should not perform worse than an agnostic, basic agent. Providing personalization should be beneficial for users, not disadvantageous. EP3 is important because, in order to reach fair outcomes, the privacy stance of a user should not exorbitantly influence the performance of their agent. It would, for example, be unfair towards unknowledgeable users if their agents underperformed by design.
In order to meet these properties, we introduce a mapping between users and agents. This way, both knowledge and motivation are used to determine the personalized agent's privacy type. We determine a fitting mapping from users $u$ to their agents $a$ such that $u \rightarrow a_{\text{scope(shortest)}}$, $u_{\text{knowledge}}\rightarrow a_{\text{division}}$, and $u_{\text{motivation}}\rightarrow -a_{\text{dedication}}$.
First, we assign all personalized agents to have a small scope. This is because a small scope is beneficial for all users, independent of privacy stance. When a user has a high privacy stance, they can let their agent subdivide its content in such a way that each piece of content is thoroughly protected. This would mean that the agent already has a small amount of content to choose from, so for a high privacy user the scope has only a little positive impact. However, in order to also protect users who do not have a lot of knowledge or motivation to bring to the dispute, a small scope is also the best fit in order to protect as much content as possible.
Secondly, we map a user's knowledge to their agent's division, because the degree of user knowledge should correspond to the number of useful subdivisions of their agent's \textit{OSKB} levels. This means that the higher the user's knowledge, the more exhaustive the agent's division. Someone with high knowledge can benefit from an agent with a high capability of dividing its knowledge base content, as this allows them to provide their agent with their preferences in detail. This is in line with EP1. Similarly, mapping low knowledge to a coarse \textit{OSKB} division is also useful, because users with low knowledge have few relevant preference divisions to make in their agent’s knowledge base.
Thirdly, we map a user's motivation inversely to their agent’s dedication, because the motivation of a user should correspond to the dedication of their agent to conceal content (in favor of winning disputes). This means that the higher the user's motivation, the lower the agent's dedication. Users that are highly motivated to protect their data would rather have their agent drop as few levels as possible, even if this requires taking (social) losses. Similarly, users that prefer not to act on privacy matters would want their agents to perform well when it comes to winning disputes, but would not mind agents revealing information to do so. This is also in line with EP1.
This mapping results in five personalized agents, one representative for each user type, as noted in Table \ref{nonmps}. This table also includes an indifferent agent.
\begin{table}
\caption{All three concealing aspects of indifferent PACCART agent and personalized PACCART agents that are matched with representative agents for different user privacy types.}
\begin{tabular}{cccc}
\toprule
\textbf{Privacy Type}& \textbf{Scope} & \textbf{Division} & \textbf{Dedication} \\ \midrule
\textbf{\textit{Indifferent}} & All & None & $\theta100$ \\
\textbf{Fundamentalist} & Shortest & AllContent & $\theta25$ \\
\textbf{Technician } & Shortest & AllArgs & $\theta25$ \\
\textbf{Amateur} & Shortest & AllArgs & $\theta50 $ \\
\textbf{Lazy Expert} & Shortest & AllContent & $\theta75 $ \\
\textbf{M.Concerned} & Shortest & HalfArgs & $\theta75 $ \\
\bottomrule
\end{tabular}
\label{nonmps}
\end{table}
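In code, the mapping of Table \ref{nonmps} amounts to a simple lookup; a sketch (the configuration keys are ours):
\begin{verbatim}
# User privacy type -> (scope, division, dedication threshold).
AGENT_CONFIG = {
    "Indifferent":         ("All",      "None",       100),
    "Fundamentalist":      ("Shortest", "AllContent",  25),
    "Technician":          ("Shortest", "AllArgs",     25),
    "Amateur":             ("Shortest", "AllArgs",     50),
    "LazyExpert":          ("Shortest", "AllContent",  75),
    "MarginallyConcerned": ("Shortest", "HalfArgs",    75),
}

def personalize(privacy_type):
    scope, division, theta = AGENT_CONFIG[privacy_type]
    return {"scope": scope, "division": division, "theta": theta}
\end{verbatim}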
\subsection{Additional Usability Components}
In addition to the Concealment and Equity components, two usability measures are taken. A Collaboration component is introduced to allow both sides of the dispute to be represented by multiple agents. This is achieved by introducing the notion of teams, such that the set of agents $A$ in the protocol now consists of $A = \{T_p, T_o\}$, with a proponent team $T_p=\{a_{p1},\ldots,a_{pn}\}$ and an opponent team $T_o =\{a_{o1},\ldots,a_{on}\}$. To extend a dispute, each team of PACCART agents continuously selects one of its agents to extend it. A team forfeits when none of its agents can extend the dispute any further. This component allows multiple PACCART agents to cooperate on the common goal of defending or attacking a privacy-related subject: agents can add content from their own \textit{OSKB} to the dispute when other agents in their team fail to do so.
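A minimal sketch of the team extension rule, with the same hypothetical interfaces as in the earlier sketches:
\begin{verbatim}
# Collaboration: a team extends the dispute through any member
# that still can, and forfeits only when no member can extend.
def team_extend(team, dispute):
    for agent in team:
        argument = agent.select_extension(dispute)
        if argument is not None:
            dispute.extend(argument)
            return True
    return False   # team forfeits
\end{verbatim}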
Furthermore, an Explainability component is introduced to give users insight into the workings of their agent. The semantic nature of PACCART allows us to produce both textual and visual output. PACCART can provide textual output by considering outcomes and providing feedback to the user. Based on this, it is able to give different kinds of feedback, at varying levels of detail. It can notify users with a summary (e.g., \textit{"I have won 56\% of today's disputes and managed to conceal 73\% of your content"}) or it can give detailed advice on possible actions to be taken to improve its performance (e.g., providing its user with a list of weak arguments to remove or strengthen). Furthermore, PACCART can provide visual output by showing its user images of the Structured Argumentation Framework \cite{AM12} of final disputes. This gives users a visual overview of (counter)arguments and possible weak points in their content. This component allows users of PACCART to better understand its inner workings and performance.
\section{Experimental Results} \label{section4}
The PACCART agent and the experimental setup are implemented as a C\# program. For the sake of reproducibility, we make this program and experiments open source\textsuperscript{\ref{footnote1}}.
\subsection{Dataset Generation}
We implement a system that generates datasets of disputes according to four parameters. The \textbf{disputeAmount} parameter indicates the number of generated unique disputes that adhere to the other parameters. A higher input value yields a larger set of disputes, which is therefore less prone to outliers. The \textbf{disputeSize} parameter controls the number of arguments that the dispute can contain. A higher input value yields larger disputes with more content. The \textbf{maxArgumentSize} parameter dictates the maximum number of subarguments that each argument can consist of. A higher input value yields larger arguments with more content and therefore more attackable weak points. Finally, \textbf{maxBranches} controls the maximum number of attacks that each weak point can have, indicating a branching choice in the dispute. A higher input value yields more options for both agents.
By tuning these parameters, we are able to generate dispute datasets of various shapes and sizes, which makes for exhaustive possibilities for testing functionalities of PACCART.
After preliminary analysis of variables, we generate a dispute dataset based on the default parameter settings (disputeAmount = 200, disputeSize = 20, maxArgumentSize = 10, maxBranches = 2).
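For reference, the default configuration can be summarized as follows (the generator itself is part of the open-source C\# program; the dictionary layout is ours):
\begin{verbatim}
# Default dataset generation parameters used in the experiments.
DEFAULTS = {
    "disputeAmount":   200,  # number of unique disputes
    "disputeSize":      20,  # arguments a dispute can contain
    "maxArgumentSize":  10,  # max subarguments per argument
    "maxBranches":       2,  # max attacks per weak point
}
\end{verbatim}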
\subsection{Experiment 1: Effect of Privacy Behaviors}
\subsubsection{Setting}
The goal of the first experiment is to test the performance of PACCART agents. Agent performance is evaluated on two metrics, average concealment $C_{avg}$ and average win rate $W_{avg}$. We hypothesize the following:
\begin{itemize}
\item[\textbf{H1:}] A smaller scope leads to both increased concealment and increased win rate.
\item[\textbf{H2:}] More exhaustive division leads to increased concealment and decreased win rate.
\item[\textbf{H3:}] A higher dedication leads to decreased concealment and increased win rate.
\end{itemize}
We determine four or five conditions for each of the three privacy behavior aspects, to test the range of PACCART's concealing behaviors. For the scope, we include selecting the \n{Shortest} or \n{Longest} arguments, as well as a \n{Random} argument or \n{All} possible arguments. For the division, we follow the examples of Figure \ref{dividing} and include conditions where \n{None} of the content is split, where the \textit{OSKB} is split into two groups of arguments (\n{HalfArgs}), into all separate levels of arguments (\n{AllArgs}), or into a subdivision where each level contains a single piece of content (\n{AllContent}). The dedication conditions consist of an increasing threshold $\theta$, with $\theta \in \{0, 25, 50, 75, 100\}$, that should be met in order to drop to a new level. These conditions yield 80 possible privacy types. Each of these 80 predetermined agents is set up against all other agents, and simulations are run on the 200 disputes of our dataset. This means that the experiment is run on 16,000 disputes for each of the 80 agent set-ups, totaling 1,280,000 simulated disputes. For each of the disputes, both agents are evaluated as proponent as well as opponent of the dispute, to ensure equal chances of winning.
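The bookkeeping behind these figures ($4 \times 4 \times 5 = 80$ privacy types, $80 \times 80 \times 200 = 1{,}280{,}000$ disputes) can be reproduced with a few lines of Python:
\begin{verbatim}
from itertools import product

scopes    = ["Shortest", "Longest", "Random", "All"]
divisions = ["None", "HalfArgs", "AllArgs", "AllContent"]
thetas    = [0, 25, 50, 75, 100]

privacy_types = list(product(scopes, divisions, thetas))
assert len(privacy_types) == 80
assert len(privacy_types) ** 2 * 200 == 1280000
\end{verbatim}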
\subsubsection{Results}
Figures \ref{Graph:Concealment3a} and \ref{Graph:Concealment3b} depict the performance of the 80 different agent privacy behavior types, across all three concealing aspects.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{FinalGraphs/a2.png}
\caption{Average Concealment $C_{avg}$ results for all PACCART privacy types.}
\label{Graph:Concealment3a}
\Description{Grouped bar graph showing the concealment results. The ranges are typically between 0.55 and 0.8, with the exception for the theta 0 results, which ranges between 0.55 and 1. The theta results for None division are all the same. For the other division types, an upward trend is shown.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{FinalGraphs/a1.png}
\caption{Average Win Rate $W_{avg}$ results for all PACCART privacy types.}
\label{Graph:Concealment3b}
\Description{Grouped bar graph showing the win rate results. The ranges are typically between 0.3 and 0.7, with the exception for the theta 0 results, which ranges between 0.1 and 0.7. The theta results for None division are all the same. For the other division types, a downward trend is shown.}
\end{figure}
{\noindent \textbf{Scope:}} We observe from Figures \ref{Graph:Concealment3a} and \ref{Graph:Concealment3b} that the scope of an agent has a significant effect on its performance. Both the average win rate $W_{avg}$ and average concealment $C_{avg}$ increase with a smaller scope. We conclude that a smaller scope has a strictly positive impact on both metrics. This confirms hypothesis H1.
\mypara{Division:} All results for the \textbf{None} division are equal, independent of dropping willingness. This means that not dividing the \textit{OSKB} negates the effect of the agent's dedication. This is an expected outcome: since no division of the knowledge base is made, there are no levels for the agent to drop between, however willing it may be. Beyond this behavior, with more exhaustive division an upward trend is noticeable in all cases for average concealment, as well as a downward trend in all cases for win rate. This confirms hypothesis H2.
\mypara{Dedication:} When looking at the dedication aspect, we observe an upward trend in all cases for average concealment $C_{avg}$, as well as a downward trend in all cases for win rate $W_{avg}$, as willingness decreases. This is similar to the trend for the division aspect of the privacy behavior, and confirms hypothesis H3. Furthermore, Figure \ref{Graph:Concealment3b} shows a significant drop in win rate from $\theta$25 to $\theta$0, while the corresponding improvement in concealment is disproportionately small. This shows that it is beneficial for an agent to be at least somewhat willing to commit to the dispute.
Based on these results, we conclude the following observation:
\begin{observation}\label{Observation 3}
\textit{PACCART's concealment component allows users to keep information private, while also giving them the choice of a trade-off between winning disputes and further protection of information.}
\end{observation}
\subsection{Experiment 2: Effect of User-Agent Mapping in Realistic Setting}
\subsubsection{Setting}
The goal of the second experiment is to evaluate the mappings between agents and users by simulating disputes for each personalized agent in a realistic setting. The results of this mapping will determine whether EP2 and EP3 are met, which means that PACCART is an equitable agent. Therefore, based on this mapping, we further hypothesize:
\begin{itemize}
\item[\textbf{H4:}] Equity property EP2 is met under a mapping where personalized agents are assigned the smallest possible scope.
\item[\textbf{H5:}] Equity property EP3 is met under a mapping where personalized agents are assigned a fitting trade-off between division and dedication.
\end{itemize}
We create a set of opponents according to the distribution of a real-life user population as given by Dupree et al. This opponent set contains three Fundamentalist agents, 22 Lazy Expert agents, 18 Technician agents, 34 Amateur agents and 23 Marginally Concerned agents. We call this set of 100 agents the \textbf{Model Population Set (MPS)}. The MPS is in place because, in a practical scenario, an MPC is less likely to occur between Fundamentalists' agents than between Marginally Concerned users' agents\footnote{An additional experiment is performed to evaluate the MPS, placing all agents in a non-distributed setting, which yielded similar results.}.
This means that six agents (one indifferent agent and all five personalized agents) each compete against the 100 personalized agents of the MPS, and simulations are run on the 200 disputes of the dispute dataset. Overall, the experiment is run on 20,000 disputes for each of the six agent set-ups. Furthermore, agents are again tested twice for all disputes, both as proponent and opponent of the subject, to ensure equal chances of winning.
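A sketch of the MPS construction:
\begin{verbatim}
# 100 opponents drawn according to the privacy-type
# distribution reported by Dupree et al.
MPS_DISTRIBUTION = {"Fundamentalist": 3, "LazyExpert": 22,
                    "Technician": 18, "Amateur": 34,
                    "MarginallyConcerned": 23}

mps = [ptype for ptype, n in MPS_DISTRIBUTION.items()
       for _ in range(n)]
assert len(mps) == 100
\end{verbatim}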
\subsubsection{Results}
The results of the second experiment can be seen in Figure \ref{graphs equity2}. Again, performance is measured by concealment $C_{avg}$ and win rate $W_{avg}$. As shown in Figure~\ref{graphs equity2}, the indifferent agent performs much worse than the personalized agents on both metrics (only 0.185 for win rate and 0.660 for concealment). This confirms hypothesis H4.
\begin{figure}[H]
\includegraphics[width=\linewidth]{FinalGraphs/b.png}
\caption{Average win rate $W_{avg}$ and Average Concealment $C_{avg}$ for indifferent agent and personalized agents in MPS. Averages between $W_{avg}$ and $C_{avg}$ are indicated with a line.}
\label{graphs equity2} \Description{Grouped bar graph showing the results for all six PACCART agents. The win rate ranges for personalized agents between 0.4 and 0.6. The concealment ranges for personalized agents between 0.9 and 0.7. An average line is shown to be between 0.6 and 0.7 for all personalized agents. Results of an indifferent agent is shown as a side by side comparison, with significantly lower performance.}
\end{figure}
Furthermore, the averages of all personalized agents are in the range between 0.6 and 0.7. This means that although some personalized agents are better at winning or concealing, the overall performance leads to an equitable situation where no users are victimized by the agent's workings. This confirms hypothesis H5.
It is worth noting that an interesting trend occurs between personalized agents, where the Fundamentalist representative’s agent (with the highest privacy stance) wins the least and conceals the most, while the Marginally Concerned representative’s agent wins the most and conceals the least. This trade-off shows how the different privacy stances influence the results.
Based on these results, we conclude the following observation:
\begin{observation}\label{Observation 5}
\textit{PACCART's equity component allows for a well-matched personalization for users of various privacy stances. While personalized PACCART agents overall perform relatively well, a consistent trade-off between win rate and concealment shows that no user is disadvantaged.}
\end{observation}
\section{User Study} \label{section5}
We further conduct a user study to understand what components of PACCART lead to user trust.
\subsection{Setting}
We design a survey in two parts. The first part of the survey has questions on the privacy stance of participants, in order to assess their privacy type. We deliberately use existing questions from the literature to ensure compatibility: three questions used by Westin et al. (e.g., \textit{"How much do you agree with the statement `Most businesses handle the personal information they collect about consumers in a proper and confidential way.'?"})~\cite{PP3} to determine the knowledge of participants on privacy, and ten questions on statements about privacy from the study of Dupree et al. to determine the motivation of participants on privacy (e.g., \textit{"How strongly do you identify yourself with the statement `I would rather choose being social over privacy.'?"}). As validation and to mitigate response bias, we also ask participants directly to self-assess their own knowledge and motivation (e.g., \textit{"How much do you know about digital privacy issues?"}). These questions are all answered on a Likert scale. The full questionnaire is also made openly available\textsuperscript{\ref{footnote1}}.
The second part of the survey has questions on the various components of PACCART as a personal assistant. This part starts with an example scenario. Then a set of questions follows in which participants are asked to rate their perceived trust of such personal assistants on a Likert scale (1 = Strongly Distrust, 5 = Strongly Trust). The first question is on the participants’ initial thoughts of trust in the PACCART base component (an explanation followed by \textit{"How much would you trust to use such a privacy assistant?"}). Then, each PACCART component is explained separately and addressed with a question. Afterwards, the participants are asked to rate the agent with all components combined (the base component with all four additional components). Finally, the participants are asked to reconsider their thoughts on the base component. This gives the participants a chance to reflect on their initial thoughts.
The survey is distributed through Qualtrics, an online, secure cloud-based, survey tool. Data is automatically and anonymously recorded through Qualtrics, in accordance with GDPR requirements. The survey is preceded by filling out a consent form. To ensure correctness and clarity, we first perform a small pilot study. Afterwards, the survey is distributed online for a user study. The first part of the survey is used to gather participants for a final interview study, in which we collect opinions on the agent by participants of various privacy stances.
\subsection{Results}
\subsubsection{Pilot Study}
Data and feedback were collected from three participants in the pilot study. Each of these participants was categorized as a different privacy type: one Lazy Expert, one Technician and one Amateur. They had no trouble filling out the survey and found the Qualtrics interface to be non-distracting. However, some feedback was given on the wording of questions and answer options. We updated the wording of the second part accordingly to avoid ambiguity and then began the actual study.
\subsubsection{User Study}
Data was collected from $117$ voluntary participants in the user study. Based on validation questions and completion requirements, $12$ survey responses are filtered out. Out of the remaining $105$ participants, eight participants self-assessed as Fundamentalists, $20$ participants as Lazy Experts, $22$ as Technicians, $31$ as Amateurs and $24$ as Marginally Concerned users. This is in line with the distributions by Dupree et al.~\cite{PP2}.
We report the mean (M) and standard deviation (SD) of the results, as well as significance through t-tests (\textit{P}). The results indicate that the initial consideration of the PACCART base component is fairly neutral (M = 2.857, SD = 1.023), slightly leaning towards distrust. The trust ratings given by participants are higher than the initial consideration for both Concealment (M = 2.943, SD = 0.979) and Equity (M = 3.171, SD = 1.069). There is a significantly ($P<.001$) positive increase of trust in the combined agent (M = 3.467, SD = 0.974) compared to the initial consideration of the base component. Even more so, when asked to reevaluate the trustworthiness of the base component, the average trust rating significantly ($P<.001$) drops (M = 2.362, SD = 0.982) compared to the combined agent. These results strongly indicate that overall, the principle of PACCART and its components increases the indicated trust of users.
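As a sketch, such a comparison can be computed with a paired $t$-test, since each participant rated both the base component and the combined agent (whether the original analysis paired the samples is our assumption):
\begin{verbatim}
from scipy import stats

# ratings_base / ratings_combined: per-participant Likert
# scores for the base component and the combined agent.
def compare(ratings_base, ratings_combined):
    t, p = stats.ttest_rel(ratings_base, ratings_combined)
    return t, p   # the study reports p < .001 for this contrast
\end{verbatim}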
\begin{figure}[H] \centering
\includegraphics[width=\linewidth]{FinalGraphs/c.png}
\caption{Average survey study ratings of participant trust on a Likert scale (1 = Strongly Distrust, 2 = Distrust, 3 = Neutral, 4 = Trust, 5 = Strongly Trust), divided by privacy type.}
\label{Graph:survey}
\Description{Grouped bar graph showing the results for all five participant privacy types. Typically, the Fundamentalist scores are much lower than the other scores, scores ranging between 1 and 2.5 whereas for example marginally concerned results are typically shown to be around 3.}
\end{figure}
The results are split by privacy type category, as shown in Figure \ref{Graph:survey}. The graph shows the average trust ratings by users with different privacy types. From this graph, we can observe the effect of different privacy stances on agent trust scores. These results show that the lower the privacy stance, the higher the overall trust in the agent. Fundamentalist participants' highest mean trust rating is 2.130, whereas Marginally Concerned participants’ lowest mean trust rating is 2.727. This is in line with our expectations about the privacy types and therefore an indication that the privacy stance assessment part of the survey works as intended.
A further noteworthy observation is that for all of the individual privacy types, the reconsideration is rated lower than the initial thoughts on the base component. This indicates that, after having read an explanation of what possible components could improve upon the base, participants believe the base component to be less trustworthy, independently of their privacy type. When comparing the base component with the total combined agent, trust significantly increases for all user types ($P < .001$) except for Fundamentalists. While the results do indicate an increase of trust for Fundamentalists, the difference is not significant ($P= .18$), which is expected given the naturally low occurrence of users with this high privacy stance. These results strongly indicate that overall, the principle of PACCART and its components increases the indicated trust of users of all privacy stances.
\subsubsection{Interview Study}
Data was collected from eight participants in the interview study. To ensure a well-balanced feedback from the interview participants, a distribution of participants with varying stances is selected. The participants consisted of one Fundamentalist (F1), two Lazy Experts (L1, L2), one Technician (T1), two Amateurs (A1, A2) and two Marginally Concerned users (M1, M2).
The results from the interview study follow the same trend as the results from the user study. The Marginally Concerned participant M2 remarked that they would trust to use PACCART because, without the agent, they would have less control over protecting their privacy. T1 and L1 indicated that the agent's concealing behavior greatly increased their trust, as it would increase their control over their privacy. F1 and L1 responded more warily, and indicated that trusting such an agent would highly depend on the trustworthiness of its producers, and that their opinion of the agent would depend on its reception by other people within their belief system. A2 indicated that an equitable agent would increase their trust; however, they would find it difficult to understand whether the personalization would be correct for them. This points us to possible future work in which agents learn from their user.
Furthermore, L1 indicated that the possibility of collaboration considerably increased their trust, because they could divide their information across multiple agents within different devices. L2 and A2 indicated that feedback from the agent increased trust, and would also heighten their awareness of their privacy. Finally, many participants (L2, T1, A1) indicated that they would trust to use the fully combined agent, especially if through future additions they would be able to improve its personalized concealing behavior by using the provided feedback to fine-tune their system.
\balance
\section{Discussion} \label{section6}
We first evaluate our approach with respect to three leading approaches that provide privacy agents for MPCs and then discuss future directions in which our work can improve.
\subsection{Comparison}
\begin{table}\caption{Comparison of privacy criteria between approaches.}\label{comparison}
\begin{tabular}{ccccc}
\toprule
Approach & CON & EQU & COL & EXP \\
\midrule
PACCART & \checkmark & \checkmark & \checkmark & \checkmark \\
PANOLA & \checkmark & \checkmark & \ding{55} & \ding{55} \\
ELVIRA & \ding{55} & \ding{55} & \checkmark & \checkmark \\
Filipczuk et al. & \ding{55} & \checkmark & \checkmark & \ding{55} \\ \bottomrule
\end{tabular}
\end{table}
PANOLA \cite{PP1} is an agent that participates in an auction system for privacy. It incorporates privacy types for personalization, and can learn to bid correctly in order to optimize its privacy preservation. ELVIRA \cite{mosca2021elvira} is a practical reasoning agent, designed for collaborative resolutions of MPCs. ELVIRA is both value- and utility-driven, and is able to produce an explanation of its process. The system by Filipczuk et al. \cite{filipczuk2022automated} is a multi-issue negotiation framework, designed to learn from its users' preferences. This system allows users to focus on the privacy issues that they find important, by including the users in the outcome decision loop.
We perform a comparison based on the earlier defined desirable properties for trustworthy assistive MPC approaches: concealment (CON), equity (EQU), collaboration (COL) and explainability (EXP). This comparison is summarized in Table \ref{comparison}.
\mypara{Concealment.} PANOLA conceals the privacy constraints, as it operates on an auction system and only provides bids; there is no dialogue that could reveal privacy constraints. The approach by Filipczuk et al. uses negotiation to achieve a desired MPC solution and places no particular emphasis on concealment: the formulation of offers can easily reveal privacy constraints of the user. ELVIRA works under the epistemic assumption that agents share the same knowledge, where the uploader agent resolves the conflict for all, so concealment is not possible. PACCART shares content through argumentation, but with its concealing behavior it controls which content to share with opposing agents.
\mypara{Equity.} ELVIRA is role-agnostic and thus eliminates the possibility of agents' acquiring advantages through roles. However, it does not provide any explicit treatment to show that the proposed agent helps different types of users well. Filipczuk et al. provide personalization through use of privacy types, which could lead to equity. Both PACCART and PANOLA specifically tailor to different user privacy types and demonstrate how equity is achieved.
\mypara{Collaboration.} PANOLA runs on an auction, on which agents are meant to participate independently. Thus, it does not provide collaboration in groups of agents.
ELVIRA resolves conflicts in a centralized manner. Since all information is shared, the groups are inherent in the system. Filipczuk et al. do not treat collaboration separately, but their use of negotiation could cater for collaboration among groups of agents. PACCART enables agents to form teams in order to compete against others in the system.
\mypara{Explainability.} Whereas auctions and negotiations are held with bids and offers that can be based on weights and scores, argumentation and reasoning embed justification and meaning in their arguments. Therefore, the semantic nature of PACCART and ELVIRA allows these agents to report back to their users and explain their workings. Neither PANOLA nor the approach of Filipczuk et al. includes feedback to the user.
\subsection{Conclusion \& Future Work}
We introduced PACCART, which helps users preserve privacy by enabling automated privacy argumentation.
PACCART aims to induce trust by increasing
content concealment, providing equitable personalizations, enabling multiagent team-based collaboration and explaining its actions through feedback. The agent is designed to be general and is made publicly available as an open-source program together with the dispute dataset generation system, so that they can be used for research as well as in practical applications, such as team collaboration tools (e.g., MS Teams) where co-owned data is shared abundantly and privacy disputes need to be resolved.
Future research could further build upon closing the feedback loop between users and the agent to further increase trust. When users are prompted that their agent lost a dispute because of a lack of arguments, they could respond by taking action to improve the agent's fit to its user. Furthermore, introducing mutual feedback opens new possibilities for machine learning approaches. Currently, there exists a fixed mapping between users and their personalized agents; instead, the agent could learn the preferences of its user. Weights could be given to the importance of dedication to win certain disputes, or of concealing specific levels of content. The inclusion of reinforcement learning could be a further step towards robust and well-adjusted argumentation-based privacy assistants.
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.13704",
"language": "en",
"timestamp": "2023-02-28T02:27:56",
"url": "https://arxiv.org/abs/2302.13704",
"yymm": "2302"
} | \section{Introduction}
Gravitational lensing is an important probe of the physics of compact objects \cite{refsdal1994gravitational,Wambsganss:1998gg}. Recent astrophysical discoveries, such as the observation of gravitational waves from compact objects \cite{LIGOScientific:2016aoc}, have made it plausible that compact objects can be detected through lensing physics in the near future. Thus, there is a growing interest in the study of gravitational lensing, and some up-to-date work can be found in \cite{Tsukamoto:2021lpm,Ren:2021uqb,Qiao:2021trw,Javed:2022psa,Liu:2022lfb,Atamurotov:2022knb,Gao:2022cds,Luo:2022uij,Sengo:2022jif,Guo:2022muy,Qiao:2022nic,Huang:2023iog,Shan:2023ngi,Suyu:2023jue}.
The lensing effect has been extensively studied for different lenses \cite{Dabrowski:1998ac,Safonova:2001vz,Eiroa:2002mk,Virbhadra:2002ju,TejeiroS:2005ltc,Nandi:2006ds}. Unfortunately, it turns out to be difficult to distinguish black holes (BHs) from other compact objects such as wormholes (WHs) \cite{Morris:1988cz,Morris:1988tu} and boson stars \cite{Schunck:2003kk}. Those compact objects can mimic the behavior of a black hole \cite{Abramowicz:2002vt,Damour:2007ap,Guzman:2009zz,Tsukamoto:2012xs, Abdikamalov:2019ztb}, and we are unable to observationally exclude their existence at the present stage \cite{Berti:2015itd,Barack:2018yly, Cardoso:2019rvt,LISA:2022kgy}. In view of the theoretical importance of black holes as well as other compact objects, it is essential to search for distinctive features of these compact objects \cite{Harko:2009gc,Kovacs:2010xm,Sahu:2012er,Konoplya:2016hmd,Cunha:2017wao,Shaikh:2019hbm,Karimov:2020fuj,Vagnozzi:2022moj}.
There are two motivations for us to consider the lensing effect from a wormhole. On the one hand, recent developments on the light ring reveal unavoidable instabilities for a large variety of horizon-less compact objects (and thus exclude the possibility for them to be astrophysically observed), while wormholes might be free from these instabilities \cite{Cardoso:2014sna,Cunha:2017qtt,Cunha:2022gde}. On the other hand, higher-order contributions to the lensing effect might make it possible to distinguish black holes from their corresponding wormholes in the weak field limit \cite{Bronnikov:2018nub,Izmailov:2020ypf}. However, it is generically difficult to analytically resolve the deflection angle for a lens with a complicated metric. Approximation methods have therefore been developed to simplify the evaluation procedure, for example the Amore-Diaz (AD) formalism \cite{Amore:2006pi}, the formalism through the Gauss-Bonnet theorem (GBT) \cite{Gibbons:2008rj}, and the post-post-Newtonian (PPN) method developed by Keeton and Petters \cite{Keeton:2005jd}. While these methods yield rather accurate results at leading order, their predictions for higher-order contributions differ from each other. Unfortunately, recent research has pointed out that contributions from higher-order terms are important to distinguish different lens objects, for example black holes and their mimickers (see e.g. \cite{Izmailov:2020ypf}). Thus, it is important to clarify the precision of the gravitational lensing effect computed with different approximation methods.
In view of the above arguments, we study the higher-order gravitational lensing effect with different approximation methods in an exemplary wormhole. One of the most well-known wormhole solutions is the Ellis-Bronnikov wormhole (EBWH), discovered in 1973 \cite{Ellis:1973yv,Ellis:1979bh,Bronnikov:1973fh}. It is based on the theory of Einstein gravity coupled to a free scalar field, referred to as a ``phantom field'', which is defined by flipping the sign of the kinetic term in the Lagrangian. It was originally believed that this phantom field causes instability in the EBWH, but a recent study has argued that the instability can be cured \cite{Piazza:2004df}. Additionally, another study suggests that slow rotation can stabilize the EBWH \cite{Azad:2023iju}. As a simple yet significant model of a traversable wormhole, the EBWH has garnered widespread attention for its gravitational lensing effect. Most studies of the lensing effect have focused on a special case of the EBWH, where the wormhole mass is zero \cite{Abe:2010ap,Toki:2011zu,Nakajima:2012pu,Yoo:2013cia,Bozza:2015haa,Lukmanova:2016czn,Tsukamoto:2016qro,Jusufi:2017gyu,Asada:2017vxl,Tsukamoto:2017hva,Bhattacharya:2019kkb}.
In this work, we extend the study to the full EBWH metric, and evaluate the deflection angle up to the $1/b^4_\pm$ order in the weak field limit ($b_\pm$ is the re-scaled impact parameter). We find that the PPN result is the most accurate, followed by the AD result, and finally the GBT result. Moreover, we find that for the AD and GBT formalisms, the results are sensitive to the choice of coordinate system when the corresponding deflection angle approaches $0$ in the negative-mass branch of the universe.
The paper is organized as follows. We discuss the Ellis-Bronnikov wormhole in section \ref{sec:EBWH}, then briefly introduce the lensing physics in section \ref{sec:lensing}. In section \ref{sec:massless}, we make a pedagogical introduction to the approximation formalism, using the massless EBWH as an example. We present our main result, the deflection angle for general EBWH up to second order in different formalism, in section \ref{sec:massive}, and discuss the precision of different methods. We conclude in section \ref{sec:conclusion}.
Throughout this paper, we will adopt the Planck units so that $8\pi G=c=1$. We use the $(-,+,+,+)$ convention.
\section{Brief introduction to Ellis-Bronnikov wormhole}
\label{sec:EBWH}
\subsection{Theory and metric}
One of the simplest ways to construct a wormhole solution is by introducing a free phantom scalar field as exotic matter in General Relativity, namely
\begin{equation}
{\cal L}= \sqrt{-g}(R+\ft{1}{2}(\partial \phi)^2),
\end{equation}
where $R$ the scalar curvature and $\phi$ is a phantom scalar field. The Ellis-Bronnikov wormhole (EBWH) was discovered independently by Ellis and Bronnikov in 1973 and is a spherically symmetric solution. The line element of EBWH can be written as
\setlength\arraycolsep{2pt} \begin{eqnarray}\label{bronnikov}
&&ds^2=-h(r) dt^2+ h(r)^{-1}dr^2+R^2(r)d\Omega_{2}^2\,,\nonumber\\
&&h=e^{-\fft{m}{q}\phi},\qquad R^2=\fft{r^2+q^2-m^2}{h}\,,\nonumber\\
&&\phi=\fft{2q}{\sqrt{q^2-m^2}}\arctan(\fft{r}{\sqrt{q^2-m^2}})\,,\label{eq:ebsol}
\end{eqnarray}
where $(m,q)$ are two integration constants. Here $d\Omega_2^2$ represents the two-dimensional unit solid angle. The wormhole throat, which connects two asymptotically flat spacetime regions, is located at the minimum of $R(r)$, at $r=-m$. These regions can be referred to as Universe I, where $r\in(-\infty, -m)$, and Universe II, where $r\in(-m, +\infty)$. When $m\neq 0$, the metric describes an asymmetric wormhole, because Universe I is not a copy of Universe II.
Specifically, the two Universes observe wormhole masses of opposite sign and different absolute value:
\begin{equation}
\label{eq:Mpm}
M_{\pm} = \pm m e^{ \pm \pi m/(2\sqrt{q^2-m^2})} ~.
\end{equation}
If $m\geq 0$ and $q\geq 0$, Universe I has a positive wormhole mass while Universe II has a negative one. For more details on the global structure of the EBWH, refer to Ref.~\cite{Huang:2020qmn}.
Later on, we may simply use the $\pm$ sign to refer to the Universe with an observed positive/negative mass, when convenient. Moreover, the gravitational properties of the wormhole seen by Universe II are equivalent to those seen by Universe I, provided we make the transformation $m \to -m$. Thus, we may set $m \geq 0$ without loss of generality. Although the two universes are asymmetric, in our convention \eqref{eq:Mpm} they are written as symmetrically as possible, differing only by the transformation $m \to -m$.
We mention that the horizon-less condition for the EBWH, i.e. that the metric component $g_{00}$ is everywhere negative, is imposed by $q^2 > m^2$. Since we have set $m \geq 0$, the condition can be written simply as $q > m$.
\subsection{Special cases}
\label{sec:cases}
When $m=0$, the metric \eqref{bronnikov} reduces to the well-known symmetric EB wormhole:
\begin{equation}
ds^2=-dt^2+dr^2+(r^2+q^2)d\Omega^2_2,
\end{equation}
where the light ring is located at $r=0$, and there is no ISCO anywhere in the spacetime.
In principle, the horizon-less condition $q^2 > m^2$ forbids the limit $q \to 0$. However, if we naively set $q=0$ in the metric \eqref{eq:ebsol}, we recover the Schwarzschild black hole with mass $m$ (see e.g. \cite{Huang:2020qmn} for more details). This agrees with our intuition: the condition $q^2 > m^2$ forbids the existence of a horizon, so when we take $q=0$ and $m \neq 0$, we expect a black hole to emerge.
\section{Basics on gravitational lensing physics}
\label{sec:lensing}
\subsection{Lensing geometry}
For simplicity, we start with a static spherically symmetric metric
\begin{equation}
\label{eq:SSSmetric}
ds^2 = -P(r) dt^2 + Q(r) dr^2 + R^2(r) d\Omega_2^2 ~.
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[width=0.96\linewidth]{lensgeo.jpg}
\caption{The geometry of lensing with a point-like lens.}
\label{fig:lens}
\end{figure}
We depict the lens geometry in figure \ref{fig:lens}. We may treat the lens object (in our case the wormhole $W$) as a point, as long as the scale of the lensing geometry is much larger than the lens object. A light ray is emitted at an angle $\beta$ from the source $S$, but is deflected by the lens such that the observer $O$ receives it at an angle $\theta$. We use the deflection angle $\alpha$ to measure the deflection of light. For the metric \eqref{eq:SSSmetric}, the deflection angle has the following expression (see e.g. \cite{Weinberg:1972kfs})
\begin{equation}
\label{eq:alphadef}
\alpha = 2 \int_{r_0}^{\infty} \frac{\sqrt{Q(r)}/R(r) dr}{\sqrt{\frac{R^2(r)}{R^2(r_0)} \frac{P(r_0)}{P(r)} - 1}} - \pi ~,
\end{equation}
where $r_0$ is the distance of closest approach of the light to the center of the gravitational attraction.
Sometimes it is more convenient to use the impact parameter $b$ instead of $r_0$ to describe the light ray. The impact parameter $b$ is defined as
\begin{equation}
b = R(r_0)/ \sqrt{P(r_0)} ~.
\end{equation}
With the help of \eqref{eq:alphadef}, we can relate the angles $\theta$ and $\beta$ through the lensing geometry. In the weak field limit we have
\begin{equation}
D_{ls} \alpha = D_s (\beta - \theta) ~,
\end{equation}
which gives a function $\beta(\theta)$. Observables may then be evaluated from the function $\beta(\theta)$. For example, the magnification is
determined by the ratio between the solid angles
\begin{equation}
|\mu| = \frac{d\Omega_O}{d\Omega_S} = \Big| \frac{\beta}{\theta} \frac{d\beta}{d\theta} \Big|^{-1}.
\end{equation}
We see that the deflection angle $\alpha$ is essential to lensing physics. In the following, we shall restrict ourselves to the properties of $\alpha$. Moreover, we are interested in the weak field regime, where $b$ is much larger than any of the wormhole parameters (in our case $b \gg m$ and $b \gg q$). Thus, we expect the deflection angle to be a series in $m/b$ and $q/b$. In the current paper, we will evaluate $\alpha$ up to the $1/b^4$ order.
\subsection{Deflection angle for special cases}
As pointed out in section \ref{sec:cases}, the asymmetric EBWH reduces to the Schwarzschild black hole and the symmetric EBWH in the limits $q = 0$ and $m = 0$, respectively. Since the lensing physics of the latter two objects has been thoroughly studied, it is important to compare our results in these two limiting cases with the previous studies.
For our purpose, we will take the massless EBWH as a tool to illustrate the different approximation methods in section \ref{sec:massless}. So we shall simply present the deflection angle for a Schwarzschild black hole in the weak field limit \cite{Frittelli:1999yf,Virbhadra:1999nm}
\begin{align}
\alpha & \nonumber = \frac{4m}{r_0} + \left( \frac{15\pi}{4} - 4 \right) \left( \frac{m}{r_0} \right)^2 + \left( \frac{122}{3} - \frac{15}{2}\pi \right) \left( \frac{m}{r_0} \right)^3 \\
& + \left( \frac{3465}{64} \pi - 130 \right) \left( \frac{m}{r_0} \right)^4 + \mathcal{O} \left( \frac{m}{r_0} \right)^5 ~,
\end{align}
and in terms of the impact parameter $b$, we have
\begin{equation}
\label{eq:alphaSch}
\alpha = \frac{4m}{b} + \frac{15\pi}{4} \frac{m^2}{b^2} + \frac{128}{3} \frac{m^3}{b^3} + \frac{3465}{64} \pi \frac{m^4}{b^4} + \mathcal{O} \left( \frac{m}{b} \right)^5 ~,
\end{equation}
where $m$ is the mass of the Schwarzschild black hole.
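As a sanity check on such expansions, the deflection integral \eqref{eq:alphadef} can also be evaluated directly by quadrature. The following minimal sketch (our own illustration in Python with \texttt{scipy}, not part of the original analysis; the function name is ours) removes the integrable $1/\sqrt{r-r_0}$ singularity at the turning point by the substitution $r = r_0 + t^2$, and compares the result with the series \eqref{eq:alphaSch}:
\begin{verbatim}
# Sketch (our cross-check): quadrature of the deflection integral
# (eq:alphadef) for a static spherically symmetric metric, tested
# against the Schwarzschild weak-field series (eq:alphaSch).
import numpy as np
from scipy.integrate import quad

def deflection_angle(P, Q, R, r0):
    """2 * int_{r0}^inf (sqrt(Q)/R)/sqrt(R^2 P(r0)/(R(r0)^2 P) - 1) dr - pi.
    On [r0, 2 r0] the substitution r = r0 + t^2 removes the integrable
    1/sqrt(r - r0) singularity at the turning point."""
    P0, R0 = P(r0), R(r0)
    def f(r):
        return np.sqrt(Q(r)) / R(r) / np.sqrt(R(r)**2 * P0 / (R0**2 * P(r)) - 1.0)
    inner, _ = quad(lambda t: 2 * t * f(r0 + t * t), 0.0, np.sqrt(r0))
    outer, _ = quad(f, 2.0 * r0, np.inf)
    return 2.0 * (inner + outer) - np.pi

# Schwarzschild: P = 1 - 2m/r, Q = 1/P, R = r, and b = R(r0)/sqrt(P(r0)).
m, r0 = 1.0, 200.0
P = lambda r: 1.0 - 2.0 * m / r
alpha = deflection_angle(P, lambda r: 1.0 / P(r), lambda r: r, r0)
b = r0 / np.sqrt(P(r0))
series = 4*m/b + 15*np.pi/4*(m/b)**2 + 128/3*(m/b)**3 + 3465*np.pi/64*(m/b)**4
print(alpha, series)   # the two agree up to ~ (m/b)^5 corrections
\end{verbatim}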
The deflection angle of a massless EBWH can be found in section \ref{sec:massless}, equation \eqref{eq:alpha2ana}.
\section{Massless EBWH as an illustration for different formalism}
\label{sec:massless}
This section is a pedagogical introduction to the approximation methods, using the massless EBWH metric
\begin{equation}
\label{eq:masslessmetric}
ds^2 = -dt^2 + dr^2 + (r^2 + q^2) d\Omega_2^2 ~,
\end{equation}
where $q$ is the throat radius and $d\Omega_2^2$ stands for the metric of the 2-dimensional unit sphere. The gravitational lensing effect of the massless EBWH has been extensively studied in the literature (see e.g. \cite{Abe:2010ap,Toki:2011zu,Nakajima:2012pu,Yoo:2013cia,Bozza:2015haa,Lukmanova:2016czn,Tsukamoto:2016qro,Jusufi:2017gyu,Asada:2017vxl,Tsukamoto:2017hva,Bhattacharya:2019kkb}). The deflection angle for the metric \eqref{eq:masslessmetric} is given in closed form by an elliptic integral:
\begin{equation}
\label{eq:alphamasslessfull}
\alpha = 2K\left( \frac{q}{b} \right) - \pi ~,
\end{equation}
where $K(u)$ is the complete elliptic integral of the first kind, defined as
\begin{equation}
K(u) \equiv \int_0^1 \frac{dx}{\sqrt{(1-x^2)(1-u^2x^2)}} ~.
\end{equation}
We can write the deflection angle \eqref{eq:alphamasslessfull} in the form
\begin{equation}
\alpha = \pi \sum_{n=1}^{\infty} \left[ \frac{(2n-1)!!}{(2n)!!} \right]^2 \left( \frac{q}{b} \right)^{2n} ~,
\end{equation}
where $!!$ denotes the double factorial. Since we are interested in the weak-field region $q \ll b$, we keep the first two contributions:
\begin{equation}
\label{eq:alpha2ana}
\alpha = \frac{\pi}{4} \left( \frac{q}{b} \right)^{2} + \frac{9\pi}{64} \left( \frac{q}{b} \right)^{4} + \mathcal{O} \left( \frac{q}{b} \right)^{6} ~.
\end{equation}
We shall keep the result \eqref{eq:alpha2ana} as the fiducial one. In the following subsections, we evaluate the deflection angle of the massless EBWH \eqref{eq:masslessmetric} with the three different formalisms, and compare each result with \eqref{eq:alpha2ana}.
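As a quick numerical illustration (our own sketch, not part of the original analysis), the closed form \eqref{eq:alphamasslessfull} can be compared against the truncated series \eqref{eq:alpha2ana}; note that \texttt{scipy}'s \texttt{ellipk} takes the parameter $u^2$ rather than the modulus $u$:
\begin{verbatim}
# Sketch: compare alpha = 2 K(q/b) - pi with the truncated series
# (eq:alpha2ana); scipy's ellipk(m) uses the parameter m = u^2.
import numpy as np
from scipy.special import ellipk

for x in (0.3, 0.1, 0.03):                   # x = q/b
    exact = 2.0 * ellipk(x**2) - np.pi
    series = np.pi/4 * x**2 + 9*np.pi/64 * x**4
    print(x, exact, series)                  # the error is O((q/b)^6)
\end{verbatim}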
\subsection{The AD formalism}
The AD formalism \cite{Amore:2006pi} converts complicated integrals into a rapidly convergent series of solvable integrals.
Starting from \eqref{eq:alphadef}, we make the change of variable $z \equiv r_0/r$ and define the auxiliary function
\begin{align}
V(z) & \nonumber \equiv \frac{z^4}{r_0^2} \frac{R^2(r_0/z)}{Q(r_0/z)} + \frac{P(r_0)r_0^2}{R^2(r_0)} \\
& - \frac{z^4}{r_0^2} \frac{R^4(r_0/z) P(r_0)}{{Q(r_0/z) P(r_0/z) R^2(r_0)}} ~,
\end{align}
and the expression for $\alpha$ becomes
\begin{equation}
\label{eq:ADintegral}
\alpha = 2 \int_0^1 \frac{dz}{\sqrt{V(1) - V(z)}} - \pi ~.
\end{equation}
Up to this point the expression is exact. Now we deform the function $V(z)$ as
\begin{equation}
V_{\delta}(z) = V_0(z) + \delta (V(z) - V_0(z)) ~,~ \delta \in (0,1) ~,
\end{equation}
where $V_0(z)$ is some function which makes \eqref{eq:ADintegral} solvable (for example $V_0 = \lambda z^2$). If $V(z)$ is of the form $V(z) = \sum_{n=1}^{\infty} v_n z^n$, we can expand the integral to first order in $\delta$ and integrate it term by term. The result, after setting $\delta = 1$, is
\begin{equation}
\alpha = \frac{3\pi}{2\sqrt{\lambda}} - \frac{1}{\lambda^{3/2}} \sum_{n=1}^{\infty} v_n I_n - \pi ~,~ I_n \equiv \int_0^1 \frac{(1-z^n)\, dz}{(1-z^2)^{3/2}} ~.
\end{equation}
Finally, we assume that $\alpha$ attains its most accurate value when $d\alpha /d\lambda = 0$, i.e. we impose the principle of minimal sensitivity (PMS). The final result is
\begin{equation}
\label{eq:ADalpha}
\alpha = \pi \left[ \sqrt{\frac{\pi}{2 \sum_{n=1}^{\infty} v_n I_n}} - 1 \right] ~.
\end{equation}
We illustrate the procedure with the massless EBWH (see \cite{Dey:2008kn} for more details). Here $-g_{00} = g_{11} = 1$ and $g_{22} = r^2 + q^2$, so the deflection angle is
\begin{equation}
\alpha = 2 \int_{r_0}^{\infty} \frac{\sqrt{r_0^2 + q^2}dr}{\sqrt{r^2 + q^2}\sqrt{r^2 - r_0^2}} - \pi ~,
\end{equation}
and the function $V(z)$ becomes
\begin{align}
V(z) = \frac{b^2 - 2q^2}{b^2} z^2 + \frac{q^2}{b^2} z^4 ~.
\end{align}
From the expression \eqref{eq:ADalpha}, we can directly write
\begin{equation}
\alpha = \pi \left[ \sqrt{\frac{2b^2}{2b^2 - q^2}} - 1 \right] ~.
\end{equation}
In the weak-field approximation $q \ll b$, the final result is
\begin{equation}
\label{eq:ADmassless}
\alpha = \frac{\pi}{4} \left( \frac{q}{b} \right)^2 + \frac{3\pi}{32} \left( \frac{q}{b} \right)^4 + \mathcal{O} \left( \frac{q}{b} \right)^6 ~.
\end{equation}
The result is accurate at leading order, but disagrees with \eqref{eq:alpha2ana} at the next order. This difference was noticed in \cite{Nakajima:2012pu}, and explained as a failure of the PMS near the throat $r=0$.
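The massless AD computation is easy to reproduce symbolically. The sketch below (ours, in \texttt{sympy}; not part of the original derivation) evaluates the moments $I_n$ (here $I_2 = \pi/2$ and $I_4 = 3\pi/4$), applies \eqref{eq:ADalpha}, and recovers both the closed form above and the expansion \eqref{eq:ADmassless}:
\begin{verbatim}
# Sketch: the PMS formula (eq:ADalpha) applied to the massless EBWH.
import sympy as sp

q, b, z = sp.symbols('q b z', positive=True)
v = {2: (b**2 - 2*q**2)/b**2, 4: q**2/b**2}      # coefficients of V(z)
I = {n: sp.integrate((1 - z**n)/(1 - z**2)**sp.Rational(3, 2), (z, 0, 1))
     for n in v}                                  # I_2 = pi/2, I_4 = 3*pi/4
S = sum(v[n] * I[n] for n in v)
alpha = sp.pi * (sp.sqrt(sp.pi / (2 * S)) - 1)
print(sp.simplify(alpha))        # equivalent to pi*(sqrt(2 b^2/(2 b^2-q^2))-1)
print(sp.series(alpha, q, 0, 6)) # pi*q**2/(4*b**2) + 3*pi*q**4/(32*b**4) + ...
\end{verbatim}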
\subsection{The GBT formalism}
The GBT formalism is widely applied in lensing physics, since it presents the physics from a topological viewpoint. We show the lensing geometry in figure \ref{fig:gbtgeo}, following \cite{Gibbons:2008rj}.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{GBTgeo.jpg}
\caption{The lensing geometry in the weak field approximation. The two geodesics $\gamma_1$ and $\gamma_2$ represent light rays from the source $S$ to the observer. The domain $D_1$ contains the lens $L$ while $L \notin D_2$. The two domains intersect at $\gamma_1$, and we require $D_2$ to be asymptotically flat at least in the neighborhood of $S$ and $O$. Finally, we need an auxiliary curve $\gamma_L$ enclosing $L$, and $\gamma_P$, which is the boundary curve of $D_2$.}
\label{fig:gbtgeo}
\end{figure}
The strategy of the GBT formalism is as follows. First, the Gauss--Bonnet theorem for a domain $D$ reads
\begin{equation}
\label{eq:GBTformal}
\int \int_D K dS + \int_{\partial D} \kappa dt + \sum_i \phi_i = 2\pi \chi(D) ~.
\end{equation}
Here, $K$ and $\kappa$ are the Gaussian curvature and the geodesic curvature, respectively. The angles $\phi_i$ are the exterior angles, while $\chi(D)$ is the Euler characteristic. The line integral is performed with respect to the affine parameter $t$ (not to be confused with the coordinate $t$). The exterior angles are related to the interior angles by $\theta_S = \pi - \phi_S$ and $\theta_O = \pi - \phi_O$.
For our case, we shall assume the lens is non-singular, so that $\chi (D)=1$. Notice that the geodesic curvature of a geodesic vanishes, so that $\kappa(\gamma_1) = \kappa(\gamma_2) = 0$. The formula \eqref{eq:GBTformal} applied to the domain $D_1$ then gives
\begin{equation}
\theta_S + \theta_O = \int \int_{D_1} K dS ~,
\end{equation}
which may help us to intuitively judge whether the lens is convex or concave.
Applying the formula \eqref{eq:GBTformal} to the domain $D_2$, and choosing sufficiently remote $S$ and $O$ such that $\theta_S \simeq \theta_O \simeq \pi/2$, we obtain
\begin{equation}
\int_{\gamma_P} \kappa (\gamma_P) dt = \pi - \int \int_{D_2} KdS ~.
\end{equation}
Finally, for an asymptotically flat domain $D_2$, the boundary curve $\gamma_P$ satisfies $\kappa (\gamma_P)\, dt = d\varphi$, where $\varphi$ is the azimuthal angle, so the line integral is
\begin{equation}
\int_{\gamma_P} \kappa (\gamma_P)\, dt = \int_0^{\pi + \alpha} d\varphi = \pi + \alpha ~,
\end{equation}
so the deflection angle can be directly obtained by
\begin{equation}
\alpha = - \int \int_{D_2} KdS ~.
\end{equation}
It is generally convenient to evaluate the surface integral in the optical metric. We illustrate the process with the massless EBWH. Due to the spherical symmetry, we fix $\theta = \pi/2$, so the null condition $ds^2 = 0$ gives
\begin{equation}
dt^2 = dr^2 + (r^2 + q^2) d\varphi^2 ~.
\end{equation}
The next step is to write the optical metric in the form
\begin{equation}
dt^2 = h_{ab} d\lambda^a d\lambda^b = du^2 + \zeta^2(u) d\varphi^2 ~,
\end{equation}
and for the massless EBWH we simply have $u = r$ and $\zeta(u) = \sqrt{u^2 + q^2}$, so the Gaussian optical curvature is
\begin{align}
\label{eq:Kmassless}
K & \nonumber \equiv - \frac{1}{\zeta(u)} \left[ \frac{dr}{du} \frac{d}{dr} \left( \frac{dr}{du} \right) \frac{d\zeta}{dr} + \left( \frac{dr}{du} \right)^2 \frac{d^2 \zeta}{dr^2} \right] \\
& = -\frac{1}{\zeta} \frac{d^2 \zeta}{dr^2} = - \frac{q^2}{(q^2 + r^2)^2} ~,
\end{align}
and the deflection angle is
\begin{equation}
\alpha \equiv -\int_{D_2} KdS = - \int_0^{\pi} d\varphi \int_{\frac{b}{\sin \varphi}}^{\infty} K \sqrt{h} dr .
\end{equation}
Here, the integration range is determined by our assumption that $D_2$ is asymptotically flat. If $S$ and $O$ are remote enough, $\gamma_P$ is approximately a straight line, along which the angular variable $\varphi$ ranges from $0$ to $\pi$ and the closest radial coordinate $r$ can be approximated by $b/\sin \varphi$. With the help of \eqref{eq:Kmassless}, we finally get \cite{Jusufi:2017gyu}
\begin{equation}
\label{eq:alphagbtm=0}
\alpha = \pi - \frac{2}{\sqrt{1+q^2/b^2}}\, K \left( \frac{q}{\sqrt{q^2+b^2}} \right) \simeq \frac{\pi}{4} \left( \frac{q}{b} \right)^2 - \frac{9\pi}{64} \left( \frac{q}{b} \right)^4 + \mathcal{O} \left( \frac{q}{b} \right)^6 ~,
\end{equation}
where $K(k)$ is the complete elliptic integral of the first kind. The result deviates more from \eqref{eq:alpha2ana} than the AD result \eqref{eq:ADmassless} does.
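Both ingredients of this computation are easy to check independently. The following minimal sketch (our own cross-check with \texttt{sympy} and \texttt{scipy}, not part of the original analysis) recomputes the optical curvature \eqref{eq:Kmassless} symbolically; it then uses the fact that the inner $r$ integral of $-K\sqrt{h}$ over $(b/\sin\varphi,\infty)$ is elementary and equals $1 - 1/\sqrt{1+(q/b)^2\sin^2\varphi}$, leaving a single $\varphi$ integral to evaluate numerically:
\begin{verbatim}
# Sketch (our cross-check): the optical curvature and the GBT
# surface integral for the massless EBWH.
import numpy as np
import sympy as sp
from scipy.integrate import quad

# Symbolic check of eq:Kmassless, K = -(1/zeta) * d^2 zeta/dr^2.
r, q = sp.symbols('r q', positive=True)
zeta = sp.sqrt(r**2 + q**2)
print(sp.simplify(-sp.diff(zeta, r, 2) / zeta))   # -q**2/(q**2 + r**2)**2

# Numeric check: the inner r integral of -K*sqrt(h) is elementary,
# leaving alpha = int_0^pi [1 - 1/sqrt(1 + u^2 sin^2(phi))] dphi.
u = 0.1                                            # u = q/b
alpha, _ = quad(lambda p: 1 - 1/np.sqrt(1 + u**2 * np.sin(p)**2), 0, np.pi)
print(alpha, np.pi/4*u**2 - 9*np.pi/64*u**4)       # cf. eq:alphagbtm=0
\end{verbatim}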
\subsection{The PPN formalism}
The PPN formalism is based on the simple observation that, in the weak field limit, a wide range of physical quantities can be expressed as a series in the effective Newtonian potential. For a gravitational lens with mass $M$, the effective Newtonian potential is $\Phi = M/r$. Notice that we have used Planck units with $8\pi G = 1$, so in principle the coefficient $G$ in the definition of $\Phi$ should lead to a factor $1/8\pi$. However, since $1/8\pi$ is only a constant, we may suppress it for convenience by a re-scaling. This is equivalent to transferring the $1/8\pi$ factor from $\Phi$ into the coefficients of $\Phi$ in the PPN series.
Before proceeding, we shall clarify some subtle issues. First, as we shall see from \eqref{eq:Mpm}, the gravitational masses observed in Universe I and Universe II are different. Thus we must specify which universe we are working in before proceeding. Besides, for a complicated metric like the EBWH, the expression for $\Phi$ is not obvious. For example, the physical radius of the EBWH is $R$ instead of the $r$ from \eqref{eq:ebsol}, so it is not clear whether we should use $r$ or $R$ in the definition of $\Phi$. Moreover, the form of the mass can change between conventions. For example, in some studies, e.g. \cite{Nandi:2016uzg}, the convention for the EBWH is chosen such that the Kepler mass on one side is $m$, and on the other side is $-m e^{\pi m/\sqrt{{q^2 - m^2}}}$. Thus, under different conventions, the definition of $\Phi$ appears to vary.
In view of the above arguments, we need to be more precise about what an effective Newtonian potential is. In the standard treatment of the PPN formalism, we start with the isotropic metric
\begin{equation}
\label{eq:isodef}
ds^2 = -A(l)dt^2 + B(l) \left( dl^2 + l^2 d\Omega_2^2 \right) ~.
\end{equation}
The isotropic coordinate is defined such that the spatial part of the metric is conformally flat, so that light cones appear round. In the rest of this paper, we will use $l$ to denote the radial coordinate whenever we work in isotropic coordinates.
In GR, Newtonian gravity is recovered by assuming $A(l)-1 \ll 1$ and $B(l)-1 \ll 1$, such that the equation of motion becomes $\ddot{\vec{x}} = -\frac{1}{2} \nabla (A-1)$, so the Newtonian potential in this case is simply $\Phi = \frac{1}{2} (A-1)$. Thus, it is more reasonable to define the Newtonian potential as the leading-order deviation from the flat configuration in isotropic coordinates.
Notice that $B(l)$ also contributes to $\Phi$ in generic cases. We illustrate this in the massless case. The metric in isotropic coordinates is (see appendix \ref{app:isotropic} for more details)
\begin{equation}
ds^2 = -dt^2 + \frac{(l^2 + q^2)^2}{4l^4} \left( dl^2 + l^2 d\Omega_2^2 \right) ~.
\end{equation}
As one may see, $A(l)$ is trivial, so we shall define $\Phi$ through $B(l)$. Moreover, the leading-order term is $q^2/l^2$ instead of $q/l$. Thus, the effective Newtonian potential should scale as $\Phi \propto q^2/l^2$, instead of having the conventional $q/l$ dependence. This is expected, since the deflection angle \eqref{eq:alpha2ana} contains only even powers of $q/b$. Naively taking $\Phi = q^2/2l^2$, the metric coefficients become
\begin{equation}
A(\Phi) = 1 ~,~ B(\Phi) = \frac{1}{4} + \Phi + \Phi^2 ~,
\end{equation}
where the coefficients already form a polynomial in $\Phi$.
For the isotropic metric above, the deflection angle \eqref{eq:alphadef} takes the form
\begin{equation}
\alpha = 2 \int_{l_0}^{\infty}\frac{1}{l^2} \sqrt{\frac{A}{B/b^2 - A/l^2}}\, dl - \pi~,
\end{equation}
where $l_0$ is the isotropic radius of closest approach and $b = \sqrt{B(l_0)/A(l_0)}\, l_0$.
Expanding the integrand as a series in $\Phi$ and integrating term by term, only the even-order terms survive. The deflection angle in the PPN formalism is then
\begin{align}
\alpha = \frac{\pi}{4}\left(\frac{q}{b}\right)^2 + \frac{9}{64} \pi \left(\frac{q}{b}\right)^4 + \mathcal{O}\left(\frac{q}{b}\right)^6~.
\end{align}
The PPN result coincides with the exact expansion \eqref{eq:alpha2ana}, so in the massless case the PPN formalism is the most accurate of the three methods.
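Since the deflection angle is coordinate independent, the isotropic form of the massless metric must reproduce \eqref{eq:alpha2ana}. A minimal quadrature sketch (ours, not from the original analysis) checks this using \eqref{eq:alphadef} with $P = A = 1$, $Q = B$ and $R = \sqrt{B}\,l$:
\begin{verbatim}
# Sketch: massless EBWH deflection in isotropic coordinates, A = 1 and
# B = (l^2+q^2)^2/(4 l^4), by quadrature of the deflection integral.
import numpy as np
from scipy.integrate import quad

q, l0 = 0.5, 40.0
R = lambda l: (l**2 + q**2) / (2.0 * l)   # areal radius, R = sqrt(B) l
b = R(l0)                                  # b = R(l0)/sqrt(A(l0)) with A = 1
f = lambda l: 1.0 / (l * np.sqrt(R(l)**2 / b**2 - 1.0))
inner, _ = quad(lambda t: 2 * t * f(l0 + t * t), 0.0, np.sqrt(l0))
outer, _ = quad(f, 2.0 * l0, np.inf)
alpha = 2.0 * (inner + outer) - np.pi
print(alpha, np.pi/4*(q/b)**2 + 9*np.pi/64*(q/b)**4)   # cf. eq:alpha2ana
\end{verbatim}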
\section{Deflection angle of massive EBWH}
\label{sec:massive}
Now we come to the generic EBWH. Our strategy is as follows. We use the AD, GBT and PPN formalisms to evaluate the deflection angle up to the $1/b^4_\pm$ order in sections \ref{sec:AD}, \ref{sec:GBT} and \ref{sec:PPN}, respectively. Then, we evaluate the deflection angle numerically and compare the numerical result with the above methods in section \ref{sec:num}.
Before proceeding, we mention that the metric components in isotropic coordinates are (see appendix \ref{app:isotropic} for details)
\begin{equation}
\label{eq:Aiso}
A = \exp \left[ \gamma \left(\pi - 4 \arctan\frac{l\gamma}{m} \right) \right] ~,
\end{equation}
\begin{equation}
B = \frac{1}{A} \frac{\left(l^2 \gamma^2 + m^2 \right)^2}{4l^4\gamma^4} ~,
\end{equation}
with the dimensionless constant being
\begin{equation}
\gamma \equiv \frac{m}{\sqrt{q^2 - m^2}} ~.
\end{equation}
For the AD and GBT formalisms, it is useful to perform the calculation in both coordinate systems as a consistency check.
Finally, when presenting the final results, we will encounter combinations of the following form:
\begin{equation}
\label{termform}
(\pm 1)^{\alpha+\beta} \frac{m^{\alpha} q^{\beta}}{b^{\alpha + \beta}} e^{(\pm 1)^{\alpha + \beta} \frac{m\pi}{\sqrt{q^2 - m^2}}} ~.
\end{equation}
Such a combination is more complicated than one might expect, because the metric coefficients on the two sides of the wormhole cannot be simultaneously set to asymptotic unity. For simplicity, we define the re-scaled impact parameter as
\begin{equation}
b_{\pm} \equiv \pm b e^{\pm \frac{m\pi}{\sqrt{q^2 - m^2}}} ~,
\end{equation}
where $\pm$ corresponds to the wormhole side with positive/negative mass, as mentioned above.
Now the expression \eqref{termform} just simplifies to
\begin{equation}
\label{eq:finalform}
\frac{m^{\alpha} q^{\beta}}{b_{\pm}^{\alpha + \beta}}.
\end{equation}
Compared to the massless case, the massive results involve the mass and a re-scaled impact parameter.
\subsection{The AD formalism}
\label{sec:AD}
The auxiliary function $V(z)$ for the metric \eqref{eq:ebsol} is
\begin{align}
\label{eq:Vpmz}
V_{\pm}(z) & \nonumber = \left[ r_0^2 + z^2(q^2 - m^2) \right] \frac{z^2}{r_0^2} \Bigg\{ 1 - \frac{q^2 - m^2 + r_0^2/z^2}{r_0^2 + q^2 - m^2} \\
& \times \exp \left[ \pm 4\gamma \left( \arctan \frac{r_0 \gamma}{m} - \arctan \frac{r_0 \gamma}{mz} \right) \right] \Bigg\} ~.
\end{align}
We are only interested in the deflection angle up to the $1/b^4$ term, so we expand \eqref{eq:Vpmz} up to $z^4$ order, substitute it into the formula \eqref{eq:ADalpha}, and get
\begin{align}
& \ \ \ \ \alpha \nonumber \simeq \pm \frac{4m}{r_0} + \frac{\pi^2}{4} \frac{q^2}{r_0^2} + \left( \frac{24}{\pi} - 16 + \frac{15}{4} \pi \right) \frac{m^2}{r_0^2} \\
& \nonumber \pm \left( \frac{37}{3} - 4\pi \right) \frac{q^2m}{r_0^3} \pm \left( \frac{160}{\pi^2} - \frac{192}{\pi} + 89 - 12\pi \right) \frac{m^3}{r_0^3} \\
& \nonumber - \frac{5\pi}{32} \frac{q^4}{r_0^4} + \left( \frac{142}{\pi} - 92 + \frac{245}{16} \pi \right) \frac{q^2m^2}{r_0^4} \\
& + \left( \frac{1120}{\pi^3} - \frac{1920}{\pi^2} + \frac{1362}{\pi} - 420 + \frac{1563}{32} \pi \right) \frac{m^4}{r_0^4} ~,
\end{align}
Translating into the re-scaled impact parameter $b_\pm$, we have
\begin{align}
\label{eq:alphaADr}
\alpha & \nonumber = 4 \frac{m}{b_{\pm}} + \frac{\pi}{4} \frac{q^2}{b_{\pm}^2} + \left( \frac{24}{\pi} - 8 + \frac{15}{4}\pi \right) \frac{m^2}{b_{\pm}^2} \\
& \nonumber + \left( \frac{160}{\pi^2} - \frac{96}{\pi} + 47 + 3\pi \right) \frac{m^3}{b_{\pm}^3} + \left( \frac{43}{3} - 3\pi \right) \frac{m}{b_{\pm}} \frac{q^2}{b_{\pm}^2} \\
& \nonumber + \left( \frac{1120}{\pi^3} - \frac{960}{\pi^2} + \frac{570}{\pi} - 54 + \frac{1059}{32} \pi \right) \frac{m^4}{b_{\pm}^4} \\
& + \left( \frac{166}{\pi} - \frac{62}{3} - \frac{19}{16} \pi \right)\frac{m^2}{b_{\pm}^2} \frac{q^2}{b_{\pm}^2} + \frac{3\pi}{32} \frac{q^4}{b_{\pm}^4} ~.
\end{align}
One may also apply the formalism in isotropic coordinates by a similar procedure, and get
\begin{align}
\label{eq:alphaADl}
\alpha & \nonumber = 4 \frac{m}{b_{\pm}} + \frac{\pi}{4} \frac{q^2}{b_{\pm}^2} + \left( \frac{24}{\pi} - 8 + \frac{15}{4}\pi \right) \frac{m^2}{b_{\pm}^2} \\
& \nonumber + \left( \frac{160}{\pi^2} - \frac{96}{\pi} + 56 \right) \frac{m^3}{b_{\pm}^3} + \frac{16}{3} \frac{m}{b_{\pm}} \frac{q^2}{b_{\pm}^2} \\
& \nonumber + \left( \frac{1120}{\pi^3} - \frac{960}{\pi^2} + \frac{678}{\pi} - 72 + \frac{1737}{64} \pi \right) \frac{m^4}{b_{\pm}^4} \\
& + \left( \frac{58}{\pi} - \frac{8}{3} + \frac{151}{32} \pi \right)\frac{m^2}{b_{\pm}^2} \frac{q^2}{b_{\pm}^2} + \frac{9\pi}{64} \frac{q^4}{b_{\pm}^4} ~.
\end{align}
In both cases, the results \eqref{eq:alphaADr} and \eqref{eq:alphaADl} agree with the AD result \eqref{eq:ADmassless} in the massless limit. Comparing the results \eqref{eq:alphaADr} and \eqref{eq:alphaADl} with the precise expansions: in the $q=0$ limit, they differ from \eqref{eq:alphaSch} at the order $m^2/b^2_\pm$; in the $m=0$ limit, they differ from \eqref{eq:alpha2ana} at the $q^4/b^4_\pm$ order. Moreover, the AD formalism gives different results in the two coordinate systems at the $1/b^3_\pm$ order, even though the underlying metric is the same.
\subsection{The GBT formalism}
\label{sec:GBT}
The Gaussian curvature is simply
\begin{equation}
K = \frac{m(m \mp 2re^{\pm \gamma \pi})-q^2}{(r^2 e^{\pm 2\gamma \pi} + q^2 - m^2)^{2}} ~,
\end{equation}
so an integration directly gives
\begin{align}
\label{eq:alphagbtr}
\alpha & \nonumber = \frac{4m}{b_{\pm}} + \frac{\pi}{4} \frac{q^2 - m^2}{b_{\pm}^2} - \frac{4m}{3b_{\pm}} \frac{q^2 - m^2}{b_{\pm}^2} \\
& - \frac{9\pi}{64} \frac{(q^2 - m^2)^2}{b_{\pm}^4} ~.
\end{align}
When $m = 0$, we recover the result \eqref{eq:alphagbtm=0}. Also, when $q=0$, the result differs from \eqref{eq:alphaSch} already at the $m^2/b^2_\pm$ order.
We may also repeat the procedure in isotropic coordinates, where the Gaussian curvature is a bit more involved:
\begin{equation}
K = \frac{8(\pm m-2le^{\gamma \pi})(q^2 - m^2) \mp 32m l^2 e^{2\gamma \pi}}{\left( q^2 - m^2 + 4l^2 e^{2\gamma \pi} \right)^2} ~.
\end{equation}
After evaluating the integral and expressing $l$ in terms of $r$, we get
\begin{align}
\label{eq:alphagbtl}
\alpha & \nonumber = \frac{4m}{b_{\pm}} + \frac{\pi}{4} \frac{q^2 - m^2}{b_{\pm}^2} - \frac{2m}{3b_{\pm}} \frac{q^2 - m^2}{b_{\pm}^2} \\
& - \frac{3\pi}{64} \frac{(q^2 - m^2)^2}{b_{\pm}^4} ~.
\end{align}
As we can see, the results \eqref{eq:alphagbtr} and \eqref{eq:alphagbtl} differ at the order $1/b^3_\pm$.
\subsection{The PPN formalism}
\label{sec:PPN}
To proceed with the PPN formalism, we need to know the Newtonian potential $\Phi$. However, it is tricky to define $\Phi$ in our case. Recall that the throat is located at $r=-m$, i.e. $l = q-m$, and the two asymptotic regions $r \to \pm \infty$ correspond to $l = 0$ and $l \to \infty$, respectively. Thus, simple expressions like $\Phi = m/l$ do not work for both sides. In fact, in the conventional treatment we would define a Newtonian potential which vanishes in the flat region. However, we cannot set the two sides of the universe in \eqref{eq:ebsol} to be asymptotically flat simultaneously. Therefore, we cannot expect a global Newtonian potential satisfying both $\Phi(l=0) = 0$ and $\Phi(l\to\infty) = 0$.
Therefore, we use an alternative approach: we define two Newtonian potentials, $\Phi_-$ corresponding to the $r<-m$ universe with negative mass, and $\Phi_+$ corresponding to the $r>-m$ universe with positive mass. Moreover, as we see in appendix \ref{app:isotropic}, Universe I can be mapped to Universe II by the change of variable $l \to 1/l$. So, without loss of generality, let us first work out the $\Phi_+$ case.
In the $\Phi_+$ case, $l > q-m$ and $m > 0$, and the weak field limit is valid in the region $l \gg q$. For large $l$, the $\arctan$ function in \eqref{eq:Aiso} approaches $\pi/2$, so we rewrite the coefficient $A(l)$ as
\begin{equation}
A(l) = \exp \left[ \gamma \left( 4 \arctan \frac{m}{\gamma l} - \pi \right) \right] ~.
\end{equation}
Notice that at the asymptotic region $A(\infty) = \exp (-\gamma \pi) \neq 1$; this is the price we pay for the metric \eqref{eq:ebsol}, in which the two sides are written as symmetrically as possible. Now, one may simply define
\begin{equation}
\Phi_+ = \frac{m}{\gamma l} ~,~ l \in (q-m,\infty) ~,
\end{equation}
and we may expand the metric components as
\begin{align}
A(\Phi) \nonumber &= e^{-\gamma \pi} \Bigl[1 + 4 (\gamma \Phi_{+}) + 8 (\gamma \Phi_{+})^2 \\
\nonumber & + \frac{1}{3}\left(32 - \frac{4}{\gamma^2}\right)(\gamma \Phi_{+})^3 \\
& + \frac{16}{3} \left(2 - \frac{1}{\gamma^2}\right)(\gamma \Phi_{+})^4 + \mathcal{O}(\gamma \Phi_{+})^5 \Bigr]~.
\end{align}
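This expansion can be verified directly with a computer algebra system; a minimal sketch of ours in \texttt{sympy}, writing $x = \gamma\Phi_+ = m/l$:
\begin{verbatim}
# Sketch: verify the PPN expansion of A(Phi_+); here x = gamma*Phi_+ = m/l.
import sympy as sp

x, g = sp.symbols('x gamma', positive=True)
A = sp.exp(g * (4 * sp.atan(x / g) - sp.pi))    # A(l) from the text
print(sp.series(sp.exp(g * sp.pi) * A, x, 0, 5))
# 1 + 4*x + 8*x**2 + (32/3 - 4/(3*gamma**2))*x**3
#   + (32/3 - 16/(3*gamma**2))*x**4 + O(x**5)
\end{verbatim}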
For the universe with negative mass, we can get the result by the simple replacement $l \to 1/l$. However, in this case $l$ ranges from $0$ to $1/(q-m)$, which is a bit counter-intuitive. So let us use $l^{\prime} = 1/l \in (q-m,\infty)$. The effective Newtonian potential is
\begin{equation}
\Phi_- = -\frac{m}{\gamma l^{\prime}} ~,~ l^{\prime} \in (q-m,\infty) ~.
\end{equation}
For further convenience, let us suppress the prime when no confusion can arise. The metric component is then
\begin{equation}
A(l) = \exp \left[ \gamma \left( \pi - 4\arctan \frac{\gamma}{ml} \right) \right] ~,
\end{equation}
In terms of the effective potential $\Phi_-$, this becomes
\begin{align}
A(\Phi) \nonumber &= e^{-\gamma \pi} \Bigl[1 + 4 (\gamma \Phi_{-}) + 8 (\gamma \Phi_{-})^2 \\
\nonumber & + \frac{1}{3}\left(32 - \frac{4}{\gamma^2}\right)(\gamma \Phi_{-})^3 \\
& + \frac{16}{3} \left(2 - \frac{1}{\gamma^2}\right)(\gamma \Phi_{-})^4 + \mathcal{O}(\gamma \Phi_{-})^5 \Bigr]~,
\end{align}
and the deflection angle for this side is
\begin{align}
\nonumber \alpha &= \frac{4 m}{b_{\pm}} + \frac{\pi}{4}\left(16 + \frac{1}{\gamma^2}\right) \frac{m^2}{b_{\pm}^2} + \frac{16}{3}\left(9 + \frac{1}{\gamma^2}\right)\frac{m^3}{b_{\pm}^3}\\
& + \left(64 + \frac{10}{\gamma^2} + \frac{9}{64 \gamma^4}\right)\pi \frac{m^4}{b_{\pm}^4} + \mathcal{O}\left(\frac{m^5}{b_{\pm}^5}\right) ~,
\end{align}
after using the definition of $\gamma$, we finally get
\begin{align}
\label{eq:alphappn}
\alpha & \nonumber = \frac{4 m}{b_{\pm}} + \frac{15\pi}{4} \frac{m^2}{b_{\pm}^2} + \frac{\pi}{4} \frac{q^2}{b_{\pm}^2} + \frac{128}{3} \frac{m^3}{b_{\pm}^3} + \frac{16}{3} \frac{m}{b_{\pm}} \frac{q^2}{b_{\pm}^2} \\
& + \frac{3465}{64} \pi \frac{m^4}{b_{\pm}^4} + \frac{311}{32} \pi \frac{m^2}{b_{\pm}^2} \frac{q^2}{b_{\pm}^2} + \frac{9\pi}{64} \frac{q^4}{b_{\pm}^4} ~.
\end{align}
We remark that the result \eqref{eq:alphappn} reduces to the precise results \eqref{eq:alphaSch} and \eqref{eq:alpha2ana} in the limits $q \to 0$ and $m \to 0$, respectively.
\subsection{Numerical evaluation}
\label{sec:num}
Before proceeding, we establish a ``fiducial'' result for $\alpha$ by expanding \eqref{eq:alphadef} as a series in $q/b_\pm$ and $m/b_\pm$ by brute force. The result is simply
\begin{align}
\label{eq:alphaana}
\alpha & \nonumber = 4 \frac{m}{b_{\pm}} + \frac{15\pi}{4} \frac{m^2}{b_{\pm}^2} + \frac{\pi}{4} \frac{q^2}{b_{\pm}^2} + \frac{128}{3} \frac{m^3}{b_{\pm}^3} + \frac{16}{3} \frac{m}{b_{\pm}} \frac{q^2}{b_{\pm}^2} \\
& + \frac{3465}{64} \pi \frac{m^4}{b_{\pm}^4} + \frac{311}{32} \pi \frac{m^2}{b_{\pm}^2} \frac{q^2}{b_{\pm}^2} + \frac{9\pi}{64} \frac{q^4}{b_{\pm}^4} ~.
\end{align}
Not surprisingly, the result \eqref{eq:alphaana} coincides with the PPN result \eqref{eq:alphappn}. In the PPN formalism we expand the deflection angle as a series in $\Phi_{\pm}$, and $\Phi_{\pm}$ itself can be seen as a series in $m/b_{\pm}$ and $q/b_{\pm}$. Therefore, the procedure is equivalent to expanding $\alpha$ with respect to $m/b_{\pm}$ and $q/b_{\pm}$ directly, which recovers \eqref{eq:alphaana}.
Of course, we have no a priori guarantee that \eqref{eq:alphaana} is precise up to the order $1/b^4_\pm$. However, as we shall show below, the result \eqref{eq:alphaana} shows the least deviation from the numerical result, so we can understand the errors of the different methods by comparing their coefficients with those in \eqref{eq:alphaana}.
As shown in figure \ref{fig:ana}, the analytical expansion \eqref{eq:alphaana} is a satisfactory approximation, with the relative error decreasing quickly as the parameters $m/b_{\pm}$ and $q/b_{\pm}$ decrease. In isotropic coordinates the analytical expansion is identical, so figure \ref{fig:ana} applies there as well.
\begin{figure}[ht]
\centering
\includegraphics[width=0.96\linewidth]{anappn.pdf}
\caption{Relative error of the analytical/PPN expansion of $\alpha$ compared with the numerical result $\alpha_{\text{num}}$, for both sides of the EBWH.}
\label{fig:ana}
\end{figure}
The upper triangular region of figure \ref{fig:ana} is the negative-mass side, and the lower region represents the positive-mass side. On the positive-mass side the relative error changes smoothly, while on the other side it does not. There is a light thin line on the negative-mass side along which the relative error is larger than in neighboring regions; this is because the deflection angle reaches zero and changes its sign when crossing this line, causing machine error to dominate in this area. There is also a dark line on the negative-mass side; this is simply the region where the analytical expansion reaches its best approximation. We will see similar patterns for the other approximation methods.
\begin{figure}[ht]
\centering
\includegraphics[width=0.96\linewidth]{1ad.pdf}
\includegraphics[width=0.96\linewidth]{2ad.pdf}
\caption{Relative error of the AD formalism result compared with the numerical result $\alpha_{\text{num}}$, for both sides of the EBWH.}
\label{fig:ad}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.96\linewidth]{1gbt.pdf}
\includegraphics[width=0.96\linewidth]{2gbt.pdf}
\caption{Relative error of the GBT formalism result compared with the numerical result $\alpha_{\text{num}}$, for both sides of the EBWH.}
\label{fig:gbt}
\end{figure}
We depict the numerical results for the AD formalism in figure \ref{fig:ad} and for the GBT formalism in figure \ref{fig:gbt}; the PPN result is already presented in figure \ref{fig:ana}. We show the results of the AD and GBT formalisms in both coordinate systems, while the PPN result only appears in isotropic coordinates, since the effective Newtonian potential is better defined there.
As we can see from figures \ref{fig:ad} and \ref{fig:gbt}, choosing the original coordinates or the isotropic coordinates results in almost identical errors in the positive-mass branch, for both the AD and the GBT formalism. However, in the negative-mass branch, the critical line, where the relative error changes suddenly, is different in the two coordinate systems. As explained above, the critical line corresponds to configurations where the deflection angle approaches $0$. Away from the critical line, the configurations enjoy almost identical relative errors in the two coordinate systems. We thus conclude that the choice of coordinate system introduces negligible relative error for both the AD and the GBT formalism, except where the deflection angle approaches $0$.
Let us now come to the relative errors of the different methods. From figures \ref{fig:ana}, \ref{fig:ad} and \ref{fig:gbt}, we see that for the EBWH the PPN method is the most precise one, while the AD formalism is more accurate than the GBT formalism. Specifically, the relative error of the GBT formalism is roughly one order of magnitude larger than that of the AD formalism.
We may naively understand this result by comparing the AD and GBT results with the analytic result. The leading deviation between the AD results \eqref{eq:alphaADr}, \eqref{eq:alphaADl} and the analytic result \eqref{eq:alphaana} occurs at the $m^2/b^2_{\pm}$ term, whose coefficients differ by the proportion
\begin{equation}
1 - \left( \frac{24}{\pi} - 8 + \frac{15}{4}\pi \right)/ \left( \frac{15\pi}{4} \right) \simeq 3\% ~.
\end{equation}
The leading deviation between the GBT results \eqref{eq:alphagbtr}, \eqref{eq:alphagbtl} and the analytic result \eqref{eq:alphaana} also occurs at the $m^2/b_{\pm}^2$ term, but the coefficients differ by much more:
\begin{equation}
1 - \left( -\frac{\pi}{4} \right) / \left( \frac{15\pi}{4} \right) \simeq 106.7\% ~.
\end{equation}
In other words, the GBT formalism has a large error already at the $m^2/b^2_\pm$ order, while the AD formalism is still relatively accurate at this order. Even in the case $q \gg m$, the AD result is accurate at the $mq^2/b^3_\pm$ order, while the GBT result is inaccurate at this order.
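These percentages follow from elementary arithmetic, e.g. (a trivial numerical check of ours):
\begin{verbatim}
# Sketch: relative deviations of the m^2/b^2 coefficients quoted above.
import math

c_ana = 15 * math.pi / 4                      # analytic, eq:alphaana
c_AD  = 24 / math.pi - 8 + 15 * math.pi / 4   # AD, eqs. (alphaADr/ADl)
c_GBT = -math.pi / 4                          # GBT, eqs. (alphagbtr/gbtl)
print(1 - c_AD / c_ana)    # ~0.031, about 3%
print(1 - c_GBT / c_ana)   # ~1.067, about 106.7%
\end{verbatim}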
We conclude that, for the EBWH case, the PPN formalism is the most accurate, followed by the AD formalism, and then the GBT formalism. Moreover, the choice of coordinate system has a non-trivial effect on the AD and GBT formalisms when the deflection angle approaches $0$ in the negative-mass branch of the universe.
\section{Conclusion and Outlook}
\label{sec:conclusion}
Higher-order effects in gravitational lensing may be important for distinguishing compact objects such as black holes and wormholes. For complicated metrics, the deflection angle may be hard to evaluate exactly, and approximation methods are thus employed. Although different approximation methods give the same result at leading order, their predictions at higher order differ, so it is important to clarify which methods are more precise. In this work, we study the gravitational lensing effect of the EBWH in the weak field limit to higher order. We evaluate the deflection angle to the $1/b^4_\pm$ order, using the AD, GBT and PPN formalisms, respectively. By a direct comparison with the numerical result, we find that the PPN formalism provides the most accurate result, and that the AD formalism is more precise than the GBT formalism. Moreover, the choice of coordinate system may have a non-trivial effect on the AD and GBT formalisms when the deflection angle approaches $0$ in the negative-mass branch of the universe.
Although in our case the PPN formalism yields the best result, it might be complicated to work out all the PPN parameters for sophisticated wormhole solutions. Therefore, it is still valuable to estimate the deflection angle using the other formalisms at higher order. Since we only considered the EBWH, it would be premature to judge which formalism is better from our result alone. It would be interesting to compare results from the different formalisms for a more generic wormhole metric, and to decide which formalism is preferable.
It is also interesting to improve the formalisms, based on the current result. For example, the AD formalism gives different predictions in the original coordinates \eqref{eq:ebsol} and in the isotropic coordinates \eqref{eq:isodef}. The difference may arise from the PMS, since the condition $d\alpha /d\lambda = 0$ differs between metrics. Would it be possible to improve the accuracy of the AD formalism by replacing the PMS with some other, coordinate-free principle? Besides, in the GBT formalism, the lens is simply treated as a point mass, which has no influence on the spacetime topology. However, a wormhole is a geometric structure whose existence may greatly change the spacetime topology. One may study whether the topological structure of the wormhole should be taken into account in the GBT formalism, and whether doing so could improve the precision of the method.
Finally, to distinguish wormholes from other compact objects, we need to evaluate astrophysical observables such as the magnification, event rate, etc. It is possible that, although the deflection angle differs between compact objects, the resulting observable signals are still highly degenerate, with differences below the resolution of current experiments. Thus, it is important to extend our results to astrophysical observables in concrete models in the future.
\begin{acknowledgments}
We thank Pedro Cunha, Chunshan Lin, Lei-hua Liu, Wen-tao Luo, Xin Ren, Naoki Tsukamoto and Yuhang Zhu for stimulating discussions. We especially thank Prof. Yi Wang for his suggestions and careful proof-reading. H.H. is grateful for support by the National Natural Science Foundation of China (NSFC)
grant No. 12205123 and by the Sino-German (CSC-DAAD) Postdoc Scholarship Program, 2021 (57575640). Mian Zhu is supported by the grant No. UMO 2018/30/Q/ST9/00795 from the National Science Centre, Poland.
\end{acknowledgments}
\newcommand{\uple}[1]{\text{\boldmath${#1}$}}
\def\stacksum#1#2{{\stackrel{{\scriptstyle #1}}
{{\scriptstyle #2}}}}
\newcommand{\mathbf{C}}{\mathbf{C}}
\newcommand{\mathbf{O}}{\mathbf{O}}
\newcommand{\mathcal{O}}{\mathcal{O}}
\newcommand{\mathbf{N}}{\mathbf{N}}
\newcommand{\mathbf{A}}{\mathbf{A}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\newcommand{\mathbf{Z}}{\mathbf{Z}}
\newcommand{\mathbf{P}}{\mathbf{P}}
\newcommand{\mathbf{R}}{\mathbf{R}}
\newcommand{\mathbf{G}}{\mathbf{G}}
\newcommand{\mathbf{S}}{\mathbf{S}}
\newcommand{\mathbf{G}_{m}}{\mathbf{G}_{m}}
\newcommand{\mathbf{H}}{\mathbf{H}}
\newcommand{\mathbf{Q}}{\mathbf{Q}}
\newcommand{{\mathbf{F}_p}}{{\mathbf{F}_p}}
\newcommand{\bar{\mathbf{F}}_p}{\bar{\mathbf{F}}_p}
\newcommand{{\mathbf{F}^\times_p}}{{\mathbf{F}^\times_p}}
\newcommand{\mathbf{F}}{\mathbf{F}}
\newcommand{\mathbf{T}}{\mathbf{T}}
\newcommand{\mathbf{G}}{\mathbf{G}}
\newcommand{\boldsymbol{\mu}}{\boldsymbol{\mu}}
\newcommand{\overline{\mathbf{F}}}{\overline{\mathbf{F}}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\text{\boldmath$P$}}{\mathbf{P}}
\newcommand{\text{\boldmath$E$}}{\mathbf{E}}
\newcommand{\mathbf{V}}{\mathbf{V}}
\newcommand{\mathbf{1}}{\mathbf{1}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{g^{\sharp}}{g^{\sharp}}
\newcommand{y^{\sharp}}{y^{\sharp}}
\newcommand{\clconj}[1]{{{#1}}^{\sharp}}
\newcommand{\mods}[1]{\,(\mathrm{mod}\,{#1})}
\newcommand{\sli}[1]{\underline{{#1}}}
\newcommand{\ideal}[1]{\mathfrak{{#1}}}
\newcommand{\widehat}{\widehat}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathbf{G}}{\mathbf{G}}
\newcommand{\mathbf{D}}{\mathbf{D}}
\newcommand{\mathbf{G}^{opt}}{\mathbf{G}^{opt}}
\newcommand{\hautk}[2]{\mathbf{G}_{{#1},{#2}}}
\newcommand{\hautz}[2]{\mathbf{G}^{a}_{{#1},{#2}}}
\newcommand{\hauti}[3]{\mathbf{G}^{{#1}}_{{#2},{#3}}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\skl}[1]{\sheaf{K}^{({#1})}}
\newcommand{\hk}[1]{\sheaf{K}\ell_{{#1}}}
\newcommand{\mutw}[3]{\mu_{{#3},{#2}}}
\newcommand{\frtr}[3]{(\Tr{{#1}})({#2},{#3})}
\DeclareMathOperator{\hypk}{Kl}
\DeclareMathOperator{\Kl}{Kl}
\newcommand{\rightarrow}{\rightarrow}
\newcommand{\longrightarrow}{\longrightarrow}
\newcommand{\twoheadrightarrow}{\twoheadrightarrow}
\newcommand{\hookrightarrow}{\hookrightarrow}
\newcommand{\Longleftrightarrow}{\Longleftrightarrow}
\newcommand{\fleche}[1]{\stackrel{#1}{\longrightarrow}}
\newcommand{\barre}[1]{\overline{{#1}}}
\DeclareMathOperator{\spec}{Spec}
\DeclareMathOperator{\Vol}{Vol}
\DeclareMathOperator{\proj}{Proj}
\DeclareMathOperator{\Card}{Card}
\DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\res}{Res}
\DeclareMathOperator{\reg}{reg}
\DeclareMathOperator{\ord}{ord}
\DeclareMathOperator{\cl}{Cl}
\DeclareMathOperator{\Div}{Div}
\DeclareMathOperator{\divg}{divg}
\DeclareMathOperator{\Pic}{Pic}
\DeclareMathOperator{\vol}{Vol}
\DeclareMathOperator{\Imag}{Im}
\DeclareMathOperator{\Reel}{Re}
\DeclareMathOperator{\syms}{Sym^{2}}
\DeclareMathOperator{\symk}{Sym}
\DeclareMathOperator{\li}{li}
\DeclareMathOperator{\frob}{\mathrm{Fr}}
\DeclareMathOperator{\Gal}{Gal}
\DeclareMathOperator{\Ind}{Ind}
\DeclareMathOperator{\ind}{\kappa}
\DeclareMathOperator{\supp}{supp}
\DeclareMathOperator{\im}{Im}
\DeclareMathOperator{\Tr}{tr}
\DeclareMathOperator{\Hom}{Hom}
\DeclareMathOperator{\End}{End}
\DeclareMathOperator{\Aut}{Aut}
\DeclareMathOperator{\varia}{Var}
\DeclareMathOperator{\argu}{Arg}
\DeclareMathOperator{\spect}{Spec}
\DeclareMathOperator{\disc}{disc}
\DeclareMathOperator{\swan}{Swan}
\DeclareMathOperator{\bb}{B}
\DeclareMathOperator{\codim}{codim}
\DeclareMathOperator{\ft}{FT}
\DeclareMathOperator{\cond}{cond}
\DeclareMathOperator{\Ad}{Ad}
\DeclareMathOperator{\dual}{D}
\newcommand{\varepsilon}{\varepsilon}
\renewcommand{\rho}{\varrho}
\DeclareMathOperator{\SL}{SL}
\DeclareMathOperator{\GL}{GL}
\DeclareMathOperator{\PGL}{PGL}
\DeclareMathOperator{\rmT}{T}
\DeclareMathOperator{\rmN}{N}
\DeclareMathOperator{\rmU}{U}
\DeclareMathOperator{\PSL}{PSL}
\DeclareMathOperator{\Sp}{Sp}
\DeclareMathOperator{\GSp}{GSp}
\DeclareMathOperator{\SO}{SO}
\DeclareMathOperator{\Ort}{O}
\DeclareMathOperator{\SU}{SU}
\DeclareMathOperator{\Un}{U}
\DeclareMathOperator{\USp}{USp}
\newcommand{{\textstyle{\frac{1}{2}}}}{{\textstyle{\frac{1}{2}}}}
\newcommand{{\textstyle{\frac{1}{4}}}}{{\textstyle{\frac{1}{4}}}}
\newcommand{{\textstyle{\frac{3}{2}}}}{{\textstyle{\frac{3}{2}}}}
\newcommand{\avg}[1]{A[{#1}]}
\newcommand{\underline{O}}{\underline{O}}
\newcommand{O}{O}
\newcommand{\sheaf}[1]{\mathcal{{#1}}}
\newcommand{M}{M}
\newcommand{linearly disjoint}{linearly disjoint}
\newcommand{\sheafm}[1]{\tilde{\sheaf{{#1}}}_{\ell}}
\DeclareMathSymbol{\gena}{\mathord}{letters}{"3C}
\DeclareMathSymbol{\genb}{\mathord}{letters}{"3E}
\def\mathop{\sum \Bigl.^{\flat}}\limits{\mathop{\sum \Bigl.^{\flat}}\limits}
\def\mathop{\sum \sum}\limits{\mathop{\sum \sum}\limits}
\def\mathop{\sum \sum \sum \sum}\limits{\mathop{\sum \sum \sum \sum}\limits}
\def\mathop{\sum\cdots \sum}\limits{\mathop{\sum\cdots \sum}\limits}
\def\mathop{\sum\bigl.^{\flat}}\limits{\mathop{\sum\bigl.^{\flat}}\limits}
\def\mathop{\sum \Bigl.^{*}}\limits{\mathop{\sum \Bigl.^{*}}\limits}
\def\mathop{\sum\sum \Bigl.^{*}}\limits{\mathop{\sum\sum \Bigl.^{*}}\limits}
\def\mathop{\sum\sum \Bigl.^{\sharp}}\limits{\mathop{\sum\sum \Bigl.^{**}}\limits}
\def\mathop{\sum\sum \Bigl.^{\sharp}}\limits{\mathop{\sum\sum \Bigl.^{\sharp}}\limits}
\def\mathop{\prod \Bigl.^{*}}\limits{\mathop{\prod \Bigl.^{*}}\limits}
\def\mathop{\sum \Bigl.^{h}}\limits{\mathop{\sum \Bigl.^{h}}\limits}
\def\frac{1}{2i\pi}\mathop{\int}\limits{\frac{1}{2i\pi}\mathop{\int}\limits}
\def\mathop{\oplus}\limits{\mathop{\oplus}\limits}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{variant}[theorem]{Variant}
\theoremstyle{remark}
\newtheorem*{convention}{Conventions}
\newtheorem*{warning}{Warning}
\newtheorem*{rem}{Remark}
\newtheorem*{rems}{Remarks}
\newtheorem*{property}{Properties}
\theoremstyle{definition}
\newtheorem*{claim}{Claim}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{assumption}[theorem]{Assumption}
\newtheorem*{question}{Question}
\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\newtheorem*{application}{Application}
\newtheorem{xca}{Exercise}
\newcommand{\indic}[1]{[\underline{Hint}:\ {#1}]}
\newcommand{\abs}[1]{\lvert#1\rvert}
\newcommand{\blankbox}[2]{%
\parbox{\columnwidth}{\centering
\setlength{\fboxsep}{0pt}%
\fbox{\raisebox{0pt}[#2]{\hspace{#1}}}%
}%
}
\newcommand{w}{w}
\newcommand{\mathfrak{p}}{\mathfrak{p}}
\newcommand{$g$-equivalent}{$g$-equivalent}
\newcommand{$g$-equivalence}{$g$-equivalence}
\newcommand{G^g}{G^g}
\newcommand{\Psi}{\Psi}
\newcommand{\Upsilon}{\Upsilon}
\newcommand{(\sieve,\siftable)}{(\Psi,\Upsilon)}
\newenvironment{epigraph}
{\hfill\begin{minipage}{0.6\linewidth}\raggedleft\footnotesize}{\end{minipage}\bigskip\bigskip}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathfrak{a}}{\mathfrak{a}}
\renewcommand{\geq}{\geqslant}
\renewcommand{\leq}{\leqslant}
\renewcommand{\Re}{\mathfrak{Re}\,}
\renewcommand{\Im}{\mathfrak{Im}\,}
\newcommand{\eqref}{\eqref}
\newcommand{\backslash}{\backslash}
\newcommand{\ov}[1]{\overline{#1}}
\newcommand{\peter}[1]{\langle{#1}\rangle}
\newcommand\sumsum{\mathop{\sum\sum}\limits}
\newcommand\delval{1/8}
\newcommand\delvaln{1/16}
\begin{document}
\title{Ultra-short sums of trace functions}
\author{E. Kowalski}
\author{T. Untrau}
\address{ETH Z\"urich -- D-MATH\\
R\"amistrasse 101\\
8092 Z\"urich\\
Switzerland}
\email{[email protected]}
\address{Université de Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, F-33400 \\ Talence, France}
\email{[email protected]}
\date{\today,\ \thistime}
\subjclass[2010]{11T23, 11L15}
\keywords{Equidistribution, linear relations between algebraic
numbers, Weyl sums, roots of polynomial congruences, trace functions}
\begin{abstract}
We generalize results of Duke, Garcia, Hyde, Lutz and others on the
distribution of sums of roots of unity related to Gaussian periods
to obtain equidistribution of similar sums over zeros of arbitrary
integral polynomials. We also interpret these results in terms of
trace functions, and generalize them to higher rank trace functions.
\end{abstract}
\maketitle
\section{Introduction}
The motivation for this work lies in papers of Garcia, Hyde and
Lutz~\cite{ghl} and Duke, Garcia and Lutz~\cite{dgl}, recently
generalized by Untrau~\cite{untrau} in a number of ways, which
considered the distribution properties of certain finite sums of roots
of unity which are related to Gaussian periods and to
``supercharacters'' of finite groups.
We interpret these sums as examples of sums of trace functions over
certain \emph{bounded} finite sets. From this point of view, this
study is a complement to results concerning sums of trace functions
with growing length modulo a prime~$p$ (for instance, the paper of
Perret-Gentil~\cite{p-g} for sums of length roughly up to $\log p$, or
that of Fouvry, Kowalski, Michel, Raju, Rivat and
Soundararajan~\cite{sliding} for sums of length slightly above
$\sqrt{p}$, and that of Kowalski and Sawin~\cite{ks} for sums of
length proportional to~$p$).
The range of summation will be taken to be more general than an
interval, and despite the simplicity of the setting, one obtains some
interesting equidistribution results.
Here is a simple illustration of our statements. More general versions
will be proved in Sections~\ref{sec-proofs1},~\ref{section
multiplicative} and~\ref{sec-proofs2}. We recall the definition
$$
\Kl_2(a;q)=\frac{1}{\sqrt{q}}\sum_{x\in\mathbf{F}_q^{\times}}
e\Bigl(\frac{ax+\bar{x}}{q}\Bigr),\quad\quad
e(z)=e^{2i\pi z},
$$
of the normalized Kloosterman sums modulo a prime number~$q$.
\begin{theorem}[Ultra-short sums of additive characters and
Kloosterman sums]\label{th-1}
Let $g\in~\mathbf{Z}[X]$ be a fixed monic polynomial of degree~$d\geq
1$. For any field~$K$, denote by $Z_g(K)$ the set of zeros of~$g$
in~$K$, and put $Z_g=Z_g(\mathbf{C})$. Let $K_g= \mathbf{Q}(Z_g)$ be the splitting
field of $g$.
\par
\emph{(1)} As $q\to +\infty$ among prime numbers unramified and
totally split in $K_g$, the sums
$$
\sum_{x\in Z_g(\mathbf{F}_q)}e\Bigl(\frac{ax}{q}\Bigr)
$$
parameterized by $a\in\mathbf{F}_q$ become equidistributed in~$\mathbf{C}$ with
respect to some explicit probability measure~$\mu_g$.
\par
\emph{(2)} As $q\to +\infty$ among prime numbers unramified and
totally split in $K_g$, the sums
$$
\sum_{x\in Z_g(\mathbf{F}_q)}\Kl_2(ax;q)
$$
parameterized by $a\in\mathbf{F}_q$ become equidistributed in~$\mathbf{C}$ with
respect to the measure which is the law of the sum of $d$ independent
Sato--Tate random variables.
\end{theorem}
\begin{example}
(1) The case considered in the previous papers that we mentioned is
that of $g=X^d-1$ for some integer $d\geq 1$, in which case $Z_g$ is
the set of $d$-th roots of unity and the primes involved are the prime
numbers congruent to~$1$ modulo~$d$. (In fact, these references
consider more generally the sums above for $q$ a power of an odd prime
$\equiv 1\mods{d}$, and we will also handle this case.)
\par
(2) The measure $\mu_{g}$ can be described relatively explicitly,
and depends on the additive relations (with integral coefficients)
satisfied by the zeros of $g$. We will discuss this in more detail
below, but ``generically'', we will see that $\mu_g$ is just the law
of the sum $X_1+\cdots+X_d$ of $d$ independent random variables each
uniformly distributed on the unit circle. However, more interesting
measures also arise, for instance for $g=X^{\ell}-1$ where $\ell$ is
a prime number, the measure $\mu_{X^{\ell}-1}$ is the image by the
map
$$
(z_1,\ldots,z_{\ell-1})\mapsto
z_1+\cdots+z_{\ell-1}+\frac{1}{z_1\cdots z_{\ell-1}}
$$
of the uniform (Haar) probability measure on
$(\mathbf{S}^1)^{\ell-1}$. Figure \ref{relations_or_not} below illustrates
two examples. In the case of the polynomial $X^3 + 2X^2 + 3$, one
can show that there are no non-trivial additive relations between
the zeros of $g$, whereas in the case of the polynomial
$X^3 + X+ 3$, there is clearly the relation given by the sum of the
roots which equals zero (because the coefficient of $X^2$ is
zero). We see that this difference between their module of additive
relations translates into different limiting measures $\mu_g$ for
the associated sums of additive characters. Since these two
polynomials have Galois group $\mathfrak{S}_3$ over $\mathbf{Q}$, these
pictures will be fully explained in Section \ref{sec-examples},
Example 2.
\begin{figure}
\centering
\begin{subfigure}[b]{5cm} \label{somme_unif}
\includegraphics[width=\textwidth]{sum_add_gx3p2x2p3q30113.png}
\caption{$g = X^3 + 2X^2 + 3$ and \\
$q=30113$.}
\end{subfigure}
\hspace{0.6cm}
\begin{subfigure}[b]{4.42cm}
\includegraphics[width=\textwidth]{sum_add_gx3pxp3q30223.png}
\caption{$g = X^3 + X + 3$ and $q=30223$.}
\end{subfigure}
\caption{The sums $\sum_{x \in Z_g(\mathbf{F}_q)} e( \frac{ax}{q})$
as $a$ varies in $\mathbf{F}_q$, for two different polynomials
$g$ of degree $3$.}
\label{relations_or_not}
\end{figure}
\par
(3) The second part of the theorem also has precursors: for instance,
the result follows from~\cite[Prop.\,3.2]{clt} if
$g=(X-1)\cdots (X-d)$. We illustrate our generalization in Figure \ref{sum_of_3_ST} with the example of another polynomial $g$ of degree 3.
\begin{figure}
\centering \includegraphics[width =
0.6\textwidth]{distribscklgx3m9xm1p8089.png}
\caption{Distribution of the values of the sums
$\sum_{x \in Z_g(\mathbf{F}_q)} \mathrm{Kl}_2(ax;q)$ as $a$ varies
in $\mathbf{F}_q$, for $g = X^3 -9X -1$ and $q = 8089$. The red
curve is the probability density function of the random
variable $X_1 + X_2 + X_3$ defined as the sum of three
independent and identically distributed Sato--Tate random
variables.}
\label{sum_of_3_ST}
\end{figure}
\end{example}
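These distributions are easy to sample numerically. The short sketch below (ours, in plain Python rather than the \texttt{sagemath} code used for the figures; the function name is our own) computes the sums of Theorem~\ref{th-1}(1) for $g = X^3+X+3$ and the prime $q=30223$ of Figure~\ref{relations_or_not}; assuming $q$ is totally split for $g$, it also illustrates the additive relation (the roots sum to $0$ modulo $q$, since the coefficient of $X^2$ vanishes) and the fact that the average of $|S(a)|^2$ over $a$ equals the number of roots, by orthogonality:
\begin{verbatim}
# Sketch: sample S(a) = sum_{x in Z_g(F_q)} e(a x / q) for all a in F_q.
import cmath

def root_sums(coeffs, q):
    """coeffs: coefficients of g, lowest degree first; q: prime."""
    roots = [x for x in range(q)
             if sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q == 0]
    return roots, [sum(cmath.exp(2j * cmath.pi * a * x / q) for x in roots)
                   for a in range(q)]

roots, S = root_sums([3, 1, 0, 1], 30223)     # g = X^3 + X + 3
print(len(roots), sum(roots) % 30223)          # 3 roots summing to 0 mod q
print(sum(abs(s)**2 for s in S) / len(S))      # = #roots, by orthogonality
\end{verbatim}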
\begin{remark}
We assume that~$g$ is monic mostly for simplicity to ensure that the
roots of~$g$ are algebraic integers. However, one can also handle an
arbitrary polynomial~$g$ by considering an integer~$N\geq 1$ such
that $Nz$ is integral for all roots~$z$ of~$g$, and either reduce to
the monic case by using the polynomial~$\widetilde{g}$ with roots
the~$Nz$, or by considering below the ring~$\mathbf{O}_g[1/N]$
of~$N$-integers in~$K_g$ instead of the full ring of integers.
\end{remark}
\subsection*{Notation}
\label{section notations}
Let~$G$ be a locally compact abelian group, with character
group~$\widehat{G}$. Let~$H$ be a closed subgroup of~$G$. We recall that
the restriction homomorphism from $\widehat{G}$ to $\widehat{H}$ is
surjective (in other words, any character of $H$ can be extended to a
character of $G$).
The \emph{orthogonal} $H^{\perp}$ of~$H$ is the closed subgroup
of~$\widehat{G}$ defined by
$$
H^{\perp}=\{\chi \in \widehat{G}\,\mid\, \chi(x)=1\text{ for all } x \in
H\}.
$$
If we identify the dual of~$\widehat{G}$ with~$G$ by Pontryagin duality,
then the orthogonal of~$H^{\perp}$ is identified with~$H$, or in other
words
$$
H=\{x \in G\,\mid\, \chi(x)=1\text{ for all } \chi \in
H^{\perp}\}.
$$
We refer, e.g., to Bourbaki's account~\cite{ts2} of Pontryagin duality
for these facts.
Suppose that~$G$ is compact. A random variable with values in~$G$ is
said to be \emph{uniformly distributed on~$H$} if its law~$\mu$ is the
probability Haar measure on~$H$ (viewed as a probability measure
on~$G$).
Throughout this paper, we will consider a fixed \emph{monic}
polynomial $g\in\mathbf{Z}[X]$ of degree~$d\geq 1$. We denote by $Z_g$ the
set of zeros of~$g$ in~$\mathbf{C}$, and more generally by $Z_g(A)$ the set
of zeros of~$g$ in any commutative ring~$A$. We further denote by
$K_g$ the splitting field of~$g$ in~$\mathbf{C}$, so that
$K_g=\mathbf{Q}(Z_g)$. Since our discussion only depends on $Z_g$, we will
assume, without loss of generality, that $g$ is separable. Since~$g$
is monic, the set~$Z_g$ is contained in the ring of integers~$\mathbf{O}_g$
of~$K_g$.
For any set~$X$, we denote by $C(Z_g;X)$ the set of functions
$Z_g\to X$, and in particular we write $C(Z_g)=C(Z_g;\mathbf{C})$ for the
vector space of $\mathbf{C}$-valued functions on~$Z_g$. We denote by $\sigma$
the linear form on~$C(Z_g)$ defined by
$$
\sigma(f)=\sum_{x\in Z_g}f(x),
$$
and by $\gamma$ the morphism of abelian groups from $C(Z_g;\mathbf{Z})$ to~$K_g$
defined by
\begin{equation}\label{eq-gamma}
\gamma(f)=\sum_{x\in Z_g}f(x)x.
\end{equation}
The set $C(Z_g;\mathbf{S}^1)$ is a compact abelian group, isomorphic to
$(\mathbf{S}^1)^{|Z_g|}$ by sending $\alpha$ to $(\alpha(z))_{z\in Z_g}$.
\subsection*{Acknowledgements} The second author wishes to thank
Florent Jouve and Guillaume Ricotta for many helpful discussions, and
Emanuele Tron for giving the key ideas of the proof of the linear
independence of $j$-invariants. Pictures were made using the
open-source software \texttt{sagemath}.
\section{The case of additive characters}\label{sec-proofs1}
We begin with the simple setup of the first part of Theorem~\ref{th-1}
before considering a much more general situation.
We fix a separable monic polynomial~$g\in\mathbf{Z}[X]$ as in the previous
discussion. Let $p$ be a non-zero prime ideal in $\mathbf{O}_g$. We denote
by $|p|=|\mathbf{O}_g/p|$ the norm of~$p$. For any ideal $I\subset \mathbf{O}_g$,
the canonical projection $\mathbf{O}_g\to \mathbf{O}_g/I$ will be denoted
$\varpi_I$, or simply $\varpi$ when the ideal is clear from context.
We denote by $\mathcal{S}_g$ the set of prime ideals $p\subset \mathbf{O}_g$
which do not divide the discriminant of~$g$ (so that the reduction map
modulo~$p$ is injective on~$Z_g$) and have residual degree one (in
particular, these are unramified primes). For $p\in\mathcal{S}_g$, the
norm $q=|p|$ is a prime number, and for any integer $n\geq 1$, the
restriction $\mathbf{Z}\to \mathbf{O}_g/p^n$ of~$\varpi_{p^n}$ induces a ring
isomorphism $\mathbf{Z}/q^n\mathbf{Z}\to \mathbf{O}_g/p^n$. We will usually identify these
two rings. Moreover, for $p\in\mathcal{S}_g$ and $n\geq 1$, the
separable polynomial $g$ has $\deg(g)$ different roots in the completion
of~$K_g$ at~$p$, hence the reduction map modulo $p^n$ induces a
bijection $Z_g\to Z_g(\mathbf{O}_g/p^n)=Z_g(\mathbf{Z}/|p|^n\mathbf{Z})$ for any integer
$n\geq 1$.
For any prime ideal $p\in\mathcal{S}_g$ and any integer~$n\geq 1$, we
view $\mathbf{O}_g/p^n$ as a finite probability space with the uniform
probability measure. We define random variables $U_{p^n}$
on~$\mathbf{O}_g/p^n$, taking values in $C(Z_g;\mathbf{S}^1)$, by
$$
U_{p^n}(a)(x)=e\Bigl(\frac{a\varpi(x)}{|p|^n}\Bigr),
$$
where $\varpi=\varpi_{p^n}$ here (according to our convention,
$a\varpi(x)$ is an element of $\mathbf{O}_g/p^n$ which is identified to an
element of $\mathbf{Z}/|p|^n\mathbf{Z}$).
\begin{remark}
In the earlier references \cite{dgl}, \cite{ghl} and \cite{untrau},
we have $g=X^d-1$ for some
integer~$d$, and one considers primes $q\equiv
1\mods{d}$. A primitive $d$-th root of unity modulo~$q$, say
$w_q$, is fixed for all
such~$q$, and one considers the limit
as~$q\to~\infty$ of the tuples $ (e(\frac{aw_q^k}{q}))_{0 \leqslant
k \leqslant d-1}$, for $a$ uniform
in~$\mathbf{F}_q$. This approach does not generalize in a convenient way
to more general
polynomials~$g$, where the roots are not as easily parameterized.
\end{remark}
\begin{proposition}[Ultra-short equidistribution]\label{pr-1}
The random variables $U_{p^n}$ converge in law as $|p|^n \to+\infty$
to a random function $U\colon Z_g\to \mathbf{S}^1$ such that~$U$ is
uniformly distributed on the subgroup $H_g\subset C(Z_g;\mathbf{S}^1)$ which
is orthogonal to the abelian group
$$
R_g=\ker(\gamma)= \{ \alpha\in C(Z_g;\mathbf{Z})\,\mid\, \sum_{x\in
Z_g}\alpha(x)x=0 \}
$$
of (integral) additive relations between the roots of~$g$, i.e.
$$
H_g=\{f\in C(Z_g;\mathbf{S}^1)\,\mid\, \text{ for all $\alpha\in R_g$, we
have } \prod_{x\in Z_g}f(x)^{\alpha(x)}=1\}.
$$
\end{proposition}
\begin{proof}
Since $C(Z_g;\mathbf{S}^1)$ is a compact abelian group, we can apply the
generalized Weyl Criterion for equidistribution: it is enough to
check that, for any character $\eta$ of $C(Z_g;\mathbf{S}^1)$, we have
$$
\text{\boldmath$E$}(\eta(U_{p^n}))\to \text{\boldmath$E$}(\eta(U))
$$
as $|p|^n\to +\infty$. The right-hand side is either~$1$ or~$0$,
depending on whether the restriction of~$\eta$ to~$H_g$ is trivial or
not.
The character~$\eta$ is determined uniquely by a function
$\alpha\in C(Z_g;\mathbf{Z})$ by the rule
$$
\eta(f)=\prod_{x\in Z_g}f(x)^{\alpha(x)}
$$
for any $f\in C(Z_g;\mathbf{S}^1)$. We have then by definition
$$
\text{\boldmath$E$}(\eta(U_{p^n})) = \frac{1}{|p|^n} \sum_{a\in \mathbf{O}_g/p^n}
e\Bigl(\frac{a}{|p|^n} \varpi\Bigr( \sum_{x\in Z_g}\alpha(x)x\Bigr)
\Bigr).
$$
Simply by orthogonality of the characters modulo $|p|^n$,
this sum is either~$1$ or~$0$, depending on whether
$$
\gamma(\alpha)=\sum_{x\in Z_g}\alpha(x)x
$$
is zero modulo $p^n$ or not. As soon as $|p|^n$ is large enough,
this condition is equivalent to $\gamma(\alpha)$ being zero or not
in~$K_g$. In particular, the limit of~$\text{\boldmath$E$}(\eta(U_{p^n}))$ is
either~$1$ or~$0$ depending on whether $\alpha\in \ker(\gamma)=R_g$
or not, and this is exactly what we wanted to prove.
\end{proof}
\begin{remark}
The proof shows that in fact the Weyl sums are \emph{stationary}. This
somewhat unusual feature\footnote{\ Though there are important
instances of limit theorems where \emph{moments} are stationary,
e.g. in the convergence of the number of fixed points of random
permutations to a Poisson distribution.} explains the very regular
aspect of the experimental pictures. We will explore further
consequences of this fact in a later work.
\end{remark}
\begin{corollary} \label{cor-additive-characters} For $a$ taken
uniformly at random in $\mathbf{O}_g/p^n$ with $p\in\mathcal{S}_g$, lying
above a prime number $q$ which does not divide $\disc(g)$, the sums
$$
\sum_{x\in Z_g(\mathbf{O}_g/ p^n)}e\Bigl(\frac{ax}{|p|^n}\Bigr)
$$
become equidistributed in~$\mathbf{C}$ as $|p|^n\to +\infty$ with limiting
measure~$\mu_g$ given by the law of $\sigma(U)$, where $U$ is
uniformly distributed on~$H_g$.
\par
Similarly, for a prime number $q$ totally split in~$K_g$ and
not dividing the discriminant of~$g$, the sums
$$
\sum_{\substack{x\in \mathbf{Z} / q^n \mathbf{Z}\\g(x) \equiv 0
\mods{q^n}}}e\Bigl(\frac{ax}{q^n}\Bigr)
$$
for $a\in \mathbf{Z} / q^n \mathbf{Z}$ become equidistributed in~$\mathbf{C}$ as
$q^n \to+\infty$ with limit $\sigma(U)$.
\end{corollary}
\begin{proof}
(1) Since $\varpi_{p^n}$ induces a bijection between $Z_g$ and
$Z_g(\mathbf{O}_g/p^n)$, the random variables whose limit we are
considering coincide with $\sigma(U_{p^n})$, and since $\sigma$ is a
continuous function from $C(Z_g;\mathbf{S}^1)$ to~$\mathbf{C}$, we obtain the
result from Proposition~\ref{pr-1} by composition.
\par
(2) For any prime number $q$ which is totally split in~$\mathbf{O}_g$ and
does not divide the discriminant of $g$, there exists a prime ideal
$p \in~\mathcal{S}_g$ above~$q$, and for any $n\geq 1$, we have then
$Z_g(\mathbf{Z}/q^n\mathbf{Z})=Z_g(\mathbf{O}_g/p^n)$, so that
$$
\sum_{x\in Z_g(\mathbf{O}_g/ p^n)}e\Bigl(\frac{ax}{|p|^n}\Bigr) =
\sum_{\substack{x\in \mathbf{Z} / q^n \mathbf{Z}\\g(x) \equiv 0
\mods{q^n}}}e\Bigl(\frac{ax}{q^n}\Bigr),
$$
and the result follows from~(1) since we are considering a subsequence
of the random variables previously considered.
\end{proof}
Before studying a few examples in the next section, we make a few
remarks concerning the limiting measures. Since the random variable
$\sigma(U)$ is bounded, one can compute all its moments using the
equidistribution. This leads straightforwardly to the formulas
$$
\text{\boldmath$E$}(\sigma(U))=\begin{cases}
0&\text{ if $0\notin Z_g$}\\
1&\text{ if $0\in Z_g$},
\end{cases}
$$
and
$$
\text{\boldmath$E$}(|\sigma(U)|^2)=|Z_g|.
$$
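Indeed (to sketch the computation): for $x$, $y\in Z_g$, the
evaluation maps $f\mapsto f(x)$ and $f\mapsto f(x)f(y)^{-1}$ are
characters of $C(Z_g;\mathbf{S}^1)$, corresponding to the functions
$\delta_x$ and $\delta_x-\delta_y$ respectively, and by Pontryagin
duality (since $H_g$ is the annihilator of~$R_g$, a subgroup of a
discrete group), such a character is trivial on~$H_g$ if and only if
the corresponding function lies in~$R_g$. Since $\gamma(\delta_x)=x$
and $\gamma(\delta_x-\delta_y)=x-y$, orthogonality on the compact
group~$H_g$ gives
$$
\text{\boldmath$E$}(U(x))=\begin{cases}
1&\text{ if $x=0$}\\
0&\text{ otherwise,}
\end{cases}
\quad\quad
\text{\boldmath$E$}(U(x)\overline{U(y)})=\begin{cases}
1&\text{ if $x=y$}\\
0&\text{ otherwise,}
\end{cases}
$$
and summing over $x$ (resp. over $x$ and $y$) yields the two formulas.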
The fact that the expectation is zero if $g$ is irreducible of degree at
least~$2$ has some indirect relevance to the well-known conjecture
according to which the fractional parts of the roots modulo primes
$q\leq x$ of an irreducible polynomial~$g$ of degree at least~$2$ should
become equidistributed (with respect to the Lebesgue measure)
in~$\mathbf{R}/\mathbf{Z}$ as $x\to +\infty$ -- see, e.g., the paper~\cite{dfi} of
Duke, Friedlander and Iwaniec.
\par
Indeed, the Weyl sums for this equidistribution problem are
(essentially)
$$
\frac{1}{\pi(x)}\sum_{q\leq x}\sum_{\substack{y\in
\mathbf{F}_q\\g(y)=0}}e\Bigl(\frac{ay}{q}\Bigr)
$$
(where~$q$ ranges over primes) for some \emph{fixed} non-zero
integer~$a$. For each prime~$q$ which happens to be totally split
in~$K_g$, the inner sum is of the form $\sigma(U_p(a))$ for some prime
ideal~$p\in\mathcal{S}_g$. Thus, Proposition~\ref{pr-1} tells us about
the asymptotic distribution of these terms \emph{when $a$ varies
modulo~$q$}. Intuitively, we may hope that the average over~$q$
should lead to a limit which coincides with $\text{\boldmath$E$}(\sigma(U))=0$,
and this would translate to the equidistribution conjecture.
\par
In fact, we may even ask whether these inner parts of the Weyl sums
for equidistribution are \emph{themselves} equidistributed. More
precisely, fix a non-zero integer~$a$, and consider the random
variables of the type
$$
U'_T(p)(x)=e\Bigl(\frac{a\varpi_p(x)}{|p|}\Bigr)
$$
defined on the probability spaces $\mathcal{S}_g(T)$ of primes~$p$ in
$\mathcal{S}_g$ with $|p|\leq T$ (with uniform probability
measure), and with values in~$C(Z_g;\mathbf{S}^1)$.
\begin{question}
Do the random functions $U'_T$ converge in law as $T\to+\infty$? If
Yes, is the limit the same as in Proposition~\ref{pr-1}?
\end{question}
If the answer to this question is positive, then the equidistribution
conjecture holds, at least when averaging only over primes totally
split in~$K_g$, since then
$$
\frac{1}{|\mathcal{S}_g(T)|}\sum_{p\in\mathcal{S}_g(T)} \sum_{x\in
Z_g}e\Bigl(\frac{a\varpi_p(x)}{|p|}\Bigr) \to
\int_{\mathbf{C}}zd\mu_g(z)=0.
$$
The answer is indeed positive when $g$ is irreducible of degree~$2$,
by the work of Duke, Friedlander and Iwaniec~\cite{dfi} and
Toth~\cite{toth} (more precisely, in this case the relevant inner Weyl
sums are essentially Salié sums, and it is proved -- using the
equidistribution property for the roots of quadratic congruences,
which is the main result of these papers -- that the Salié sums become
equidistributed in $[-2,2]$ like the sums $e(x)+e(-x)$ where~$x$ is
uniformly distributed in~$\mathbf{R}/\mathbf{Z}$). Moreover, this question is closely
related with recent conjectures of
Hrushovski~\cite[\S\,5.5]{hrushovski}, themselves motivated by
questions concerning the model theory of finite fields with an
additive character.
\par
Numerical experiments also seem to suggest a positive answer at least
in many cases. But note also that obtaining the same limiting measure
depends on assuming that~$g$ is irreducible. (For instance, if there
is an integral root~$k$ for~$g$, as is the case with $k=1$ for
$X^d-1$, then the value $U'_T(p)(k)=e(ak/|p|)$ converges to~$1$ as
$|p|\to +\infty$, which is a different behavior than that provided by
Proposition~\ref{pr-1}.)
\section{Examples}\label{sec-examples}
We now consider a few examples of Proposition~\ref{pr-1}.
\par
(1) Suppose that $g=X^d-1$ for some $d\geq 1$, so that $Z_g=\boldsymbol{\mu}_d$ is
the group of $d$-th roots of unity.
\par
Consider first the case when $d=\ell$ is a prime number. The group of
additive relations is generated in this case by the constant function
$\alpha=1$ (indeed, let $\xi\in\boldsymbol{\mu}_{\ell}$ be a root of unity different
from~$1$; then a relation
$$
\sum_{x\in\boldsymbol{\mu}_\ell}\alpha(x)x=0
$$
is equivalent to $f(\xi)=0$, where $f$ is the polynomial
$$
\sum_{i=0}^{\ell-1}\alpha(\xi^i)X^i\in\mathbf{Z}[X],
$$
which must therefore be an integral multiple of the minimal polynomial
$$
1+X+\cdots +X^{\ell-1}
$$
of $\xi$). The subgroup $H_{X^{\ell}-1}$ which is the support of the
limit~$U$ in this case is then
$$
H_{X^{\ell}-1}=\{f\colon \boldsymbol{\mu}_\ell \to\mathbf{S}^1\,\mid\,
\prod_{x\in\boldsymbol{\mu}_{\ell}}f(x)=1\},
$$
which can be identified with~$(\mathbf{S}^1)^{\ell-1}$ by the group
isomorphism $f\mapsto (f(x))_{x\in\boldsymbol{\mu}_{\ell}\setminus\{1\}}$. The linear
form $\sigma$ is then identified with the map
$(\mathbf{S}^1)^{\ell-1}\to \mathbf{C}$ such that
$$
(y_1,\ldots,y_{\ell-1})\mapsto y_1+\cdots
+y_{\ell-1}+\frac{1}{y_1\cdots y_{\ell-1}}.
$$
\par
In the case of a general~$d$, the same argument shows that
$R_{X^{d}-1}$ is the group of functions $\alpha\colon\boldsymbol{\mu}_d\to\mathbf{Z}$
such that the $d$-th cyclotomic polynomial $\Phi_d$ divides
$$
\sum_{i=0}^{d-1}\alpha(\xi^i)X^i,
$$
where $\xi$ is a primitive $d$-th root of unity. Thus $R_{X^d-1}$ is a
free abelian group of rank $d-\varphi(d)$, generated by the
functions~$\alpha$ corresponding to the polynomials
$$
\Phi_d,\ X\Phi_d,\ \cdots,\ X^{d-\varphi(d)-1}\Phi_d.
$$
\par
Although this presentation is more abstract, it coincides with the
description of Duke, Garcia and Lutz in~\cite[Th.\,6.3]{dgl}.
\par
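For example (as a quick illustration), take $d=4$, so that
$\varphi(4)=2$ and $\Phi_4=X^2+1$. The group $R_{X^4-1}$ is then free
of rank~$2$, generated by the functions corresponding to the
polynomials $\Phi_4=X^2+1$ and $X\Phi_4=X+X^3$; with $\xi=i$, these
encode the relations
$$
1+i^2=0,\quad\quad i+i^3=0,
$$
and every additive relation between the fourth roots of unity is an
integral combination of these two.
\par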
(2) The group of additive relations of a polynomial is studied by
Berry, Dubickas, Elkies, Poonen and Smyth~\cite{rels} in some detail
(see also~\cite{relations}). It is known for instance (see
e.g.~\cite[Prop.\,2.8]{relations} or~\cite[Prop.\,4.7.12]{repr}; this
goes back at least to Smyth~\cite{smyth}) that if the Galois group of
$K_g$ over~$\mathbf{Q}$ is the symmetric group~$\mathfrak S_d$, then only two
cases are possible: either $R_g$ is trivial (in which case the limit
measure $\mu_g$ is the law of the sum of $d$ independent random
variables uniformly distributed on~$\mathbf{S}^1$) or $R_g$ is generated by
the constant function~$1$ (in which case the measure $\mu_g$ is the
same measure described in (1), except that $d$ is not necessarily
prime here). This second case corresponds to the situation where the
sum of the roots is zero, i.e., to the case when the coefficient of
$X^{d-1}$ in~$g$ is zero.
\par
(3) More interesting examples arise from polynomials $g$ that are
characteristic polynomials of ``random'' elements of the group of
integral matrices in a simple Lie algebra~$L$, where additive relations
corresponding to the root system of~$L$ will appear. For instance, for
the Lie algebra of type $G_2$, in its $7$-dimensional irreducible
representation, the roots of a characteristic polynomial have the form
of tuples
$$
(0,x,y,x+y,-x,-y,-x-y)
$$
so that the group of additive relations will be quite large. It would
be interesting to determine explicitly the support of the image
measure in this case.
\par
(4) Another natural example comes from the \emph{Hilbert class
polynomial} $g=H_{\Delta}$, whose roots are the $j$-invariants of
elliptic curves with CM by an imaginary quadratic order~$\mathcal{O}$ of given
discriminant $\Delta$ (see, e.g.,~\cite[\S\,13,\,
Prop.\,13.2]{cox}). This means that we consider sums
\begin{equation} \label{sum-elliptic}
\sum_{E\text{ with CM by } \mathcal{O}}e\Bigl(\frac{aj(E)}{q}\Bigr),
\end{equation}
where the sum runs over isomorphism classes over~$\mathbf{C}$ of elliptic
curves with CM by~$\mathcal{O}$, for prime numbers~$q$ totally split in the ring
class field corresponding to the order~$\mathcal{O}$. For instance, if
$\Delta=-4m$ with $m\geq 1$ squarefree, these are exactly the primes of
the form $x^2+my^2$ (see the book of Cox~\cite{cox} for details).
From Proposition \ref{pr-1}, and
Corollary~\ref{cor-additive-characters}, we know that the asymptotic
distribution of the sums \eqref{sum-elliptic}, as $q$ tends to
infinity and $a$ varies in $\mathbf{F}_q$, is governed by the additive
relations between these $j$-invariants. As it turns out, there are no
non-trivial relations, except for~$\Delta=-3$. This is essentially due
to the fact that there is one $j$-invariant (for fixed $\mathcal{O}$ with
discriminant large enough) which is much larger than the others,
combined with the following lemma.
\begin{lemma}
Let~$g\in \mathbf{Z}[X]$ be irreducible over~$\mathbf{Q}$ of degree~$d\geq 2$. If
there exists $x_0\in Z_g$ such that
$$
|x_0|>\sum_{\substack{x\in Z_g\\x\not=x_0}}|x|,
$$
then $R_g=\{0\}$.
\end{lemma}
\begin{proof}
Suppose that there exists $\alpha\in R_g$ non-zero. Let $x_1\in Z_g$ be
such that $|\alpha(x_1)|$ is maximal, hence non-zero. Dividing
by~$\alpha(x_1)$, we obtain a relation
$$
0=\sum_{x\in Z_g}\beta(x) x
$$
where $\beta(x)\in\mathbf{Q}$ with $|\beta(x)|\leq 1$ for all~$x$ and
$\beta(x_1)=1$. Since~$g$ is irreducible, we can find a Galois
automorphism $\xi$ such that $\xi(x_1)=x_0$, which means that we may
assume that~$x_1=x_0$. Then we obtain
$$
|x_0|=\Bigl|\sum_{x\not=x_0}\beta(x)x\Bigr|\leq \sum_{x\not=x_0}|x|,
$$
and we conclude by contraposition.
\end{proof}
This lemma is applicable to the Hilbert class polynomial
$H_{\Delta}$. Indeed, it is irreducible (see,
e.g.,~\cite[\S\,13]{cox}). To check the existence of a dominating
$j$-invariant, we use the bound
$$
\Bigl| |j(\tau)| - e^{2\pi \Imag(\tau)}\Bigr| \leqslant 2079,
$$
for $\tau$ in the usual fundamental domain~$F$ of $\mathbf{H}$ modulo
$\SL_2(\mathbf{Z})$ (see \cite[Lemma\,1]{bilu-masser-zannier} by Bilu, Masser
and Zannier), combined with the fact that there is a unique $\tau$
in~$F$ such that $j(\tau)$ is a root of $H_{\Delta}$ and
$\Imag(\tau)\geq \sqrt{|\Delta|}/2$, while all other $j$-invariants
for the order $\mathcal{O}$ are of the form $j(\tau')$ where $\tau'\in F$ has
$\Imag(\tau')\leq \sqrt{|\Delta|}/4$
(see~\cite[Section\,3.3]{allombert} by Allombert, Bilu and
Pizarro-Madariaga). These properties imply that the lemma is
applicable as soon as the bound
$$
e^{\pi \sqrt{|\Delta|}} - 2079>\deg(H_{\Delta})(e^{\pi
\frac{\sqrt{|\Delta|}}{2}} + 2079)
$$
holds. The degree of~$H_{\Delta}$ is the Hurwitz class number, and one
knows classically that
$$
\deg H_{\Delta}\leq \frac{\sqrt{|\Delta|}}{\pi}(\log |\Delta|+2),
$$
(see, e.g.,~\cite[Lemma\,3.6]{b-h-k} by Bilu, Habegger and Kühne). One
checks easily that the desired bound follows unless $\Delta\geq
-9$. For the remaining cases, $H_{\Delta}$ has degree~$1$, and its
unique root is a non-zero integer, except that~$H_{-3}=X$ (see for
instance the table~\cite[\S\,12.C]{cox} in the book of
Cox). Therefore, unless $\Delta = -3$, the module of additive
relations of $H_{\Delta}$ is trivial. Of course, for~$\Delta=-3$, it
is isomorphic to~$\mathbf{Z}$.
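(To illustrate the numerical verification above: for $\Delta=-11$,
the left-hand side of the bound is
$e^{\pi\sqrt{11}}-2079\approx 3.1\times 10^4$, whereas the degree
bound gives at most $4.7\,(e^{\pi\sqrt{11}/2}+2079)\approx 1.1\times
10^4$ for the right-hand side, and the gap only widens as $|\Delta|$
grows.)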
This immediately leads to the following corollary concerning the distribution of sums of
type \eqref{sum-elliptic}:
\begin{corollary}
Fix a negative discriminant $\Delta \neq -3$ of an imaginary
quadratic order~$\mathcal{O}$ with class number~$h$. As $q \to \infty$ among
the primes totally split in the ring class field corresponding to
the order $\mathcal{O}$, the sums
$$
\sum_{E\text{ \rm with CM by } \mathcal{O}}e\Bigl(\frac{aj(E)}{q}\Bigr)
$$
parametrized by $a \in \mathbf{F}_q$ become equidistributed in $\mathbf{C}$ with
respect to the law of the sum $X_1+\cdots+X_{h}$ of $h$ independent random
variables, each uniformly distributed on the unit circle.
\end{corollary}
On the other hand, for~$\Delta=-3$, we have
$$
\sum_{E\text{ \rm with CM by } \mathcal{O}}e\Bigl(\frac{aj(E)}{q}\Bigr)=1
$$
for all~$q$.
\section{Conditioning}
The basic argument leading to Proposition~\ref{pr-1} extends in another
nice way to the \emph{conditioning} situation, where we restrict the
random variables $U_{p^n}$ to suitable subsets of~$\mathbf{O}_g/p^n$. This
turns out to be closely related to the distribution of the fractional
parts of these subsets.
The precise statements require some additional notation. First, we
denote by $\ind(g)$ the non-negative integer such that
$$
\Imag(\gamma)\cap \mathbf{Z}=\ind(g)\mathbf{Z}
$$
(recall the definition~(\ref{eq-gamma}) of~$\gamma$; note that it is
possible that~$\ind(g)=0$, e.g. for $g=X^2+d$ with $d\not=0$).
For a prime ideal $p\in\mathcal{S}_g$ and $n\geq 1$, and for any
$a\in\mathbf{O}_g/p^n$, we define the ``fractional part'' of~$a$ to be the
fractional part in $[0,1]$ of $\bar{a}/|p|^n$ for any lift
$\bar{a}\in\mathbf{Z}$ of $a$ identified as an element of~$\mathbf{Z}/|p|^n\mathbf{Z}$.
We denote by $U$ the limit in Proposition~\ref{pr-1}.
\begin{proposition}[Ultra-short equidistribution]\label{pr-cond}
For a subsequence of ideals $p^n$ with $p\in\mathcal{S}_g$ and
$n\geq 1$, let $A_{p^n}$ be a non-empty subset of~$\mathbf{O}_g/p^n$.
\par
\emph{(1)} If the fractional parts of $a\in A_{p^n}$ are
\emph{uniformly equidistributed modulo~$1$} as $|p|^n\to +\infty$,
in the sense that
$$
\mathop{\mathrm{Max}}\limits_{\substack{h\in \mathbf{O}_g/p^n\\h\not=0}}
\frac{1}{|A_{p^n}|}\Bigl|\sum_{a\in A_{p^n}}
e\Bigl(\frac{ah}{|p|^n}\Bigr)\Bigr|\to 0
$$
as $|p|^n\to +\infty$, then the restrictions of the random variables
$U_{p^n}$ to~$A_{p^n}$, viewed as a probability space with the uniform
probability measure, converge in law to~$U$.
\par
\emph{(2)} Suppose that~$\ind(g)\not=0$ and that the restrictions of
the random variables $U_{p^n}$ to~$A_{p^n}$, viewed as a probability
space with the uniform probability measure, converge in law to~$U$. Then
the fractional parts of elements of $\ind(g)A_{p^n}$ are
equidistributed modulo~$1$.
\end{proposition}
\begin{proof}
We denote by $U'_{p^n}$ the restriction of~$U_{p^n}$ to~$A_{p^n}$,
viewed as probability space with the uniform probability measure.
We expand the characteristic function $f_{p^n}$ of $A_{p^n}$ in
discrete Fourier series
$$
f_{p^n}(a)=\sum_{h\in
\mathbf{O}_g/p^n}\alpha_{p^n}(h)e\Bigl(\frac{ha}{|p|^n}\Bigr)
$$
where
$$
\alpha_{p^n}(h)=\frac{1}{|p|^n}
\sum_{a\in A_{p^n}}e\Bigl(-\frac{ha}{|p|^n}\Bigr).
$$
Let~$\eta$ be a character of $C(Z_g;\mathbf{S}^1)$, determined by
$\alpha\in C(Z_g;\mathbf{Z})$ as in Proposition~\ref{pr-1}. By definition, we
have
\begin{align*}
\text{\boldmath$E$}(\eta(U'_{p^n}))&=\frac{1}{|A_{p^n}|} \sum_{a\in A_{p^n}}
e\Bigl(\frac{a}{|p|^n} \varpi\Bigl( \sum_{x\in Z_g}\alpha(x)x\Bigr)
\Bigr)
\\
&= \frac{1}{|A_{p^n}|} \sum_{h\in \mathbf{O}_g/p^n}\alpha_{p^n}(h)
\sum_{a\in \mathbf{O}_g/p^n} e\Bigl(\frac{a}{|p|^n}(
\varpi(\gamma(\alpha))+h) \Bigr)
\\
&=\frac{|p|^n}{|A_{p^n}|} \alpha_{p^n}(-\varpi(\gamma(\alpha)))=
\frac{1}{|A_{p^n}|} \sum_{a\in
A_{p^n}}e\Bigl(\frac{\varpi(\gamma(\alpha))a}{|p|^n}\Bigr),
\end{align*}
an identity between Weyl sums for the equidistribution of~$U_{p^n}$
and Weyl sums for the equidistribution of the fractional parts of
elements of~$A_{p^n}$.
Suppose first that $A_{p^n}$ is uniformly equidistributed modulo~$1$.
If $\gamma(\alpha)=0$, then we get
$\text{\boldmath$E$}(\eta(U'_{p^n}))=1$. Otherwise, for $|p|^n$ large enough, we
get $\varpi(\gamma(\alpha))\not=0\in \mathbf{O}_g/p^n$, and therefore
$$
|\text{\boldmath$E$}(\eta(U'_{p^n}))|\leq \mathop{\mathrm{Max}}\limits_{\substack{h\in
\mathbf{O}_g/p^n \\ h \neq 0}}\frac{1}{|A_{p^n}|}\Bigl|\sum_{a\in A_{p^n}}
e\Bigl(\frac{ah}{|p|^n}\Bigr)\Bigr|,
$$
which tends to~$0$ by assumption. This proves the first statement.
Conversely, suppose that $\ind(g)\not=0$ and that $U'_{p^n}$ converges
in law to~$U$. Let~$h\in\mathbf{Z}\setminus \{0\}$. Pick
$\alpha\in C(Z_g;\mathbf{Z})$ such that $\gamma(\alpha)=\ind(g)h$, which
exists by definition of~$\ind(g)$. For all $p^n$, we get
$$
\frac{1}{|A_{p^n}|} \sum_{a\in
A_{p^n}}e\Bigl(\frac{h\ind(g)a}{|p|^n}\Bigr)= \frac{1}{|A_{p^n}|}
\sum_{a\in A_{p^n}}e\Bigl(\frac{\varpi(\gamma(\alpha))a}{|p|^n}\Bigr)=
\text{\boldmath$E$}(\eta(U'_{p^n}))
$$
where $\eta$ is the character of $C(Z_g;\mathbf{S}^1)$ corresponding
to~$\alpha$. This character is not trivial on~$H_g$ (because
$\gamma(\alpha)\not=0$), and therefore
$$
\lim_{|p|^n\to +\infty} \frac{1}{|A_{p^n}|} \sum_{a\in
A_{p^n}}e\Bigl(\frac{h\ind(g)a}{|p|^n}\Bigr)=0,
$$
which proves equidistribution modulo~$1$ of fractional parts of
$\ind(g)A_{p^n}$ by the Weyl Criterion.
\end{proof}
\begin{example}
(1) Let $\alpha\in \mathbf{R}$ satisfy $0<\alpha<1$. Let $A_{p^n}$ be the
set of classes corresponding to an interval of length
$\sim \alpha |p|^n$ in $\mathbf{Z}/|p|^n\mathbf{Z}$. Then equidistribution (and a
fortiori uniform equidistribution) of the fractional parts
\emph{fails}, hence the second part implies, by contraposition, that
if $\ind(g)=1$, then the random variables $U_{p^n}$ conditioned to
have $a\in A_{p^n}$ \emph{do not} converge to~$U$.
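Indeed (a short check): if $A_{p^n}$ corresponds to the interval
$\{0,\ldots,M-1\}$ with $M\sim\alpha|p|^n$, then for fixed $h\not=0$
the relevant Weyl sum is a finite geometric sum, and
$$
\frac{1}{M}\sum_{a=0}^{M-1}e\Bigl(\frac{ah}{|p|^n}\Bigr)
=\frac{1}{M}\,\frac{e(Mh/|p|^n)-1}{e(h/|p|^n)-1}
\longrightarrow \frac{e(\alpha h)-1}{2\pi i\alpha h}
$$
as $|p|^n\to+\infty$, which is non-zero as soon as
$\alpha h\notin\mathbf{Z}$ (e.g. for $h=1$).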
\par
As an illustration, let $g= X^3 + X^2 + 2X + 1$. One checks quickly
that~$g$ is irreducible, with Galois group $\mathfrak{S}_3$, so that
Example~2 of Section \ref{sec-examples} implies that the sums
\begin{equation} \label{sum_interval}
\sum_{x \in Z_g(\mathbf{F}_q)}^{} e\Bigl( \frac{ax}{q} \Bigr),
\end{equation}
parametrized by $a \in \mathbf{F}_q$ for $q$ totally split in $K_g$, become
equidistributed with respect to the measure $\mu_g$ which is the law
of the sum of three independent random variables, each uniformly
distributed on $\mathbf{S}^1$. A plot of the values
$ \sum_{x \in Z_g(\mathbf{F}_q)} e(\tfrac{ax}{q})$ for $a \in \mathbf{F}_q$ would
then be very similar to Figure~\ref{relations_or_not}, (A).
However, this polynomial $g$ satisfies $\ind(g) = 1$ (since the
coefficient~$1$ of~$X^2$ shows that the sum of the roots, which is
an element of~$\Imag(\gamma)\cap \mathbf{Z}$, is $-1$) and hence these
sums, parametrized by $a \in \{ 0, \dots, \frac{q-1}{2}\}$, do not
become equidistributed with respect to the same measure. Numerical
experiments confirm this (see Figure~\ref{different_limit}), but
suggest that there is equidistribution with respect to another
measure.
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{sum_add_interval_30307}
\caption{$q = 30307$}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{sum_add_interval_60383}
\caption{$q= 60383$}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{sum_add_interval_100357}
\caption{$q=100357$}
\end{subfigure}
\caption{The sums \eqref{sum_interval} for $a$ varying in
$\{ 0, \dots, \frac{q-1}{2}\}$, for three values of $q$}
\label{different_limit}
\end{figure}
\par
(2) Many examples of uniformly equidistributed sets (modulo primes
at least) are provided by using the theory of trace functions and
the Riemann Hypothesis over finite fields. For instance, if
$f\in \mathbf{Z}[X]$ is a monic polynomial, then the fractional parts of
elements of the sets $A_p=f(\mathbf{O}_g/p)\subset \mathbf{O}_g/p$ are uniformly
equidistributed. Indeed, one derives, e.g.,
from~\cite[Prop.\,6.7]{fkm2}, and the Riemann Hypothesis over finite
fields that $|A_p|\gg |p|$ and that
$$
\frac{1}{|A_{p}|} \sum_{\substack{a\in \mathbf{O}_g/p\\a=f(b)\text{ for some
$b$}}} e\Bigl(\frac{ha}{|p|}\Bigr) \ll \frac{1}{|p|^{1/2}}
$$
for all~$h\in (\mathbf{O}_g/p)^{\times}$, where the implied constant
depends only on~$\deg(f)$. The simplest example is that of quadratic
residues.
\par
(3) In the last estimate, since the implied constant depends only on
the degree of the polynomial~$f$, one can take $f$ to depend
on~$p$. It is natural to ask how large $\deg(f)$ can really be
taken. The simplest ``test'' case is when $f=X^d$ is a monomial, and
the question is then whether Proposition~\ref{pr-cond} applies to
small \emph{multiplicative subgroups}
$A_{p^n}\subset (\mathbf{O}_g/p^n)^{\times}$.
Using a striking result of Bourgain~\cite{bourgainarbitrary}, and
adapting an argument of Untrau~\cite[Prop.\,1.14]{untrau} (to show
that if $\varpi_{p^n}(\gamma(\alpha))\not=0$, then its $p$-adic
valuation is bounded as $p^n$ varies), one can deduce easily that the
first part of Proposition~\ref{pr-cond} does indeed apply if there
exists $\delta>0$ such that $A_{p^n}$ is a subgroup of
$(\mathbf{O}_g/p^n)^{\times}$ with $|A_{p^n}|\gg |p|^{n\delta}$.
\end{example}
\section{Additive characters with more general polynomials}
\label{sec-generalizations}
Very simple adaptations of the proof of Proposition~\ref{pr-1} (which
are left to the reader) lead to the following more general statements,
the second of which was also studied by Untrau in the case $g=X^d-1$.
\begin{proposition}[Ultra-short equidistribution, 2]\label{pr-2}
Let $v\in\mathbf{Z}[X,X^{-1}]$ be a non-constant Laurent polynomial. Assume
that $0\notin Z_g$. Define random variables $W_{p^n}$ on $\mathbf{O}_g/p^n$
for $p\in\mathcal{S}_g$ dividing none of the roots of $g$ and
$n \geqslant 1$, with values in $C(Z_g;\mathbf{S}^1)$, by
$$
W_{p^n}(a)(x)=e\Bigl(\frac{av(\varpi(x))}{|p|^n}\Bigr).
$$
\par
The random variables $W_{p^n}$ converge in law as $|p|^n\to+\infty$ to the
random function $W\colon Z_g\to \mathbf{S}^1$ such that~$W$ is uniformly
distributed on the subgroup orthogonal to the abelian group
$R_{g,v}\subset C(Z_g;\mathbf{Z})$ of additive relations between components
of $(v(x))_{x\in Z_g}$, namely
$$
R_{g,v}=\{\alpha\colon Z_g\to\mathbf{Z}\,\mid\, \sum_{x\in
Z_g}\alpha(x)v(x)=0\}.
$$
\end{proposition}
\begin{proposition}[Ultra-short equidistribution, 3]\label{pr-3}
Let $k\geq 1$ be an integer and fix distinct integers $m_1$, \ldots,
$m_k$ in~$\mathbf{Z}$. Assume $0 \notin Z_g$. For $p\in\mathcal{S}_g$ dividing none of the roots
of $g$ and $n \geqslant 1$, define random variables $Y_{p^n}$ on the
space $(\mathbf{O}_g/p^n)^k$ with uniform probability measure, with values
in $C(Z_g;\mathbf{S}^1)$, by
$$
Y_{p^n}(a_1,\ldots, a_k)(x)=e\Bigl(\frac{1}{|p|^n}
\Bigl(\sum_{i=1}^k a_i\varpi(x)^{m_i}\Bigr)\Bigr).
$$
\par
The random variables $Y_{p^n}$ converge in law as $|p|^n\to+\infty$ to the
random function $Y\colon Z_g\to \mathbf{S}^1$ such that~$Y$ is uniformly
distributed on the subgroup orthogonal to the abelian group
$$
\{\alpha\colon Z_g\to\mathbf{Z}\,\mid\, \sum_{x\in
Z_g}\alpha(x)x^{m_i}=0\text{ for } 1\leq i\leq k\}.
$$
\end{proposition}
As corollaries, we have equidistribution for the sums
$$
\sum_{x\in Z_g(\mathbf{F}_q)}e\Bigl(\frac{av(x)}{q}\Bigr)
$$
as $a$ varies in $\mathbf{F}_q$ for $q$ totally split in $K_g$ and
$$
\sum_{x\in Z_g(\mathbf{F}_q)}e\Bigl(\frac{a_1x^{m_1}+\cdots +a_kx^{m_k}}{q}\Bigr),
$$
as $a_1$, \ldots, $a_k$ vary independently and uniformly in $\mathbf{F}_q$ for
$q$ totally split in $K_g$.
\begin{example}
Consider the case of $g=X^d-1$ and the sums
\begin{equation}\label{eq-1}
\sum_{x\in \boldsymbol{\mu}_d(\mathbf{F}_q)}e\Bigl(\frac{a(x+\bar{x})}{q}\Bigr)
\end{equation}
and
\begin{equation}\label{eq-2}
\sum_{x\in \boldsymbol{\mu}_d(\mathbf{F}_q)}e\Bigl(\frac{ax+b\bar{x}}{q}\Bigr),
\end{equation}
as $a$ and $b$ vary in $\mathbf{F}_q$ for $q$ totally split in $K_g$. Both
satisfy equidistribution,
but in general have different limiting measures. For~(\ref{eq-1}),
we need to determine the functions $\alpha$ satisfying the relation
$$
\sum_{x\in\boldsymbol{\mu}_d}\alpha(x)(x+x^{-1})=0,
$$
and for~(\ref{eq-2}), we need to solve
$$
\sum_{x\in\boldsymbol{\mu}_d}\alpha(x)x=\sum_{x\in\boldsymbol{\mu}_d}\alpha(x)x^{-1}=0.
$$
\par
This last case boils down to the same relations as in
Section~\ref{sec-examples}, Example~1, since the second sum above is
the complex-conjugate of the first.
For~(\ref{eq-1}), on the other hand,
the relation is equivalent to
$$
\sum_{x\in\boldsymbol{\mu}_d}(\alpha(x)+\alpha(x^{-1}))x=0,
$$
which means that $\beta\colon x\mapsto \alpha(x)+\alpha(x^{-1})$
belongs to the group of additive relations of $X^d -1$.
We now assume that $d=\ell$ is an odd prime number. Then, by the
previous examples, the map $\beta$ must be constant. Let then~$\xi$ be
a non-trivial $\ell$-th root of unity. It is then fairly easy to check
that the module $R_{X^{\ell}-1,X+X^{-1}}$ is generated by the constant
function $\alpha_0 = 1$ and the functions $\alpha_j$ for
$1\leq j\leq (\ell-1)/2$ such that
$$
\alpha_j(\xi^k)=\begin{cases}
0 &\text{ if } k\notin \{j,\ell-j\}\\
1&\text{ if } k=j\\
-1&\text{ if } k=\ell-j.
\end{cases}
$$
(It is clear that $\alpha_0$, \ldots, $\alpha_{(\ell-1)/2}$ provide
relations; conversely, if~$\beta$ is constant then we check that
$$
\alpha=\alpha(1)\alpha_0+\sum_{j=1}^{(\ell-1)/2}
(\alpha(\xi^j)-\alpha(1))\alpha_j,
$$
so that these functions generate the group of relations.)
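(To verify this identity, note that both sides take the value
$\alpha(1)$ at~$1$ and the value $\alpha(\xi^k)$ at $\xi^k$ for
$1\leq k\leq(\ell-1)/2$, by the definition of the~$\alpha_j$; and for
$k=\ell-j$ with $1\leq j\leq(\ell-1)/2$, the right-hand side takes the
value $2\alpha(1)-\alpha(\xi^j)$, which equals $\alpha(\xi^{\ell-j})$
precisely because $\beta$ is constant, with constant value
$\beta(1)=2\alpha(1)$.)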
In particular, the module of relations has rank~$(\ell+1)/2$, and the
limit $W$, in this case, is uniform on the subgroup
$H_{X^{\ell}-1,X+X^{-1}}$ characterized by
$f\in H_{X^{\ell}-1,X+X^{-1}}$ if and only if
$$
\prod_{j=0}^{\ell-1}f(\xi^j)=1,
$$
(corresponding to~$\alpha_0$) and
$$
f(\xi^j)=f(\xi^{\ell-j})
$$
for $1\leq j\leq (\ell-1)/2$ (corresponding to~$\alpha_j$).
Consider for instance the case $\ell = 3$. The sums \eqref{eq-2} will
become equidistributed with respect to the measure on $\mathbf{C}$ which is
the pushforward measure of the uniform measure on $\mathbf{S}^1 \times \mathbf{S}^1$
by $ (y_1, y_2) \mapsto y_1 + y_2 + 1/(y_1y_2)$. This is illustrated
in Figure \ref{comparaison} (B), since the image of the above map is
the closed region delimited by a $3$-cusp hypocycloid.
On the other hand, the sums \eqref{eq-1} become equidistributed in
this case with respect to the image of the Haar measure on~$\mathbf{S}^1$ by
the map $y\mapsto 2y+1/y^2$. Since the image of this map is precisely
the $3$-cusp hypocycloid, this explains the picture obtained in Figure
\ref{comparaison} (A).
\begin{figure}
\centering
\begin{subfigure}{5cm}
\includegraphics[width=\textwidth]{sum_x_xinv_overroots_1.png}
\caption{The sums of type \eqref{eq-1} for $d=3$, $q = 811$, and $a$ varying in $\mathbf{F}_q$.}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}{5cm}
\includegraphics[width=\textwidth]{kl_a_et_b_q109.png}
\caption{The sums of type \eqref{eq-2} for $d=3$, $q=109$, and $a$ and $b$ varying in $\mathbf{F}_q$.}
\end{subfigure}
\caption{Comparison between the regions of equidistribution for sums of type \eqref{eq-1} and sums of type \eqref{eq-2}.}
\label{comparaison}
\end{figure}
In the case $\ell = 5$, the sums \eqref{eq-2} are equidistributed with
respect to the measure on $\mathbf{C}$ which is the pushforward measure of
the uniform measure on $(\mathbf{S}^1)^4$ by
$ (y_1, \ldots,y_4) \mapsto y_1 +\cdots + y_4 + 1/(y_1\cdots y_4)$.
The sums \eqref{eq-1} are equidistributed in this case with respect to
the image of the Haar measure on~$(\mathbf{S}^1)^2$ by the map
$(y_1,y_2)\mapsto 2y_1+2y_2+1/(y_1y_2)^2$.
\begin{figure}
\centering
\begin{subfigure}{5cm}
\includegraphics[width=\textwidth]{sum_x_xinv_overroots_d_5.png}
\caption{The sums of type \eqref{eq-1} for $d=5$, $q = 96331$, and $a$ varying in $\mathbf{F}_q$.}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}{5cm}
\includegraphics[width=\textwidth]{kloos5p311alpha1}
\caption{The sums of type \eqref{eq-2} for $d=5$, $q=311$, and $a$ and $b$ varying in $\mathbf{F}_q$.}
\end{subfigure}
\caption{Comparison between the equidistribution results for sums of type \eqref{eq-1} and sums of type \eqref{eq-2} for $d = 5$.}
\label{comparaison 2}
\end{figure}
\end{example}
\section{A multiplicative analogue} \label{section multiplicative}
In~\cite{relations}, the group of multiplicative relations between roots
of a polynomial also appears naturally. Is it also relevant for the type
of questions under consideration here? It turns out that it is, if we
change the probability space, and look at the distribution of sums
$$
\sum_{x\in Z_g}\chi(v(x))
$$
where $\chi$ is a varying multiplicative character of~$\mathbf{F}_q$ and $v$ is
a fixed polynomial.
More precisely, we continue with the notation from the previous
section, but assume moreover that $v(x)\not=0$ for $x\in Z_g$ (for
instance, $0\notin Z_g$ if $v=X$). For $p\in\mathcal{S}_g$, we now
consider the probability space $X_p$ of multiplicative characters
$\chi\colon (\mathbf{O}_g/p)^{\times}\to \mathbf{S}^1$, with the uniform probability
measure (we consider only primes instead of prime powers for
simplicity here). The random variables are now $\widetilde{U}_p$,
taking values in the group $C(Z_g;\mathbf{S}^1)$, and defined by
$$
\widetilde{U}_p(\chi)(x)=\chi(v(\varpi(x))).
$$
\begin{proposition}
The random variables $\widetilde{U}_p$ converge in law as
$|p|\to+\infty$ to the random function
$\widetilde{U}\colon Z_g\to \mathbf{S}^1$ such that~$\widetilde{U}$ is
uniformly distributed on the subgroup
$\widetilde{H}_{g,v}\subset C(Z_g;\mathbf{S}^1)$ which is orthogonal to the
abelian group $\widetilde{R}_{g,v}\subset C(Z_g;\mathbf{Z})$ of multiplicative
relations between values of $v$ on~$Z_g$, namely we have
$$
\widetilde{R}_{g,v}=\{\alpha\colon Z_g\to\mathbf{Z}\,\mid\, \prod_{x\in
Z_g}v(x)^{\alpha(x)}=1\},
$$
and
$$
\widetilde{H}_{g,v}=\{f\in C(Z_g;\mathbf{S}^1)\,\mid\, \text{ for all
$\alpha\in \widetilde{R}_{g,v}$, we have } \prod_{x\in
Z_g}f(x)^{\alpha(x)}=1\}.
$$
\par
In particular, as $q\to+\infty$ among primes totally split in $K_g$,
the sums
$$
\sum_{x\in Z_g}\chi(v(x))
$$
converge in law to the image by the linear form $\sigma$ of the Haar
probability measure on~$\widetilde{H}_{g,v}$.
\end{proposition}
\begin{proof}
This is the same as Proposition~\ref{pr-1}, mutatis mutandis, with now
$$
\text{\boldmath$E$}(\eta(\widetilde{U}_p))= \frac{1}{|p|-1} \sum_{\chi\in X_p}
\prod_{x\in Z_g}\chi(v(\varpi(x)))^{\alpha(x)}
$$
for a character $\eta$ of $C(Z_g;\mathbf{S}^1)$ determined by the function
$\alpha$. This is
$$
\text{\boldmath$E$}(\eta(\widetilde{U}_p))= \frac{1}{|p|-1} \sum_{\chi\in X_p}
\chi\Bigl(\varpi\Bigl(\prod_{x\in Z_g}v(x)^{\alpha(x)}\Bigr)\Bigr)
$$
and for the same reasons as before, converges to~$1$ or~$0$, depending
on whether
$$
\prod_{x\in Z_g}v(x)^{\alpha(x)}
$$
is equal to~$1$ or not.
\end{proof}
\begin{example}
(1) Here also there are some interesting examples in~\cite{relations}
and~\cite{repr} if we take $v=X$ (so that $\widetilde{R}_{g,v}$ corresponds to
multiplicative relations between roots of~$g$). In particular, we
could take a polynomial $g$ with Galois group the Weyl group of
$\mathbf{E}_8$, which is of degree~$248$ but has all roots obtained
multiplicatively from $8$ of them (see~\cite{e8} for examples).
\par
(2) For $v=X$ again, the case of $g=X^d-1$ is quite
degenerate. Indeed, for $q\equiv 1\mods{d}$ and a multiplicative
character $\chi$ of~$\mathbf{F}_q$, the sum
$$
\sum_{x\in\boldsymbol{\mu}_d(\mathbf{F}_q)}\chi(x)
$$
is either~$d$ or~$0$, depending on whether the character~$\chi$ is
trivial on the $d$-th roots of unity or not. The former means that
$\chi^{(|p|-1)/d}=1$, and there are therefore $(|p|-1)/d$ such
characters. Hence the sum is equal to $d$ with probability $1/d$, and
to~$0$ with probability $1-1/d$.
\par
(3) If we consider the class polynomial for CM curves (as in
Section~\ref{sec-examples}, Example~4), we are led to consider
potential multiplicative relations between $j$-invariants. This is
apparently more challenging than the additive case, and we do not have
a precise answer at the moment (see, e.g., the papers of Bilu, Luca
and Pizarro-Madariaga~\cite{b-l-p} and Fowler~\cite{fowler} for
partial results).
\end{example}
\section{Higher rank trace functions}\label{sec-proofs2}
We now elaborate on the setting of Section~\ref{sec-proofs1} to involve
more general trace functions. Thus the goal is to study the distribution
of
$$
\sum_{x\in Z_g(\mathbf{O}_g/p)}t_p(ax),\quad\quad\text{ or }
\quad\quad
\sum_{x\in Z_g(\mathbf{O}_g/p)}t_p(a+x),
$$
(or other similar expressions) when $t_p$ is, for
each~$p\in\mathcal{S}_g$, a trace function over the finite field
$\mathbf{O}_g/p$. The cases of Section~\ref{sec-proofs1} correspond to
$t_p(x)=e(x/|p|)$ or $t_p(x)=e(v(x)/|p|)$, i.e., to the trace functions of
Artin--Schreier sheaves.
We thus assume that for each $p\in\mathcal{S}_g$, we are given a
middle-extension sheaf $\mathcal{F}_p$ on the affine line over $\mathbf{O}_g/p$. We
assume that these sheaves are pure of weight~$0$, and have the same
rank~$r$, and moreover have bounded conductor in the sense of Fouvry,
Kowalski and Michel~\cite{fkm1, sop}. We denote
by~$\Un_r(\mathbf{C})^{\sharp}$ the space of conjugacy classes in the unitary
group~$\Un_r(\mathbf{C})$.
For any~$x\in\mathbf{O}_g/p$ such that~$\mathcal{F}_p$ is lisse at~$x$, the action
of the geometric Frobenius automorphism at~$x$ on the stalk
of~$\mathcal{F}_p$ at~$x$ gives a unique conjugacy class
$\Theta_p(x)\in\Un_r(\mathbf{C})^{\sharp}$. By convention, we will define
$\Theta_p(x)=1$ if $\mathcal{F}_p$ is not lisse at~$x$ (note that this may
happen only for a set of $x$ of size bounded by the complexity
of~$\mathcal{F}_p$, hence uniformly bounded by our assumptions).
We can then define random functions $U_p$ and $V_p$ on $\mathbf{O}_g/p$, with
values in the space $C(Z_g;\Un_r(\mathbf{C})^{\sharp})$ by
$$
U_p(a)(x)=\Theta_p(ax),\quad\quad
V_p(a)(x)=\Theta_p(a+x).
$$
\par
Since the trace function $t_p$ of $\mathcal{F}_p$ satisfies
$$
t_p(x)=\Tr(\Theta_p(x))
$$
for $x$ lisse, we see that if one can prove that $(U_p)$ and/or $(V_p)$ has
a limit, then the sums
$$
\sum_{x\in Z_g(\mathbf{O}_g/p)}t_p(ax),\quad\quad\text{ and/or }
\quad\quad\sum_{x\in Z_g(\mathbf{O}_g/p)}t_p(a+x),
$$
will become equidistributed according to the image of this
limit distribution by the map
$$
f\mapsto \sum_{x\in Z_g}\Tr(f(x))
$$
for $f\colon Z_g\to \Un_r(\mathbf{C})^{\sharp}$.
We obtain a large supply of examples from known results on estimates of
``sums of products'' of trace functions (see~\cite{sop}). Although the
terminology might not be familiar to all readers, examples after the
proof will provide concrete illustrations.
\begin{proposition}
Assume that $\mathcal{F}_p$ is bountiful in the sense of~\cite{sop} for all
$p$ in $\mathcal{S}_g$.
\par
\emph{(1)} If $\mathcal{F}_p$ is of $\Sp_{r}$-type for all $p$, then $(U_p)$
and $(V_p)$ converge in law as $|p|\to+\infty$, with limit uniform on
$C(Z_g;\USp_r(\mathbf{C})^{\sharp})$.
\par
\emph{(2)} If $\mathcal{F}_p$ is of $\SL_{r}$-type for all $p$, and the
special involution, if it exists, is not $y\mapsto -y$, then $(U_p)$
and $(V_p)$ converge in law as $|p|\to+\infty$, with limit uniform on
$C(Z_g;\SU_r(\mathbf{C})^{\sharp})$.
\par
\emph{(3)} If $\mathcal{F}_p$ is of $\SL_{r}$-type for all $p$ with special
involution $y\mapsto -y$, then $(V_p)$ converges in law as
$|p|\to+\infty$ with limit uniform on $C(Z_g;\SU_r(\mathbf{C})^{\sharp})$, and
$(U_p)$ converges in law with limit uniform on
$$
\{f\colon Z_g \to \SU_r(\mathbf{C})\,\mid\, f(x)=\overline{f(y)}\text{ if }
x=-y \}.
$$
\end{proposition}
\begin{proof}
We argue with $U_p$, as the case of $V_p$ is identical. By definition,
the random variables $U_p$ take values in
$C(Z_g;\USp_r(\mathbf{C})^{\sharp})$. Applying the Weyl Criterion, it
suffices to show that if $(\pi_x)_{x\in Z_g}$ is a family of
irreducible representations of $\USp_r(\mathbf{C})$, not all trivial, with
characters $\chi_x=\Tr(\pi_x)$, we have
$$
\lim_{|p|\to +\infty} \frac{1}{|p|}\sum_{a\in \mathbf{O}_g/p}\ \prod_{x\in
Z_g(\mathbf{O}_g/p)}\chi_x(\Theta_p(ax))=0.
$$
\par
The sum is, up to a negligible amount coming from points
where~$\mathcal{F}_p$ is not lisse, the sum of the traces of Frobenius on
the sheaf
$$
\mathcal{G}=\bigotimes_{x\in Z_g(\mathbf{O}_g/p)} \pi_x([a\mapsto ax]^*\mathcal{F}_p),
$$
and by Deligne's Riemann Hypothesis over finite fields, we obtain
$$
\frac{1}{|p|}\sum_{a\in \mathbf{O}_g/p}\ \prod_{x\in
Z_g(\mathbf{O}_g/p)}\chi_x(\Theta_p(ax))\ll |p|^{-1/2}
$$
as soon as the geometric monodromy group of this sheaf has no trivial
subrepresentation in its standard representation. This is true because
the bountiful property of $\mathcal{F}_p$ ensures that the geometric
monodromy group of $\mathcal{G}$ is the product group $\prod_x \Sp_r$.
\par
The argument is similar for (2); for (3), we have to take into account
the fact that the assumption implies that $[a\mapsto -a]^*\mathcal{F}_p$ is
isomorphic to the dual of~$\mathcal{F}_p$, so that
$\Tr(\Theta_p(-ax))=\overline{\Tr(\Theta_p(ax))}$ for all $x\in Z_g$.
\end{proof}
\begin{example}
We illustrate here all three cases with examples.
\par
(1) The classical Kloosterman sums $\Kl_2$ (as in part (2) of
Theorem~\ref{th-1}) are trace functions of a bountiful sheaf of
rank~$r=2$ of $\Sp_2$-type. Thus the first case of the proposition
applies, and in particular this establishes the second part of
Theorem~\ref{th-1}, in view of the fact that the trace of a uniform
random matrix in~$\SU_2(\mathbf{C})$ is Sato--Tate distributed.
\par
Similarly, for even-rank hyper-Kloosterman sums, we obtain the
$\USp_r$ case (see~\cite[\S\,3.2]{sop}).
\par
(2) If $r$ is odd, then the hyper-Kloosterman sums $\Kl_r(a;p)$ arise
as trace functions of a bountiful sheaf of $\SL_r$-type with special
involution $y\mapsto -y$ (see~\cite[\S\,3.3]{sop}). So the third case
of the proposition applies here. In particular, if the polynomial $g$
is even or odd (so that $Z_g=-Z_g$), the support of the limit of
$U_p$ is only ``half-dimensional''.
\par
(3) Examples of trace functions coming from bountiful sheaves of
$\SL_r$-type without special involution are given for instance by
$$
t_p(x)=\frac{1}{\sqrt{|p|}}
\sum_{y\in \mathbf{O}_g/p}\chi(h(y))e\Bigl(\frac{xy}{|p|}\Bigr)
$$
where $h\in\mathbf{Z}[X]$ is a ``generic'' squarefree polynomial of degree
$\geq 2$. This follows from~\cite[Prop.\,3.7]{sop}, where the meaning
of ``generic'' is also explained.
\end{example}
\section{Introduction}
\label{sec:intro}
Emotion is one of the most essential characteristics that distinguishes humans from robots \cite{intro} and speech is the most basic tool for daily communication \cite{perception}.
Therefore, analyzing emotion states through speech signals is a continuing concern for the research community. Owing to the rapid development of deep learning, many advanced models have been proposed and delivered promising results in speech emotion recognition (SER). In particular, convolutional neural networks \cite{ISNet,audio_cnn}, recurrent neural networks \cite{SER_RNN1,co_attention} and their variants \cite{DECN,gru_attention,graph_meld} have been widely studied and deployed for applications.
Transformer \cite{transformer}, a recent and highly influential architecture, has been making a splash in the deep learning domain.
Different from previous networks, Transformer adopts the full attention mechanism, which is depicted in Fig.~\ref{fig:introduction}(a), to learn a global representation of input signal.
Although the effectiveness of Transformer in SER has already been confirmed \cite{speechformer,ctnet,ksT}, there are several key points to be aware of when handling emotion analysis with Transformer: 1) Emotional cues are multi-grained in nature, which means that beyond the global representation, the details in speech are also important. For example, the local characteristics, such as articulation and prolongation, are highly relevant to the emotion states.
2) The full attention mechanism suffers from a lack of diversity and is thus inadequate to capture the multi-granularity features.
3) The computation of the full attention is quite redundant.
One mainstream approach to improving Transformer is to employ the window-based attention mechanism \cite{speechformer,audio_trans}. As shown in Fig.~\ref{fig:introduction}(b), window-based attention restricts the attention scope to a fixed local window whose size is typically set to a small value to focus on fine-grained features. However, the immutable window also severely decreases the flexibility of the model. Worse, it weakens the ability to learn global context, so considerable manual tuning of the window configuration is inevitably required to obtain peak performance.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{introduction.pdf}
\caption{Comparison between different attention mechanisms. In contrast to prior works that have pre-set window sizes or fixed window positions, we propose to make them both flexible and deformable. DCN-like attention is applied in vision.}
\label{fig:introduction}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.96\linewidth]{framework.pdf}
\caption{Overview structure of the proposed DST. The only difference between DST block and the vanilla Transformer is the replacement of MSA with DSA. In DSA, we show only $h=2$ attention heads and omit the softmax operation for a clear presentation. $\otimes$ and $\oplus$ represent the matrix multiplication and addition, respectively. $\Sigma$ represents the weighted summation.}
\label{fig:framework}
\end{figure*}
To alleviate the above issues, this paper proposes a deformable framework, named DST, for speech emotion recognition. In DST, the window sizes are learned by a light-weight decision network based on the input speech, breaking the limitations of a pre-set configuration. Also, the window positions can be shifted by learned offsets on a per-input basis. These qualities match the nature of emotional speech and greatly improve the model's flexibility. In addition, unlike deformable convolutional networks (DCNs) \cite{dcn} and the DCN-like attention used in vision, which attend to scattered, non-contiguous positions (Fig.~\ref{fig:introduction}(c)) \cite{dcn_vit_1,dcn_vit_2}, DST models continuous tokens (Fig.~\ref{fig:introduction}(d)), which is more in line with the continuous nature of the speech signal.
Finally, we visualize different attention mechanisms for an intuitive understanding.
The contributions of this work are summarized as follows:
\begin{itemize}
\item We endow Transformer with deformability by employing flexible, data-dependent window sizes and offsets.
\item Extensive experiments on IEMOCAP \cite{IEMOCAP} and MELD \cite{meld} datasets show that DST outperforms the state-of-the-art approaches. Our codes are publicly available at \url{https://github.com/HappyColor/DST}.
\end{itemize}
\section{Methodology}
The proposed DST, as illustrated in Fig.~\ref{fig:framework}, is composed of multiple stacked DST blocks. Each DST block mainly consists of a deformable speech attention (DSA) module and a feed-forward network (FFN). Equipped with the DSA module, the system is able to adaptively determine the usage of window sizes and window positions depending on the input speech signal, which greatly improves the model flexibility and can learn the multi-granularity emotional cues effectively.
\subsection{Revisiting Transformer}
At the core of the standard Transformer is the multi-head self-attention module (MSA), which makes Transformer stand out from other deep neural networks.
More details can be found in \cite{transformer}.
Specifically, the MSA mechanism can be written as:
\begin{equation}
\emph{\textbf{Q}}_i = \emph{\textbf{Q}}\emph{\textbf{W}}_i^Q\,,\ \emph{\textbf{K}}_i = \emph{\textbf{K}}\emph{\textbf{W}}_i^K\,,\ \emph{\textbf{V}}_i = \emph{\textbf{V}}\emph{\textbf{W}}_i^V
\label{eq1}
\end{equation}
\begin{equation}
\emph{\textbf{H}}_i = softmax(\frac{\emph{\textbf{Q}}_i\emph{\textbf{K}}_i^\top}{\sqrt{d_{Q}}})\emph{\textbf{V}}_i
\label{eq2}
\end{equation}
\begin{equation}
MSA(\emph{\textbf{Q}},\emph{\textbf{K}},\emph{\textbf{V}}) = concat(\emph{\textbf{H}}_1,..., \emph{\textbf{H}}_h)\emph{\textbf{W}}^o
\label{eq3}
\end{equation}
where \emph{\textbf{Q}}, \emph{\textbf{K}}, \emph{\textbf{V}} are query, key and value matrices, respectively; $d_{Q}$ is a scaling factor and $h$ denotes the number of attention heads; $\emph{\textbf{W}}_i^Q$, $\emph{\textbf{W}}_i^K$, $\emph{\textbf{W}}_i^V$ and $\emph{\textbf{W}}^o$ are to be learned parameters.
\subsection{Deformable Speech Transformer}
\subsubsection{Deformable Speech Attention}
Deformable speech attention (DSA) is at the core of the DST.
Different from previous attention mechanisms, DSA is able to change the window sizes and modify the window positions via a simple decision network.
Let $\emph{\textbf{Q}}_i^j$ be the $j$-th token of $\emph{\textbf{Q}}_i$ in the $i$-th attention head, where $i\in[1,h]$. The decision network first produces the window size $s_{ij}$ and offset $o_{ij}$ conditioned on $\emph{\textbf{Q}}_i^j$:
\begin{equation}
\bar{s}_{ij}, \bar{o}_{ij} = \emph{\textbf{Q}}_{i}^j\emph{\textbf{W}}_i^D
\label{eq4}
\end{equation}
\begin{equation}
s_{ij} = \sigma_1(\bar{s}_{ij})\times L\,,\ o_{ij} = \sigma_2(\bar{o}_{ij})\times L
\label{eq5}
\end{equation}
where $j\in [0,L-1]$ and $L$ denotes the sequence length of the features; $\emph{\textbf{W}}_i^D$ is the parameter matrix; $\sigma_1$ and $\sigma_2$ are two nonlinear functions for restricting the range of the outputs. For example, the window size $s_{ij}$ should lie in the range $(0,L)$. Therefore, we first apply the sigmoid function to limit the value of $\bar{s}_{ij}$ to $(0,1)$ and then scale it by the maximum length $L$. Similarly, since the valuable information can be on either side of the current $j$-th token, we apply the tanh function to normalize $\bar{o}_{ij}$ to the range $(-1,1)$ before scaling it by $L$.
Given the current position index $j$ and the offset $o_{ij}$, the anchor of the critical segment, $A_{ij}$, can be obtained. Combined with the predicted window size $s_{ij}$, the boundaries of the attention window for the $j$-th query in the $i$-th head, $L_{ij}$ and $R_{ij}$, are also determined. The calculations are as follows:
\begin{equation}
A_{ij} = j + o_{ij}
\label{eq6}
\end{equation}
\begin{equation}
L_{ij} = A_{ij} - s_{ij}\,,\ R_{ij} = A_{ij} + s_{ij}
\label{eq7}
\end{equation}
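For illustration, a minimal PyTorch-style sketch of Eqs.~\ref{eq4}--\ref{eq7} for a single attention head is given below (this is our own sketch, not the released implementation; the function name \texttt{window\_params} and the tensor shapes are assumptions):
\begin{verbatim}
import torch

def window_params(Q_i, W_D, seq_len):
    # Q_i: (seq_len, d) query tokens of head i; W_D: (d, 2) decision matrix.
    s_bar, o_bar = (Q_i @ W_D).unbind(dim=-1)   # Eq. (4)
    s = torch.sigmoid(s_bar) * seq_len          # size in (0, L), Eq. (5)
    o = torch.tanh(o_bar) * seq_len             # offset in (-L, L), Eq. (5)
    j = torch.arange(seq_len, dtype=Q_i.dtype)  # query positions
    A = j + o                                   # anchors, Eq. (6)
    return A - s, A + s, A                      # boundaries L, R, Eq. (7)
\end{verbatim}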
Finally, each query token attends to its respective deformed attention windows through the proposed DSA mechanism.
The DSA is formulated as follows:
\begin{equation}
\emph{\textbf{DH}}_{i}^j = softmax(\frac{\emph{\textbf{Q}}_{i}^j\emph{\textbf{K}}_i[L_{ij}:R_{ij}]^\top}{\sqrt{d_{Q}}})\emph{\textbf{V}}_i[L_{ij}:R_{ij}]
\label{eq8}
\end{equation}
\begin{equation}
DSA(\emph{\textbf{Q}},\emph{\textbf{K}},\emph{\textbf{V}}) = concat(\emph{\textbf{DH}}_{1},..., \emph{\textbf{DH}}_{h})\emph{\textbf{W}}^o
\label{eq9}
\end{equation}
where $\emph{\textbf{K}}_i[L_{ij}:R_{ij}]$ and $\emph{\textbf{V}}_i[L_{ij}:R_{ij}]$ consist of the $L_{ij}$-th to the $R_{ij}$-th tokens of $\emph{\textbf{K}}_i$ and $\emph{\textbf{V}}_i$ matrices, respectively; $\emph{\textbf{DH}}_{i}^j$ denotes the $j$-th output token of the $i$-th attention head.
\subsubsection{End-to-End Training}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{backwards.pdf}
\caption{Differentiable weighting process for end-to-end training. $\odot$ denotes position-wise multiplication.}
\label{fig:backwards}
\end{figure}
For ease of reading, we will omit the subscripts of the notations.
In practice, the outputs of the decision network, the window size $s$ and offset $o$, are decimals, causing the attention boundaries, $L$ and $R$, to be decimals as well.
However, in Eq.~\ref{eq8}, the indexing operations $K[L:R]$ and $V[L:R]$ require both $L$ and $R$ to be integers. One simple solution is rounding $L$ and $R$ to the integers $\lfloor L \rfloor$ and $\lceil R \rceil$, where $\lceil \cdot \rceil$ and $\lfloor \cdot \rfloor$ round a number up and down, respectively. However, the rounding operations are non-differentiable, resulting in a decision network that cannot be optimized by the backpropagation algorithm.
To add the decision network to the computation graph in a differentiable way,
we leverage the distances between the predicted boundaries ($L$ and $R$) and the true boundaries ($\lfloor L \rfloor$ and $\lceil R \rceil$), and the distances between the central tokens ($\lfloor A \rfloor$ and $\lceil A \rceil$) and the anchor ($A$) to yield weights for the selected key features in DSA.
In general, the $\lfloor L \rfloor$-th and $\lceil R \rceil$-th tokens are assigned large weights only if the predicted boundaries are close to the true boundaries. The weights for the two central tokens are complementary: whichever side the anchor is closer to receives the larger weight. Note that we expect the anchor to be the center of the important segment; thus the weights for the central tokens should be larger than 1 to emphasize them. Overall, the weights are computed as below:
\begin{equation}
w^k_i=
\begin{cases}
1-(L-\lfloor L \rfloor) & \text{if}\ k=\lfloor L \rfloor \\
1-(\lceil R \rceil-R) & \text{if}\ k=\lceil R \rceil \\
1+(\lceil A \rceil-A) & \text{if}\ k=\lfloor A \rfloor \\
1+(A-\lfloor A \rfloor) & \text{if}\ k=\lceil A \rceil \\
1 & \text{otherwise}
\end{cases}
\label{eq10}
\end{equation}
where $k\in [\lfloor L \rfloor, \lceil R \rceil]$ denotes the token index and $w^k_i$ is the weight for the $k$-th token in the $\emph{\textbf{K}}_i$ matrix. Eventually, $s$ and $o$ are correlated with the weights, and the process of weighting is differentiable. The decision network can be optimized with the entire model jointly in an end-to-end manner.
Supposing the current index $j$ is 3, the weighting process is shown in Fig.~\ref{fig:backwards}.
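To make Eq.~\ref{eq10} concrete, the following minimal sketch (again our own illustration; \texttt{dsa\_weights} is a hypothetical name, and edge cases such as an integral anchor are ignored) computes the weights for a single query in a way that keeps them differentiable in the decimal boundaries and anchor:
\begin{verbatim}
import torch

def dsa_weights(Lb, Rb, A):
    # Lb, Rb, A: scalar tensors holding the decimal boundaries and anchor.
    lo, hi = int(torch.floor(Lb)), int(torch.ceil(Rb))
    w = torch.ones(hi - lo + 1)
    w[0] = 1 - (Lb - torch.floor(Lb))    # left boundary token
    w[-1] = 1 - (torch.ceil(Rb) - Rb)    # right boundary token
    # The two tokens straddling the anchor are emphasized (weights > 1):
    w[int(torch.floor(A)) - lo] = 1 + (torch.ceil(A) - A)
    w[int(torch.ceil(A)) - lo] = 1 + (A - torch.floor(A))
    return w  # multiplied position-wise with K[lo:hi+1] and V[lo:hi+1]
\end{verbatim}
Since $\lfloor\cdot\rfloor$ and $\lceil\cdot\rceil$ are treated as constants by automatic differentiation, these weights depend locally linearly on $L$, $R$ and $A$, so gradients flow back to the decision network as described above.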
\section{Experiments}
\label{sec:exp}
\subsection{Datasets and Acoustic Features}
\textbf{IEMOCAP} \cite{IEMOCAP} contains five sessions, each of which has one male and one female speaker.
We merge the excitement category into happiness and select 5,531 utterances from the happy, angry, sad and neutral classes. Experiments are conducted with a leave-one-session-out cross-validation strategy.
\textbf{MELD} \cite{meld} dataset contains 13,708 utterances with 7 emotion classes.
As MELD has been officially split into training, validation and testing sets, we use the validation set for hyper-parameter tuning and report the scores on the testing set.
To be consistent with previous works, weighted accuracy (WA), unweighted accuracy (UA) and weighted average F1 (WF1) are used to assess the model performance.
\noindent \textbf{Features.} Pre-trained self-supervised WavLM \cite{wavlm} is adopted to extract the acoustic features.
The max sequence lengths are set to 326 and 224 for IEMOCAP and MELD, respectively.
\subsection{Training Details and Hyper-Parameters}
The number of training epochs is set to 120. SGD \cite{sgd} with a learning rate of $5e^{-4}$ on IEMOCAP and $1e^{-3}$ on MELD is applied to optimize the model. A cosine annealing warm restarts scheduler \cite{scheduler} is used to adjust the learning rate in the training phase. The learning rate of the decision network is multiplied by a factor of 0.1. The batch size is 32. The number of attention heads is 8. The number of DST blocks is 4.
\subsection{Experimental Results and Analysis}
\subsubsection{Comparison with Other Attention Mechanisms}
\label{sec:exp_attn}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{visualization.pdf}
\caption{Visualization of different attention mechanisms. [N] indicates a silent segment with background noise.}
\label{fig:visualization}
\end{figure}
\textbf{Performance Analysis.} To analyze the potency of DST, we implement other common attention mechanisms, namely, full \cite{transformer}, window-based and DCN-like \cite{dcn_vit_1} attentions, for comparison. The fixed window size of the window-based attention and the number of sampling points in the DCN-like attention are empirically set to 10\% of the input length.
The average percentage of activated tokens for each query is also listed for comprehensive analysis. As shown in Table~\ref{tab_1}, DST outperforms its counterparts on IEMOCAP and MELD by a considerable margin. In particular, the use of DCN-like attention causes a significant drop in performance, which means that modeling continuous tokens is essential for the speech signal. Most interestingly, we find that on IEMOCAP, each query attends to an average of 8.7\% of all input tokens, while on MELD, this percentage increases to 12.7\%. This discrepancy exposes the difficulty of manual tuning and argues for configurations that are automatically determined by the model itself.
Also, we find that DST can learn all potential emotional features, both fine and coarse, through its deformable capabilities.
Furthermore, we discard the learned window size ($-deform.$ size) or reset the offset to zero ($-deform.$ offset), and the ablation results shown in the last two rows of Table~\ref{tab_1} once again confirm the effectiveness of the proposed deformable design.
\noindent \textbf{Visualization Analysis.}
To further understand the proposed model, we consider an utterance sample and intuitively compare the attention weights of each attention mechanism by visualization. As illustrated in Fig.~\ref{fig:visualization}, voiced fragments are distributed over a small part of the entire speech sample. The full attention has difficulty highlighting the key parts owing to the large amount of noise that deeply confuses the model. Although the window-based attention is able to learn fine-grained features, its performance is inevitably limited when the duration and position of the key segments mismatch the pre-defined window settings. Since speech is a continuous signal, the DCN-like attention fails to judge importance from discrete tokens, leading to little difference in its assigned weights. Encouragingly, DST successfully focuses on the critical segments (\textit{``ashamed somehow'' in text}) and highlights them by means of the learned window sizes and offsets.
\subsubsection{Comparison to Previous State-of-the-Art}
Table~\ref{tab_2} gives a comparison between the proposed DST and some known approaches on IEMOCAP and MELD. All approaches here adopt acoustic features as input for a fair comparison. On IEMOCAP, DST outperforms the previous best results obtained by \cite{ISNet,co_attention}. On MELD, DST surpasses the other competitors by a substantial margin.
\begin{table}[t]
\caption{Performances of adopting different attention mechanisms on two corpora. PAT indicates the average percentage of the activated tokens for each query during the testing phase. $-deform.$ \textup{x} means the corresponding attribute \textup{x} is fixed.}
\label{tab_1}
\centering
\begin{threeparttable}
\begin{tabular}{cccccc}
\hline
\multirow{2}{*}[-3pt]{\shortstack{Attention\\Mechanisms}} & \multicolumn{3}{c}{\multirow{1}{*}[-2pt]{IEMOCAP}} & \multicolumn{2}{c}{\multirow{1}{*}[-2pt]{MELD}} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-6}
& WA & UA & PAT & WF1 & PAT \\ \hline
Full & 0.710 & 0.720 & 100 & 0.472 & 100 \\
Window-based & 0.714 & 0.723 & 10 & 0.476 & 10 \\
DCN-like & 0.665 & 0.681 & 10 & 0.455 & 10 \\ \hline
DST (Ours) & \textbf{0.718} & \textbf{0.736} & 8.7 & \textbf{0.488} & 12.7 \\
$-deform.$ size & 0.714 & 0.726 & 10 & 0.479 & 10 \\
$-deform.$ offset & 0.716 & 0.729 & 8.9 & 0.483 & 12.8\\ \hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{table}[t]
\caption{Comparison with known state-of-the-art systems on IEMOCAP and MELD. All systems apply audio as input.}
\label{tab_2}
\centering
\begin{threeparttable}
\begin{tabular}{cc||cc}
\hline
\multicolumn{4}{c}{\multirow{1}{*}[-1pt]{IEMOCAP}} \\ \hline
Method & Year & WA & UA \\ \hline
Audio-CNN \cite{audio_cnn} & 2021 & 0.654 & 0.667 \\
AR-GRU \cite{gru_attention} & 2021 & 0.669 & 0.683 \\
ISNet \cite{ISNet} & 2022 & 0.704 & 0.650 \\
Co-attention \cite{co_attention} & 2022 & 0.698 & 0.711 \\ \hline
DST (Ours) & 2023 & \textbf{0.718} & \textbf{0.736} \\ \hline \hline
\multicolumn{4}{c}{\multirow{1}{*}[-1pt]{MELD}} \\ \hline
Method & Year & \multicolumn{2}{c}{WF1} \\ \hline
CTNet \cite{ctnet} & 2021 & \multicolumn{2}{c}{0.382} \\
DECN \cite{DECN} & 2021 & \multicolumn{2}{c}{0.439} \\
SpeechFormer \cite{speechformer} & 2022 & \multicolumn{2}{c}{0.419} \\
MM-DFN \cite{graph_meld} & 2022 & \multicolumn{2}{c}{0.427} \\ \hline
DST (Ours) & 2023 & \multicolumn{2}{c}{\textbf{0.488}} \\ \hline
\end{tabular}
\end{threeparttable}
\end{table}
\section{Conclusion}
In this paper, a deformable speech Transformer, named DST, has been proposed for speech emotion recognition. DST can effectively capture multi-granularity emotional cues via deformed attention windows whose sizes and positions are determined automatically by the model itself. This deformability significantly improves the model's flexibility and adaptability.
Experimental results on the IEMOCAP and MELD corpora demonstrate the effectiveness of the proposed DST. We hope our work can inspire the design of flexible and potent Transformer variants for the speech domain.
In the future, we plan to extend DST to other speech tasks and further verify its adaptability.
\section{Acknowledgement}
The work is supported in part by the Natural Science Foundation of Guangdong Province 2022A1515011588, in part by the National Key R\&D Program of China under Grant 2022YFB4500600, in part by the Science and Technology Project of Guangzhou under Grant 202103010002, in part by the Science and Technology Project of Guangdong under Grant 2022B0101010003, in part by the National Natural Science Foundation of China under Grant U1801262, and in part by the Guangdong Provincial Key Laboratory of Human Digital Twin under Grant 2022B1212010004.
\bibliographystyle{IEEEbib}
\section{Introduction}
Crystalline materials studied in experiments are almost never perfect monocrystalline structures. They most often contain lattice defects and are usually polycrystals, i.e., they are composed of several grains of different lattice orientations separated by grain boundaries, which play a crucial role in determining the mechanical properties of the sample \cite{kheradmand2010investigation}. During deformation, the complexity of the polycrystal's dynamics on the microscopic scale makes it challenging to predict the mechanical response of a single sample from its initial state (microstructure). Moreover, crystal plasticity exhibits size effects, implying that smaller systems are stronger (the stress required to reach a given strain is higher) and their mechanical response to externally applied stresses tends to be irregular and characterized by a significant sample-to-sample variation \cite{uchic2009plasticity,dimiduk2005size}. The latter features originate from the sample-dependent microstructure of small polycrystals, implying that predicting their mechanical response is likely to be particularly challenging.
In recent years, huge progress in the development and application of machine learning (ML) techniques has been observed in many fields of science \cite{janiesch2021machine,carleo2019machine,shinde2018review,ongsulee2017artificial,schmidt2019recent,rajendra2022advancement}. In materials science it has led to the emergence of methods able to identify and characterize samples \cite{stoll2021machine,chan2020machine,baskaran2020adaptive}, to design novel materials with desired properties \cite{vasudevan2021machine,durodola2022machine,moosavi2020role,wei2019machine}, and to establish relations between the structure and the properties of a material \cite{dai2020method,fu2022review,chibani2020machine,jung2019efficient}. A related research problem, relevant to this study, is predicting the mechanical response of a material sample during deformation \cite{salmenjoki2018machine,sarvilahti2020machine,minkowski2022machine}. The general problem statement can be formulated as follows: given some description of the initial state (microstructure) of the sample, with what accuracy can its mechanical response be predicted?
The accuracy of the prediction of a given ML algorithm can be expressed quantitatively, for instance, by the coefficient of determination $r^2$. If the system studied is governed by deterministic equations of motion, in principle it should be possible to train an algorithm to represent its dynamics perfectly, which would result in a perfect predictability score $r^2=1$. In practice, however, this does not usually happen. The dynamics of many complex systems is to some degree chaotic, or, as in the case of dislocation dynamics, exhibits critical behaviour \cite{ovaska2015quenched,papanikolaou2017obstacles,zapperi2001depinning,chan2010plasticity,alava2014crackling}. This implies that the time evolution of a complex system such as a small plastically deforming crystal may be sensitive to small perturbations of its initial conditions. In other words, perturbing the initial state of the system slightly can lead to significant differences in its subsequent dynamics. This limits the extent to which the time evolution of such systems can be predicted (e.g., via ML algorithms), because the full information on the initial state, which on the atomic scale includes the positions and velocities of all the atoms, is usually not available due to the finite precision of any experimental observations or coarse-grained numerical representations of the data. Moreover, due to finite decimal precision, numerical simulations are never perfectly accurate either, which may further amplify the differences caused by small perturbations of the initial state. This study concerns computer simulations only, but, as discussed above, the lack of a full description of the initial state exists also in experiments, where any characterization of the initial microstructure (using various imaging techniques) has a finite precision.
Polycrystals have been studied by ML in several publications \cite{shu2022grain,vieira2021machine,hestroffer2023graph,dai2021graph,karimi2022prediction}, where experimental data and finite element simulations were used to produce the training data. In contrast, in this work we study the predictability of the deformation process of cube-shaped iron nanopolycrystals by combining strain-controlled molecular dynamics (MD) simulations with ML methods. We generate a large set of polycrystals with various shapes and sizes of the grains and use it to train a convolutional neural network (CNN) to infer the link between the initial microstructure and features of the stress-strain curve. We show that the key elastic and plastic properties characterizing the response of the system to applied shear stresses, namely the shear modulus and the yield stress, exhibit different degrees of predictability, measured here by the coefficient of determination $r^2$. The degree of predictability we find for these quantities is then discussed in the context of the sensitivity of the system to small perturbations of the initial conditions, giving rise to fundamental limits to deformation predictability.
The paper is organized as follows: In Section \ref{section:methods} the methods employed in the study are introduced, including tools for generating initial polycrystalline structures, MD simulations and ML algorithms. The descriptors used as the input for the ML and the architecture of the CNN are also presented. Next, in Section \ref{section:results} the results of predictability of various mechanical properties extracted from the MD simulations are shown and discussed. Finally, Section \ref{section:conclusions} concludes the findings of the work.
\section{Methods}
\label{section:methods}
\subsection{Generation of polycrystals}
The tools used for generation of the polycrystalline samples are Atomsk \cite{hirel2015atomsk} and Nanocrystal generator \cite{nanogen}; the latter program has been developed in our research group. Both programs implement the Voronoi tessellation \cite{brostow1978construction,finney1979procedure}, which is a method to partition the three-dimensional space into a set of polyhedra, which are here taken to represent the individual grains of the polycrystal. The Voronoi tessellation is fully defined by specifying positions of a certain number of points, called seeds, in the space. For each of those seeds there is a corresponding region called a Voronoi cell which contains all the points that are closer to that given seed than to any other seed. Those Voronoi cells are subsequently filled with atoms arranged in the chosen lattice structure and with the specified crystallographic orientation to represent grains of the polycrystal.
In this work, both the positions of the seeds and the Euler angles specifying the crystallographic orientations of the individual grains are chosen randomly for each polycrystal. Periodic boundary conditions are implemented in all three directions.
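The core of the generation step, assigning each atom to its Voronoi cell, reduces to a nearest-seed query. The following numpy/scipy sketch illustrates this, with periodic boundaries handled by replicating the seeds into their 27 images; it is an illustration only, not the Atomsk or Nanocrystal generator implementation:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def voronoi_grain_ids(atoms, seeds, box):
    """Assign each atom to the nearest seed under periodic boundaries.

    atoms: (n, 3) atom positions; seeds: (m, 3) Voronoi seeds;
    box: (3,) edge lengths of the periodic simulation box.
    """
    # Replicate the seeds into the 27 periodic images so that the
    # nearest-neighbour query respects the periodic boundaries.
    shifts = np.array([(i, j, k) for i in (-1, 0, 1)
                                 for j in (-1, 0, 1)
                                 for k in (-1, 0, 1)])
    images = (seeds[None, :, :] + shifts[:, None, :] * box).reshape(-1, 3)
    _, idx = cKDTree(images).query(atoms)
    return idx % len(seeds)  # map each image back to its original seed

rng = np.random.default_rng(0)
box = np.array([200.0, 200.0, 200.0])        # 20 nm box, in angstroms
seeds = rng.uniform(0, box, size=(8, 3))     # 8 grains
atoms = rng.uniform(0, box, size=(1000, 3))
grain = voronoi_grain_ids(atoms, seeds, box) # grain index per atom
\end{verbatim}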
\begin{figure*}
\centering
\includegraphics[scale=0.5]{system+stressStrain}
\caption{Schematic of the model studied in the work. The polycrystalline sample is first equilibrated at 300 K (left), after which it is shear deformed by MD simulations with a constant strain rate (middle). During the simulation the instantaneous shear stress $\sigma$ is measured as a function of the strain $\epsilon$, resulting in a unique stress-strain curve $\sigma(\epsilon)$ for each sample. Repeating the simulation several times for different initial polycrystal structures results in an ensemble of stress-strain curves with a mean shown as the black line and a standard deviation shown with gray (right).}
\label{model}
\end{figure*}
\subsection{Molecular dynamics simulations}
After the set of initial polycrystalline configurations is generated, the MD simulations are carried out for each one of the them by Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) \cite{LAMMPS}. As the interatomic potential the embedded atom model (EAM) potential for Fe \cite{mendelev2003development} is used.
During a single MD run, the potential energy of the configuration is first minimized by letting the atoms relax, and the system is then equilibrated at a constant temperature of 300 K and zero pressure in the NPT ensemble using the Nos\'e-Hoover thermostat. Finally, the shear deformation of the sample is performed in the NPH ensemble in the $xy$ plane under a constant strain rate, which is done by tilting the simulation box. Due to the choice of ensemble the temperature is allowed to change, that is, no thermostat is used. This ensures that the equations of motion are deterministic, analogous, e.g., to discrete dislocation dynamics (DDD) simulations where no thermal noise is present. Notice, however, that randomness is included via the randomly chosen initial velocities of the atoms for each sample. During the whole deformation run the instantaneous $xy$ component of the pressure tensor is stored as a function of time, which yields the stress-strain curve for the given sample. The model is shown schematically, together with the corresponding stress-strain curve, in Fig. \ref{model}.
For each configuration the shear modulus and yield stresses (using different definitions, see below) are extracted from the corresponding stress-strain curves. The shear modulus is taken as the slope of the linear function fitted to the stress-strain curve in the range of strain from 0 to 0.01, in which the system is still in the elastic regime. For extracting the yield stress, on the other hand, several different definitions are applied, which are discussed in Section \ref{section:results}.
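The slope extraction amounts to a linear least-squares fit over the elastic window; a minimal sketch, assuming strain and stress are numpy arrays sampled along the curve:
\begin{verbatim}
import numpy as np

def shear_modulus(strain, stress, strain_max=0.01):
    """Slope of a linear fit to the elastic part of the curve."""
    mask = strain <= strain_max
    slope, intercept = np.polyfit(strain[mask], stress[mask], deg=1)
    return slope
\end{verbatim}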
\begin{figure*}
\centering
\includegraphics[scale=0.35]{descriptors_cnn}
\caption{Schematic of the CNN used for predicting the shear modulus and yield stress of the polycrystal. Three-dimensional arrays representing the local crystallographic orientation and the grain boundaries of the polycrystal are fed into three-dimensional convolutional layers, where they are processed by the convolutional filters. Different colours represent different filters. The size of the arrays is subsequently reduced by the maximum pooling layer. The procedure is repeated until the arrays are of the size $1\times1\times1$. They are then concatenated and fed into the fully connected neural network, which gives the final output, i.e., either the shear modulus or the yield stress according to one of the definitions indicated in the right panel.}
\label{cnn}
\end{figure*}
\subsection{Descriptors}
Two different three-dimensional fields are extracted from the equilibrated configuration of the polycrystals and later used as the input descriptors for the ML algorithm. One of them is the local orientation of the lattice given by the quaternion representation of the rotations in three-dimensional space. A quaternion consists of four components and can be written as $\mathbf{q}=\cos(\theta/2)+\mathbf{u}\sin(\theta/2)$, where $\mathbf{u}=(u_x,u_y,u_z)$ is a unit vector in three-dimensional space and $\theta$ is the angle of rotation around that vector. The other descriptor is the local density of atoms at the grain boundary, which is identified by removing all the atoms belonging to the bcc structure of the grains.
Both descriptors, illustrated for an example configuration in the left-hand part of Fig. \ref{cnn}, are extracted by the OVITO software \cite{stukowski2009visualization}, which provides features able to identify the local structure type (common neighbor analysis \cite{honeycutt1987molecular}) and the crystallographic orientation (polyhedral template matching \cite{larsen2016robust}). The descriptors are used for predicting the shear modulus and the yield stress by means of a CNN. As will be shown later, by training a CNN with each of those descriptors separately it was found that for predicting the shear modulus the orientation of the grains is more important, while for predicting the yield point the grain boundary is a more useful descriptor. However, the predictability is always the highest when both the descriptors are used. Therefore the descriptors are combined into five different arrays (four for the lattice orientation and one for the grain boundary).
\subsection{Convolutional neural networks}
A CNN is a ML algorithm which takes as the input a pixelized image of the system and processes it through a set of filters in convolutional and pooling layers. Since the system studied here is three-dimensional, the input arrays consist of voxels, which are equivalents of pixels in three dimensions. In this work a CNN is trained to predict the characteristic features of the stress-strain curves mentioned above: shear modulus and yield stress according to its various definitions.
The input arrays are prepared at several different resolutions, which represent the accuracy with which the field extracted from a given configuration is sampled. The highest is $64\times64\times64$, because at that resolution the number of voxels is of the same order of magnitude as the number of atoms. The lower resolutions used are $16\times16\times16$ and $32\times32\times32$. For representing the local lattice orientation, each voxel of the array is assigned the four components of the quaternion determined for the atom closest to the voxel's central point. For the other descriptor, after the grain boundaries are identified with OVITO, the number of atoms belonging to the boundary within each voxel is counted.
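The sampling step itself can be sketched as follows, assuming per-atom positions and quaternions in a cubic box are already available (the actual structure identification is done with OVITO; the code below only illustrates the voxelization):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def voxelize_orientation(positions, quats, box, res=32):
    """Sample the per-atom quaternion field on a res^3 voxel grid.

    Each voxel stores the quaternion of the atom closest to its centre.
    """
    centers = (np.indices((res, res, res)).reshape(3, -1).T + 0.5) \
              / res * box
    _, nearest = cKDTree(positions).query(centers)
    return quats[nearest].reshape(res, res, res, 4)

def voxelize_gb_density(gb_positions, box, res=32):
    """Count grain-boundary atoms falling into each voxel."""
    ijk = np.floor(gb_positions / box * res).astype(int) % res
    counts = np.zeros((res, res, res))
    np.add.at(counts, (ijk[:, 0], ijk[:, 1], ijk[:, 2]), 1.0)
    return counts[..., None]  # trailing channel axis for the CNN
\end{verbatim}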
The data from the input arrays is passed to and subsequently processed by the convolutional and pooling layers of the CNN. Each convolutional layer contains 8 filters. The maximum pooling layers reduce each spatial dimension of the data by a factor of 2. A sequence of a convolutional and a pooling layer is repeated several times, finally leading to an array of dimension $1\times1\times1\times8$; the total number of these layers therefore depends on the input resolution. The activation function in the first convolutional layer is a sigmoid, while rectifier functions are used in all the following ones. It was found that this choice of activation functions improves the training performance. Additionally, a second channel with fewer convolutional and pooling layers, but larger pooling filters that reduce the array size faster, is added in parallel to the main one. The purpose of that channel is to mitigate the vanishing gradient problem, which occurs when the data is processed through a large number of activation functions. The output of both channels is finally flattened and concatenated, giving a linear array of size 16, which is further processed by a fully connected layer that outputs a single number representing either the shear modulus or the yield stress. The schematic of the CNN is shown in Fig. \ref{cnn}.
For the training of the CNN the Adam optimizer is utilized with the learning rate 5$\cdot$10$^{-5}$. The L2 regularization is applied to all the convolutional layers with the parameter $\lambda$=0.001.
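A sketch of the described architecture in Keras is given below; the kernel sizes and the exact layout of the auxiliary channel are our assumptions, as the text fixes only the number of filters, the pooling factor, the activations, the optimizer and the regularization:
\begin{verbatim}
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_cnn(res=32):
    """3D CNN sketch following the description in the text."""
    reg = regularizers.l2(0.001)
    inp = layers.Input(shape=(res, res, res, 5))  # 4 quaternion + 1 GB

    # Main channel: sigmoid first, then ReLU; pool until size 1x1x1.
    x = layers.Conv3D(8, 3, padding='same', activation='sigmoid',
                      kernel_regularizer=reg)(inp)
    x = layers.MaxPooling3D(2)(x)
    for _ in range(int(np.log2(res)) - 1):
        x = layers.Conv3D(8, 3, padding='same', activation='relu',
                          kernel_regularizer=reg)(x)
        x = layers.MaxPooling3D(2)(x)

    # Auxiliary channel: fewer layers and faster size reduction,
    # added to mitigate the vanishing-gradient problem.
    y = layers.Conv3D(8, 5, padding='same', activation='sigmoid',
                      kernel_regularizer=reg)(inp)
    y = layers.MaxPooling3D(res // 8)(y)
    y = layers.Conv3D(8, 3, padding='same', activation='relu',
                      kernel_regularizer=reg)(y)
    y = layers.MaxPooling3D(8)(y)

    merged = layers.Concatenate()([layers.Flatten()(x),
                                   layers.Flatten()(y)])  # size 16
    out = layers.Dense(1)(merged)  # shear modulus or yield stress

    model = tf.keras.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
                  loss='mse')
    return model
\end{verbatim}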
\subsection{Molecular dynamics simulations of shear deformation}
A set of 4000 cube-shaped initial iron polycrystalline configurations is generated by Atomsk and Nanocrystal generator implementing the Voronoi tessellation. The dimensions of each single configuration are $20\times20\times20$ nm$^{3}$ and they all contain 8 nanograins of the average size 10 nm with randomly chosen positions of the seeds in the Voronoi tessellation and Euler angles specifying the crystallographic lattice orientation. The lattice structure is bcc and the lattice constant is chosen to be 0.287 nm. Each configuration contains around 677000 atoms.
The timestep used in all the MD simulations is 1 fs. After the equilibration run that lasts 1 ns, the MD shear deformation run is performed at the constant strain rate of 3$\cdot$10$^{8}$/s until the strain reaches the value of 0.15. The $xy$ component of the pressure tensor is stored every 50 timesteps.
Stress-strain curves obtained for a few samples, along with the average and standard deviation over the whole set, are shown in the right panel of Fig. \ref{model}. As can be observed there, the curves exhibit a large sample-to-sample variability. The elastic parts of different curves are similar (but not identical), increasing linearly with a slope whose magnitude varies from sample to sample. In the plastic regime, which typically starts around the strain value of 0.09, there are large differences in the stress response, and the curves exhibit a fluctuating character with many stress drops. One can also observe that different curves exhibit qualitatively different behaviour: some have a large stress drop at some value of strain, while others remain relatively flat after reaching the yield. As a consequence, the yield stress exhibits a much larger variability than the shear modulus.
\subsection{Machine learning by convolutional neural networks}
The input fields extracted at different resolutions from the equilibrated configurations, along with the output values of the shear modulus and yield stress extracted from the stress-strain curves, are subsequently used to train CNNs. To test the convergence of the training, the procedure is performed for different sizes of the dataset, starting from 500 or 1000 configurations and increasing successively until the full dataset is covered. Both for the shear modulus and for the yield stress, five different CNNs are trained with different random seeds representing different splits of the dataset into training, test and validation sets in the ratio 80:10:10. The coefficient of determination, defined as
\begin{equation}
r^2=1-\frac{\sum_{i=1}^{N}(y_i-f_i)^2}{\sum_{i=1}^{N}(y_i-\langle y \rangle)^2},\label{coeff_determ}
\end{equation}
where $y_i$ is the true value of shear modulus or yield stress of the sample $i$, $\langle y \rangle$ is the mean value, $f_i$ is the value predicted by the CNN, and $N$ is the total number of samples in the given set, is obtained for each of those sets separately and subsequently averaged over all the seeds.
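In code, this score is a one-liner; a minimal sketch:
\begin{verbatim}
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination as defined above."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
\end{verbatim}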
\section{Results}
\label{section:results}
\subsection{Shear modulus}
A material is said to be in the elastic regime when it returns to its original shape and size after the externally applied stress is removed. Elasticity is quantitatively characterized by a set of elastic constants, such as the Young's modulus, bulk modulus or shear modulus, which indicate what amount of stress is needed to deform the sample in a certain way. These constants can be collected in the elasticity tensor.
While the elastic constants of monocrystals are known for most materials, in the case of polycrystalline samples they depend on the shape and crystallographic orientation of each constituent grain \cite{sheng2012effective}. The elastic constants of those individual grains correspond to the rotational transformation of the elasticity tensor obtained for the main crystallographic axes. Moreover, in the equilibrated polycrystalline sample the crystallographic orientation may be different near the grain boundaries than within the grains, which may also influence the elastic properties of the whole material. One can thus expect that the shear modulus of the whole polycrystal can be extracted with a reasonable accuracy from the field of the crystallographic orientation varying within the sample.
In Fig. \ref{shear} the coefficient of determination $r^2$ is shown for the shear modulus as a function of the dataset size $N$ used as the input for training the CNN. The predictability is already good even for the smallest values of $N$. Adding more configurations increases it further and reduces the training-test set gap $\delta$ as seen in the insets. Moreover, it can be seen that increasing the resolution of the input data also improves the predictability. While the difference in $r^2$ between the resolutions of $16\times16\times16$ and $32\times32\times32$ is quite significant, the results do not improve much more when the resolution is increased further up to $64\times64\times64$.
Additionally, the values of $r^2$ resulting from training with only one of the descriptors, compared to $r^2$ for both descriptors combined, are shown in Fig. \ref{shear_descriptors} for the resolution $32\times32\times32$. As one can see there, the lattice orientation of the individual grains of the polycrystal is a more important descriptor for predicting the shear modulus than the grain boundary. The values of $r^2$ for the latter descriptor are in fact slightly negative, which suggests that on its own it does not provide any information about the shear modulus. However, $r^2$ for both descriptors combined is still slightly higher than that for the lattice orientation alone over almost the whole range of $N$. This suggests that the grain-boundary descriptor may in fact contain some information relevant to the shear modulus, but only in combination with the other descriptor.
\begin{figure}
\centering
\includegraphics[scale=0.9]{stats_shear_one_plot}
\caption{$r^2$ for shear modulus obtained for three different resolutions of the CNN input data as a function of $N$. The dashed lines show the values of $r^2$ for the training set, and the continuous lines for the test set. The inset shows $\delta$ as a function of $N$.}
\label{shear}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.9]{statsShearModulusDescriptors}
\caption{$r^2$ for shear modulus obtained for the test set at the resolution $32\times32\times32$ with the descriptors used separately and combined as a function of $N$.}
\label{shear_descriptors}
\end{figure}
\subsection{Yield point}
Crystals deform plastically when they do not return to their original shape after the external stress is removed. While in a perfect crystal a slip, i.e., the displacement of two atomic layers relative to each other, would require a large amount of stress, plasticity in real crystals is facilitated by defects. Very often plasticity is mediated by the motion of dislocations. In the polycrystals studied here no dislocations are present within the grains in the initial configuration; however, grain boundaries with low misorientation angles can be considered as arrays of dislocations. For higher misorientation angles, grain boundaries can be thought of as amorphous regions within the crystal. Polycrystals most often deform by the grain boundary slip mechanism \cite{weissmuller2011kinematics,patriarca2013slip}.
Yield of the material is the point on the stress-strain curve that indicates the transition from the elastic to the plastic behaviour. Once the sample enters the plastic regime it is deformed permanently. The yield point is fully specified by giving its two coordinates, referred to as the yield strain and yield stress. There are various ways in which the position of the yield point is precisely determined. Often it is defined as the maximal stress that the sample can withstand \cite{dinkgreve2016different,moller2006yield}, which corresponds to the global maximum of the stress-strain curve. However, in the case of irregular, highly fluctuating stress-strain curves, such as those occurring in small samples, that definition might not be appropriate, and therefore, other definitions may have to be considered.
Another way of defining the yield stress is taking the stress value at some fixed strain, which is sometimes referred to as the flow stress \cite{ishikawa2005high,luo2016correlation}. The exact value of the strain should be chosen such that the system has already deformed plastically. Looking at the average stress-strain curve in Fig. \ref{model}, an appropriate strain value lies approximately in the range 0.075-0.1.
Another commonly used definition of the yield point is the offset method \cite{wei2007plane}. In this approach the yield is determined as the intersection of the stress-strain curve with a line parallel to the elastic region. The offset by which that line is shifted may vary depending on the specific material. Usually it is chosen as 0.2\% strain \cite{ross1999mechanics,wei2007plane}; however, that value is not particularly useful in this work because the intersection point determined with it lies within the elastic part of the stress-strain curve. It has been shown that nanocrystalline polycrystals deform more heterogeneously, and therefore not all the grains are deformed at all at the 0.2\% offset stress \cite{brandstetter2006micro,saada2005hall}. Hence, in this work the offset values 1\% and 2\% are chosen instead. The method is illustrated in the right panel of Fig. \ref{cnn}. The yield point determined by the offset method can be interpreted as the state in which the system has deformed plastically by the amount specified by the offset.
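Numerically, the offset yield point can be located by the first sign change of the difference between the measured stress and the offset line, followed by linear interpolation; a minimal sketch, assuming the curve does cross the line and that G is the previously fitted shear modulus:
\begin{verbatim}
import numpy as np

def offset_yield(strain, stress, G, offset=0.01):
    """Yield point from the offset method: first intersection of the
    stress-strain curve with the line sigma = G * (eps - offset)."""
    diff = stress - G * (strain - offset)
    i = np.argmax(diff < 0)        # first sample below the offset line
    t = diff[i - 1] / (diff[i - 1] - diff[i])
    eps_y = strain[i - 1] + t * (strain[i] - strain[i - 1])
    return eps_y, G * (eps_y - offset)
\end{verbatim}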
\subsubsection{Yield stress}
\begin{figure}
\centering
\includegraphics[scale=0.9]{stats_yieldMax_one_plot}
\caption{$r^2$ for yield stress defined as the maximal stress value along the stress-strain curve obtained for three different resolutions of the CNN input data as a function of $N$. The dashed lines show the values of $r^2$ for the training set, and the continuous lines for the test set. The inset shows $\delta$ as a function of the dataset size.}
\label{yieldMax}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.9]{statsYieldMaxDescriptors}
\caption{$r^2$ for yield stress defined as the maximal stress value along the stress-strain curve obtained for the test set at the resolution $32\times32\times32$ with the descriptors used separately and combined.}
\label{yield_descriptors}
\end{figure}
The coefficient of determination $r^2$ for the training and test sets as a function of $N$ (with a spacing of 500 configurations), obtained from the CNNs trained for the maximal stress value, is shown in Fig. \ref{yieldMax} at three different resolutions of the input data. It can be observed that both for the training and the test set the value of $r^2$ increases with $N$. The results look similar for all the resolutions studied. The insets show that $\delta$ decreases with increasing $N$. $r^2$ is significantly lower than in the case of the shear modulus, and $\delta$ is slightly higher.
\begin{figure*}
\centering
\includegraphics[scale=0.9]{stats_yield_fixed_strain}
\caption{$r^2$ for the yield stress defined as the stress at the fixed strain value of 0.075 (a) and 0.1 (b), obtained for the resolution $32\times32\times32$ of the CNN input data, shown as a function of $N$. The insets show $\delta$ as a function of $N$.}
\label{yieldStressAtFixedStrain}
\end{figure*}
Again, the values of $r^2$ obtained with each descriptor separately and with both combined are shown in Fig. \ref{yield_descriptors}. This time it can be seen that the grain boundary is a more important descriptor than the lattice orientation; however, both descriptors provide significant information about the yield stress. The value of $r^2$ for the combination of the descriptors is also significantly higher than for either of the descriptors used separately.
In Fig. \ref{yieldStressAtFixedStrain} the coefficient of determination for the yield stress defined as the value of stress at the fixed strain values of 0.075 and 0.1 is shown. For every size of the dataset $r^2$ is larger for the lower strain value of those two. This is expected since the corresponding point of the stress-strain curve lies closer to the elastic part, and, as seen earlier, the elastic properties are predicted much more easily than the plastic properties.
The coefficient of determination $r^2$ for the yield stress extracted with the offset method is shown in Fig. \ref{offsetYieldStress} for two values of the offset: 1\% and 2\%. The CNN training was performed for the resolution of $32\times32\times32$, changing the size of the dataset in increments of 1000 configurations. It can be observed that the predictability is similar for both values of the offset, and $r^2$ for the test set reaches 0.6 for the full dataset. $\delta$ decreases with increasing $N$ in both cases.
\begin{figure}
\centering
\includegraphics[scale=0.9]{stats_yield_stress_offset_one_plot}
\caption{$r^2$ for the yield stress defined with the offset method for the resolution of $32\times32\times32$ of the CNN input data as a function of $N$. The dashed lines show the values of $r^2$ for the training set, and the continuous lines for the test set. The inset shows $\delta$ as a function of $N$.}
\label{offsetYieldStress}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.9]{stats_yield_strain_offset_one_plot}
\caption{$r^2$ for the yield strain defined with the offset method for the resolution of $32\times32\times32$ of the CNN input data as a function of $N$. The dashed lines show the values of $r^2$ for the training set, and the continuous lines for the test set. The inset shows $\delta$ as a function of $N$.}
\label{offsetYieldStrain}
\end{figure}
\subsubsection{Yield strain}
In addition to the yield stress, one can also analogously train the CNN to predict the yield strain. In principle this can be done for all the definitions of the yield point discussed earlier, although it obviously only makes sense for the definitions in which the yield point is not determined at a fixed value of strain. Moreover, it turns out that the prediction is very poor for the yield point defined as the maximum stress of the stress-strain curve. This is most likely due to the fact that while the maximum value of stress is to a certain degree determined by the structure of the polycrystal, the strain value at which this maximum is reached is largely random due to the fluctuating character of the stress-strain curve. Therefore, the only definition for which the results of the prediction are presented is the one which employs the offset method. $r^2$ obtained with that method is shown in Fig. \ref{offsetYieldStrain}, again for the offset values of 1\% and 2\%.
\subsection{Sensitivity to initial conditions}
In all the studied cases the value of $r^2$ is lower than 1, which implies that the predictability is never perfect. Even though $r^2$ for the test set tends to increase with $N$ (and with the resolution) as the gap between the training and the test set closes, for both of those sets $r^2$ seems to approach a certain value below 1. This suggests that there exists a certain limit for the predictability of both the elastic and the plastic properties of the small polycrystalline samples we study. This limit might be related to fundamental properties of the system which are manifested as sensitivity to the initial conditions.
It has been seen that the predictability for the shear modulus is higher than that for the yield point. Moreover, different definitions of the yield stress resulted in different values of $r^2$. As already shown in Fig. \ref{model}, the stress-strain curves exhibit a larger variability near the yield point than in the elastic regime. However, this does not directly explain the difference in the predictability scores between the shear modulus and the yield stress, since according to Eq.~(\ref{coeff_determ}) the predictability is measured as the accuracy of the fit relative to the variance of the quantity being predicted. Therefore, {\it a priori}, quantities exhibiting a higher variability should not as such be harder to predict.
On the other hand, studying the degree of sensitivity of the deformation dynamics of the system to small perturbations of the initial conditions can give insight into the limits of predictability. There are several ways in which such sensitivity may limit the predictability score. First of all, the velocities of the particles in the system are initialized randomly. The values of those velocities are not part of the descriptors fed to the algorithm; however, one can expect that they influence the details of the dynamics of the system, in particular the stress-strain curve, from which all the quantities discussed in this work in the context of predictability are extracted. Moreover, the system may be sensitive to the initial choice of the parameters in the Voronoi tessellation, that is, the positions of the nodes, which specify the shapes of the grains, and the angles of the lattice rotation. Since those features of the system are given to the CNN within the descriptors at a finite resolution, small variations in grain shapes and lattice rotations may not result in any changes to the descriptors. Finally, the computer simulations are performed with finite decimal precision, which leads to inaccuracies in integrating the equations of motion. All these factors may contribute to a limit on the predictability.
In order to study the sensitivity of the system to the initial conditions quantitatively, new MD simulations have been carried out using as initial states sets of configurations in which one of the features described above (random seed, positions of the Voronoi nodes, and lattice orientation) is varied while the remaining ones are kept fixed. In order to make the results representative of the whole dataset, 15 configurations were picked from the original set, and for each of them and for each of the features 50 simulations were run. To measure the sensitivity quantitatively, one relates the variance of the system's response, for instance the stress at a given strain or the shear modulus, measured here for different random seeds, to the variance of the same response over the whole original set of configurations. In order for the result to be representative of the whole system, and not just one chosen configuration, this procedure is repeated for each of the 15 chosen configurations and the result is averaged. The sensitivity can therefore be expressed as
\begin{equation}
\chi=\frac{\langle\langle(y_i^j-\langle y_i \rangle)^2\rangle_j\rangle_i}{\langle(y-\langle y \rangle)^2\rangle},\label{sensitivity_def}
\end{equation}
where $y$ denotes the quantity for which the sensitivity is determined, the denominator is the variance of this quantity over the whole original set of configurations, while the numerator is the average over the selected configurations, enumerated by the index $i$, of the variances of the quantity over the perturbed configurations, enumerated by the index $j$.
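A minimal sketch of this estimator, assuming the perturbed responses are stored in an array of shape (15, 50):
\begin{verbatim}
import numpy as np

def sensitivity(y_perturbed, y_full):
    """Sensitivity chi: mean per-configuration variance over the
    perturbations, normalized by the variance of the full dataset.

    y_perturbed: shape (n_configs, n_perturbations), e.g. (15, 50);
    y_full: responses of the whole original set of configurations.
    """
    var_per_config = np.var(y_perturbed, axis=1)
    return var_per_config.mean() / np.var(y_full)
\end{verbatim}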
It is easy to see that the definition of sensitivity in Eq. (\ref{sensitivity_def}) is similar to the ratio of the variances in the definition of $r^2$ in Eq. (\ref{coeff_determ}). Both quantities relate the scatter of the values obtained in some procedure to the scatter of the reference values of the system. It can be expected that in the case when the sensitivity is the only factor limiting the predictability, $\chi=1-r^2$. Since there are always also other predictability-limiting factors (such as limited dataset, convergence of training of the ML algorithm, its complexity, choice of descriptors), in practice one has $\chi<1-r^2$.
Below the sensitivity of the system to perturbations of different initial conditions is discussed. The extracted values of $\chi$ are collected in Table \ref{sensitivity_table}, where they are compared with the corresponding values of $r^2$ obtained for the full data set at the resolution $32\times32\times32$.
\subsubsection{Random seeds}
First, simulations starting from the same initial polycrystalline configuration but with different random seeds initializing the velocities of the particles at the start of the equilibration phase have been carried out. 15 different configurations were used, for each of which 50 simulations with different random seeds were performed. The stress-strain curves for several such random seeds for one of the configurations are shown in Fig. \ref{stressStrainPerturbation}a. As can be seen there, the curves are similar in the elastic regime, resulting in similar values of the shear modulus; in the plastic regime, however, there is a large variability of the stress. In Fig. \ref{sensitivity} the sensitivity $\chi_{seed}$ of the stress at a given strain value to the initial choice of the atom velocities is shown. Its value is relatively high for very low values of strain, most likely because at the beginning of the deformation the thermal fluctuations of the stress are larger than its average value. Around the strain value of 0.02, $\chi_{seed}$ drops to a very low value and stays there until yield occurs, that is, around 0.08 strain. Above that, $\chi_{seed}$ reaches values slightly over 0.2. This result can be used to explain the difference in the values of $r^2$ for the stress at a fixed strain value: for the strain value of 0.075, $r^2$ is significantly higher than for 0.1, and indeed an increase in $\chi_{seed}$ occurs between those two values.
For the shear modulus, the yield strain and the other definitions of the yield stress, the same analysis can be performed in an analogous way, namely by determining the variance of the chosen quantity over the random seeds, dividing it by the variance of the same quantity over the original set, and averaging over the 15 configurations for which the sensitivity analysis has been performed. The results are all shown in Table \ref{sensitivity_table}. The lowest $\chi_{seed}$ is found for the shear modulus. This is in agreement with the fact that $r^2$ for the shear modulus is the highest of all the quantities studied, approaching the value of 0.9 for the full dataset and the highest resolution. On the other hand, the highest $\chi_{seed}$ is exhibited by the stress at 0.1 strain, which also has the lowest $r^2$. $\chi_{seed}$ for the maximal value of the stress is higher than for the fixed strain value of 0.075 but lower than that at the strain value of 0.1. Finally, the values of $\chi_{seed}$ for the yield point determined with the offset method are relatively low, which is in agreement with their high values of $r^2$.
\begin{table*}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
quantity & $\chi_{seed}$ & $\chi_{nodes}$ & $\chi_{orientation}$ & $\chi_{total}$ & $1-\chi_{total}$ & $r^2$ \\ \hline
shear modulus & 0.005 & 0.012 & 0.043 & 0.054 & 0.946 & 0.88 $\pm$ 0.002 \\ \hline
maximal stress & 0.05 & 0.131 & 0.121 & 0.176 & 0.824 & 0.522 $\pm$ 0.027 \\ \hline
stress at 0.075 strain & 0.034 & 0.121 & 0.128 & 0.183 & 0.817 & 0.594 $\pm$ 0.015 \\ \hline
stress at 0.1 strain & 0.205 & 0.302 & 0.31 & 0.38 & 0.62 & 0.308 $\pm$ 0.016 \\ \hline
1\% offset yield stress & 0.025 & 0.097 & 0.102 & 0.164 & 0.836 & 0.623 $\pm$ 0.013 \\ \hline
2\% offset yield stress & 0.028 & 0.107 & 0.098 & 0.16 & 0.84 & 0.616 $\pm$ 0.009 \\ \hline
1\% offset yield strain & 0.038 & 0.111 & 0.12 & 0.179 & 0.821 & 0.601 $\pm$ 0.013 \\ \hline
2\% offset yield strain & 0.039 & 0.116 & 0.111 & 0.169 & 0.831 & 0.581 $\pm$ 0.007\\ \hline
\end{tabular}
\caption{Values of sensitivity $\chi$ for different quantities compared with their predictability $r^2$. Also values of $1-\chi_{total}$, i.e., the theoretical maxima of the $r^2$'s, are shown. The errors for $r^2$ are standard errors of the mean (SEM) determined from the outcomes of the five different CNNs trained for each quantity.}
\label{sensitivity_table}
\end{table*}
\begin{figure*}
\centering
\includegraphics[scale=0.6]{stressStrainPerturbation}
\caption{Stress-strain curves generated for an individual example initial polycrystalline microstructure which is perturbed in different ways, to assess the sensitivity of the response to small perturbations of the initial state. (a) Perturbations in the form of different random seeds used to initialize the atom velocities, (b) perturbations of the Voronoi nodes, and (c) perturbations of the angular orientation of the grains. The thick black lines are the averages over 50 stress-strain curves, each obtained for a different perturbation of the initial state. The gray areas represent the standard deviation of the stress at a given strain.}
\label{stressStrainPerturbation}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.9]{avSensitivity}
\caption{$\chi$ of the stress value at the given strain measured for the choice of the random seed initializing the velocities of the atoms, perturbation of the position of the nodes in the Voronoi tessellation, perturbation of the lattice rotation of the grains, and all of those perturbations combined.}
\label{sensitivity}
\end{figure}
\subsubsection{Nodes in Voronoi tessellation}
In order to study the sensitivity of the system to the initial positions of the nodes in the Voronoi tessellation, for each of the 15 chosen configurations an additional 49 configurations were generated by displacing the nodes randomly according to a Gaussian distribution with a standard deviation of 3~\AA. This procedure leads to configurations similar to the original ones, with slightly different grain structures but the same lattice orientations within the grains. The resulting stress-strain curves are shown in Fig. \ref{stressStrainPerturbation}b. It was checked that reducing the standard deviation of the node position perturbation to 2~\AA\ results in a similar scatter of the stress-strain curves, which implies that the sensitivity does not depend significantly on the magnitude of the perturbation. $\chi_{nodes}$ for the stress value at a given strain, determined in an analogous way to the case of different random seeds, is shown in Fig. \ref{sensitivity}. It can be seen there that $\chi_{nodes}$ is significantly higher than $\chi_{seed}$. In particular, its magnitude at the strain values of 0.075 and 0.1, used for the prediction of the stress, is also higher in this case.
$\chi_{nodes}$ for the other quantities of the system was determined in the same way as in the case of different random seeds. It can be seen in Table \ref{sensitivity_table} that the values of $\chi_{nodes}$ are generally higher than the corresponding values of $\chi_{seed}$. However, the relations between its values for different quantities are similar: the minimal and maximal $\chi_{nodes}$ are again exhibited by the shear modulus and the stress at 0.1 strain, respectively.
\subsubsection{Lattice orientation of grains}
Next, the sensitivity of the system to the initial lattice orientations of the grains inside the polycrystal was studied. This time new configurations were generated with fixed positions of the nodes in the Voronoi tessellation and with the Euler angles perturbed according to a Gaussian distribution with a standard deviation of 3 degrees. The stress-strain curves corresponding to that perturbation are shown in Fig. \ref{stressStrainPerturbation}c. Also this time it was checked that reducing the standard deviation of the perturbation to 2 degrees does not impact the results significantly. Again, the magnitude of $\chi_{orientation}$ at different strain values is shown in Fig. \ref{sensitivity}. It is higher in the elastic regime than the previous measures of $\chi$. Furthermore, as shown in Table \ref{sensitivity_table}, $\chi_{orientation}$ for the shear modulus is also higher in this case, which is related to the previously mentioned observation that the elastic properties of the sample are controlled mainly by the lattice orientation, which is the property perturbed here. On the other hand, $\chi_{orientation}$ in the plastic part of the stress-strain curve is comparable to $\chi_{nodes}$, and for some of the quantities it is actually slightly smaller. Unlike the previous measures of $\chi$, $\chi_{orientation}$ for the yield point determined with the offset method is lower for the higher offset than for the lower one. Again, this is related to the fact that the lattice orientation of the grains has a larger impact on the elastic properties of the sample than on the plastic ones.
\subsubsection{Total sensitivity}
All the contributions to the sensitivity discussed above (random seed, positions of the nodes and lattice orientation) contribute to the total sensitivity of the system to the initial conditions. However, one cannot expect the measure of the total sensitivity $\chi_{total}$ to be simply a sum of all those contributions. Therefore, additional MD simulations were carried out. They were performed in an analogous way as before, but instead of changing only one of the initial parameters discussed above, all three of them were varied simultaneously. The value of $\chi_{total}$ as a function of strain is also shown in Fig. \ref{sensitivity}. It can be seen that it is always higher than $\chi$ measured with respect to a change of any of the features separately.
Values of $\chi_{total}$ for all the quantities studied were determined and the results are again collected in Table \ref{sensitivity_table}. They can be compared to the corresponding values of $r^2$ and it can be seen that the quantities that are more sensitive to the initial conditions of the system (larger $\chi_{total}$) tend to have smaller $r^2$. The values of $r^2$ are plotted against those of $\chi_{total}$ in Fig. \ref{sensitivityPredictability}, where a linear correlation between them can be observed. Additionally, the maximal value of predictability at the given $r^2$ equal to $1-\chi_{total}$ is shown in the plot as the blue line. For all the quantities the actual $r^2$ lies below that line, which means that the condition $r^2 \le 1-\chi_{total}$ is always satisfied.
Some possible additional factors besides $\chi$ that may limit the predictability have already been mentioned; however, it can also be noted that the difference between the maximal ($1-\chi_{total}$) and the actual predictability ($r^2$) differs between the quantities. It is the lowest for the shear modulus and the highest for the stress at 0.1 strain, the two quantities that also exhibit the highest and the lowest $r^2$, respectively. Therefore, it seems that those other predictability-limiting factors contribute differently for different quantities. Generally, it can be expected that the elastic properties, such as the shear modulus, are relatively easy to predict because their measurement requires only a small deformation of the sample, which corresponds to a short time evolution of the system. Moreover, it has been shown that the value of the shear modulus is mostly determined by the lattice orientation of the individual grains. Therefore, it can be expected that the relation between the descriptors and the predicted value is relatively simple. On the other hand, the plastic properties of the sample, such as the value of stress at 0.1 strain, can be more difficult to predict because they occur further along the stress-strain curve, by which point the system may have partially lost the memory of its initial state.
\begin{figure}
\centering
\includegraphics[scale=0.9]{sensitivityPredcitability}
\caption{Values of $r^2$ of different quantities studied in this work plotted against their total sensitivity $\chi_{total}$ to the initial conditions of the sample. The green line is a linear fit to the data and the blue line represents the maximal possible $r^2$ for the given $\chi_{total}$, i.e., $1-\chi_{total}$.}
\label{sensitivityPredictability}
\end{figure}
\section{Conclusions}
\label{section:conclusions}
In this work, the predictability (measured by the coefficient of determination $r^2$) of the elastic and plastic properties of nano-sized shear-deformed iron polycrystals has been determined using ML methods. While the shear modulus can be easily defined as the slope of the stress-strain curve at small strains, defining the yield stress is not so straightforward, and therefore several definitions have been considered. For all the quantities studied it was found that the predictability increases with the dataset size $N$ and the spatial resolution of the chosen microstructural descriptors. However, it always seems to approach a certain value below 1, which implies that there exists a fundamental limit to the deformation predictability of small polycrystals. Moreover, the predictability obtained by the CNN was found to be higher for the shear modulus than for the yield point, independently of the exact definition of the latter. The reasons for that difference and for the limit of predictability have been explored by measuring the sensitivity of the system to small perturbations of its initial state.
This sensitivity has been measured by varying the random seed initializing the velocities in the MD simulation, the positions of the nodes in the Voronoi tessellation, and the lattice orientations of the grains inside the initial polycrystal configuration. It has been found that, in accordance with the differences in predictability, the plastic properties of the system exhibit a larger sensitivity to the initial state than the elastic properties. In general, the sensitivity can be thought of as a measure of the amount of information that is not available to the ML algorithm. Since at any finite temperature the system constantly fluctuates, and its descriptors are extracted from the equilibrated configuration at some timestep, the instantaneous fluctuations of the atomic positions and velocities are unknown to the CNN. Moreover, a voxelized representation of the initial microstructure with any finite resolution tends to hide small differences in the initial microstructure between samples. Therefore, given that the system exhibits sensitivity to small perturbations of the initial microstructure, two configurations with identical descriptors may result in different time evolutions and, as studied here, different stress-strain curves.
Our study thus provides important insights into the fundamental limits of deformation predictability, and those insights are expected to apply more generally to predicting the time evolution of complex physical systems. Even if the dynamics of a complex system, such as deformation of the polycrystals studied here, is governed by deterministic equations of motion, its predictability may still be limited due to the incomplete information about the initial state and other factors such as random thermal fluctuations. Even though the study presented here is purely computational, the conclusions drawn from it could also be extended to experiments. The accuracy at which the structure of the sample can be determined by measurements is always finite and because of the constant thermal fluctuations essentially no information is available about the velocities of individual particles. Overall, the analysis presented here concerning the role of sensitivity of the system to small perturbations of its initial configuration in limiting the predictability of the system's future evolution could be applied in a wide range of contexts where one aims at predicting the behaviour of a complex system.
\section{Acknowledgments}
The authors acknowledge the support of the Academy of Finland via the Academy Project COPLAST (project no. 322405). The authors would like to thank Henri Salmenjoki for interesting discussions on machine learning.
\section{Introduction}
\label{sec:intro}
In this work, we present a multigrid method to solve the saddle point system
\begin{equation}\label{eq:system_saddle_point}
\mathcal{S}\mathbf{x}=\mathbf{f},
\end{equation}
where $\mathbf{x}=(\mathbf{y},\mathbf{u},\mathbf{p})=(\mathbf{y}_1,\dots,\mathbf{y}_N,\mathbf{u},\mathbf{p}_1,\dots,\mathbf{p}_N)^\top$, $\mathcal{S}$ has the block structure
\begin{equation}\label{eq:matrix_saddle_point}
\mathcal{S}=\begin{pmatrix}
C_1 & & & & A_1^\top\\
& \ddots & & & &\ddots\\
& & C_N & & & & A_N^\top\\
& & & G & D_1 &\dots & D_N\\
A_1 & & & E_1\\
& \ddots & &\vdots\\
& & A_N & E_N
\end{pmatrix},
\end{equation}
and all submatrices involved represent the discretization of some differential operators. More details on each block are provided in Section \ref{Sec:quadratic}.
Matrices such as \eqref{eq:matrix_saddle_point} are often encountered while solving PDE-constrained optimization problems under uncertainty of the form
\begin{equation}\label{eq:OCP_model_introduction}
\begin{aligned}
&\min_{u\in U} \mathcal{R}\LQ Q(y(\omega),u)\RQ\\
&\text{s.t. } y(\omega)\in V\text{ satisfies}\\
& \langle e(y(\omega),u,\omega),v\rangle =0 \quad \forall v\in V,\text{ a.e. }\omega\in \Omega,
\end{aligned}
\end{equation}
where $u$ is the unknown \textit{deterministic} distributed control, $y(\omega)$ is the state variable which satisfies a random PDE constraint expressed by $e(\cdot,\cdot,\omega)$ for almost every realization $\omega$ of the randomness, $Q$ is a real-valued quantity of interest (cost functional) and $\mathcal{R}$ is a risk measure. The vectors $\left\{\mathbf{y}_j\right\}_{j=1}^N$ and $\left\{\mathbf{p}_j\right\}_{j=1}^N$ are the discretizations of the state and adjoint variables $y(\omega)$ and $p(\omega)$ at the
$N$ samples in which the random PDE constraint is collocated. The vector $\mathbf{u}$ is the discretization of the deterministic control $u$.
Problems of the form \eqref{eq:OCP_model_introduction} are increasingly employed in applications. The PDE constraints typically represent some underlying physical model whose behaviour should be optimally controlled, and the randomness in the PDE allows one to take into account the intrinsic variability of, or lack of knowledge on, some parameters entering the model. The introduction of a risk measure in \eqref{eq:OCP_model_introduction} allows one to construct robust controls that take into account the distribution of the cost over all possible realizations of the random parameters.
The topic has therefore received a lot of attention in recent years; see, e.g., \cite{kouri2018optimization,Kouri_Cvar,martinez2018optimal,doi:10.1137/19M1294952,geiersbach2020stochastic,antil2021ttrisk,nobile_vanzan2,eigel2018risk,ASADPOURE20111131}.
However, few works have focused on efficient solvers for the optimality systems \eqref{eq:system_saddle_point}. A popular approach is to perform a Schur complement on $\mathbf{u}$ and solve the reduced system with a Krylov method (possibly Conjugate Gradient), even though each iteration then requires the solution of $2N$ PDEs, with $A_j$ and $A_j^\top$ for $j=1,\dots,N$ \cite{Kourisparse}. For a full-space formulation, block diagonal preconditioners have been studied in \cite{nobile_vanzan}, using both an algebraic approach based on Schur complement approximations and an operator preconditioning framework.
In this manuscript, we present a multigrid method to solve general problems of the form \eqref{eq:system_saddle_point} and show how this strategy can be used for the efficient solution of three different Optimal Control Problems Under Uncertainty (OCPUU). First, we consider a linear-quadratic OCPUU and use the multigrid algorithm directly to solve the linear optimality system. Second, we consider a nonsmooth OCPUU with box constraints and $L^1$ regularization on the control. To solve such a problem, we use the collective multigrid method as an inner solver within an outer semismooth Newton iteration. Incidentally, we show that the theory developed for deterministic OCPs with $L^1$ regularization extends naturally to the class of OCPUU considered here.
Third, we study a risk-averse OCPUU involving the smoothed Conditional Value at Risk (CVaR) and test the performance of the multigrid scheme in the context of a nonlinear preconditioned Newton method.
The multigrid algorithm is based on a collective smoother \cite{borzi2005multigrid,borzi2009multigrid,takacs2011convergence} that, at each iteration, loops over all nodes of the computational mesh (possibly in parallel), collects all the degrees of freedom related to a node, and updates them collectively by solving a reduced saddle-point problem.
For classical (deterministic) PDE-constrained optimization problems, this reduced system has size $3\times 3$ so that its solution is immediate \cite{borzi2009multigrid}. In our context, the reduced problem has size $(2N+1)\times (2N+1)$, so that it can be large when dealing with a large number of samples. Fortunately, we show that it can be solved with optimal $O(N)$ complexity.
Let us remark that collective multigrid strategies have been applied to OCPUU in \cite{Borzi,Borzi2} and in \cite{rosseel2012optimal}. This manuscript differs from the mentioned works since, on the one hand, \cite{Borzi,Borzi2} consider a \textit{stochastic} control $u$, so that for (almost) every realization of the random parameters a different control $u(\omega)$ is computed through the solution of a standard deterministic OCP. On the other hand, \cite{rosseel2012optimal} considers a Stochastic Galerkin discretization, and hence the corresponding optimality system has a structure which is very different from \eqref{eq:matrix_saddle_point}.
The multigrid algorithm assumes that all variables are discretized on the same finite element mesh. Further, although we consider here only the case of a global distributed control, the smoothing procedure could be extended to a local distributed control on a subset $\widetilde{\D}$ of the computational domain $\D$, and to boundary control, see, e.g., \cite{borzi2005multigrid}.
The rest of the manuscript is organized as follows. In Section \ref{Sec:quadratic} we introduce the notation and a classical linear-quadratic OCPUU, and interpret \eqref{eq:matrix_saddle_point} as the matrix associated to the optimality system of the discretized OCPUU. In Section \ref{Sec:collective}, we present the collective multigrid algorithm, discuss implementation details and show its performance on the linear-quadratic OCPUU. In Section \ref{Sec:l1}, we consider a nonsmooth OCPUU with box constraints and an $L^1$ regularization on the control.
Section \ref{Sec:risk} deals with a risk-averse OCPUU.
For each of these cases, we first show how the multigrid approach can be integrated in the solution process, by detailing concrete algorithms, and then we present extensive numerical experiments to show the efficiency of the proposed framework.
Finally, we draw our conclusions in Section \ref{Sec:conc}.
\section{A linear-quadratic optimal control problem under uncertainty}\label{Sec:quadratic}
Let $\D$ be a Lipschitz bounded domain in $\mathbb{R}^d$, $V$ the standard Sobolev space $H^1_0(\D)$, and $(\Omega,\mathcal{F},\PP)$ a complete probability space.
Given a function $u\in L^2(\D)$, we consider the linear elliptic random PDE
\begin{equation}\label{eq:random_PDE_weak}
a_\omega(y,v)=(u,v),\quad\forall v\in V,\quad \PP\text{-a.e. } \omega\in \Omega,
\end{equation}
where $(\cdot,\cdot)$ denotes the standard $L^2(\D)$ scalar product.
To ensure uniqueness and sufficient integrability of the solution of \eqref{eq:random_PDE_weak}, we make the following additional assumption.
\begin{assumption}\label{ass:diff}
There exist two random variables $a_{\min}(\omega)$ and $a_{\max}(\omega)$ such that
\[a_{\min}(\omega)\|v\|^2_V\leq a_\omega(v,v)\leq a_{\max}(\omega)\|v\|^2_V,\quad \forall v\in V,\ \PP\text{-a.e. }\omega \in \Omega,\]
and further $a_{\min}$ and $a_{\max}$ are in $L^p(\Omega)$ for some $p\geq 4$.
\end{assumption}
Under Assumption \ref{ass:diff}, it is well-known (see, e.g., \cite{lord_powell_shardlow_2014,Scheichl}) that \eqref{eq:random_PDE_weak} admits a unique solution in $V$ for $\PP\text{-a.e. } \omega$, and the solution $y$, interpreted as a function-valued random variable $y:\omega\in \Omega\mapsto y(\omega)\in V$, lies in the Bochner space $L^p(\Omega;V)$ \cite{cohn2013measure}.
We often use the shorthand notation $y_\omega=y(x,\omega)$ when the dependence on $x$ is not needed, or $y_{\omega}(u)$ when we want to highlight the dependence on the control function $u$.
In this manuscript, we consider the minimization of functionals constrained by \eqref{eq:random_PDE_weak}.
Let us first focus on the linear-quadratic problem
\begin{equation}\label{eq:quadratic_OCP}
\begin{aligned}
&\min_{u\in L^2(\D)} \frac{1}{2}\E\LQ \|y_\omega-y_d\|^2_{L^2(\D)}\RQ +\frac{\nu}{2}\|u\|^2_{L^2(\D)},\\
&\text{where } y_\omega\in V \text{ solves }\\
&a_\omega(y_\omega,v)=(u+f,v),\quad \forall v \in V,\ \PP\text{-a.e. } \omega\in \Omega,
\end{aligned}
\end{equation}
where $y_d\in L^2(\D)$ is a target state, $f\in L^2(\D)$ (we omit the continuous embedding operators from $L^2(\D)$ to $L^2(\Omega;L^2(\D))$), $\E:L^1(\Omega)\rightarrow \setR$ is the expectation operator and $\nu >0$.
Introducing the linear control-to-state map $S: u\in L^2(\D)\rightarrow y_\omega(u)\in L^2(\Omega;L^2(\D))$, we reformulate \eqref{eq:quadratic_OCP} as
\begin{equation}\label{eq:quadratic_OCP_reduced}
\min_{u\in L^2(\D)} \frac{1}{2}\E \LQ \|S(u)-y_d\|^2_{L^2(\D)}\RQ +\frac{\nu}{2}\|u\|^2_{L^2(\D)}.
\end{equation}
Existence and uniqueness of the minimizer of \eqref{eq:quadratic_OCP_reduced} follows directly from standard variational arguments \cite{lions1971optimal,hinze2008optimization,troltzsch2010optimal,kouri2018optimization}.
Furthermore, due to Assumption \ref{ass:diff}, the optimal control $\overline{u}$ satisfies the variational equality
\begin{equation}\label{eq:optimality_condition_quadratic_exact}
(\nu \overline{u} -S^\star(y_d-S(\overline{u}+f)),v)_{L^2(\D)}=0,\quad \forall v \in L^2(\D),
\end{equation}
where the adjoint operator $S^\star: L^2(\Omega;L^2(\D))\rightarrow L^2(\D)$, satisfying $(Su,z)_{L^2(\Omega;L^2(\D))}=(u,S^\star z)_{L^2(\D)}$, is characterized by $S^\star z=\E\LQ p\RQ$ where $p=p_{\omega}(x)$ is the solution of the adjoint equation
\begin{equation}\label{eq:adjoint_equation}
a_\omega(v,p_\omega)=(z(\omega),v),\quad \forall v \in V,\ \PP\text{-a.e. } \omega\in \Omega.
\end{equation}
The optimality condition \eqref{eq:optimality_condition_quadratic_exact} can thus be formulated as the optimality system
\begin{equation}\label{eq:optimality_system}
\begin{aligned}
& a_\omega(y_\omega,v)=(\overline{u}+f,v),\quad \forall v\in V,\quad \PP\text{-a.e. } \omega\in \Omega,\\
& a_\omega(v,p_\omega)=(y_d-y_\omega,v),\quad \forall v \in V,\ \PP\text{-a.e. } \omega\in \Omega,\\
& (\nu \overline{u} -\E\LQ p_\omega\RQ,v)_{L^2(\D)}=0,\quad \forall v \in L^2(\D).
\end{aligned}
\end{equation}
To solve \eqref{eq:quadratic_OCP} numerically, we replace the exact expectation operator $\E$ of the objective functional by a quadrature formula $\EQ$ with $N$ nodes $\left\{\omega_i\right\}_{i=1}^N$ and positive weights $\left\{w_i\right\}_{i=1}^N$, namely
\[ \E\LQ X\RQ \approx \EQ\LQ X\RQ:= \sum_{i=1}^N w_i X(\omega_i),\quad \text{with}\quad \sum_{i=1}^N w_i=1.\]
Common quadrature formulae are Monte Carlo, Quasi-Monte Carlo and Gaussian formulae. The latter require that the probability space can be parametrized by a (finite or countable) sequence of random variables $\left\{\chi_j\right\}_j$, each with distribution $\mu_j$, and the existence of a complete basis of tensorized $L^2_{\mu_j}$-orthonormal polynomials.
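As a concrete illustration, the following Python snippet (our own sketch, not part of the method itself; it assumes NumPy, whose \texttt{hermegauss} routine provides the probabilists' Gauss--Hermite rule) builds such a tensorized Gaussian formula with normalized weights:
\begin{verbatim}
import itertools
import numpy as np

def gauss_hermite_tensor(M, n):
    # 1D probabilists' Gauss-Hermite rule (weight exp(-x^2/2)),
    # normalized so that the weights sum to one.
    x, w = np.polynomial.hermite_e.hermegauss(n)
    w = w / w.sum()
    nodes = np.array(list(itertools.product(x, repeat=M)))
    weights = np.array([np.prod(t) for t in itertools.product(w, repeat=M)])
    return nodes, weights

nodes, weights = gauss_hermite_tensor(M=3, n=5)  # N = 5^3 = 125 samples
assert abs(weights.sum() - 1.0) < 1e-12
\end{verbatim}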
Hence for the semi-discrete OCP, the $\PP$-a.e. PDE-constraint is naturally collocated onto the nodes of the quadrature formula.
Concerning the space domain, we consider a family of regular triangulations $\left\{\mathcal{T}_h\right\}_{h>0}$ of $\D$, and a Galerkin projection onto the finite element space $V^h$ of continuous piecewise polynomial functions of degree $r$ over $\mathcal{T}_h$ that vanish on $\partial \D$. We denote by $N_h$ the dimension of $V^h$ and by $\left\{\phi_i\right\}_{i=1}^{N_h}$ its nodal Lagrangian basis. We discretize the state, adjoint and control variables on the same finite element space.
Once fully discretized, \eqref{eq:optimality_system} can be expressed as
\begin{equation}\label{eq:full_space_optimality_system}
\begin{pmatrix}
M_s & & & & A_1^\top\\
& \ddots & & & &\ddots\\
& & M_s & & & & A_N^\top\\
& & & \nu M_s & -w_1 M_s &\dots & -w_N M_s\\
A_1 & & & -M_s\\
& \ddots & &\vdots\\
& & A_N & -M_s
\end{pmatrix}
\begin{pmatrix}
\mathbf{y}_1\\ \vdots \\\mathbf{y}_N\\ \mathbf{u}\\
\mathbf{p}_1\\ \vdots \\ \mathbf{p}_N
\end{pmatrix}=
\begin{pmatrix}
M_s \mathbf{y}_d\\ \vdots \\
M_s \mathbf{y}_d\\ \mathbf{0} \\ M_s\mathbf{f}\\ \vdots \\ M_s\mathbf{f}
\end{pmatrix},
\end{equation}
where $A_j$ are the stiffness matrices associated to the bilinear forms $a_{\omega_j}(\cdot,\cdot)$, $M_s$ is the mass matrix, $\mathbf{y}_d$ and $\mathbf{f}$ are the finite element discretizations of $y_d$ and $f$ respectively, while $\mathbf{y}_j$ and $\mathbf{p}_j$ are the discretizations of $y_{\omega_j}$ and $p_{\omega_j}$.
\section{Collective multigrid scheme}\label{Sec:collective}
In this section, we describe the multigrid algorithm to solve the full space optimality system \eqref{eq:full_space_optimality_system}.
For the sake of generality, we consider the more general matrix \eqref{eq:matrix_saddle_point}, so that our discussion covers also the different saddle-point matrices obtained in Sections \ref{Sec:l1} and \ref{Sec:risk}.
For each node of the triangulation, let us introduce the vectors $\widetilde{\mathbf{y}}_i$ and $\widetilde{\mathbf{p}}_i$,
\[\widetilde{\mathbf{y}}_i=\begin{pmatrix}
(\mathbf{y}_1)_i\\ \vdots\\(\mathbf{y}_N)_i
\end{pmatrix}\in \setR^{N},
\quad \widetilde{\mathbf{p}}_i=\begin{pmatrix}
(\mathbf{p}_1)_i\\\vdots\\(\mathbf{p}_N)_i
\end{pmatrix} \in \setR^{N}, \quad i=1,\dots,N_h,
\]
which collect the degrees of freedom associated to the $i$-th node, the scalar $u_i=(\mathbf{u})_i$,
and the restriction operators $R_i \in \mathbb{R}^{(2N+1) \times (2N+1)N_h}$ such that \[
R_i \begin{pmatrix}
\mathbf{y} \\ \mathbf{u}\\\mathbf{p}
\end{pmatrix}=\begin{pmatrix}
\widetilde{\mathbf{y}}_i\\ u_i\\\widetilde{\mathbf{p}}_i
\end{pmatrix}=:\mathbf{x}_i.\]
The prolongation operators are $P_i:=R_i^\top$, while the reduced matrices $\widetilde{S}_i:=R_iSP_i\in \setR^{(2N+1)\times (2N+1)}$ represent a condensed saddle-point matrix on the $i$-th node, and satisfy
\begin{equation}\label{eq:reduced_matrices}
\widetilde{S}_i=\begin{pmatrix}
\diag{\mathbf{c}_i} & 0 &\diag{\mathbf{a}_i}\\
0 & (G)_{i,i} & \mathbf{d}_i^\top\\
\diag{\mathbf{a}_i} & \mathbf{e}_i & 0
\end{pmatrix}
\end{equation}
with $\mathbf{c}_i:=((C_1)_{i,i},\dots,(C_N)_{i,i})^\top$, $\mathbf{a}_i:=((A_1)_{i,i},\dots,(A_N)_{i,i})^\top$, $\mathbf{e}_i=((E_1)_{i,i},\dots,(E_N)_{i,i})^\top$, $\mathbf{d}_i=((D_1)_{i,i},\dots,(D_N)_{i,i})^\top$, where $\diag{\mathbf{v}}$ denotes a diagonal matrix with the components of $\mathbf{v}$ on the main diagonal.\\
Given an initial vector $\mathbf{x}^0$, a Jacobi-type collective smoothing iteration computes for $n=1,\dots,n_1$,
\begin{equation}\label{eq:collective_smoothing}
\mathbf{x}^n=\mathbf{x}^{n-1}+ \theta\sum_{i=1}^{N_h} P_i \widetilde{S}_i^{-1}R_i\left(\mathbf{f}-S\mathbf{x}^{n-1}\right),
\end{equation}
where $\theta \in (0,1]$ is a damping parameter. Gauss-Seidel variants can straightforwardly be defined.
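For illustration, one sweep of \eqref{eq:collective_smoothing} can be sketched in Python as follows (our own sketch, not a reference implementation; it assumes a SciPy sparse matrix \texttt{S} with the unknowns ordered as $(\mathbf{y}_1,\dots,\mathbf{y}_N,\mathbf{u},\mathbf{p}_1,\dots,\mathbf{p}_N)$, and forms each $\widetilde{S}_i$ by explicit indexing for clarity, while the efficient $O(N)$ solve is derived below):
\begin{verbatim}
import numpy as np

def collective_jacobi_sweep(S, x, f, Nh, N, theta=0.5):
    # One damped collective Jacobi sweep: x <- x + theta * D^{-1} r,
    # with D the block-diagonal of the reduced matrices (one per node).
    r = f - S @ x
    x_new = x.copy()
    for i in range(Nh):
        idx = np.concatenate((i + Nh * np.arange(N),   # (y_j)_i
                              [N * Nh + i],            # u_i
                              (N + 1) * Nh + i + Nh * np.arange(N)))  # (p_j)_i
        S_i = S[np.ix_(idx, idx)].toarray()            # (2N+1) x (2N+1)
        x_new[idx] += theta * np.linalg.solve(S_i, r[idx])
    return x_new
\end{verbatim}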
Next, we consider a sequence of meshes $\left\{\mathcal{T}_{h_\ell}\right\}_{\ell=\ell_{\min}}^{\ell_{\max}}$, which we assume for simplicity to be nested, and restriction and prolongation operators $R_{\ell-1}^\ell$, $P_{\ell-1}^{\ell}$ which map between the grids $\mathcal{T}_{h_{\ell-1}}$ and $\mathcal{T}_{h_{\ell}}$. In the numerical experiments, the coarse matrices are defined recursively in a Galerkin fashion starting from the finest one, namely $S_\ell:=R^{\ell+1}_{\ell}S_{\ell+1}P^{\ell+1}_{\ell}$ for $\ell\in\left\{\ell_{\min},\dots, \ell_{\max}-1\right\}$. Nevertheless, it is also possible to define $S_\ell$ directly as the discretization of the continuous saddle-point system on the mesh $\mathcal{T}_{h_\ell}$.
With this notation, the V-cycle collective multigrid is described by Algorithm \ref{Alg:algorithm-Vcycle}, which can be repeated until a certain stopping criterion is satisfied. We use the notation \textit{Collective\_Smoothing$(\cdot,\cdot,\cdot)$} to denote possible variants of \eqref{eq:collective_smoothing} (e.g. Gauss-Seidel).
\begin{algorithm}[t]
\setlength{\columnwidth}{\linewidth}
\caption{V-cycle Collective Multigrid Algorithm - V-cycle($\mathbf{x}^{0}$,$\mathbf{f}$,$\ell$)}
\begin{algorithmic}[1]\label{Alg:algorithm-Vcycle}
\IF{$\ell=\ell_{\min}$,}
\STATE set $\mathbf{x}^{0}=S^{-1}_{\ell_{\min}}\mathbf{f}.$ $\qquad \qquad \quad \quad \; \:$ $\;\;\;\;\;\;\;\; \; \; \; $ (direct solver)
\ELSE
\STATE $\mathbf{x}^{n_1}$=Collective\_Smoothing$(\mathbf{x}^0,S_\ell,n_1)$ $\;$ ($n_1$ steps of collective smoothing)
\STATE $\mathbf{r} = \mathbf{f}-S_{\ell}\mathbf{x}^{n_1}$ $\qquad \qquad \qquad \quad \; \,$ $\;\;\;\;\;\;\;\; \; \; \; \;$(compute the residual)
\STATE ${\bf e}_c=$V-cycle(${\bf 0}, R^\ell_{\ell-1}{\bf r},\ell-1$).$\quad \; \; \:$ $\;\;\;\;\;\; \; \; \; \;$(recursive call)
\STATE $\mathbf{x}^{0} = \mathbf{x}^{n_1}+P^\ell_{\ell-1}{\bf e}_c$ $\qquad \qquad \quad \; \: $ $\;\;\;\;\;\;\; \; \; \; \;$(coarse correction)
\STATE $\mathbf{x}^{n_2}$=Collective\_Smoothing$(\mathbf{x}^0,S_\ell,n_2)$ $\;$ ($n_2$ steps of collective smoothing)
\STATE Set $\mathbf{x}^{0}=\mathbf{x}^{n_2}$ $\qquad \qquad \qquad \quad \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;$ (update)
\ENDIF
\RETURN $\mathbf{x}^{0}$.
\end{algorithmic}
\end{algorithm}
Notice that \eqref{eq:collective_smoothing} requires inverting the matrices $\widetilde{S}_i$ for each computational node. We now show that this can be done with optimal $O(N)$ complexity. Indeed, performing a Schur complement on $u_i$, the system $\widetilde{S}_i\mathbf{x}_i=\mathbf{f}_i$, with $\mathbf{f}_i=(\mathbf{f}_{p_i},b_{u_i},\mathbf{f}_{y_i})^\top$, can be solved by computing only inverses of diagonal matrices and scalar products between vectors, through
\begin{equation}\label{eq:solution_reduced_system}
\begin{aligned}
u_i&=\frac{b_{ui}+\mathbf{d}_i^\top(\diag{\mathbf{a}_i}^{-1}\diag{\mathbf{c}_i}\diag{\mathbf{a}_i}^{-1}\mathbf{f}_{yi}-\diag{\mathbf{a}_i}^{-1}\mathbf{f}_{pi})}{(G)_{i,i}+\mathbf{d}_i^\top \diag{\mathbf{a}_i}^{-1}\diag{\mathbf{c}_i}\diag{\mathbf{a}_i}^{-1}\mathbf{e}_i},\\
\widetilde{\mathbf{y}}_i&=(\diag{\mathbf{a}_i})^{-1}(\mathbf{f}_{yi}-\mathbf{e}_i u_{i}),\\
\widetilde{\mathbf{p}}_i&=(\diag{\mathbf{a}_i})^{-1}(\mathbf{f}_{pi}-\diag{\mathbf{c}_i} \widetilde{\mathbf{y}}_i).
\end{aligned}
\end{equation}
Notice that we should guarantee that $\diag{\mathbf{a}_i}$ admits an inverse and that $(G)_{i,i}+\mathbf{d}_i^\top \diag{\mathbf{a}_i}^{-1}\diag{\mathbf{c}_i}\diag{\mathbf{a}_i}^{-1}\mathbf{e}_i\neq 0$. This has to be verified case by case, so we now focus on the specific matrix \eqref{eq:full_space_optimality_system}. On the one hand, the vectors $\mathbf{a}_i$ are strictly positive componentwise, since $(\mathbf{a}_i)_j=a_{\omega_j}(\phi_i,\phi_i)>0$ $\forall i=1,\dots,N_h$, $j=1,\dots,N$ (due to Assumption \ref{ass:diff}). On the other hand, $(G)_{i,i}=\nu\int_\D\phi^2_i(x)\ dx >0$, while
a direct calculation shows that
\[\mathbf{d}_i^\top \diag{\mathbf{a}_i}^{-1}\diag{\mathbf{c}_i}\diag{\mathbf{a}_i}^{-1}\mathbf{e}_i=(M_s)^3_{i,i} \sum_{j=1}^N w_j (A_j)^{-2}_{i,i}>0,\]
which implies that \eqref{eq:solution_reduced_system} is well-defined.
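In code, \eqref{eq:solution_reduced_system} amounts to a handful of vectorized operations. The following NumPy sketch (ours) takes the length-$N$ vectors $\mathbf{a}_i,\mathbf{c}_i,\mathbf{d}_i,\mathbf{e}_i$, the scalar $(G)_{i,i}$ and the right-hand side $(\mathbf{f}_{p_i},b_{u_i},\mathbf{f}_{y_i})$:
\begin{verbatim}
import numpy as np

def solve_reduced_node(a, c, d, e, Gii, f_p, b_u, f_y):
    # Schur complement on u_i; every operation costs O(N) flops.
    ainv = 1.0 / a
    denom = Gii + d @ (ainv * c * ainv * e)
    u = (b_u + d @ (ainv * c * ainv * f_y - ainv * f_p)) / denom
    y = ainv * (f_y - e * u)
    p = ainv * (f_p - c * y)
    return y, u, p
\end{verbatim}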
Assuming that the V-cycle algorithm requires a constant number of iterations to reach a given tolerance as $N_h$, $N$ and the number of levels increase (so that the size of the coarse problem becomes negligible in the limit), Algorithm \ref{Alg:algorithm-Vcycle} exhibits an overall optimal linear complexity $O(N_h N)$, since asymptotically the cost of each iteration is dominated by the $N_h$ relaxation steps, each with a cost proportional to $N$ due to the above discussion.
In the numerical experiments below, we show indeed that the number of iterations remains constant for several test cases.
\subsection{Numerical experiments}
We now show the performance of Alg. \ref{Alg:algorithm-Vcycle} and its robustness with respect to several parameters for the solution of \eqref{eq:full_space_optimality_system}. We consider the state equation
\begin{equation}\label{eq:state_equation2}
a_{\omega}(y_{\omega},v)=\int_D \kappa(x,\omega) \nabla y(x,\omega)\cdot\nabla v(x)\ dx=\int_\D u(x)v(x)\ dx,\quad \forall v\in V,\ \PP\text{-a.e. }\omega \in \Omega,
\end{equation}
on the domain $\D=(0,1)^2$, discretized with a regular mesh of squares of edge $h_{\ell}=2^{-\ell}$, each of which is then split into two right triangles. We choose $\kappa(x,\omega)$ as an approximated log-normal diffusion field
\begin{equation}\label{eq:log_normal}
\kappa(x,\omega)=e^{\sigma \sum_{j=1}^M \sqrt{\lambda_j}b_j(x)N_j(\omega)}\approx e^{g(x,\omega)},
\end{equation}
where $g(x,\omega)$ is a mean-zero Gaussian field with covariance function $Cov_g(x,y)=\sigma^2 e^{\frac{-\|x-y\|_{2}^2}{L^2}}$. The parameter $\sigma^2$ tunes the variance of the random field, while $L$ denotes the correlation length. The pairs $(b_j(x),\sigma^2\lambda_j)$ are the eigenpairs of $T:L^2(\D)\rightarrow L^2(\D)$, $(Tf)(x)=\int_\D Cov_g(x,y)f(y)\ dy$, and $N_j\sim \mathcal{N}(0,1)$. Assumption \ref{ass:diff} is satisfied since $a_{\min}(\omega)=\text{ess}\inf_{x\in \D} \kappa(x,\omega)$ and $a_{\max}(\omega)=\|\kappa(\cdot,\omega)\|_{L^\infty(\D)}$ are in $L^p(\Omega)$ for every $p<\infty$ \cite{Charrier}. We first consider $L^2=0.5$, so that setting $M=3$ in \eqref{eq:log_normal} is enough to retain $99\%$ of the variance, and we discretize the problem using the Stochastic Collocation method \cite{babuvska2010stochastic} on tensorized Gauss-Hermite quadrature nodes. The target state is $y_d=e^{y^2}\sin(2\pi x)\sin(2\pi y)$.
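For reproducibility, a rough Python sketch of the sampled field (ours, under simplifying assumptions: the covariance eigenpairs are computed by a Nystr\"om-type discretization at quadrature points \texttt{pts} with weights \texttt{area}, which only approximates the exact $(b_j,\sigma^2\lambda_j)$) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def kl_lognormal(pts, area, sigma2, L2, M, xi):
    # Generalized eigenproblem (W C W) b = lambda W b with W = diag(area).
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    C = sigma2 * np.exp(-d2 / L2)
    W = np.diag(area)
    lam, B = eigh(W @ C @ W, W)
    idx = np.argsort(lam)[::-1][:M]               # keep the M largest modes
    g = B[:, idx] @ (np.sqrt(np.maximum(lam[idx], 0.0)) * xi)
    return np.exp(g)                              # kappa(., omega) at pts
\end{verbatim}
Here \texttt{xi} is one realization of the standard Gaussian vector $(N_1,\dots,N_M)$.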
Table \ref{Tab:expected} shows the number of V-cycle iterations (Alg. \ref{Alg:algorithm-Vcycle}) and of GMRES iterations preconditioned by the V-cycle (in brackets) to solve \eqref{eq:full_space_optimality_system} up to a tolerance of $10^{-9}$ on the relative (unpreconditioned) residual.
Inside the V-cycle algorithm, we use $n_1=n_2=2$ pre- and post-smoothing iterations based on the Jacobi relaxation \eqref{eq:collective_smoothing} with a damping parameter $\theta=0.5$. The number of levels of the V-cycle hierarchy is denoted by $N_L$.
\begin{table}[h]\caption{Number of V-cycle iterations and preconditioned GMRES iterations (in brackets) to solve \eqref{eq:full_space_optimality_system}.}\label{Tab:expected}
\centering
\begin{tabular}{| c | c | c | c | c|}
\hline
$\nu$ & $ 10^{-2}$ & $10^{-4}$ & $10^{-6}$ & $10^{-8}$\\ \hline\hline
It. & 17 (11) & 19 (13) & 19 (15) & 19 (14) \\
\hline
\end{tabular}\\
\centering
$N_h=961$, $N=125$, $N_L=3$, $\sigma^2=0.5$, $L^2=0.5$.\\\vspace*{0.2cm}
\begin{tabular}{| c | c | c | c | c|}
\hline
$\sigma^2$ & 0.1 & 0.5 & 1 & 1.5\\ \hline\hline
It. & 19 (12) & 19 (13) & 19 (13) & 19 (14) \\
\hline
\end{tabular}\\
\centering
$N_h=961$, $N=125$, $N_L=3$, $\nu=10^{-4}$, $L^2=0.5$.\\\vspace*{0.2cm}
\centering
\begin{tabular}{| c | c | c | c |}
\hline
$N_h$($N_L$) & 225 (2) & 961 (3) & 3969 (4)\\ \hline\hline
It. & 18 (13) & 19 (13) & 19 (13) \\
\hline
\end{tabular}\\
\centering
$N=125$, $\nu=10^{-4}$, $\sigma^2=0.5$, $L^2=0.5$.\\\vspace{0.2cm}
\centering
\begin{tabular}{| c | c | c | c | c|}
\hline
$N$ & 8 & 27 & 64 & 125\\ \hline\hline
It. & 19 (12) & 19 (12) & 19 (13) & 19 (13) \\
\hline
\end{tabular}\\
\centering
$N_h=961$, $\nu=10^{-4}$, $\sigma^2=0.5$, $L^2=0.5$.\\\vspace{0.3cm}
\end{table}
The multigrid algorithm is remarkably robust with respect to all parameters considered. In particular, the convergence does not deteriorate as $\nu\rightarrow 0$, which is a well-known troublesome limit for several preconditioners (see, e.g., \cite{nobile_vanzan,rees2010optimal,pearson2012new,ciaramella2022iterative} and references therein). Further, the third sub-table shows the robustness of the algorithm with respect to the number of levels as the fine grid is refined. The fourth sub-table shows instead that the number of iterations remains constant as the discretization of the probability space is refined.
As a complementary experiment, we next consider $L^2=0.025$ and use a Monte Carlo quadrature formula, since Stochastic Collocation suffers from the curse of dimensionality. We also refine the spatial grid to properly capture the variations of the random diffusion coefficient. Table \ref{Tab:expected_MC} confirms the robustness of the algorithm even for much larger numbers of samples.
\begin{table}\caption{Number of V-cycle iterations and preconditioned GMRES iterations (in brackets) to solve \eqref{eq:full_space_optimality_system}.}\label{Tab:expected_MC}
\centering
\begin{tabular}{| c | c | c | c |}
\hline
$N$ & 100 & 1000 \\ \hline\hline
It. & 21 (14) & 21 (14) \\
\hline
\end{tabular}\\
\centering
$N_h=3969$, $\nu=10^{-4}$, $N_L=3$, $\sigma^2=1.5$, $L^2=0.025$.\\\vspace{0.3cm}
\end{table}
\section{An optimal control problem under uncertainty with box-constraints and $L^1$ penalization}\label{Sec:l1}
In this section, we consider the nonsmooth OCPUU
\begin{equation}\label{eq:l1_OCP}
\begin{aligned}
&\min_{u\in U_{ad}} \frac{1}{2}\E\LQ \|y_\omega(u)-y_d\|^2_{L^2(\D)}\RQ + \frac{\nu}{2}\|u\|^2_{L^2(\D)} + \beta \|u\|_{L^1(\D)},\\
&\text{where } y_\omega(u)\in V \text{ solves }\\
&a_\omega(y_\omega(u),v)=(u+f,v),\quad \forall v \in V,\ \PP\text{-a.e. } \omega\in \Omega,\\
&U_{ad}:=\left\{v\in L^2(\D): a\leq v \leq b\quad \text{almost everywhere in }\D\right\},
\end{aligned}
\end{equation}
with $a<0<b$ and $\nu,\beta>0$.
Deterministic OCPs with an $L^1$ penalization lead to sparse optimal controls, i.e., controls that are nonzero only on certain regions of the domain $\D$ \cite{stadler2009elliptic,casas2017review}. Sparse controls can be of great interest in applications, because it is often not desirable, or even impossible, to control the system over the whole domain $\D$. For sparse OCPUU, we mention \cite{li2019sparse}, where the authors considered both a simplified version of \eqref{eq:l1_OCP} in which the randomness enters linearly into the state equation as a force term, and a different optimization problem whose goal is to find a stochastic control $u(\omega)$ with a similar sparsity pattern regardless of the realization $\omega$. Notice further that the assumption $\nu>0$ does not eliminate the nonsmoothness of the objective functional, but it regularizes the optimal solution $u$ and is needed to use the fast optimization algorithm described in the following.
The well-posedness of \eqref{eq:l1_OCP} follows directly from standard variational arguments \cite{troltzsch2010optimal,hinze2008optimization}, since $U_{ad}$ is a convex set, $\varphi(u):=\beta \|u\|_{L^1(\D)}$ is a convex function, and the objective functional is coercive. In particular, the optimal solution $\overline{u}$ satisfies the variational inequality (\cite[Proposition 2.2]{ekeland1999convex})
\begin{equation}\label{eq:variational_inequality}
(\nu \overline{u} -S^\star(y_d-S(\overline{u}+f)),\overline{u} -v)+\varphi(\overline{u})-\varphi(v)\geq 0,\quad \forall v\in U_{ad}.
\end{equation}
Through a pointwise discussion of the box constraints and an analysis of a Lagrange multiplier belonging to the subdifferential of $\varphi$ at $\overline{u}$, \cite{stadler2009elliptic} showed that \eqref{eq:variational_inequality} can be equivalently formulated as the nonlinear equation $\mathcal{F}(\overline{u})=0$, with $\mathcal{F}:L^2(\D)\rightarrow L^2(\D)$ defined as
\begin{equation}\label{eq:nonlinear_optimality_condition}
\medmuskip=-1mu
\thinmuskip=-1mu
\thickmuskip=-1mu
\nulldelimiterspace=0.9pt
\scriptspace=0.9pt
\arraycolsep0.9em
\mathcal{F}(u):=u-\frac{1}{\nu}\left(\max(0,\mathcal{T}u-\beta)+\min(0,\mathcal{T}u+\beta)-\max(0,\mathcal{T}u-\beta-\nu b)-\min(0,\mathcal{T}u +\beta-\nu a)\right),
\end{equation}
where $\mathcal{T}:L^2(\D)\ni u \rightarrow -S^\star(Su)+ S^\star(y_d-Sf)\in L^2(\D)$.
Notice that $\mathcal{F}$ is nonsmooth due to the presence of the Lipschitz functions $\max(\cdot)$ and $\min(\cdot)$.
Nevertheless, $\mathcal{F}$ can be shown to be semismooth \cite{hinze2008optimization}, provided that $\mathcal{T}$ is continuously Fr\'{e}chet differentiable and, further, Lipschitz continuous when interpreted as a map $\mathcal{T}:L^2(\D)\rightarrow L^r(\D)$ with $r>2$ \cite{doi:10.1137/1.9781611970692,hinze2008optimization}. These conditions are satisfied in our setting too, since $\mathcal{T}$ is affine and, further, the adjoint variable $p_\omega$, solution of \eqref{eq:adjoint_equation} with $z=y_d-S(u+f)$, lies in $L^2(\Omega,H^1_0(\D))$, so that $\mathcal{T}u=\E\LQ p_\omega\RQ\in H^1_0(\D)\subset L^r(\D)$, where $r>2$ follows from the Sobolev embeddings.
Hence, to solve \eqref{eq:nonlinear_optimality_condition} we use the semismooth Newton method whose iteration reads for $k=1,2,\dots$ until convergence,
\begin{equation}\label{eq:semismooth_problem}
u^{k+1}=u^{k}+du^k,\quad \text{with}\quad \mathcal{G}(u^k)du^k=-\mathcal{F}(u^k),
\end{equation}
$\mathcal{G}(u):L^2(\D)\rightarrow L^2(\D)$ being the generalized derivative of $\mathcal{F}$.
Using the linearity of $\mathcal{T}$ and considering the supports of the weak derivatives of $\max(0,x)$ and $\min(0,x)$, we obtain that
\begin{equation}\label{eq:generalized_differential}
\mathcal{G}(u)[v]=v+\frac{1}{\nu}\chi_{(I^+\cup I^-)}S^\star Sv,
\end{equation}
where $\chi$ is the characteristic function of the union of the disjoint sets
\[I^+=\left\{x\in \D: 0\leq \mathcal{T}u-\beta\leq \nu b \right\}\text{ and } I^-=\left\{x\in \D: \nu a\leq \mathcal{T}u+\beta\leq 0\right\}.\]
It is possible to show that the generalized derivative $\mathcal{G}(u)$ is invertible with bounded inverse for all $u$, the proof being identical to the deterministic case treated in \cite{Stadler2}. This further implies that the semismooth Newton method \eqref{eq:semismooth_problem} converges locally superlinearly \cite{doi:10.1137/1.9781611970692}.
We briefly summarize these results in the following proposition.
\begin{proposition}\label{eq:superlinear_convergence}
Let the initialization $u^0$ be sufficiently close to the solution $\overline{u}$ of \eqref{eq:l1_OCP}. Then the iterates $u^k$ generated by \eqref{eq:semismooth_problem} converge superlinearly to $\overline{u}\in L^2(\D)$.
\end{proposition}
Introducing the auxiliary variables $dy^k_\omega$ and $dp^k_\omega$ in $L^2(\Omega;H^1_0(\D))$, the semismooth Newton equation $\mathcal{G}(u^k)du^k=-\mathcal{F}(u^k)$ may be rewritten as the equivalent saddle point system
\begin{equation}\label{full_space_optimality_system_semismooth1}
\begin{aligned}
& a_\omega(dy^k_\omega,v)-(du^k,v)=0,\quad \forall v\in V,\quad \PP\text{-a.e. } \omega\in \Omega,\\
& a_\omega(v,dp^k_\omega)+(dy^k_\omega,v)=0,\quad \forall v \in V,\ \PP\text{-a.e. } \omega\in \Omega,\\
& (\nu\, du^k - \chi_{(I^+\cup I^-)}\E\LQ dp^k_\omega\RQ,v)_{L^2(\D)}=-(\mathcal{F}(u^k),v)_{L^2(\D)},\quad \forall v \in L^2(\D).
\end{aligned}
\end{equation}
Further, if we set $y^0=S(f+u^0)$ and $p^0=S^\star(y_d-y^0)$, due to the linearity of $S$ and $S^\star$, it holds that $y^{k+1}=S(f+u^{k+1})=y^k+dy^{k}$ and similarly $p^{k+1}=p^k+dp^k$.
Once fully discretized and using the notation $\EQ\LQ p_\omega\RQ=\sum_{j=1}^N w_j \mathbf{p}_{j}$, the optimality condition \eqref{eq:nonlinear_optimality_condition} can be expressed through the nonlinear finite-dimensional map $\mathbf{F}:\setR^{N_h}\rightarrow \setR^{N_h}$,
\begin{equation}
\begin{aligned}
\mathbf{F}(\mathbf{u})&=\mathbf{u}-\frac{1}{\nu}\Bigl(\max(0,\EQ\LQ \mathbf{p}_\omega\RQ-\beta)+\min(0,\EQ\LQ \mathbf{p}_\omega\RQ+\beta)\\
&-\max(0,\EQ\LQ \mathbf{p}_\omega\RQ-\beta-\nu b)-\min(0,\EQ\LQ \mathbf{p}_\omega\RQ +\beta-\nu a)\Bigl),
\end{aligned}
\end{equation}
where the $\max(\cdot)$ and $\min(\cdot)$ functions act componentwise.
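A direct NumPy transcription of $\mathbf{F}$ (our own sketch; \texttt{Ep} stands for the vector $\EQ\LQ \mathbf{p}\RQ=\sum_{j=1}^N w_j \mathbf{p}_{j}$) reads:
\begin{verbatim}
import numpy as np

def F_residual(u, Ep, nu, beta, a, b):
    # Componentwise evaluation of the nonsmooth optimality residual.
    clip = (np.maximum(0.0, Ep - beta) + np.minimum(0.0, Ep + beta)
            - np.maximum(0.0, Ep - beta - nu * b)
            - np.minimum(0.0, Ep + beta - nu * a))
    return u - clip / nu
\end{verbatim}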
Equation \eqref{full_space_optimality_system_semismooth1} leads to the saddle point system
\begin{equation}\label{eq:full_space_optimality_system_semismooth2}
\begin{pmatrix}
M_s & & & & A_1^\top\\
& \ddots & & & &\ddots\\
& & M_s & & & & A_N^\top\\
& & & M_s & -w_1 M_s H^k&\dots & -w_N M_s H^k\\
A_1 & & & -M_s\\
& \ddots & &\vdots\\
& & A_N & -M_s
\end{pmatrix}
\begin{pmatrix}
\mathbf{dy}^k_1\\ \vdots \\\mathbf{dy}^k_N\\ \mathbf{du}^k\\
\mathbf{dp}^k_1\\ \vdots \\ \mathbf{dp}^k_N
\end{pmatrix}=
\begin{pmatrix}
\mathbf{0}\\ \vdots \\
\mathbf{0}\\ -\mathbf{F}(\mathbf{u}^{k}) \\ \mathbf{0}\\ \vdots \\\mathbf{0}
\end{pmatrix},
\end{equation}
where $H^k\in \mathbb{R}^{N_h\times N_h}$ is a diagonal matrix representing the characteristic function $\chi_{I_k^+\cup I_k^-}$, namely
\[(H^k)_{i,i}=\frac{1}{\nu} \text{ if }i\in I_k^+\cup I_k^-\quad \text{ and }\quad (H^k)_{i,i}=0 \text{ if }i\notin I_k^+\cup I_k^-,\]
with
\begin{equation}\label{eq:activesets}
I_k^+=\left\{i: 0\leq (\EQ\LQ \mathbf{p}^k\RQ)_i -\beta\leq \nu b \right\}\text{ and } I_k^-=\left\{i: \nu a\leq (\EQ\LQ \mathbf{p}^k\RQ)_i +\beta\leq 0\right\}.
\end{equation}
To derive the expression of $H^k$, we assumed that a Lagrangian basis is used for the finite element space.
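In code, the active sets and the diagonal of $H^k$ can be obtained componentwise, as in the following sketch (ours; \texttt{Ep} is again $\EQ\LQ \mathbf{p}^k\RQ$):
\begin{verbatim}
import numpy as np

def active_sets_and_H(Ep, nu, beta, a, b):
    Iplus = (Ep - beta >= 0.0) & (Ep - beta <= nu * b)
    Iminus = (Ep + beta >= nu * a) & (Ep + beta <= 0.0)
    H_diag = np.where(Iplus | Iminus, 1.0 / nu, 0.0)  # diagonal of H^k
    return Iplus, Iminus, H_diag
\end{verbatim}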
Notice that \eqref{eq:full_space_optimality_system_semismooth2} fits into the general form \eqref{eq:matrix_saddle_point}, and thus we use the collective multigrid algorithm to solve it. Further, with the notation of \eqref{eq:matrix_saddle_point}, it holds
\[
(G)_{i,i}+\mathbf{d}_i^\top \diag{\mathbf{a}_i}^{-1}\diag{\mathbf{c}_i}\diag{\mathbf{a}_i}^{-1}\mathbf{e}_i=
(M_s)_{i,i}+\frac{1}{\nu}(M_s)^3_{i,i} \sum_{j=1}^N w_j (A_j)^{-2}_{i,i}>0 \]
if $i\in I^+\cup I^-$, and
\[(G)_{i,i}+\mathbf{d}_i^\top \diag{\mathbf{a}_i}^{-1}\diag{\mathbf{c}_i}\diag{\mathbf{a}_i}^{-1}\mathbf{e}_i=(M_s)_{i,i}>0,\]
if $i\notin I^+\cup I^-$.
The collective multigrid iteration is then well-defined.
The overall semismooth Newton method is summarized in Algorithm \ref{Alg:semismoothnewton}. At each iteration we solve \eqref{eq:full_space_optimality_system_semismooth2} using the collective multigrid algorithm (line 3) and update the active sets given the new iterate (line 9).
Notice that in order to globalize the convergence, we consider a line-search step (lines 5-7) performed on the merit function $\phi(\mathbf{u})=\sqrt{\mathbf{F}(\mathbf{u})^\top M_s \mathbf{F}(\mathbf{u})}$ \cite{martinez1995inexact}.
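A minimal sketch (ours) of the merit function and of the backtracking loop of lines 5--7, assuming a callable \texttt{F} that evaluates $\mathbf{F}(\mathbf{u})$, is:
\begin{verbatim}
import numpy as np

def merit(u, Ms, F):
    r = F(u)
    return np.sqrt(r @ (Ms @ r))

def backtrack(u, du, Ms, F, sigma=1e-4, rho=0.5):
    # Shrink gamma until sufficient decrease of the merit function.
    gamma, phi0 = 1.0, merit(u, Ms, F)
    while merit(u + gamma * du, Ms, F) - phi0 > -sigma * phi0 and gamma > 1e-12:
        gamma *= rho
    return gamma
\end{verbatim}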
\begin{algorithm}[t]
\setlength{\columnwidth}{\linewidth}
\caption{Globalized semismooth Newton Algorithm to solve $\mathbf{F}(\mathbf{u})=0$}
\begin{algorithmic}[1]\label{Alg:semismoothnewton}
\REQUIRE $\mathbf{u}^0$, $\text{Tol}\in \mathbb{R}^+$, $\sigma,\rho\in (0,1)$.
\STATE $\mathbf{y}^0_j=A_j^{-1}(M_s(\mathbf{f}+\mathbf{u}^0))$, $\mathbf{p}^0_j=\left(A^\top_j\right)^{-1}(M_s(\mathbf{y}_d-\mathbf{y}^0_j))$, $j=1,\dots,N$.\\
Set $k=0$ and define $I_0^+$ and $I_0^-$ using \eqref{eq:activesets}.
\WHILE {$\phi(\mathbf{u}^k)>\text{Tol}$}
\STATE Solve \eqref{eq:full_space_optimality_system_semismooth2} calling Alg. \ref{Alg:algorithm-Vcycle} until convergence.
\STATE Set $\gamma=1$
\WHILE {$\phi(\mathbf{u}^k+\gamma\mathbf{du}^k)-\phi(\mathbf{u}^k)>-\sigma\phi(\mathbf{u}^k)$}
\STATE $\gamma=\rho\gamma$.
\ENDWHILE
\STATE Update $\mathbf{u}^{k+1}=\mathbf{u}^{k}+\gamma \mathbf{du}^k$, $\mathbf{y}^{k+1}_j=\mathbf{y}_j^{k}+\gamma\mathbf{dy}_j^k$, $\mathbf{p}^{k+1}_j=\mathbf{p}_j^{k}+\gamma\mathbf{dp}_j^k$, $j=1,\dots,N$.
\STATE Update $I_k^+$ and $I_k^-$ using \eqref{eq:activesets}.
\STATE Set $k=k+1$.
\ENDWHILE
\RETURN $\mathbf{u}^{k}, \mathbf{y}_j^{k}$ and $\mathbf{p}_j^{k}$, $j=1,\dots,N$.
\end{algorithmic}
\end{algorithm}
\subsection{Numerical experiments}
In this section we test the semismooth Newton algorithm for the solution of \eqref{eq:nonlinear_optimality_condition} and the collective multigrid algorithm to solve the related optimality system \eqref{eq:full_space_optimality_system_semismooth2}.
We consider the random PDE-constraint \eqref{eq:state_equation2} with the random diffusion coefficient \eqref{eq:log_normal}. The semismooth iteration is stopped when $\phi(\mathbf{u}^k)<10^{-9}$. The inner linear solvers are stopped when the relative (unpreconditioned) residual is smaller than $10^{-11}$.
Table \ref{Tab:l1} reports the number of semismooth Newton iterations and, in brackets, the average number of iterations of the V-cycle algorithm used as a solver (left) or as a preconditioner for GMRES (right).
Table \ref{Tab:l1} confirms the effectiveness of the multigrid algorithm, which requires essentially the same computational effort as in the linear-quadratic case. Further, except for large values of $\sigma^2$, we numerically observed that the line search could be omitted without compromising the convergence of the outer iteration.
\begin{table}[h]\caption{Number of semismooth Newton iterations, and average number of V-cycle iterations and preconditioned GMRES iterations (in brackets).}\label{Tab:l1}
\centering
\begin{tabular}{| c | c | c | c | c|}
\hline
$\sigma^2$ & 0.1 & 0.5 & 1 & 1.5\\ \hline\hline
It. & 3 (22.33-10.3) & 4 (22.5-10.5) & 7(22.8-11.7) & 8 (23.1-11.7) \\
\hline
\end{tabular}\\
\centering
$N_h=961$, $\nu=10^{-4}$, $\beta=10^{-2}$, $N=125$, $N_L=3$, $L^2=0.5$, $b=50$, $a=-50$.\\\vspace*{0.2cm}
\centering
\begin{tabular}{| c | c | c | c |}
\hline
$N_h$($N_L$) & 225 (2) & 961 (3) & 3969 (4)\\ \hline\hline
It. & 4 (21.75-11.25) & 4 (22.5-10.5) & 4 (22.5-10.25) \\
\hline
\end{tabular}\\
\centering
$\nu=10^{-4}$, $\beta=10^{-2}$, $N=125$, $\sigma^2=0.5$, $L^2=0.5$, $b=50$, $a=-50$.\\\vspace{0.2cm}
\centering
\begin{tabular}{| c | c | c | c | c|}
\hline
$N$ & 8 & 27 & 64 & 125\\ \hline\hline
It. & 4 (21-9.7) & 4 (21.5-10.5) & 5 (22.2-9.6) & 4 (22.5-10.5) \\
\hline
\end{tabular}\\
\centering
$N_h=961$, $\nu=10^{-4}$, $\beta=10^{-2}$, $\sigma^2=0.5$, $L^2=0.5$, $b=50$, $a=-50$.\\\vspace{0.2cm}
\centering
\begin{tabular}{| c | c | c | c | c|}
\hline
$\beta$ & 0 & $10^{-4}$ & $10^{-3}$ & $10^{-2}$\\ \hline\hline
It. & 5 (22.2-10.2) & 4 (22-11) & 5 (22-11) & 4 (22.5-10.5) \\
\hline
\end{tabular}\\
\centering
$N_h=961$, $\nu=10^{-4}$, $N=125$, $\sigma^2=0.5$, $L^2=0.5$, $b=50$, $a=-50$.\\\vspace{0.3cm}
\end{table}
More challenging is the limit $\nu\rightarrow 0$, reported in Table \ref{Tab:l1_nu}: the performance of both the (globalized) semismooth Newton iteration and the inner multigrid solver deteriorates. The convergence of the outer nonlinear algorithm can be improved by a continuation strategy, namely we consider a sequence $\nu=10^{-j}$, $j=2,\dots,8$, and start the $j$-th problem using as initial guess the optimal solution computed for $\nu=10^{-j+1}$.
Concerning the inner solver, the stand-alone multigrid algorithm struggles since for small values of $\nu$ the optimal control is of bang-bang type, that is, it satisfies $u=a$, $u=b$ or $u=0$ at almost every node of the mesh (for $\nu=10^{-8}$ only seven nodes are nonactive at the optimum). The matrices $H^{k}$ are then zero except at very few entries, and the multigrid hierarchy struggles to capture changes at such a small scale.
Nevertheless, the multigrid algorithm remains a very efficient preconditioner for GMRES even in this challenging limit.
\begin{table}\caption{Number of semismooth Newton iterations, and of V-cycle iterations and preconditioned GMRES iterations (in brackets). In the second row, the semismooth Newton method starts from a warm-up initial guess obtained through continuation.}\label{Tab:l1_nu}
\centering
\begin{tabular}{| c | c | c | c | c|}
\hline
$\nu$ & $ 10^{-2}$ & $10^{-4}$ & $10^{-6}$ & $10^{-8}$\\ \hline\hline
It. & 2 (22-9.5) & 4 (22.5-10.5) & 23 (27.5-12.3) & 52 (45.1-13.7) \\
It. & 2 (22-9.5) & 4 (22-10.5) & 5 (27.6-11.8) & 14 (68.07-17.2)\\
\hline
\end{tabular}\\
\centering
$N_h=961$, $N=125$, $N_L=3$, $\sigma^2=0.5$, $L^2=0.5$, $b=50$, $a=-50$.\\\vspace*{0.2cm}
\end{table}
Fig. \ref{Fig:betas} shows a sequence of optimal controls for different values of $\beta$ with and without box-constraints. The optimal control for $\beta=0$ and without box-constraints corresponds to the minimizer of the linear-quadratic OCP \eqref{eq:quadratic_OCP}. We observe that $L^1$ penalization indeed induces sparsity, since the optimal controls are more and more localized as $\beta$ increases.
Numerically we have verified that for sufficiently large $\beta$, the optimal control is identically equal to zero, a property shown in \cite{stadler2009elliptic}.
\begin{figure}
\includegraphics[scale=0.28]{betabox_0.eps}
\includegraphics[scale=0.28]{betabox_0005.eps}
\includegraphics[scale=0.28]{betabox_005.eps}\\
\includegraphics[scale=0.28]{beta_0.eps}
\includegraphics[scale=0.28]{beta_0005.eps}
\includegraphics[scale=0.28]{beta_005.eps}
\caption{From left to right: optimal control computed for $\beta\in \left\{0,5\cdot 10^{-3},5\cdot 10^{-2}\right\}$ with (top row) and without (bottom row) box constraints: $a=-50$, $b=50$.}\label{Fig:betas}
\end{figure}
\section{A risk-averse optimal control problem under uncertainty}\label{Sec:risk}
In this section we consider an instance of risk-averse OCPUU. This class of problems has recently drawn a lot of attention, since in engineering applications it is important to compute a control that minimizes the quantity of interest even in rare, but often troublesome, scenarios \cite{Kouri_Cvar,Kouri_ex,antil2021ttrisk}.
As a risk measure \cite{shapiro2014lectures}, we use the Conditional Value-at-Risk (CVaR) of confidence level $\lambda$, $\lambda \in (0,1)$,
\[\Cvar{X}:= \E\LQ X|X\geq \Var{X}\RQ,\quad \forall X\in L^1(\Omega;\setR),\]
that is, the expected value of a quantity of interest $X$ given that the latter is greater than or equal to its $\lambda$-quantile, here denoted by $\Var{X}$.
Rockafellar and Uryasev \cite{rockafellar2000optimization} proved that $\Cvar{X}$ admits the equivalent formulation
\begin{equation}\label{eq:Cvarequidef}
\Cvar{X}= \inf_{t\in \setR}\left\{t+\frac{1}{1-\lambda}\E\LQ (X-t)^+\RQ \right\},
\end{equation}
where $(\cdot)^+:=\max(0,\cdot)$, if the distribution of $X$ does not have an atom at $\Var{X}$. In order to use tools from smooth optimization, we
rely on a smoothing approach proposed in \cite{Kouri_Cvar}, which consists in replacing $(\cdot)^+$ with a smooth function $g_\varepsilon$, $\varepsilon\in\setR^+$, such that $g_\varepsilon\rightarrow (\cdot)^+$ in some functional norm as $\varepsilon\rightarrow 0$.
Specifically, we choose the $C^2$-differentiable approximation
\begin{equation}\label{eq:smoothapproximation}
g_\varepsilon(x)=\begin{cases}
0\quad &\text{if } x\leq -\frac{\varepsilon}{2},\\
\frac{(x+\frac{\varepsilon}{2})^3}{\varepsilon^2}-\frac{(x+\frac{\varepsilon}{2})^4}{2\varepsilon^3}\quad &\text{if } x\in(-\frac{\varepsilon}{2},\frac{\varepsilon}{2}),\\
x\quad &\text{if }x\geq \frac{\varepsilon}{2}.
\end{cases}
\end{equation}
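For completeness, a vectorized NumPy sketch (ours, consistent with the expression above) of $g_\varepsilon$ and of the derivatives $g_\varepsilon^\prime$, $g_\varepsilon^{\prime\prime}$ needed below is:
\begin{verbatim}
import numpy as np

def g_eps(x, eps):
    t = x + eps / 2.0
    mid = t**3 / eps**2 - t**4 / (2.0 * eps**3)
    return np.where(x <= -eps / 2, 0.0, np.where(x >= eps / 2, x, mid))

def dg_eps(x, eps):
    t = x + eps / 2.0
    mid = 3.0 * t**2 / eps**2 - 2.0 * t**3 / eps**3
    return np.where(x <= -eps / 2, 0.0, np.where(x >= eps / 2, 1.0, mid))

def d2g_eps(x, eps):
    t = x + eps / 2.0
    mid = 6.0 * t / eps**2 - 6.0 * t**2 / eps**3
    return np.where(np.abs(x) >= eps / 2, 0.0, mid)
\end{verbatim}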
Then, the smoothed risk-averse OCPUU is
\begin{equation}\label{eq:OCP}
\begin{aligned}
&\min_{u\in L^2(\D),t\in \setR} t+\frac{1}{1-\lambda} \E\LQ g_\varepsilon\left(\frac{1}{2}\|y_\omega-y_d\|^2_{L^2(\D)}-t\right)\RQ+\frac{\nu}{2}\|u\|^2_{L^2(\D)},\\
&\text{where } y_\omega\in V \text{ solves}\\
&a_\omega(y_\omega,v)=(u+f,v)\quad \forall v\in V,\ \PP\text{-a.e. }\omega \in \Omega,
\end{aligned}
\end{equation}
where $\nu \in \setR^+$ and $\lambda\in [0,1)$.
The well-posedness of \eqref{eq:OCP}, the differentiability of its objective functional, as well as bounds for the error introduced by replacing $(\cdot)^+$ with $g_{\varepsilon}(\cdot)$, have been analyzed in \cite{Kouri_Cvar}. Further, defining $Q_\omega=\frac{1}{2}\|y_\omega-y_d\|^2_{L^2(\D)}-t$, the optimality conditions form the nonlinear system,
\begin{equation}\label{eq:optmality_system_smoothed}
\begin{array}{r l r l}
& a_\omega(v,p_\omega)-\frac{g^\prime_\varepsilon\left(Q_\omega\right)}{1-\lambda}(y_d-y_\omega,v)=0,\quad &\forall v \in V,\ \PP\text{-a.e. } \omega\in \Omega,\\
&(\nu\, u -\E\LQ p_\omega\RQ,v)=0,\quad &\forall v\in L^2(\D),\\
&a_\omega(y_\omega,v)-(u+f,v)=0,\quad &\forall v\in V,\ \PP\text{-a.e. } \omega\in \Omega,\\
&1-\frac{1}{1-\lambda}\E\LQ g^\prime_\varepsilon\left(Q_\omega\right)\RQ=0.\quad &
\end{array}
\end{equation}
Approximating $V$ and $\E$ with $V_h$ and $\EQ$, and letting $\widetilde{\mathbf{x}}=(\mathbf{y},\mathbf{u},\mathbf{p},t)$, the finite-dimensional discretization of \eqref{eq:optmality_system_smoothed} corresponds to the nonlinear system $\widetilde{\mathbf{F}}(\widetilde{\mathbf{x}})=\mathbf{0}$, where $\widetilde{\mathbf{F}}:\mathbb{R}^{(2N+1)N_h+1}\rightarrow \mathbb{R}^{(2N+1)N_h+1}$,
\begin{equation}\label{eq:discretized_nonlinearsystem}
\widetilde{\mathbf{F}}(\widetilde{\mathbf{x}})=\begin{pmatrix}
\widetilde{\mathbf{F}}_1(\widetilde{\mathbf{x}})\\\widetilde{\mathbf{F}}_2(\widetilde{\mathbf{x}})\\\widetilde{\mathbf{F}}_3(\widetilde{\mathbf{x}})\\ \widetilde{F}_4(\widetilde{\mathbf{x}})
\end{pmatrix}=\begin{pmatrix}
\widetilde{M}(\mathbf{y}-I\mathbf{y}_d)+A^\top \mathbf{p}\\ \nu M_s \mathbf{u} - M_s\EQ\LQ \mathbf{p}\RQ\\ A\mathbf{y}- (I_N\otimes M_s)\,I(\mathbf{u}+\mathbf{f})\\ 1-\frac{1}{1-\lambda}\EQ\LQ g^\prime_\varepsilon(Q_\omega)\RQ
\end{pmatrix},
\end{equation}
with $A=\diag{A_1,\dots,A_N}$, $I=[I_{N_h},\dots,I_{N_h}]^\top\in\mathbb{R}^{N_h N\times N_h}$, where $I_{N_h}$ denotes the $N_h\times N_h$ identity matrix and $I_N$ the $N\times N$ one, $\mathbf{y}_d$ is the discretization of $y_d$, and \[\widetilde{M}=\text{diag}\left(\frac{g^\prime_\varepsilon(Q_{\omega_1})}{1-\lambda}M_s,\dots,\frac{g^\prime_\varepsilon(Q_{\omega_N})}{1-\lambda}M_s\right), \text{ with }Q_{\omega_j}:=\frac{1}{2}(\mathbf{y}_j-\mathbf{y}_d)^\top M_s(\mathbf{y}_j-\mathbf{y}_d)-t,\]
for $j=1,\dots,N$.
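The scalar residual $\widetilde{F}_4$ is then cheap to evaluate; a sketch (ours, reusing \texttt{dg\_eps} from the snippet above; \texttt{Y} collects the sampled states $\mathbf{y}_j$ as columns) is:
\begin{verbatim}
import numpy as np

def F4(Y, yd, Ms, w, t, lam, eps):
    E = Y - yd[:, None]
    Q = 0.5 * np.sum(E * (Ms @ E), axis=0) - t   # Q_{omega_j}, j = 1..N
    return 1.0 - (w @ dg_eps(Q, eps)) / (1.0 - lam)
\end{verbatim}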
A possible approach to solve \eqref{eq:discretized_nonlinearsystem} is to use the Newton method, which, given $\widetilde{\mathbf{x}}^k=(\mathbf{y}^k,\mathbf{u}^k,\mathbf{p}^k,t^k)$, computes the correction $\widetilde{\mathbf{dx}}^k=(\mathbf{dy}^k,\mathbf{du}^k,\mathbf{dp}^k,dt^k)$, solution of $\widetilde{\mathbf{J}}^k\widetilde{\mathbf{dx}}^k=-\widetilde{\mathbf{F}}(\widetilde{\mathbf{x}}^k)$, where
\begin{equation}\label{eq:optmality_system_smoothed_discretized}
\widetilde{\mathbf{J}}^k:=\begin{pmatrix}
C_1(\mathbf{y}_1^k,t^k) & & & & A_1^\top & & &-\mathbf{v}^k_1\\
& \ddots & & & &\ddots & &\vdots\\
& & C_N(\mathbf{y}_N^k,t^k) & & & & A_N^\top & -\mathbf{v}^k_N\\
& & & \nu M_s & -w_1M_s &\dots & -w_N M_s & \\
A_1 & & & -M_s & & & & \\
& \ddots & &\vdots & & & & \\
& & A_N & -M_s & & & &\\
-w_1\left(\mathbf{v}_1^k\right)^\top & \dots & -w_N\left(\mathbf{v}^k_N\right)^\top & & & & & \frac{\EQ\LQ g_\varepsilon^{\prime\prime}(Q_\omega^k)\RQ}{1-\lambda}
\end{pmatrix},
\end{equation}
with
\begin{align}
Q_{\omega_i}^k&:=\frac{1}{2}(\mathbf{y}_i^k-\mathbf{y}_d)^\top M_s(\mathbf{y}_i^k-\mathbf{y}_d) -t^k,\\
C_i(\mathbf{y}_i^k,t^k)&:=\frac{1}{1-\lambda}\left(g^\prime_\varepsilon(Q_{\omega_i}^k)M_s +g_\varepsilon^{\prime\prime}(Q_{\omega_i}^k)M_s(\mathbf{y}_i^k-\mathbf{y}_d)(\mathbf{y}_i^k-\mathbf{y}_d)^\top M_s\right),\label{eq:equation_C}\\
\mathbf{v}^k_i&:=\frac{1}{1-\lambda}g_\varepsilon^{\prime\prime}(Q_{\omega_i}^k)M_s(\mathbf{y}_i^k-\mathbf{y}_d),
\end{align}
for $i=1,\dots, N$.
Unfortunately, $\widetilde{\mathbf{J}}^k$ can be singular away from the optimum, in particular whenever $\EQ\LQ g_\varepsilon^{\prime\prime}(Q_\omega^k)\RQ=0$ which, since the quadrature weights are positive and $g_\varepsilon^{\prime\prime}\geq 0$, implies
\begin{align}\label{eq:condition}
g_\varepsilon^{\prime\prime}\left(\frac{1}{2}(\mathbf{y}_j^k-\mathbf{y}_d)^\top M_s(\mathbf{y}_j^k-\mathbf{y}_d) -t^k\right)=0,\ \forall j=1,\dots,N,
\end{align}
which is not unlikely for small $\varepsilon$ since $\text{supp}(g_\varepsilon^{\prime\prime})=(-\frac{\varepsilon}{2},\frac{\varepsilon}{2})$. Splitting strategies have been proposed (e.g. \cite{Markowski2022} in a reduced approach) in which, whenever \eqref{eq:condition} is satisfied, an intermediate value of $t$ is computed by solving $\widetilde{F}_4(t;\mathbf{y},\mathbf{u},\mathbf{p})=0$ so as to violate \eqref{eq:condition}.
In the next section, we discuss a similar splitting approach. To speed up the convergence of the outer nonlinear algorithm, we use a preconditioned Newton method based on nonlinear elimination \cite{doi:10.1137/S106482759325154X}. At each iteration we will need to invert saddle-point matrices like \eqref{eq:matrix_saddle_point}, possibly several times. To do so, we rely on the collective multigrid algorithm.
\subsection{Nonlinear preconditioned Newton method}
Nonlinear elimination is a nonlinear preconditioning technique based on the identification of the variables and equations of $\mathbf{F}$ (e.g. strong nonlinearities) that slow down the convergence of the Newton method. These components are then eliminated through the solution of a local nonlinear problem at every step of an outer Newton iteration. This elimination step provides a better initial guess for the outer iteration, so that faster convergence is achieved \cite{doi:10.1137/S106482759325154X,doi:10.1137/15M104075X}.
In light of the possible singularity of $\widetilde{\mathbf{J}}$, we split the discretized variables $\widetilde{\mathbf{x}}$ into $\widetilde{\mathbf{x}}=(\mathbf{x},t)$, and we aim to eliminate the variables $\mathbf{x}$ to obtain a scalar nonlinear equation only for $t$. To do so, we partition \eqref{eq:optmality_system_smoothed} as
\begin{equation}\label{eq:partitioned}
\widetilde{\mathbf{F}}\begin{pmatrix}
\mathbf{x}\\ t
\end{pmatrix}=\begin{pmatrix}
\mathbf{F}_1(\mathbf{x},t)\\ F_2(\mathbf{x},t)
\end{pmatrix}=\begin{pmatrix}
\mathbf{0}\\0
\end{pmatrix},
\end{equation}
where $\mathbf{F}_1=(\widetilde{\mathbf{F}}_1(\mathbf{x},t),\widetilde{\mathbf{F}}_2(\mathbf{x},t),\widetilde{\mathbf{F}}_3(\mathbf{x},t))$ and $F_2(\mathbf{x},t)=\widetilde{F}_4(\mathbf{x},t)$.
Similarly, $\widetilde{\mathbf{J}}$ is partitioned into
\[\widetilde{\mathbf{J}}=\begin{pmatrix}
\mathbf{J}_{1,1} & \mathbf{J}_{1,2}\\
\mathbf{J}_{2,1} & J_{2,2}
\end{pmatrix}\]
whose blocks have dimensions $\mathbf{J}_{1,1}\in \setR^{(2N+1)N_h\times (2N+1)N_h}$, $\mathbf{J}_{1,2}\in \setR^{(2N+1)N_h\times 1}$, $\mathbf{J}_{2,1}\in \setR^{1\times (2N+1)N_h}$, and $J_{2,2}\in \setR$.
Notice that $\mathbf{J}_{1,1}$ is always nonsingular, while $\mathbf{J}_{2,1}$, $\mathbf{J}_{1,2}$ and $J_{2,2}$ are identically zero if \eqref{eq:condition} is verified.
Thus $\mathbf{F}_1$ allows us to define an implicit map $h:\setR\rightarrow \setR^{(2N+1)N_h}$ such that $\mathbf{F}_1(h(t),t)=0$, so that the first set of nonlinear equations in \eqref{eq:partitioned} is satisfied. We are then left to solve the nonlinear scalar equation
\begin{equation}\label{eq:reduced_nonlinear}
F(t)=0,\quad\text{where}\quad F(t):=F_2(h(t),t).
\end{equation}
To do so using the Newton method, we need the derivative of $F(t)$ evaluated at $t=t^k$ which, using implicit differentiation, can be computed as
\[F^\prime(t^k)=J_{2,2}(h(t^k),t^k)-\mathbf{J}_{2,1}(h(t^k),t^k)\left(\mathbf{J}_{1,1}(h(t^k),t^k)
\right)^{-1}\mathbf{J}_{1,2}(h(t^k),t^k).\]
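In code, this is a single (possibly inexact) linear solve with $\mathbf{J}_{1,1}$; a sketch (ours), where the action of $\mathbf{J}_{1,1}^{-1}$ is abstracted as a callable, e.g. a few V-cycles of Algorithm \ref{Alg:algorithm-Vcycle}, is:
\begin{verbatim}
def dF_dt(J11_solve, J12, J21, J22):
    # J11_solve: callable v -> J11^{-1} v (e.g. collective multigrid).
    return J22 - J21 @ J11_solve(J12)
\end{verbatim}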
The nonlinear preconditioned Newton method is described in Alg. \ref{Alg:preconditioned_newton}, and consists in solving \eqref{eq:reduced_nonlinear} with the Newton method. However, to overcome the possible singularity of $J^k_{2,2}$, $\mathbf{J}^k_{1,2}$ and $\mathbf{J}^k_{2,1}$, we check at each iteration $k$ whether \eqref{eq:condition} is satisfied; in the affirmative case we update $\mathbf{x}^k$ by solving $\mathbf{F}_1(\mathbf{x}^{k+1},t^k)=0$ with the Newton method, and update $t^k$ by solving $F_2(\mathbf{x}^k,t^{k+1})=0$.
Notice further that each iteration of the backtracking line search requires solving $\mathbf{F}_1(h(t),t)=0$ with the Newton method, so that additional linear systems with matrix $\mathbf{J}_{1,1}$ must be solved.
We also tried to eliminate $t$ by computing the map $l$ such that $F_2(\mathbf{x},l(\mathbf{x}))=0$, while iterating on the variable $\mathbf{x}$. This has the advantage that $l$ can be evaluated very cheaply, being defined by a scalar equation. However, we needed many more iterations both of the outer Newton method and, consequently, of the inner linear solver. Thus, in our experience, this second approach proved less efficient and appealing.
\begin{algorithm}[t]
\setlength{\columnwidth}{\linewidth}
\caption{Nonlinear preconditioned Newton method to solve $\widetilde{\mathbf{F}}(\widetilde{\mathbf{x}})=0$.}
\begin{algorithmic}[1]\label{Alg:preconditioned_newton}
\REQUIRE $t^0$, $\text{Tol}\in \mathbb{R}^+$, $\sigma,\rho\in (0,1)$.
\STATE Compute $\mathbf{x}^0=h(t^0)$ solving $\mathbf{F}_1(\mathbf{x}^0;t^0)=0$ using the Newton method.
\STATE Set $k=0$.
\WHILE {$|F(t^k)|>\text{Tol}$}
\IF {\eqref{eq:condition} is satisfied}
\STATE Compute $\mathbf{x}^{k+1}$ and $t^{k+1}$ solving $\mathbf{F}_1(\mathbf{x}^{k+1};t^k)=0$ and $\widetilde{F}_4(\mathbf{x}^k;t^{k+1})=0$.
\ELSE
\STATE Compute the Newton direction $d=-(F^\prime(t^k))^{-1}F(t^k)$. Set $\gamma=1$ and compute $\mathbf{x}=h(t^k+\gamma d)$ solving $\mathbf{F}_1(\mathbf{x};t^k+\gamma d)=0$.
\WHILE {$|F(t^k+\gamma d)|-|F(t^k)|>-\sigma|F(t^k)|$}
\STATE Set $\gamma=\rho\gamma$.
\STATE Compute $\mathbf{x}=h(t^k+\gamma d)$ solving $\mathbf{F}_1(\mathbf{x};t^k+\gamma d)=0$.
\ENDWHILE
\STATE Set $t^{k+1}=t^k+\gamma d$, $\mathbf{x}^{k+1}=\mathbf{x}$, $k=k+1$.
\ENDIF
\ENDWHILE
\RETURN $t^{k+1}$ and $\mathbf{x}^{k+1}$.
\end{algorithmic}
\end{algorithm}
\subsection{Numerical experiments}
In this section we report numerical tests to assess the performance of the preconditioned Newton algorithm to solve \eqref{eq:discretized_nonlinearsystem}, and of the collective multigrid algorithm to invert the matrix $\mathbf{J}_{1,1}$.
We consider the random PDE-constraint \eqref{eq:state_equation2} with the random diffusion coefficient \eqref{eq:log_normal}.
Table \ref{Tab:cvar} reports the number of outer and inner Newton iterations, and the average number of V-cycle iterations and of preconditioned GMRES iterations needed to solve the linear systems at each (inner/outer) Newton iteration. The outer Newton iteration is stopped when $|F(t^k)|\leq 10^{-6}$, the inner Newton method to compute $h(\cdot)$ is stopped when $\|\mathbf{F}_{1}(\mathbf{x};t^k)\|_2\leq 10^{-9}$, and the linear solvers are stopped when the (unpreconditioned) residual is smaller than $10^{-10}$.
In Table \ref{Tab:cvar}, the number of outer Newton iterations is quite stable, while the number of inner Newton iterations varies between five and twenty per outer iteration, essentially depending on how difficult it is to compute the nonlinear map $h(t)$ by solving $\mathbf{F}_1(\mathbf{x};t^k)=0$ in lines (5), (8) and (11) of Alg. \ref{Alg:preconditioned_newton}. The average number of inner linear solver iterations is quite stable as well. We emphasize that the top-left blocks of $\mathbf{J}_{1,1}$ involve the matrices $C_i(\mathbf{y}^k_i,t^k)$ (see \eqref{eq:equation_C}), which contain a dense low-rank term whenever $g_\varepsilon^{\prime\prime}(Q^k_{\omega_i})\neq 0$. Further, as $\varepsilon\rightarrow 0$, $g_\varepsilon^{\prime\prime}(\cdot)$ tends to a Dirac delta, so the dense term becomes dominant.
Multigrid methods based on pointwise relaxation are not expected to be very efficient for these matrices, which may not be diagonally dominant.
The standard V-cycle algorithm indeed struggles; the Krylov acceleration, however, performs much better, as it handles these low-rank perturbations with less effort. We sometimes noticed that the GMRES residual stagnates after 20--30 iterations at around $10^{-8}$--$10^{-9}$, due to a loss of orthogonality in the Krylov subspace. We allowed a maximum of 80 GMRES iterations per linear system.
\begin{table}[h]\caption{Number of preconditioned Newton iterations, and of V-cycle iterations and preconditioned GMRES iterations (in brackets).}\label{Tab:cvar}
\centering
\centering
\begin{tabular}{| c | c | c | c |}
\hline
$N_h$($N_L$) & 81 (2) & 361 (3) & 1521 (4)\\ \hline\hline
It. & 6-61-(33.4-18.2) & 7-74-(34.2-16.7) & 7-79-(30.1-20.7) \\
\hline
\end{tabular}\\
\centering
$\nu=10^{-4}$, $\varepsilon=10^{-2}$, $N=125$, $\sigma^2=0.5$, $L^2=0.5$.\\\vspace{0.2cm}
\centering
\begin{tabular}{| c | c | c | c | c|}
\hline
$N$ & 8 & 27 & 64 & 125\\ \hline\hline
It. & 4-20-(19.34-13.16) & 8-188-(32.3-17.3) & 5-50-(27.85-15.73) & 7-85-(22.9-14.8) \\
\hline
\end{tabular}\\
\centering
$N_h=961$, $\nu=10^{-4}$, $\varepsilon=10^{-2}$, $\lambda=0.95$, $\sigma^2=1$, $L^2=0.5$.\\\vspace{0.2cm}
\centering
\begin{tabular}{| c | c | c | c | c|}
\hline
$\lambda$ & 0 & 0.5 & 0.95 & 0.99\\ \hline\hline
It. & 0-1-(21-15) & 6-80-(20.3-14.2) & 6-85-(22.9-14.8) & 7-124-(34.8-17.3) \\
\hline
\end{tabular}\\
\centering
$N_h=961$, $\nu=10^{-4}$, $\varepsilon=10^{-2}$, $N=125$, $\sigma^2=1$, $L^2=0.5$.\\\vspace{0.3cm}
\centering
\begin{tabular}{| c | c | c | c | c|}
\hline
$\varepsilon$ & $10^{-1}$ & $10^{-2}$ & $10^{-3}$ & $10^{-4}$\\ \hline\hline
It. & 4-25-(21.12-14.8) & 2-13-(20.8-14.5) & 4-100-(27.79-15.98) & 1-27-(62.8-16.1) \\
\hline
\end{tabular}\\
\centering
$N_h=961$, $\nu=10^{-4}$, $\lambda=0.95$, $N=125$, $\sigma^2=1$, $L^2=0.5$.\\\vspace{0.3cm}
\end{table}
Figure \ref{Fig:qoi} compares the two optimal controls obtained by minimizing either $\E\LQ Q(y_\omega)\RQ$ or $\Cvar{Q(y_\omega)}$, together with the quantities of interest $Q(y_{\omega_j})$ at every collocation sample at the optimum. We remark that the risk-averse control indeed minimizes the \textit{risk} of large values of $Q(y_\omega)$, at the price of higher values of $Q(y_\omega)$ for most realizations. Notice that we used Gauss-Legendre nodes, so that not all realizations have the same weights. The more likely events are around the median, while the first 17 realizations account for around $1\%$ of the discretized probability mass.
\begin{figure}
\centering
\includegraphics[scale=0.33]{risk_neutral.eps}
\includegraphics[scale=0.33]{risk_adverse.eps}
\includegraphics[scale=0.33]{qoi2.eps}
\caption{Solution of the linear-quadratic OCP (top-left), solution of the smoothed risk-averse OCP with $\lambda=0.99$ (top-right), distribution of the quantity of interest $Q(y_\omega)$ (bottom-center).}\label{Fig:qoi}
\end{figure}
\section{Conclusion}\label{Sec:conc}
We have presented a multigrid method to solve the large saddle point linear systems that typically arise in full-space approaches to OCPUU. We tested the algorithm both as an iterative solver and as a preconditioner on three test cases: a linear-quadratic OCPUU, a nonsmooth OCPUU, and a risk-averse nonlinear OCPUU. Overall, the multigrid method shows very good performance and robustness with respect to the several parameters of the problems considered.
\section{Acknowledgements} Gabriele Ciaramella and Tommaso Vanzan are members of GNCS (Gruppo Nazionale
per il Calcolo Scientifico) of INdAM.
\bibliographystyle{siamplain}
|
{
"arxiv_id": "2302.13757",
"language": "en",
"timestamp": "2023-02-28T02:29:27",
"url": "https://arxiv.org/abs/2302.13757",
"yymm": "2302"
} |
\section{Introduction}
\IEEEPARstart{I}{ntegrated} sensing and communications (ISAC) has been recognized as a key enabling technology for next-generation wireless networks (such as 5G-Advanced (5G-A) and 6G). It pursues a deep integration between sensing and communication (S\&C) such that the two functionalities can be co-designed to improve the hardware-, spectral-, and energy-efficiency, as well as to acquire mutual performance gains \cite{pin2021integrated}.
There have been numerous waveform design schemes for ISAC, which may be categorized into two main methodologies, i.e., non-overlapped resource allocation and fully unified waveform design. The first scheme allocates orthogonal/non-overlapped wireless resources to S\&C, such that they do not interfere with each other. Nevertheless, this method suffers from poor resource efficiency. As a consequence, it is more favorable to design a fully unified ISAC waveform through the shared use of wireless resources between S\&C. In general, fully unified waveform design follows one of three schemes: sensing-centric design (SCD), communication-centric design (CCD), and joint design (JD) \cite{liu2022integrated}. The first two schemes guarantee the priority of the sensing/communication capabilities of the ISAC system, while treating the other functionality as a by-product. In contrast to SCD and CCD, JD schemes aim to design an ISAC signal from the ground up instead of relying on existing S\&C waveforms, which leads to a scalable tradeoff between S\&C \cite{feng2020china}.
While the existing JD schemes are well-designed by sophisticated approaches, they generally assume Nyquist pulse shaping in an implicit manner. To further boost the communication throughput under critical sensing constraints, one may consider leveraging faster-than-Nyquist (FTN) signaling in the ISAC waveform design. The key idea of FTN signaling is to improve the data rate by accelerating the transmitted pulses in the temporal dimension, thus violating the Nyquist criterion and introducing controllable inter-symbol interference (ISI) \cite{anderson2013faster}. For an MU-MIMO system employing FTN signaling, interference appears in both spatial and temporal domains, namely, the multi-user interference (MUI) and the ISI due to non-orthogonal pulse shaping. Conventional precoding designs address these interference issues via channel equalization techniques, e.g., zero forcing, which however ignore the fact that both categories of interference are known to the ISAC base station (BS) and may be leveraged to enhance the useful signal power \cite{spano2018faster}. The flexibility of JD-based ISAC waveform design allows us to perform symbol-level precoding (SLP) to exploit the constructive interference (CI), where transmitted signals are designed based on the knowledge of both the channel state information (CSI) and the data information. Different from conventional block-level precoding relying only on the CSI, the aim of SLP is not to cancel the interference, but rather to control it to impose a constructive effect on each information-carrying symbol, thus improving the communication signal-to-interference-plus-noise ratio (SINR) \cite{li2020tutorial}.
In this paper, we propose a novel ISAC precoding technique referred to as FTN-ISAC-SLP. It merges the strategies discussed above, thus gleaning performance improvement for S\&C from both temporal and spatial dimensions. We first introduce the system model and performance metrics for the considered MU-MIMO ISAC system employing FTN signaling, and then formulate the ISAC precoding design into an optimization problem. While the problem is non-convex in general, we propose a tailor-made successive convex approximation (SCA) method, which finds a near-optimal solution in polynomial time. Numerical results show that the proposed FTN-ISAC-SLP method achieves significant performance gain in terms of both S\&C compared to conventional Nyquist signaling and block-level precoding approaches.
\section{System Model}
We consider a narrowband MIMO ISAC BS equipped with $N_t$ transmit antennas and $N_r$ receive antennas, which is serving $K$ downlink single-antenna users while detecting targets as a monostatic radar. Without loss of generality, we assume $K < N_t$. Before formulating the FTN-ISAC-SLP problem, we first elaborate on the system model and performance metrics of both radar sensing and communications.
\begin{figure}[h]
\centering
\includegraphics[scale=0.33]{img/downlink.eps}
\caption{ISAC Downlink System}
\label{fig:system}
\end{figure}
\subsection{Signal Model}
Let $\mathbf{S}=[\mathbf{s}_1, \mathbf{s}_2, \cdots, \mathbf{s}_{K}]^{\top} \in \mathbb{C}^{K \times L}$ denote the symbol matrix to be transmitted, with $\mathbf{s}_k$ being the data stream intended for the $k$-th user with a block length $L$, and each entry being drawn from a given constellation. Unless otherwise specified, in this paper we consider a PSK constellation, since the extension to QAM constellations is straightforward \cite{li2020tutorial}. Moreover, let $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_{N_t}]^{\top} \in \mathbb{C}^{N_t \times L}$ be the precoded signal matrix, with $\mathbf{x}_n$ representing the data stream to be transmitted at the $n$-th antenna. Suppose that the precoded symbols $\mathbf{x}_i$ are passed through a root-raised-cosine (RRC) shaping filter $\varphi(t)$ with a roll-off factor $\alpha$ and a duration $T_0$. The band-limited signal is transmitted with an FTN-specific symbol interval $T = \tau T_0$, where $\tau \in (0,1]$. Under such a setting, the transmit FTN signal $x_n(t)$ can be expressed as
\begin{equation}
x_n(t) = \sum_{i=0}^{L-1}x_{n,i}\varphi(t-iT),
\end{equation}
where $x_{n,i}$ represents the $i$-th element of $\mathbf{x}_n$.
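As an illustration of this signal model, a minimal Python sketch that generates $x_n(t)$ for one antenna is given below; the RRC implementation, the roll-off $\alpha=0.3$, the factor $\tau=0.8$, and the QPSK symbols are placeholder assumptions for illustration, not parameters of the proposed scheme.
\begin{verbatim}
import numpy as np

def rrc_pulse(t, T0, alpha):
    # Root-raised-cosine pulse; the two singular points are replaced by
    # their analytic limits.
    t = np.asarray(t, dtype=float)
    x = t / T0
    num = (np.sin(np.pi * x * (1 - alpha))
           + 4 * alpha * x * np.cos(np.pi * x * (1 + alpha)))
    den = np.pi * x * (1 - (4 * alpha * x) ** 2)
    s0 = np.isclose(t, 0.0)
    s1 = np.isclose(np.abs(4 * alpha * x), 1.0)
    reg = ~(s0 | s1)
    h = np.empty_like(x)
    h[reg] = num[reg] / den[reg]
    h[s0] = 1 - alpha + 4 * alpha / np.pi
    h[s1] = (alpha / np.sqrt(2)) * (
        (1 + 2 / np.pi) * np.sin(np.pi / (4 * alpha))
        + (1 - 2 / np.pi) * np.cos(np.pi / (4 * alpha)))
    return h / np.sqrt(T0)

# x_n(t) = sum_i x_{n,i} * phi(t - i T), with T = tau * T0 (placeholders).
T0, alpha, tau, L, n_os = 1.0, 0.3, 0.8, 16, 8
T = tau * T0
t = np.arange(-4 * T0, L * T + 4 * T0, T0 / n_os)
rng = np.random.default_rng(1)
sym = np.exp(2j * np.pi * rng.integers(0, 4, L) / 4)   # QPSK placeholders
x_t = sum(s * rrc_pulse(t - i * T, T0, alpha) for i, s in enumerate(sym))
\end{verbatim}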
\subsubsection{Communication Model}
Consider the quasi-static flat fading channel matrix $\mathbf{H}_C = [\mathbf{h}_1^C,\mathbf{h}_2^C,\cdots,\mathbf{h}_{K}^C]^{\top}\in\mathbb{C}^{K \times N_t}$ that models the MUI among different data streams, where $\mathbf{h}_k^C$ is the channel vector of the $k$-th user. Then the received signal at the $k$-th user can be expressed as
\begin{equation}
r_k^C(t) = \sum_{i=0}^{N_t-1}h_{k,i}^Cx_i(t) + n_k^C(t),
\end{equation}
where $h_{k,i}^C$ represents the $i$-th element of $\mathbf{h}_{k}^C$ and $n_k^C(t)$ is the complex-valued additive white Gaussian noise (AWGN) at the $k$-th user with zero mean and variance $\sigma_C^2$.
The received FTN signal after passing through a matched filter $\varphi^{*}(-t)$ at the $k$-th user is given by
\begin{equation}
\begin{aligned}
y_k^C(t) &= r_k^C(t) * \varphi^{*}(-t) \\
&= \sum_{i=0}^{N_t-1}h_{k,i}^Cx_i(t) * \varphi^{*}(-t) + n_k^C(t) * \varphi^{*}(-t) \\
&= \sum_{i=0}^{N_t-1}\sum_{j=0}^{L-1}h_{k,i}^Cx_{i,j}\phi(t-jT) + \eta_k(t),
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\phi(t) &= \int_{-\infty}^{\infty}\varphi(\zeta)\varphi^{*}(\zeta-t)d\zeta, \\
\eta_k(t) &= \int_{-\infty}^{\infty}n_k^C(\zeta)\varphi^{*}(\zeta-t)d\zeta.
\end{aligned}
\end{equation}
The $l$-th filtered sample at the $k$-th user, $y_{k,l}^C = y_k^C(lT) \; (l = 0,1,\cdots,L-1)$, can be expressed as
\begin{equation}
\label{eq:filted_signal}
y_{k,l}^C = \sum_{i=0}^{N_t-1}\sum_{j=0}^{L-1}h_{k,i}^Cx_{i,j}\phi((l-j)T) + \eta_k(lT),
\end{equation}
which can be written in a compact matrix form as
\begin{equation}
\label{eq:c_matrix_form_origin}
\widetilde{\mathbf{Y}}_C = \mathbf{H}_C\mathbf{X}\mathbf{\Phi} + \widetilde{\mathbf{N}}_C,
\end{equation}
where $\widetilde{\mathbf{Y}}_C = (y_{k,l}) \in \mathbb{C}^{K \times L}$, and $\mathbf{\Phi} \in \mathbb{R}^{L \times L}$ is defined as
\begin{equation}
\label{eq:RGXH}
\begin{aligned}
&\mathbf{\Phi} = \\&
\begin{bmatrix}
\phi(0) & \phi(-T) & \cdots & \phi(-(L-1)T) \\
\phi(T) & \phi(0) & \cdots & \phi(-(L-2)T) \\
\vdots & \vdots & \ddots & \vdots \\
\phi((L-1)T) & \phi((L-2)T) & \cdots & \phi(0)
\end{bmatrix},
\end{aligned}
\end{equation}
which is a positive semidefinite symmetric Toeplitz matrix. Moreover, $\widetilde{\mathbf{N}}_C = [\bm{\eta}_0, \bm{\eta}_1, \cdots, \bm{\eta}_{K-1}]^{\top}$, with $\bm{\eta}_k = [\eta_k(0), \eta_k(T), \cdots, \eta_k((L-1)T)]^\top$ being the corresponding noise vector at the $k$-th user. Notice that $\mathbb{E}[\bm{\eta}_k\bm{\eta}_k^H] = \sigma_C^2\mathbf{\Phi}$, which indicates that the noise samples received at each user are not independent. To decorrelate the noise, let the eigenvalue decomposition of $\mathbf{\Phi}$ be $\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{H}$, where $\mathbf{U}$ is a unitary matrix containing the eigenvectors and $\mathbf{\Lambda}$ is a diagonal matrix composed of the eigenvalues. Right-multiplying both sides of (\ref{eq:c_matrix_form_origin}) by $\mathbf{U}$ yields
\begin{equation}
\label{eq:c_matrix_form}
\mathbf{Y}_C = \mathbf{H}_C\mathbf{X}\mathbf{U\Lambda} + \mathbf{N}_C,
\end{equation}
where $\mathbf{Y}_C=\widetilde{\mathbf{Y}}_C\mathbf{U}$ and $\mathbf{N}_C=\widetilde{\mathbf{N}}_C\mathbf{U}$. By doing so, the covariance matrix for row vectors of $\mathbf{N}_C$ becomes $\sigma_C^2\mathbf{\Lambda}$, i.e., a diagonal matrix.
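Continuing the sketch above, the Gram matrix $\mathbf{\Phi}$ and the decorrelating transform can be constructed numerically as follows; the numerical autocorrelation is an assumption made for brevity (a closed-form raised-cosine expression could be used instead).
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz
# rrc_pulse, T0, alpha, T, L as in the previous sketch.

def phi_of(tvals, T0, alpha, n_os=64):
    # phi(t) = int rrc(z) rrc(z - t) dz, evaluated on a fine grid.
    dz = T0 / n_os
    z = np.arange(-8 * T0, 8 * T0, dz)
    g = rrc_pulse(z, T0, alpha)
    return np.array([np.sum(g * rrc_pulse(z - t, T0, alpha)) * dz
                     for t in tvals])

col = phi_of(np.arange(L) * T, T0, alpha)   # phi((l - j) T), l >= j
Phi = toeplitz(col)                         # phi is even, so Phi is symmetric
lam, U = np.linalg.eigh(Phi)                # Phi = U diag(lam) U^T
# Right-multiplying the matched-filter output by U whitens the noise:
# each user's noise row then has covariance sigma_C^2 * diag(lam).
\end{verbatim}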
\subsubsection{Radar Sensing Model}
Consider the target response matrix (TRM) $\mathbf{H}_R = [\mathbf{h}_1^R, \mathbf{h}_2^R, \cdots, \mathbf{h}_{N_r}^R]^{\top} \in \mathbb{C}^{N_r \times N_t}$ that models the sensing channel. Depending on the sensing scenario, $\mathbf{H}_R$ can be of different forms. For the angular extended target model, where all the point-like scatterers are located in the same range bin, we have
\begin{equation}
\mathbf{H}_R=\sum_{i=1}^{N_s}\alpha_i\mathbf{b}(\theta_i)\mathbf{a}^H(\theta_i),
\end{equation}
where $N_s$ is the number of scatterers, $\alpha_i$ and $\theta_i$ denote the reflection coefficient and the angle of the $i$-th scatterer, and $\mathbf{a}\left(\theta\right) \in \mathbb{C}^{N_t \times 1}$ and $\mathbf{b}\left(\theta\right) \in \mathbb{C}^{N_r \times 1}$ are the transmit and receive steering vectors. Another example is to detect multiple point targets using OFDM waveforms. Suppose that the radar receives echoes from $N_s$ point targets. Each target has its own reflection coefficient, angle, delay, and Doppler parameters $\alpha_i$, $\theta_i$, $\tau_i$, and $f_{D,i}$, and $\Delta f$ and $T_O$ denote the subcarrier spacing and the OFDM symbol duration, respectively. Then $\mathbf{H}_R$ can be modeled as a TRM defined on the $n$-th subcarrier and the $m$-th OFDM symbol as
\begin{equation}
\mathbf{H}_R=\mathbf{H}_{n,m}=\mathbf{B}(\Theta)\mathbf{C}_n\mathbf{D}_m\mathbf{A}^H(\Theta),
\end{equation}
where
\begin{equation}
\mathbf{A}(\Theta)=[\mathbf{a}(\theta_1),\cdots,\mathbf{a}(\theta_{N_s})],
\mathbf{B}(\Theta)=[\mathbf{b}(\theta_1),\cdots,\mathbf{b}(\theta_{N_s})]
\end{equation}
are transmit and receive steering matrices, and
\begin{equation}
\begin{aligned}
&\mathbf{C}_n=\mathrm{Diag}\left([\alpha_1e^{-j2\pi(n-1)\Delta f\tau_1},\cdots,\alpha_{N_s}e^{-j2\pi(n-1)\Delta f\tau_{N_s}}]\right), \\
&\mathbf{D}_m=\mathrm{Diag}\left([e^{j2\pi f_{D,1}(m-1)T_{O}},\cdots,e^{j2\pi f_{D,N_s}(m-1)T_{O}}]\right)
\end{aligned}
\end{equation}
are phase-shifting matrices resulting from the time delay and the Doppler shift of each target.
To guarantee the generality of the proposed method, we consider a generic TRM $\mathbf{H}_R$ instead of specific models above. Similar to the communication model, the received echo signal at the $k$-th receive antenna can be written as
\begin{equation}
y_k^R(t) = \sum_{i=0}^{N_t-1}h_{k,i}^Rx_i(t) + n_k^R(t),
\end{equation}
where $h_{k,i}^R$ represents the $i$-th element of $\mathbf{h}_k^R$ and $n_k^R(t)$ is the complex-valued AWGN at the $k$-th receive antenna with zero mean and variance $\sigma_R^2$. At the sensing receiver, we directly sample the received signal without passing it through the pulse-shaping filter, yielding the following radar received signal model
\begin{equation}
\label{eq:radar}
\mathbf{Y}_R = \mathbf{H}_R\mathbf{X}\mathbf{C}^{\top} + \mathbf{N}_R,
\end{equation}
where $\mathbf{N}_R$ denotes an AWGN matrix, with zero mean and the variance of each entry being $\sigma_R^2$. Here we assume $\mathbf{h}=\mathrm{vec}(\mathbf{H}_R)\thicksim\mathcal{CN}(\mathbf{0},\sigma_H^2\mathbf{I})$ and $\mathbf{C}$ is given by
\begin{equation}
\label{eq:conv_matrix}
\mathbf{C} =
\begin{bmatrix}
c_0 & 0 & \cdots & 0 \\
c_1 & c_0 & \ddots & \vdots \\
\vdots & \vdots & \ddots & \vdots \\
c_{P-1} & c_{P-2} & \ddots & 0 \\
0 & c_{P-1} & \ddots & c_0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \cdots & \cdots & c_{P-1}
\end{bmatrix},
\end{equation}
where $\mathbf{c}=[c_0,c_1,\cdots,c_{P-1}]^{\top}$ collects $P$ samples of the RRC pulse $\varphi(t)$.
\textit{Remark:} In the communication model we attempt to detect the signal $\mathbf{S}$ from $\mathbf{X}$ at the receiver side; thus we pass the received signal through the RRC matched filter to maximize the received SINR for each precoded symbol. In the sensing model our aim is to recover the TRM $\mathbf{H}_R$ from the raw observation (\ref{eq:radar}), rather than to recover $\mathbf{X}$. Therefore, we treat $\mathbf{X}\mathbf{C}^{\top}$ as an equivalent transmitted waveform and regard (\ref{eq:radar}) as the sufficient statistic for estimating $\mathbf{H}_R$, which need not be match-filtered by the RRC pulse.
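A minimal sketch of the banded convolution matrix $\mathbf{C}$ in (\ref{eq:conv_matrix}) is given below; the sampling grid for $\mathbf{c}$ (offsets and support) is a placeholder assumption, since it is left unspecified above.
\begin{verbatim}
import numpy as np
# rrc_pulse, T0, alpha, L as in the previous sketches.

def conv_matrix(c, L):
    # Tall banded matrix C of size (L + P - 1) x L; column j carries the
    # P pulse samples shifted down by j, as in the displayed matrix.
    P = len(c)
    C = np.zeros((L + P - 1, L))
    for j in range(L):
        C[j:j + P, j] = c
    return C

n_os = 8
c = rrc_pulse(np.arange(4 * n_os + 1) * (T0 / n_os) - 2 * T0, T0, alpha)
C = conv_matrix(c, L)
Psi = C.T @ C          # Gram matrix Psi = C^T C used in the sensing MMSE
\end{verbatim}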
\subsection{CI Constraint for Communication}
Following the CI constraint given in \cite{masouros2015exploiting}, a transmitted symbol $s$ and the corresponding received symbol $y$ at the receiver side must satisfy the following inequality for the CI effect to be exploited
\begin{equation}
\left| \Im\left\{s^{*}y\right\} \right| - \Re\left\{s^{*}y\right\}\tan\theta \leq -\sqrt{\Gamma\sigma^2}\tan\theta,
\end{equation}
where $\sigma^2$ is the variance of the noise imposed on this symbol, $\Gamma$ is the required SNR, and $\theta$ depends on the modulation type (e.g., $\theta=\pi/M$ for $M$-PSK).
Let $\mathbf{Y}_C=[\mathbf{y}_0,\mathbf{y}_1,\cdots,\mathbf{y}_{K-1}]^{\top}$ and $\bm{\sigma}=\sqrt{\mathrm{diag}(\sigma_C^2\mathbf{\Lambda})}=[\sigma_C\sqrt{\Lambda_{0,0}},\sigma_C\sqrt{\Lambda_{1,1}},\cdots,\sigma_C\sqrt{\Lambda_{L-1,L-1}}]^{\top}$, where the square root is taken elementwise and $\mathrm{diag}$ refers to the operation taking the entries on the diagonal and stacking them as a vector. Then the CI constraint for the $k$-th user can be written as
\begin{equation}
\label{eq:comlex_ci_origin}
\left| \Im\left\{\mathbf{s_k^{*}} \circ \mathbf{y}_k\right\} \right| - \Re\left\{\mathbf{s_k^{*}} \circ \mathbf{y}_k\right\}\tan\theta \leq (-\sqrt{\Gamma_k}\tan\theta)\bm{\sigma}, \; \forall{k},
\end{equation}
where $\Gamma_k$ is the required SINR of the $k$-th user, and $\circ$ refers to the Hadamard product.
For the received symbol at the $k$-th user we have
\begin{equation}
\mathbf{y}_k^{\top} = {\mathbf{h}_k^C}^{\top}\mathbf{XU\Lambda}.
\end{equation}
By noting the fact that $\mathbf{s_k^{*}} \circ \mathbf{y}_k$ can be equivalently expressed as $\mathbf{s_k^{*}} \circ \mathbf{y}_k = \mathbf{S}_k^{*}\mathbf{y}_k$, where
\begin{equation}
\mathbf{S}_k = \mathrm{Diag}(\mathbf{s}_k) =
\begin{bmatrix}
e^{j\phi_{k,0}} & 0 & \cdots & 0 \\
0 & e^{j\phi_{k,1}} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & e^{j\phi_{k,L-1}}
\end{bmatrix},
\end{equation}
the inequality (\ref{eq:comlex_ci_origin}) can be recast to
\begin{equation}
\label{eq:comlex_ci}
\begin{aligned}
\left| \Im\left\{{\mathbf{h}_k^C}^{\top}\mathbf{XU\Lambda}\mathbf{S}^{*}_k\right\} \right| - \Re\left\{{\mathbf{h}_k^C}^{\top}\mathbf{XU\Lambda}\mathbf{S}^{*}_k\right\}\tan\theta \\
\leq (-\sqrt{\Gamma_k}\tan\theta)\bm{\sigma}^{\top}, \; \forall k,
\end{aligned}
\end{equation}
which is a linear constraint in $\mathbf{X}$ and is thus convex.
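As a sanity check, the CI condition (\ref{eq:comlex_ci}) can be verified numerically for a candidate $\mathbf{X}$; the helper below is a sketch following the notation above, with $\theta$ supplied externally.
\begin{verbatim}
import numpy as np

def ci_satisfied(h_k, X, U, lam, s_k, Gamma_k, sigma_C, theta):
    # Noise-free matched-filter output row for user k: h_k^T X U Lambda.
    y = h_k @ X @ U @ np.diag(lam)
    z = np.conj(s_k) * y                 # rotate by the intended PSK symbols
    rhs = -np.sqrt(Gamma_k) * np.tan(theta) * sigma_C * np.sqrt(lam)
    return bool(np.all(np.abs(z.imag) - z.real * np.tan(theta) <= rhs))
\end{verbatim}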
\subsection{MMSE for Radar Sensing}
Let $\mathbf{y}_R=\mathrm{vec}(\mathbf{Y}_R)$, $\mathbf{h}_R=\mathrm{vec}(\mathbf{H}_R)$ and $\mathbf{n}_R=\mathrm{vec}(\mathbf{N}_R)$. Equation (\ref{eq:radar}) can be expanded as
\begin{equation}
\mathbf{y}_R=(\mathbf{C}\mathbf{X}^{\top}\otimes\mathbf{I}_{N_r})\mathbf{h}_R + \mathbf{n}_R.
\end{equation}
According to \cite{kay1993fundamentals}, the corresponding MMSE for estimating $\mathbf{h}_R$ from the noisy observation $\mathbf{y}_R$ is
\begin{equation}
\begin{aligned}
\mathrm{MMSE} &= \mathbb{E}(\Vert \mathbf{h}_R - \mathbf{h}_{R}^\mathrm{MMSE} \Vert_2^2) \\
&= \mathrm{tr}\left(\left(\sigma_H^{-2}\mathbf{I}+\sigma_R^{-2}(\mathbf{C}\mathbf{X}^{\top}\otimes\mathbf{I}_{N_r})^{H}(\mathbf{C}\mathbf{X}^{\top}\otimes\mathbf{I}_{N_r})\right)^{-1}\right) \\
&=
\mathrm{tr}\left(\left(\sigma_H^{-2}\mathbf{I}+\sigma_R^{-2}(\mathbf{X}^{*}\mathbf{C}^{\top}\mathbf{C}\mathbf{X}^{\top}\otimes\mathbf{I}_{N_r})\right)^{-1}\right) \\
&=
\sigma_R^{2}N_r\mathrm{tr}\left(\left(\frac{\sigma_R^2}{\sigma_H^2}\mathbf{I}+\mathbf{X}\mathbf{\Psi}\mathbf{X}^{H}\right)^{-1}\right),
\end{aligned}
\end{equation}
where $\mathbf{\Psi}=\mathbf{C}^{\top}\mathbf{C}$. Notice that this expression is non-convex in $\mathbf{X}$, which will be tackled in the next section.
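For reference, a direct numerical evaluation of this MMSE expression reads as follows (a sketch; the inputs follow the notation above).
\begin{verbatim}
import numpy as np

def sensing_mmse(X, Psi, sigma_R2, sigma_H2, Nr):
    # MMSE = sigma_R^2 * Nr * tr( (sigma_R^2/sigma_H^2 I + X Psi X^H)^{-1} )
    Nt = X.shape[0]
    A = (sigma_R2 / sigma_H2) * np.eye(Nt) + X @ Psi @ X.conj().T
    return (sigma_R2 * Nr * np.trace(np.linalg.inv(A))).real
\end{verbatim}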
\section{FTN-ISAC Symbol-Level Precoding}
\subsection{Problem Formulation}
Based on the discussion above, the precoding optimization
problem can be expressed as
\begin{equation}
\label{opt:origin_mimo}
\begin{aligned}
\underset{\mathbf{X}}{\mathrm{min}} &\;
f(\mathbf{X})=\mathrm{tr}\left(\left(\frac{\sigma_R^2}{\sigma_H^2}\mathbf{I}+\mathbf{X}\mathbf{\Psi}\mathbf{X}^{H}\right)^{-1}\right) \\
s.t. &\;
\left| \Im\left\{{\mathbf{h}_k^C}^{\top}\mathbf{XU\Lambda}\mathbf{S}^{*}_k\right\} \right| - \Re\left\{{\mathbf{h}_k^C}^{\top}\mathbf{XU\Lambda}\mathbf{S}^{*}_k\right\}\tan\theta \\ &\qquad\qquad\qquad\qquad\qquad
\leq (-\sqrt{\Gamma_k}\tan\theta)\bm{\sigma}^{\top}, \; \forall k, \\
&\; \Vert\mathbf{X}\mathbf{C}^{\top}\Vert_F^2 \leq E.
\end{aligned}
\end{equation}
That is, we design the precoded symbol matrix $\mathbf{X}$ for the to-be-transmitted symbol matrix $\mathbf{S}$, such that the MMSE for radar sensing is minimized while guaranteeing the CI conditions for communication under a given energy budget $E$.
\subsection{Lower-Bound for the MMSE}
We first derive the lower bound of problem (\ref{opt:origin_mimo}) by considering the following optimization problem that solely minimizes the MMSE without imposing CI constraints
\begin{equation}
\label{opt:lower_bound}
\begin{aligned}
\underset{\mathbf{X}}{\mathrm{min}} &\;
f(\mathbf{X})=\mathrm{tr}\left(\left(\frac{\sigma_R^2}{\sigma_H^2}\mathbf{I}+\mathbf{X}\mathbf{\Psi}\mathbf{X}^{H}\right)^{-1}\right) \;
s.t. \Vert\mathbf{X}\mathbf{C}^{\top}\Vert_F^2 \leq E.
\end{aligned}
\end{equation}
By letting $\widetilde{\mathbf{X}}=\mathbf{X}\mathbf{C}^{\top}$, problem (\ref{opt:lower_bound}) can be recast to
\begin{equation}
\label{opt:equiv_lower_bound}
\begin{aligned}
\underset{\widetilde{\mathbf{X}}}{\mathrm{min}} &\;
f(\widetilde{\mathbf{X}})=\mathrm{tr}\left(\left(\frac{\sigma_R^2}{\sigma_H^2}\mathbf{I}+\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{H}\right)^{-1}\right) \;
s.t. \Vert\widetilde{\mathbf{X}}\Vert_F^2 \leq E
\end{aligned}
\end{equation}
which is independent of $\tau$. According to \cite{yang2007mimo}, the optimal value of problem (\ref{opt:equiv_lower_bound}) is
\begin{equation}
f_{\min}=\sum_{i=1}^{N_t}\frac{1}{\lambda_i^2+\frac{\sigma_R^2}{\sigma_H^2}},
\end{equation}
where $\lambda_i^2=\left(\kappa-\frac{\sigma_R^2}{\sigma_H^2}\right)^+$ and $a^+=\max(a,0)$. The constant $\kappa$ is chosen to satisfy $\sum_{i=1}^{N_t}\lambda_i^2=E$.
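Under the formula as stated, every $\lambda_i^2$ takes the common value $\left(\kappa-\sigma_R^2/\sigma_H^2\right)^+$, so the normalization $\sum_{i=1}^{N_t}\lambda_i^2=E$ forces $\lambda_i^2=E/N_t$ whenever $E>0$; a minimal Python sketch of the resulting closed form follows.
\begin{verbatim}
def mmse_lower_bound(E, sigma_R2, sigma_H2, Nt):
    # f_min = sum_i 1 / (lambda_i^2 + r), with r = sigma_R2 / sigma_H2;
    # all lambda_i^2 are equal, so sum_i lambda_i^2 = E gives E / Nt each.
    r = sigma_R2 / sigma_H2
    lam2 = E / Nt
    return Nt / (lam2 + r)
\end{verbatim}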
\subsection{SCA Algorithm for Solving Problem (\ref{opt:origin_mimo})}
It can be readily observed that the feasible region $\mathcal{Q}$ for (\ref{opt:origin_mimo}) is convex, whereas the objective function is not. To address this issue, we propose an SCA algorithm to solve problem (\ref{opt:origin_mimo}) in an iterative manner.
To proceed with the SCA algorithm, we approximate $f(\mathbf{X})$ using its first-order Taylor expansion near a
given point $\mathbf{X}_i\in\mathcal{Q}$ as
\begin{equation}
f(\mathbf{X}) \approx f(\mathbf{X}_i) + \Re\left\{\mathrm{tr}\left(\nabla f(\mathbf{X}_i)^{H}(\mathbf{X}-\mathbf{X}_i)\right)\right\}
\end{equation}
where $\nabla f(\cdot)$ represents the gradient of $f(\cdot)$ and $\nabla f(\mathbf{X}_i)$ can be calculated as
\begin{equation}
\label{eq:gradient}
\begin{aligned}
&\nabla f(\mathbf{X}_i) = \\&
-2\left(\frac{\sigma_R^2}{\sigma_H^2}\mathbf{I}+\mathbf{X}_i\mathbf{\Psi}\mathbf{X}_i^{H}\right)^{-1}\left(\frac{\sigma_R^2}{\sigma_H^2}\mathbf{I}+\mathbf{X}_i\mathbf{\Psi}\mathbf{X}_i^{H}\right)^{-1}\mathbf{X}_i\mathbf{\Psi}
\end{aligned}
\end{equation}
At the $(i+1)$-th iteration of the SCA algorithm, we solve the following convex optimization problem
\begin{equation}
\label{opt:sca_mimo}
\begin{aligned}
\underset{\mathbf{X}}{\mathrm{min}} &\;
g(\mathbf{X})=\Re\left\{\mathrm{tr}\left(\nabla f(\mathbf{X}_i)^{H}(\mathbf{X}-\mathbf{X}_i)\right)\right\} \\
s.t. &\;
\left| \Im\left\{{\mathbf{h}_k^C}^{\top}\mathbf{XU\Lambda}\mathbf{S}^{*}_k\right\} \right| - \Re\left\{{\mathbf{h}_k^C}^{\top}\mathbf{XU\Lambda}\mathbf{S}^{*}_k\right\}\tan\theta \\ &\qquad\qquad\qquad\qquad\qquad
\leq (-\sqrt{\Gamma_k}\tan\theta)\bm{\sigma}^{\top}, \; \forall k, \\
&\; \Vert\mathbf{X}\mathbf{C}^{\top}\Vert_F^2 \leq E,
\end{aligned}
\end{equation}
where $\mathbf{X}_i\in\mathcal{Q}$ is the $i$-th iterative point. By solving the convex problem (\ref{opt:sca_mimo}) we get a solution $\mathbf{X}^{\star}\in\mathcal{Q}$. Note that $g(\mathbf{X}^{\star})\leq g(\mathbf{X}_i) = 0$, indicating that $\mathbf{X}^{\star}-\mathbf{X}_i$ yields a descent direction for the objective function. With a properly chosen step size $t\in[0,1]$, one may get the $(i+1)$-th iteration point as
\begin{equation}
\mathbf{X}_{i+1}=\mathbf{X}_i+t(\mathbf{X}^{\star}-\mathbf{X}_i)=(1-t)\mathbf{X}_i+t\mathbf{X}^{\star}.
\end{equation}
Since $\mathbf{X}_i,\mathbf{X}^{\star} \in \mathcal{Q}$ by the definition of convexity, we have $\mathbf{X}_{i+1} \in \mathcal{Q}$, which is a feasible solution to problem (\ref{opt:origin_mimo}).
We are now ready to present Algorithm \ref{alg:sca} to solve problem (\ref{opt:origin_mimo}) based on the discussion above.
\begin{algorithm}
\caption{SCA Algorithm for Solving (\ref{opt:origin_mimo})}
\label{alg:sca}
\begin{algorithmic}[1]
\REQUIRE $\mathbf{H}$, $\mathbf{\Psi}$, $E$, $\mathbf{S}$, $\bm{\sigma}$, $\Gamma_k,\forall k$, the termination threshold $\epsilon$ and the maximum iteration number $i_{\max}$.
\ENSURE $\mathbf{X}^{\star}$
\STATE {initialize $\mathbf{X}_0\in\mathcal{Q}$ by picking $\mathbf{X}_{-1}$ randomly and solving problem (\ref{opt:sca_mimo}); set $i=0$.}
\REPEAT{
\STATE Calculate the gradient $\nabla f(\mathbf{X}_i)$ by equation (\ref{eq:gradient}).
\STATE Solve problem (\ref{opt:sca_mimo}) to obtain $\mathbf{X}^{\star}$.
\STATE {
Update the solution by
$\mathbf{X}_{i+1} = \mathbf{X}_i + t\left(\mathbf{X}^{\star}-\mathbf{X}_i\right)$,
where $t$ is determined by using the exact line search.
}
\STATE $i=i+1$.
}
\UNTIL {
$\Vert \mathbf{X}_i-\mathbf{X}_{i-1}\Vert_F^2\leq\epsilon$ or $i=i_{\max}$.
}
\STATE $\mathbf{X}^{\star}=\mathbf{X}_i$
\end{algorithmic}
\end{algorithm}
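For concreteness, one SCA iteration can be prototyped with \texttt{cvxpy} as sketched below; this is an illustrative implementation under the notation above, with the exact line search of Algorithm \ref{alg:sca} omitted (a fixed step or backtracking could be wrapped around it), and it should not be read as the reference implementation.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def sca_step(X_i, H_c, U, lam, S, Gamma, sigma_C, C, E, theta,
             Psi, sigma_R2, sigma_H2):
    # Linearize f at X_i via (eq:gradient) and solve subproblem (opt:sca_mimo).
    Nt, L = X_i.shape
    A = (sigma_R2 / sigma_H2) * np.eye(Nt) + X_i @ Psi @ X_i.conj().T
    Ainv = np.linalg.inv(A)
    grad = -2.0 * Ainv @ Ainv @ X_i @ Psi

    X = cp.Variable((Nt, L), complex=True)
    obj = cp.real(cp.trace(grad.conj().T @ (X - X_i)))
    cons = [cp.norm(X @ C.T, "fro") <= np.sqrt(E)]
    tanth = np.tan(theta)
    sig = sigma_C * np.sqrt(lam)
    for k in range(H_c.shape[0]):
        z = cp.multiply(np.conj(S[k]), H_c[k] @ X @ U @ np.diag(lam))
        cons.append(cp.abs(cp.imag(z)) - tanth * cp.real(z)
                    <= -np.sqrt(Gamma[k]) * tanth * sig)
    cp.Problem(cp.Minimize(obj), cons).solve()
    return X.value   # combine with X_i via a line search, as in the algorithm
\end{verbatim}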
\section{Numerical Results}
\begin{figure*}[t]
\centering
\begin{minipage}{0.48\linewidth}
\centering
\includegraphics[scale=0.43]{img/mmse_iter.eps}
\caption{MMSE versus SCA iteration in the case of $K=12$, $\Gamma=15\;\mathrm{dB}$, $E = 40\;\mathrm{dBm}$.}
\label{fig:iter_mmse}
\end{minipage}\qquad
\begin{minipage}{0.48\linewidth}
\centering
\includegraphics[scale=0.43]{img/constellation.eps}
\caption{Constellation plot of the received symbols in the case of $K=12$, $\Gamma=15\;\mathrm{dB}$, $E=35\;\mathrm{dBm}$.}
\label{fig:constellation}
\end{minipage}
\begin{minipage}{0.48\linewidth}
\centering
\includegraphics[scale=0.43]{img/tp_sinr.eps}
\caption{Throughput versus SNR in the case of $K=12$, $E=40\;\mathrm{dBm}$.}
\label{fig:tp_sinr}
\end{minipage}\qquad
\begin{minipage}{0.48\linewidth}
\centering
\includegraphics[scale=0.45]{img/mmse_sinr.eps}
\caption{MMSE versus SNR, in the case of $K=8$, $E = 30\;\mathrm{dBm}$.}
\label{fig:mmse_sinr}
\end{minipage}
\end{figure*}
In this section, we provide numerical results to verify the superiority of the proposed FTN-ISAC-SLP approach. Without loss of generality, we consider an ISAC BS equipped with $N_t = 16$ transmit and $N_r = 20$ receive antennas. The noise variances are set as $\sigma^2_C = \sigma^2_R = 0\;\mathrm{dBm}$, and the frame length is set as $L = 30$. The variance of the fluctuation of the TRM is set as $\sigma^2_H=20\;\mathrm{dBm}$. All the communication users are subject to the same worst-case QoS requirement, i.e., $\Gamma_k=\Gamma,\forall k$.
Our baseline, referred to as ISAC-BLP, is the ISAC beamforming method from \cite{liu2021cramer}, namely a block-level precoding method that finds the optimal linear precoding matrix $\mathbf{W}_{DF}$ minimizing the sensing CRB under guaranteed per-user SINRs, by solving the problem below.
\begin{equation}
\label{opt:isac-blp}
\begin{aligned}
\underset{\mathbf{W}_{DF}}{\mathrm{min}} &\; \mathrm{MMSE}(\mathbf{W}_{DF}) =
\frac{\sigma_R^2N_r}{L}\mathrm{tr}\left(\left(\frac{\sigma_R^2}{\sigma_H^2}\mathbf{I}+\mathbf{W}_{DF}\mathbf{W}_{DF}^{H}\right)^{-1}\right) \\
s.t. &\;
\gamma_k \geq \Gamma_k, \; \forall k, \; L\Vert\mathbf{W}_{DF}\Vert_F^2 \leq E.
\end{aligned}
\end{equation}
where $\gamma_k$ is the SINR at the $k$-th user. In order to ensure a fair comparison, we replace the original objective function CRB in \cite{liu2021cramer} with the MMSE.
Fig. \ref{fig:iter_mmse} shows the convergence performance of the proposed SCA algorithm. The tolerance threshold of the algorithm is set as $\epsilon=10^{-4}$. The algorithm converges and approaches the lower bound derived in the previous section. It can be observed that the proposed FTN-ISAC-SLP method outperforms the benchmark block-level design.
Fig. \ref{fig:constellation} shows the constellation plots for both the ISAC-BLP and FTN-ISAC-SLP approaches. The green points depict the region associated with the SINR constraint, while the blue points depict the region associated with the CI constraint. It is clearly observed that the resulting CI constellation generally yields a larger SNR compared to the block-level precoding.
In Fig. \ref{fig:tp_sinr}, we show the communication throughput performance with an increasing SNR threshold. Supposing that the number of successfully recovered bits is $N_b$, the throughput is calculated as $N_b/\tau$ per unit time. We set the FTN duration factor to $\tau=0.8$, $0.9$, and $1$, while ISAC-BLP uses $\tau=1$. As $\tau$ decreases, the throughput increases. When $\tau=1$, i.e., $T=T_0$, the FTN signaling reduces to Nyquist pulse shaping, yet it still outperforms the ISAC-BLP method thanks to the exploitation of the CI effect.
Finally, in Fig. \ref{fig:mmse_sinr}, we show the radar estimation MMSE with an increasing SNR threshold for the communication users. It is observed that as the communication SNR requirement rises, the estimation performance becomes worse, which indicates an inherent tradeoff between communication and sensing performance. An increasing trend of the MMSE with growing $\tau$ is also observed, because the power constraint $\Vert\mathbf{X}\mathbf{C}^{\top}\Vert_F^2\leq E$ is tightened as $\tau$ increases. Again, our results show the superiority of the proposed FTN-ISAC-SLP method over ISAC-BLP, owing to the exploitation of the CI constraint.
\section{Conclusion}
This paper studies symbol-level precoding for faster-than-Nyquist signaling in ISAC, where a precoded symbol matrix is developed to carry out target sensing and information signaling simultaneously. In particular, we guarantee the per-user constructive interference constraint in the downlink while minimizing the MMSE for target estimation. Despite the non-convexity of the formulated precoding problem, we design an effective successive convex approximation method that, at each iteration, solves a second-order cone program subproblem. The superiority of the proposed FTN-ISAC-SLP method is demonstrated by numerical results, which show that our method is capable of greatly enhancing both communication and sensing performance compared to conventional block-level precoding based on Nyquist pulse shaping.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13615",
"language": "en",
"timestamp": "2023-02-28T02:25:23",
"url": "https://arxiv.org/abs/2302.13615",
"yymm": "2302"
} | \section{Introduction}
The scientific exploitation of very-high-energy (VHE, $E > 100\,{\rm GeV}$) gamma-ray data has been traditionally conducted with proprietary data and analysis software. In recent years, standardised data formats and open-source analysis tools for gamma-ray astronomy \cite{nigro_2021} have been developed in anticipation of the next generation of ground-based gamma-ray telescopes, which will open their observational time and data to the astronomical community \cite{cta_book}. But the current generation of VHE instruments can already profit from this new approach towards data and software. Standardised data indeed provide a natural solution to preserve and make publicly available their observations beyond the end of their operations. If this data legacy is to be made public, compatibility with open-source analysis tools is necessary for the community to make profitable use of it. Of particular relevance in the implementation of this new open approach in gamma-ray astronomy are: the \textit{Data Formats for Gamma-ray Astronomy} (GADF) forum \cite{gadf}, a community-based initiative defining a standardised format for data from different $\gamma$-ray instruments; and \texttt{Gammapy}\xspace \cite{gammapy}, an open-source Python package for their analysis. The current generation of VHE instruments has started to produce GADF-compliant data \cite{hess_dl3_dr1} and their potential to be combined in multi-instrument analyses has already been demonstrated \cite{joint_crab, hawc_dl3}.
\par
In this contribution, we present for the first time the effort to convert the MAGIC data in the standardised GADF format. We perform a validation of the point-like analysis by comparing the results obtained analysing the standardised data with \texttt{Gammapy}\xspace against those obtained with the MAGIC proprietary software, \texttt{MARS}\xspace \cite{mars}.
\section{Data Conversion}
To produce the MAGIC data in the GADF-compliant format we used \texttt{MARS}\xspace to reduce the observations down to a data level containing the events (the air showers) with their estimated energy and direction and a score from a classification algorithm ranking their likelihood of being initiated by a gamma ray. Simulated Monte Carlo (MC) data are reduced to the same data level. A proprietary \texttt{C++} library is then used to extract a list of gamma-like events from the observations, and to estimate the instrument response function (IRF) from the MC. Event lists and IRFs constitute the reduced data level, technically referred to as data level 3 (DL3), containing a minimum (detector-independent) level of information necessary to perform a scientific analysis (e.g. estimating a gamma-ray spectrum), and on which the GADF specifications focus. MAGIC's event lists and IRFs are hence stored in \texttt{FITS} \cite{fits} format, compliant with the GADF specifications.
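As an illustration of what such a DL3 product may look like, the sketch below writes a minimal EVENTS table with \texttt{astropy}; the event values are synthetic placeholders, and only a minimal subset of the header keywords required by the GADF specifications is shown.
\begin{verbatim}
import numpy as np
from astropy.io import fits
from astropy.table import Table

# Synthetic gamma-like events standing in for the MARS output.
rng = np.random.default_rng(0)
n = 1000
events = Table({
    "EVENT_ID": np.arange(n, dtype=np.int64),
    "TIME":     np.sort(rng.uniform(0.0, 1800.0, n)),   # s
    "RA":       rng.normal(83.63, 0.1, n),              # deg (Crab field)
    "DEC":      rng.normal(22.01, 0.1, n),              # deg
    "ENERGY":   rng.pareto(1.5, n) + 0.05,              # TeV
})
hdu = fits.table_to_hdu(events)
hdu.name = "EVENTS"
hdu.header["HDUCLASS"] = "GADF"   # minimal subset of the required keywords
hdu.header["HDUCLAS1"] = "EVENTS"
fits.HDUList([fits.PrimaryHDU(), hdu]).writeto("magic_dl3_sketch.fits",
                                               overwrite=True)
\end{verbatim}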
\par
We begin the validation process with the point-like or one-dimensional analysis. In this analysis, the position of the source is known and its extension is considered negligible. The signal is estimated from the events within a circular region, referred to as ON, enclosing the source. The background to be subtracted is estimated from one or more regions, referred to as OFF, with the same offset as the ON region, but symmetric with respect to the camera centre. The ON and OFF counts thus extracted are binned in energy (see e.g. Fig.~\ref{fig:counts_comparison}). A likelihood procedure is then applied to fit the observed counts. Expected counts are computed by folding the IRF with an analytical parametrisation assumed to describe the source spectrum (see \cite{piron_2001} for a detailed description of the analysis method).
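A schematic version of this ON/OFF extraction, with OFF regions reflected about the pointing position, could read as follows; this is a sketch with assumed region size and inputs, not the \texttt{MARS}\xspace or \texttt{Gammapy}\xspace implementation.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def on_off_counts(ra_deg, dec_deg, src, pointing, r_on=0.1 * u.deg, n_off=3):
    # Count events inside the ON circle around the source and inside n_off
    # regions at the same offset, reflected about the pointing position.
    coords = SkyCoord(ra_deg, dec_deg, unit="deg")
    sep = pointing.separation(src)          # common offset of ON and OFF
    pa0 = pointing.position_angle(src)
    n_on = int(np.sum(coords.separation(src) < r_on))
    n_off_tot = 0
    for i in range(1, n_off + 1):
        pa = pa0 + 360 * u.deg * i / (n_off + 1)
        off_ctr = pointing.directional_offset_by(pa, sep)
        n_off_tot += int(np.sum(coords.separation(off_ctr) < r_on))
    return n_on, n_off_tot
\end{verbatim}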
To validate the estimation of spectra and the light curves, the common final scientific products of a point-like gamma-ray analysis, we selected the following data samples:
\begin{itemize}
\item $50\,{\rm h}$ of Crab Nebula observations gathered between 2011 and 2012. Of these, $30\,{\rm h}$ were observed with the source sitting at $0.4^{\circ}$ offset from the camera centre (single-offset sample), in the zenith range $[5^{\circ}, 50^{\circ}]$. These represent MAGIC's standard data-taking conditions, and indeed these data constitute the sample used to estimate the MAGIC stereoscopic performance \cite{magic_performance}. In the remaining $20\,{\rm h}$, the source was observed at different camera offsets: $[0.20^{\circ}, 0.35^{\circ}, 0.40^{\circ}, 0.70^{\circ}, 1.00^{\circ}, 1.40^{\circ}]$ (multi-offset sample), in the zenith range $[5^{\circ}, 35^{\circ}]$. These observations were meant to test the response along the whole camera and were used to validate the spatial likelihood analysis method in \cite{skyprism};
\item $42\,{\rm h}$ of Mrk421 observations from 2013, described in \cite{mrk421_2013}. The source was observed at $0.4^{\circ}$ offset, in the zenith range $[5^{\circ}, 70^{\circ}]$.
\end{itemize}
The Crab Nebula represents the science case of a steady bright source (optimal to test the spectrum estimation), while Mrk421 represents that of a bright source with time-variable emission (optimal to test the computation of a light curve).
\section{Validation}
\begin{figure}
\centering
\includegraphics[scale=0.45]{figures/run_stacked_counts_comparison.pdf}
\caption{Validation of the signal / background extraction: gamma-ray events in the ON and OFF regions extracted with \texttt{MARS}\xspace (black) and \texttt{Gammapy}\xspace (red) from the single-offset Crab Nebula sample.}
\label{fig:counts_comparison}
\end{figure}
The results of the signal / background estimation obtained for the single-offset Crab Nebula sample with \texttt{MARS}\xspace and \texttt{Gammapy}\xspace are shown in Fig.~\ref{fig:counts_comparison}. We observe an excellent agreement between the ON and OFF counts extracted with the two software packages, concluding that the information contained in the \texttt{MARS}\xspace proprietary data has been correctly translated into the GADF-compliant format. We then estimate the Crab Nebula spectrum using both packages and assuming a log-parabolic function to describe the spectrum (Eq.~1 in \cite{magic_performance}).
Fig.~\ref{fig:spectra_comparison_single_offset} and \ref{fig:spectra_comparison_all_offsets} show the results obtained with the single- and multi-offset samples. In both cases the results of the forward-folding likelihood analysis, described in \cite{piron_2001} and implemented in both packages, are in excellent agreement. The flux point computation also returns compatible results despite the two different methods adopted by the two packages: unfolding for \texttt{MARS}\xspace \cite{magic_unfolding} and a likelihood fit performed with the events in each energy bin for \texttt{Gammapy}\xspace.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figures/spectra_comparison.pdf}
\caption{Validation of the spectrum estimation: spectral energy distribution measured from the single-offset Crab Nebula sample using \texttt{MARS}\xspace (black) and \texttt{Gammapy}\xspace (red). The continuous line and the associated error band represent the result of the forward-folding likelihood fit. Flux points are obtained using the unfolding procedure for \texttt{MARS}\xspace, while repeating the forward-folding likelihood fit in a single energy bin for \texttt{Gammapy}\xspace.}
\label{fig:spectra_comparison_single_offset}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.42]{figures/all_offsets_spectra_comparison.pdf}
\caption{Same as Fig.~\ref{fig:spectra_comparison_single_offset}, but using Crab Nebula observations at different offsets from the camera centre.}
\label{fig:spectra_comparison_all_offsets}
\end{figure}
We use the Mrk421 April 2013 data to compare the light curve estimation performed by \texttt{MARS}\xspace and \texttt{Gammapy}\xspace. Fig. \ref{fig:light_curve_mrk421} illustrates the estimated integral flux above $E_0 = 800\,{\rm GeV}$, obtained with the two packages. Consistent results are found, despite the different methods used to estimate the flux as a function of time. \texttt{MARS}\xspace estimates the integrated flux from the number of excess (ON $-$ OFF) events above $E_0$, dividing it by the effective area (integrated above $E_0$) and the effective time; \texttt{Gammapy}\xspace instead performs a likelihood fit with the events in each time bin and reports the integral above $E_0$ of the fitted spectrum.
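The \texttt{MARS}\xspace-style estimate described above can be summarised by the following sketch, where the spectrum-weighted integrated effective area and the effective observation time are assumed to be given.
\begin{verbatim}
def integral_flux(n_on, n_off, alpha_off, aeff_int_cm2, t_eff_s):
    # Excess counts above E0 divided by the integrated (spectrum-weighted)
    # effective area and the effective observation time.
    excess = n_on - alpha_off * n_off    # alpha_off: ON/OFF exposure ratio
    return excess / (aeff_int_cm2 * t_eff_s)  # photons cm^-2 s^-1 above E0
\end{verbatim}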
\begin{figure}
\centering
\includegraphics[scale=0.46]{figures/light_curve_800_GeV_100_TeV_comparison.pdf}
\caption{Light curve showing the integrated flux of Mrk421 above $800\,{\rm GeV}$ obtained with \texttt{MARS}\xspace (black) and \texttt{Gammapy}\xspace (red).}
\label{fig:light_curve_mrk421}
\end{figure}
\section{Conclusion}
We presented the effort to produce MAGIC data in the standardised GADF format. We validated the point-like analysis by comparing the results obtained with \texttt{Gammapy}\xspace against those obtained with the standard \texttt{MARS}\xspace analysis chain. For both Crab Nebula and Mrk421 observations we find excellent agreement between the spectrum and light curve estimated with the two frameworks. Major productions of GADF-compliant data (covering observational periods of $\sim$ years) have already been initiated by the MAGIC Collaboration, while at the same time its analysers are starting to adopt GADF-compliant data and \texttt{Gammapy}\xspace for their scientific analyses. This represents a milestone in the definition of the instrument's data legacy, as for the first time its observations can be produced in a standardised format analysable with open-source tools.
\section{Acknowledgements}
C.N. acknowledges support by the Spanish Ministerio de Ciencia e Innovación (MICINN), the European Union – NextGenerationEU and PRTR through the programme Juan de la Cierva (grant FJC2020-046063-I), by the the MICINN (grant PID2019-107847RB-C41), and from the CERCA program of the Generalitat de Catalunya.
|
{
"arxiv_id": "2302.13660",
"language": "en",
"timestamp": "2023-02-28T02:26:48",
"url": "https://arxiv.org/abs/2302.13660",
"yymm": "2302"
} | \section{Introduction}
In the past decade, the papers \cite{dev14,pro13,tei13} gave an engaging account of how some of the traditional theorems from calculus and real analysis are each equivalent to the least upper bound property of, or completeness axiom for, the field $\mathbb{R}$ of all real numbers. In a mathematical theory, theorems are proven from a collection of axioms, but in the aforementioned papers, the opposite thought process was exhibited: in the appropriate universe of discourse (which in this case is the class of all ordered fields) selected traditional theorems were each stated as an axiom (for an arbitrary ordered field) and the statement of the completeness axiom was proven as a consequence. As the author, J. Propp, of \cite{pro13} pointed out, this intellectual exercise has the flavor of \emph{Reverse Mathematics} \cite[p. 392]{pro13}, which is interesting in its own right. See, for instance, \cite[Section 1.1]{sim09} or \cite[Chapter 1]{sti18}. Propp further pointed out that this kind of investigation ``sheds light on the landscape of mathematical theories and structures,'' and that ``arguably the oldest form of mathematics in reverse'' is the quest for a list of equivalences for the parallel postulate in Euclidean geometry \cite[pp. 392--393]{pro13}. An independent work \cite{tei13} gave more technical proofs and initiated a list of `completeness properties' for an ordered field that was expanded in \cite{dev14}, so that, currently, 72 characterizations of the completeness axiom have been identified. An almost full cast of the traditional calculus theorems, ranging from the Intermediate Value Theorem to convergence tests and L'H\^opital's rule, appear in the list, but we also find some forms of the Arzel\`a-Ascoli Theorem, and the Lipschitz property of $C^1$ functions. This paper is inspired by the question, of possibly including in the list, some of the theorems of measure theory. We decided to start with the most fundamental\textemdash Littlewood's three principles, the well-known heuristics for understanding measure theory.
A quote from \emph{the} J. E. Littlewood is now inevitable. From \cite[p. 26]{lit44}, the three principles are:
\begin{enumerate}\item\label{prin1} Every measurable set is nearly a finite sum (meaning union) of intervals.
\item\label{prin2} Every (measurable) function is nearly continuous.
\item\label{prin3} Every convergent sequence of (measurable) functions is nearly uniformly convergent.
\end{enumerate}
The principle \ref{prin2} is also known as Lusin's Theorem \cite[pp. 72, 74]{roy88}, while \ref{prin3} is also referred to as Egoroff's Theorem \cite[p. 40]{lit44}. We shall keep the form of each of the above principles as a `one-directional' implication. For instance, \ref{prin2} may be rephrased as ``If $f$ is a measurable function, then $f$ is nearly continuous.'' In the textbooks, we often find the biconditional form ``$f$ is a measurable function if and only if $f$ is nearly continuous,'' but this is \emph{not} what we shall use. The choice of which statement appears in the hypothesis and which statement appears in the conclusion of the one-directional implication is based on how J. E. Littlewood originally stated the principles: the `nearly' part is always in the conclusion of the conditional statement. Principles \ref{prin1} and \ref{prin3} shall be handled similarly. Also, we shall be using `local' forms of such principles. That is, there is a given closed and bounded interval $I$ (in an arbitrary ordered field) such that we shall consider only the measurable sets contained in $I$ and functions with domain $I$ (or a subset of $I$). This `local' perspective means that, among the many forms of the first principle, we shall indeed be using that form which involves the symmetric difference of two sets, where the second set is the union of a finite number of intervals, which was originally in the aforementioned quote from J. E. Littlewood.
The issue of how to define Lebesgue measure and Lebesgue integrals in an arbitrary ordered field $\mathbb{F}$ was one of the first issues we had to deal with. If the usual outer measure approach is to be used, then we have to define outer measure as the infimum of some set, but then, the \emph{Existence of Infima}, or the assertion that any nonempty subset of $\mathbb{F}$ that has a lower bound has an infimum, is one of the equivalent forms of the completeness axiom \cite[p. 108]{tei13}, yet $\mathbb{F}$ need not be complete. What we found as a suitable approach is the Riesz method \cite[Chapter II]{cha95}, in which step functions form the starting point for establishing the definition of the Lebesgue integral, and the measurability of a set is defined later as the Lebesgue measurability of its characteristic function.
\section{Preliminaries}\label{PrelSec}
Let $\mathbb{F}$ be an ordered field. Thus, $\mathbb{F}$ has characteristic zero, and consequently, contains a subfield isomorphic to the field $\mathbb{Q}$ of all rational numbers. The usual ordering in $\mathbb{Q}$ is consistent with the order relation $<$ on $\mathbb{F}$ defined by $a<b$ if and only if $b-a\in\mathbb{P}$, where $\mathbb{P}$ is the set of all positive elements of $\mathbb{F}$. The relation $<$ on $\mathbb{F}$ obeys a \emph{Trichotomy Law} which states that for any $a,b\in\mathbb{F}$, exactly one of the assertions $a<b$, $b<a$ or $a=b$, is true.
The relation $>$ is defined by $a>b$ if and only if $b<a$, and the relations $<$ and $>$ may be extended in the usual manner to obtain the relations $\leq$ and $\geq$, respectively. Using these order relations, some standard notions of analysis may be defined for the field $\mathbb{F}$.
Given $a,b\in\mathbb{F}$, the open interval with left endpoint $a$ and right endpoint $b$ is $\lpar a,b\rpar:=\{x\in\mathbb{F}\ :\ a<x<b\}$. The intervals $\lbrak a,b\rbrak$, $\left( a,b\right]$ and $\left[ a,b\right)$ may be defined in the obvious manner, by taking the union of $\lpar a,b\rpar$ with one or both of its endpoints. The length of any of the four aforementioned intervals is defined to be $b-a$.
Given functions $f,g:\lbrak a,b\rbrak\longrightarrow\mathbb{F}$, by $f\geq g$, we mean $f(x)\geq g(x)$ for all $x\in\lbrak a,b\rbrak$. A function $f:\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ is continuous at $c\in\lbrak a,b\rbrak$ if, for each $\varepsilon\in\mathbb{P}$, there exists $\delta\in\mathbb{P}$ such that for any $x\in\lbrak a,b\rbrak\cap\left( c-\delta,c+\delta\right)$, $f(x)-f(c)\in\left(-\varepsilon,\varepsilon\right)$. We say that $f:\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ is a continuous function if $f$ is continuous at each element of $\lbrak a,b\rbrak$.
Since $\mathbb{F}$ contains $\mathbb{Q}$ as a subfield, we may view the set of all positive integers $\mathbb{N}\subseteq\mathbb{Q}$ as a subset of $\mathbb{F}$. A sequence in $\mathcal{S}\subseteq\mathbb{F}$ is a function\linebreak $a:\mathbb{N}\longrightarrow\mathcal{S}$. The traditional notation is $a_n:=a(n)$ for any $n\in\mathbb{N}$, and instead of referring to $a$ as a sequence, we say that $\lpar a_n\rpar$ is a sequence. If indeed $\lpar a_n\rpar$ is a sequence in some subset of $\mathbb{F}$, $a_n$ may further be equal to some other expression determined by $n$. A sequence $\lpar a_n\rpar$ converges to $L\in\mathbb{F}$ if, for each $\varepsilon\in\mathbb{P}$, there exists $N\in\mathbb{N}$ such that for any $n\in\mathbb{N}$ with $n\geq N$, we have $a_n-L\in\left(-\varepsilon,\varepsilon\right)$. If indeed $\lpar a_n\rpar$ converges to $L$, then using the Trichotomy Law in $\mathbb{F}$, the element $L$ is unique, and we define $\displaystyle\lim_{n\rightarrow\infty} a_n:=L$. Given sequences $\lpar a_n\rpar$ and $\lpar b_n\rpar$ such that for each $n\in\mathbb{N}$, $b_n=\displaystyle\sum_{k=1}^na_k$, if $\lpar b_n\rpar$ converges, then $\displaystyle\sum_{k=1}^\infty a_k:=\displaystyle\lim_{n\rightarrow\infty} b_n$.
Given $\lbrak a,b\rbrak\subseteq\mathbb{F}$, we shall also be considering sequences of functions $\lbrak a,b\rbrak\longrightarrow\mathcal{S}$ for some $\mathcal{S}\subseteq\mathbb{F}$. By such, we simply mean that each $n\in\mathbb{N}$ is assigned to a unique function $\varphi_n:\lbrak a,b\rbrak\longrightarrow\mathcal{S}$, and we denote the function sequence by $\left(\varphi_n\right)$. We say that $\left(\varphi_n\right)$ is \emph{monotonically decreasing} if $\varphi_{n}\geq \varphi_{n+1}$ for any $n\in\mathbb{N}$. Let $\left(\varphi_n\right)$ be a sequence of functions $\lbrak a,b\rbrak\longrightarrow\mathcal{S}$. We say that the functions in $\left(\varphi_n\right)$ \emph{converge uniformly} to a function $f:\lbrak a,b\rbrak\longrightarrow\mathcal{S}$ if for each $\varepsilon\in\mathbb{P}$, there exists $N\in\mathbb{N}$ such that for any $n\geq N$ and any $x\in\lbrak a,b\rbrak$, $f(x)-\varphi_n(x)\in\left(-\varepsilon,\varepsilon\right)$. If indeed the functions in $\left(\varphi_n\right)$ converge uniformly to $f$ and each $\varphi_n$ is continuous at any element of $\lbrak a,b\rbrak$, then by routine instantiation of quantifiers, $f$ is also continuous at any element of $\lbrak a,b\rbrak$.
By a \emph{partition} of $\lbrak a,b\rbrak\neq\emptyset$, we mean a finite subset of $\lbrak a,b\rbrak$ that contains the endpoints $a$ and $b$. If $a=b$, then the only possible partition of $\lbrak a,b\rbrak$ is the singleton $\{a\}=\{b\}$. For the non-degenerate case, which is when $a<b$, we use the traditional notation, in which, if indeed we have a partition $\Delta$ of $\lbrak a,b\rbrak$ with cardinality $n$, then the elements of $\Delta$ are indexed as\linebreak $a=x_0<x_1<\cdots<x_n=b$. A function $\varphi:\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ is a \emph{step function} if there exists a partition $\{x_0,x_1,\ldots,x_n\}$ of $\lbrak a,b\rbrak$ such that for each $i\in\{1,2,\ldots,n\}$, the restriction $\left.\varphi\right|_{\left( x_{i-1},x_i\right)}$ is a constant function, that is, a function with a one-element range, say $\{M_i\}$ for some $M_i\in\mathbb{F}$. In such a case, the integral of $\varphi$ over $\lbrak a,b\rbrak$ is defined as
\begin{eqnarray}
\displaystyle\int_a^b\varphi:=\sum_{i=1}^nM_i(x_i-x_{i-1}).\label{IntegralStep}
\end{eqnarray}
For the degenerate case $a=b$, the summation in \eqref{IntegralStep} is an empty sum, and so $\displaystyle\int_a^b\varphi=0$. If $a<b$, and if $M_i\in\mathbb{P}\cup\{0\}$ for any $i\in\{1,2,\ldots,n\}$, then $\displaystyle\int_a^b\varphi\geq 0$.
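As a concrete illustration of definition \eqref{IntegralStep}, the following Python sketch evaluates the integral of a step function from its partition and constant values, with the field modelled here by floating-point numbers.
\begin{verbatim}
def step_integral(partition, values):
    # partition: a = x_0 < x_1 < ... < x_n = b; values[i] = M_{i+1},
    # the constant value on the open interval (x_{i-1}, x_i).
    # An empty list of values yields the empty sum, i.e., 0.
    return sum(M * (x1 - x0)
               for M, x0, x1 in zip(values, partition, partition[1:]))

# Example: phi = 2 on (0,1), 0 on (1,3), 1 on (3,4); integral = 2 + 0 + 1.
assert step_integral([0, 1, 3, 4], [2, 0, 1]) == 3
\end{verbatim}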
Consider an arbitrary nondegenerate $\lbrak a,b\rbrak\subseteq\mathbb{F}$. We say that $\mathcal{S}\subseteq\lbrak a,b\rbrak$ is a \emph{null set} or \emph{has measure zero} if, for each $\varepsilon\in\mathbb{P}$, there exists a countable collection $\{\left( a_n,b_n\right)\subseteq\lbrak a,b\rbrak\ :\ n\in\mathbb{N}\}$ of open intervals such that
\begin{eqnarray}
\mathcal{S}\subseteq\bigcup_{n=1}^\infty\left( a_n,b_n\right),\qquad\sum_{n=1}^\infty(b_n-a_n)<\varepsilon.\nonumber
\end{eqnarray}
A statement $\mathscr{P}(x)$ is said to hold \emph{almost everywhere in $\lbrak a,b\rbrak$} if\linebreak $\{x\in\lbrak a,b\rbrak\ :\ \neg\mathscr{P}(x)\}$ has measure zero. We say that a sequence $\left(\varphi_n\right)$ of functions $\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ converges to a function $f:\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ almost everywhere in $\lbrak a,b\rbrak$ if $f(x)=\displaystyle\lim_{n\rightarrow\infty}\varphi_n(x)$ almost everywhere in $\lbrak a,b\rbrak$. A function $f:\lbrak a,b\rbrak\longrightarrow\mathbb{P}\cup\{0\}$ is \emph{Lebesgue measurable} if there exists a monotonically decreasing sequence $\left(\varphi_n\right)$ of step functions $\lbrak a,b\rbrak\longrightarrow\mathbb{P}\cup\{0\}$ that converge to $f$ almost everywhere in $\lbrak a,b\rbrak$. A function $f:\lbrak a,b\rbrak\longrightarrow\mathbb{P}\cup\{0\}$ \emph{has a Lebesgue integral} if $f$ is Lebesgue measurable and if there exists $\displaystyle\int_a^bf\in\mathbb{F}$ such that for any sequence $\left(\varphi_n\right)$ of monotonically decreasing step functions $\lbrak a,b\rbrak\longrightarrow\mathbb{P}\cup\{0\}$ that converge to $f$ almost everywhere in $\lbrak a,b\rbrak$, the sequence $\left(\displaystyle\int_a^b\varphi_n\right)$ of integrals converges to $\displaystyle\int_a^bf$. The notion of Lebesgue measurability and of having a Lebesgue integral may then be extended to a function $\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ in the usual manner, which is by considering the nonnegative and negative `parts' of a function.
For any sets $X$ and $Y$, we define $X\backslash Y:=\{x\in X\ :\ x\notin Y\}$,\linebreak and $X\triangle Y:=(X\backslash Y)\cup(Y\backslash X)$.
We say that $E\subseteq\lbrak a,b\rbrak$ is a \emph{measurable set} if its \emph{characteristic function}
\begin{eqnarray}
\chi_E(x):=\begin{cases}1, & \mbox{if }x\in E,\\
0, & \mbox{if }x\in \lbrak a,b\rbrak\backslash E,\end{cases}\nonumber
\end{eqnarray}
is a measurable function. If $E$ has measure zero, or $\chi_E$ has a Lebesgue integral, then $E$ \emph{has Lebesgue measure}. In the former case, $\displaystyle\int_a^b\chi_E=0$.
By a \emph{cut}\footnote{This definition of cut was taken from \cite[p. 60]{mon08}. We chose it because it is apparently more concise. The definition in \cite{dev14,pro13,tei13} is based on the more traditional, which is that a cut is a pair of subsets of $\mathbb{F}$.} of $\mathbb{F}$ we mean a nonempty proper subset $A$ of $\mathbb{F}$ such that for any $a\in A$ and any $b\in\mathbb{F}\backslash A$, $a<b$. We say that $c\in\mathbb{F}$ is a \emph{cut point} of a cut $A\subseteq\mathbb{F}$ if for any $a\in A$ and any $b\in\mathbb{F}\backslash A$, $a\leq c\leq b$. A cut of $\mathbb{F}$ that does not have a cut point is a \emph{gap}. We say that $\mathbb{F}$ is a \emph{complete ordered field} if $\mathbb{F}$ satisfies the axiom:
\begin{enumerate}
\item[{\bf(CA)}] {\it Cut Axiom.} Every cut of $\mathbb{F}$ is not a gap.
\end{enumerate}
Otherwise, $\mathbb{F}$ is said to be \emph{incomplete}.
Let $a\in\mathbb{F}$. If $b\in\mathbb{F}$ such that $a\leq b$, then we define $\max\{a,b\}:=b$, $\min\{a,b\}:=a$, and $|a|:=\max\{-a,a\}$.
\section{Equivalence of Littlewood's Principles to the Cut Axiom}
We now prove the equivalence of Littlewood's three principles to {\bf(CA)}, with a couple more statements added to our list of equivalences.
\begin{theorem} For an arbitrary ordered field $\mathbb{F}$, each of the statements
\begin{enumerate}\item[{\bf(LIP)}] \emph{Lebesgue Integral Property.}\footnote{The names {\bf(LIP)}\ and {\bf(LMP)}\ are not popularly used in standard real analysis. Also, these two statements reduce to trivialities when $\mathbb{F}$ is complete. We chose to name these two statements as such, in analogy to what in \cite{dev14} was called the \emph{Darboux Integral Property}, which states that every Darboux integrable function $f:\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ has a Darboux integral. According to \cite[p. 271]{dev14}, the Darboux Integral Property (in conjunction with some other statement) is one of the equivalent forms of {\bf(CA)}.} Given $\lbrak a,b\rbrak\subseteq\mathbb{F}$, every Lebesgue measurable function $\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ has a Lebesgue integral.
\item[{\bf(LMP)}] \emph{Lebesgue Measure Property.} Given $\lbrak a,b\rbrak\subseteq\mathbb{F}$, every measurable subset of $\lbrak a,b\rbrak$ has Lebesgue measure.
\item[{\bf(LP1)}] \emph{Littlewood's First Principle.} Given $\lbrak a,b\rbrak\subseteq\mathbb{F}$, for each measurable set $E\subseteq\lbrak a,b\rbrak$ and each $\varepsilon\in\mathbb{P}$, there exist intervals $I_1,I_2,\ldots, I_m\subseteq\lbrak a,b\rbrak$ such that if $U=\displaystyle\bigcup_{i=1}^mI_i$, then $\displaystyle\int_a^b\chi_{E\triangle U}<\varepsilon$.
\item[{\bf(LP2)}] \emph{Littlewood's Second Principle.} Given $\lbrak a,b\rbrak\subseteq\mathbb{F}$, for each Lebesgue measurable function $f:\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ and each $\varepsilon\in\mathbb{P}$, there exists a measurable set $E\subseteq\lbrak a,b\rbrak$ such that $f:\lbrak a,b\rbrak\backslash E\longrightarrow\mathbb{F}$ is a continuous function and that $\displaystyle\int_a^b\chi_{E}<\varepsilon$.
\item[{\bf(LP3)}] \emph{Littlewood's Third Principle.} Given $\lbrak a,b\rbrak\subseteq\mathbb{F}$, for each sequence $\left( \varphi_n\right)$ of Lebesgue measurable functions $\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ that converge to\linebreak $f:\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ almost everywhere in $\lbrak a,b\rbrak$, and each $\varepsilon\in\mathbb{P}$, there exists a measurable set $E\subseteq\lbrak a,b\rbrak$ such that the functions $\varphi_n:\lbrak a,b\rbrak\backslash E\longrightarrow\mathbb{F}$ converge uniformly to $f:\lbrak a,b\rbrak\backslash E\longrightarrow\mathbb{F}$, and that $\displaystyle\int_a^b\chi_{E}<\varepsilon$.
\end{enumerate}
is equivalent to {\bf(CA)}.
\end{theorem}
\begin{proof} We shall prove
\begin{eqnarray}
\mbox{\bf(CA)} \Longrightarrow \mbox{\bf(LIP)} \Longrightarrow \mbox{\bf(LMP)} \Longrightarrow {\bf(CA)},\label{TheCycle1}\\
\mbox{\bf(CA)} \Longrightarrow \mbox{\bf(LP3)} \Longrightarrow \mbox{\bf(LP2)} \Longrightarrow \mbox{\bf(LP1)} \Longrightarrow {\bf(CA)}.\label{TheCycle2}
\end{eqnarray}
Let {\bf($\star$)}\ be one of the statements {\bf(LIP)}, {\bf(LMP)}, {\bf(LP1)}, {\bf(LP2)}, or {\bf(LP3)}. From standard real analysis, {\bf($\star$)}\ is true for the ordered field $\mathbb{R}$. Suppose $\mathbb{F}$ is an ordered field that satisfies $\neg\mbox{\bf($\star$)}$. Then $\mathbb{F}$ cannot be $\mathbb{R}$, but we have a well-known fact\footnote{See, for instance, \cite[pp. 601--605]{spi08}.} that any complete ordered field is isomorphic to $\mathbb{R}$, so $\mathbb{F}$ is incomplete, and thus, $\neg\mbox{\bf(CA)}$ holds in $\mathbb{F}$. We have thus proven\linebreak $\mbox{\bf(CA)}\Longrightarrow\mbox{\bf($\star$)}$ by contraposition. In particular,
\begin{eqnarray}
\mbox{\bf(CA)} &\Longrightarrow& \mbox{\bf(LIP)},\nonumber\\
\mbox{\bf(CA)} & \Longrightarrow & \mbox{\bf(LP3)}.\nonumber
\end{eqnarray}
The implication
\begin{eqnarray}
\mbox{\bf(LIP)} &\Longrightarrow& \mbox{\bf(LMP)},\nonumber
\end{eqnarray}
is trivial, while {\bf(LP2)}\ is well-known as a consequence of {\bf(LP3)}. One proof of
\begin{eqnarray}
\mbox{\bf(LP3)}\Longrightarrow\mbox{\bf(LP2)},\nonumber
\end{eqnarray}
can be found in \cite[p. 110]{cha95}, and in this proof, there is a straightforward use of {\bf(LP3)}\ to carry out a `modus ponens' argument to prove {\bf(LP2)}, and all notions [mainly, continuity and uniform convergence] used in the proof are valid for the arbitrary ordered field $\mathbb{F}$, where such necessary notions have been defined for $\mathbb{F}$ in Section~\ref{PrelSec}. To complete the proof of \eqref{TheCycle1}--\eqref{TheCycle2}, only three implications remain, which we prove in the following.
\newline
\noindent $\neg\mbox{\bf(CA)}\Longrightarrow \neg\mbox{\bf(LMP)}$. Suppose $\mathbb{F}$ is incomplete. We proceed by contradiction, so suppose {\bf(LMP)}\ holds. By \cite[Lemma~B, p. 110]{tei13}, there exist a gap $A\subseteq\mathbb{F}$ and some strictly increasing and non-convergent sequence $\lpar a_n\rpar$ in $A$ such that
\begin{eqnarray}
\forall x\in A\quad \left[\ a_1<x \ \Longrightarrow\ \exists!n\in\mathbb{N}\ x\in\left( a_n,a_{n+1}\right]\ \right].\label{TeiSeq}
\end{eqnarray}
Let $a:=a_1$, let $b\in\mathbb{F}\backslash A$, and let $E:=\lbrak a,b\rbrak\backslash A$. For each $n\in\mathbb{N}$, define $\varphi_n:\lbrak a,b\rbrak\longrightarrow\mathbb{F}$ by $\varphi_n:x\mapsto 1-\chi_{\left[ a,a_{n}\right]}(x)$. Since, for any $n\in\mathbb{N}$,\linebreak $\left[ a,a_n\right]\subseteq\left[ a,a_{n+1}\right]$, we find that $\left(\varphi_n\right)$ is a monotonically decreasing sequence of step functions, with $\displaystyle\int_a^b\varphi_n=b-a_{n}$ for any $n$. Let $x\in\lbrak a,b\rbrak$, and let $\varepsilon\in\mathbb{P}$. If $x\in A$, then $\chi_E(x)=0$, and by \eqref{TeiSeq}, there exists $N\in\mathbb{N}$ such that for any $n\geq N+1$, $\varphi_n(x)=0$, and furthermore, $\chi_E(x)-\varphi_n(x)=0\in\left(-\varepsilon,\varepsilon\right)$. If $x\in\lbrak a,b\rbrak\backslash A$, since $\lpar a_n\rpar$ is a sequence in $A$, by the definition of cut, we have, for any $n\in\mathbb{N}$, $x>a_{n}$, so $x\notin\left[ a,a_{n}\right]$. This implies $\varphi_n(x)=1$, but $\chi_E(x)=1$, so we further obtain $\chi_E(x)-\varphi_n(x)=0\in\left(-\varepsilon,\varepsilon\right)$. We have thus shown that the statement $\chi_E(x)=\displaystyle\lim_{n\rightarrow\infty}\varphi_n(x)$ is true for any $x\in\lbrak a,b\rbrak$, and consequently, almost everywhere in $\lbrak a,b\rbrak$. This means that $E$ is measurable, and by the {\bf(LMP)}, there exists $\displaystyle\int_a^b\chi_E\in\mathbb{F}$ such that $\displaystyle\int_a^b\chi_E=\displaystyle\lim_{n\rightarrow\infty}\displaystyle\int_a^b\varphi_n=\displaystyle\lim_{n\rightarrow\infty}(b-a_n)$. That is, the sequence $\left( b-a_n\right)$ converges to $\displaystyle\int_a^b\chi_E$. Routine arguments may be used to show that, in $\mathbb{F}$, the usual linearity and constant rules for sequence limits hold, so $(a_n-b)$ converges to $-\displaystyle\int_a^b\chi_E$ and taking the sum of $(a_n-b)$ with the constant sequence $(b)$, we find that $\lpar a_n\rpar$ converges to $b-\displaystyle\int_a^b\chi_E$, contradicting the fact that $\lpar a_n\rpar$ is non-convergent. Therefore, $E$ does not have Lebesgue measure.\newline
\noindent $\mbox{\bf(LP2)}\Longrightarrow\mbox{\bf(LP1)}$. Let $\varepsilon\in\mathbb{P}$. If $E\subseteq\lbrak a,b\rbrak$ is a measurable set, then $\chi_E$ is a measurable function, and by {\bf(LP2)}, there exists a measurable set $F\subseteq\lbrak a,b\rbrak$ such that $\chi_E:\lbrak a,b\rbrak\backslash F\longrightarrow\mathbb{P}\cup\{0\}$ is a continuous function and $\displaystyle\int_a^b\chi_{F}<\frac{\varepsilon}{2}$. Since the integral $\displaystyle\int_a^b\chi_{F}$ of the nonnegative characteristic function $\chi_F$ is nonnegative,
\begin{eqnarray}
-\frac{\varepsilon}{2}<\displaystyle\int_a^b\chi_{F}<\frac{\varepsilon}{2},\label{xLebInt}
\end{eqnarray}
which, in particular, means that $F$ has Lebesgue measure. Since $ F$ is measurable, there exists a monotonically decreasing sequence $\left(\varphi_n\right)$ of step functions $\lbrak a,b\rbrak\longrightarrow\mathbb{P}\cup\{0\}$ that converge to $\chi_{ F}$ almost everywhere in $\lbrak a,b\rbrak$, and that
\begin{eqnarray}
\displaystyle\int_a^b\chi_{ F} = \displaystyle\lim_{n\rightarrow\infty}\displaystyle\int_a^b\varphi_n.\nonumber
\end{eqnarray}
Consequently, there exists $N\in\mathbb{N}$ such that for all $n\geq N$,
\begin{eqnarray}
-\frac{\varepsilon}{2}< \displaystyle\int_a^b\varphi_n-\displaystyle\int_a^b\chi_{ F}<\frac{\varepsilon}{2},\nonumber
\end{eqnarray}
which, in conjunction with \eqref{xLebInt}, completes the proof that
\begin{eqnarray}
\displaystyle\lim_{n\rightarrow\infty}\displaystyle\int_a^b\varphi_n=0.\label{xLebInt2}
\end{eqnarray}
For each $m\in\mathbb{N}$, define $P_m=\{x\in\lbrak a,b\rbrak\ :\ \chi_{ F}(x)\geq\frac{1}{m}\}$. Since $\chi_{ F}$ takes on only values of $0$ or $1$,
\begin{eqnarray}
\bigcup_{m\in\mathbb{N}} P_m =\{x\in\lbrak a,b\rbrak\ :\ \chi_{ F}(x)=1\}= F.\label{xLebIntII2}
\end{eqnarray}
Since $\left(\varphi_n\right)$ is monotonically decreasing and converges to $\chi_F$ almost everywhere, by a routine argument, for any $n\in\mathbb{N}$, $\varphi_n\geq \chi_{ F}$, and so,
\begin{eqnarray}
\varphi_n(x)\geq\frac{1}{m},\label{xLebIntII1}
\end{eqnarray}
for all $x\in P_m$. Since $\varphi_n$ is a step function, if we take the values of $\varphi_n$ that are at least $\frac{1}{m}$, then we have a finite subset of $\mathbb{P}\cup\{0\}$, the inverse image of which, under $\varphi_n$, is the union of a finite number of intervals, and this union contains $P_m$. Let such intervals be collected in the set $\mathscr{C}_n$, and let $\Sigma_n$ be the sum of the lengths of the intervals in $\mathscr{C}_n$. By \eqref{xLebIntII1}, $\displaystyle\int_a^b\varphi_n\geq \frac{\Sigma_n}{m}$, and by \eqref{xLebInt2}, given $\varepsilon\in\mathbb{P}$, there exists $N\in\mathbb{N}$ such that for all $n\geq N$,
$\frac{\displaystyle\Sigma_n}{m}\leq \displaystyle\int_a^b\varphi_n<\frac{\varepsilon}{m}$, and so, $\Sigma_n<\varepsilon$. At this point, we have proven that, for any $m\in\mathbb{N}$, $P_m$ has measure zero, and by \eqref{xLebIntII2}, so does $ F$.
Let $\left( \phi_n\right)$ be a monotonically decreasing sequence of step functions\linebreak $\lbrak a,b\rbrak\longrightarrow\mathbb{P}\cup\{0\}$ such that $\chi_E(x)=\displaystyle\lim_{n\rightarrow\infty}\phi_n(x)$ for all $x\in\lbrak a,b\rbrak\backslash Q$, where $Q$ has measure zero. For each $n\in\mathbb{N}$, define $\psi_n:\lbrak a,b\rbrak\longrightarrow\mathbb{P}\cup\{0\}$ by
\begin{eqnarray}
\psi_n(x):=\begin{cases} 1, & \mbox{if }\phi_n(x)\geq \frac{1}{2},\\
0, & \mbox{if }\phi_n(x)<\frac{1}{2},\end{cases}\nonumber
\end{eqnarray}
which is a step function. By a routine epsilon argument,
\begin{eqnarray}
\displaystyle\lim_{n\rightarrow\infty}\left|\psi_n(x)-\chi_E(x)\right|=0,\label{Lusin1}
\end{eqnarray}
for any $x\in\lbrak a,b\rbrak\backslash Q$. By an argument similar to that done earlier in this proof, we have $\psi_n=\chi_{B_n}$, where $B_n$ is the union of a finite number of intervals in $\lbrak a,b\rbrak$. By a routine argument, $\left|\psi_n-\chi_E\right|=\chi_{E\triangle B_n}$. Let $D_n$ be the set of all discontinuities of $\psi_n$. Since $\psi_n$ is a step function, and $\chi_E:\lbrak a,b\rbrak\backslash F\longrightarrow\mathbb{P}\cup\{0\}$ is a continuous function, the set $D_n\cup F$ of all discontinuities of $\chi_{E\triangle B_n}=\left|\psi_n-\chi_E\right|$ is a set of measure zero, and so is $R:=Q\cup F\cup\displaystyle\bigcup_{n\in\mathbb{N}}D_n$.
Let $x\in\lbrak a,b\rbrak\backslash R$. Thus, for each $n\in\mathbb{N}$, $\chi_{E\triangle B_n}$ is continuous at $x$, and since $\chi_{E\triangle B_n}$ takes on a value of only $0$ or $1$, either there exists an interval $Z_x\subseteq\lbrak a,b\rbrak$ that contains $x$ and on which $\chi_{E\triangle B_n}$ is zero, or there exists an interval $O_x\subseteq\lbrak a,b\rbrak$ that contains $x$ and on which $\chi_{E\triangle B_n}$ has value $1$. By \eqref{Lusin1}, there exists $N\in\mathbb{N}$ such that for all $n\geq N$,
\begin{eqnarray}
-1 < & \chi_{E\triangle B_n}(x) & < 1,\nonumber
\end{eqnarray}
and since $\chi_{E\triangle B_n}(x)$ can only be $0$ or $1$, we must have $\chi_{E\triangle B_n}(x)=0$, so the interval $O_x$ cannot exist. Thus, $x\in Z_x$, and we have proven
\begin{eqnarray}
\lbrak a,b\rbrak\backslash R\subseteq\bigcup_{x\in\lbrak a,b\rbrak\backslash R}Z_x\subseteq\{x\in\lbrak a,b\rbrak\ :\ \chi_{E\triangle B_n}(x)=0\},\nonumber
\end{eqnarray}
from which we deduce that $E\triangle B_n=\{x\in\lbrak a,b\rbrak\ :\ \chi_{E\triangle B_n}(x)\neq 0\}\subseteq R$.
At this point, we have proven that for all $n\geq N$, the set $E\triangle B_n$ has measure zero, and hence has Lebesgue measure. In particular, $\displaystyle\int_a^b\chi_{E\triangle B_n}=0$. Thus, we may state that, for each $\varepsilon\in\mathbb{P}$, there exists a union $U=B_N$ of a finite number of intervals such that $\displaystyle\int_a^b\chi_{E\triangle U}=0<\varepsilon$. Therefore, $\mathbb{F}$ satisfies {\bf(LP1)}.
\newline
\noindent $\neg\mbox{\bf(CA)}\Longrightarrow\neg\mbox{\bf(LP1)}$. Suppose $\mathbb{F}$ is incomplete, and suppose, tending towards a contradiction, that {\bf(LP1)}\ holds. As shown in the proof of $\neg\mbox{\bf(CA)}\Longrightarrow \neg\mbox{\bf(LMP)}$, there exist a gap $A\subseteq\mathbb{F}$, some $a\in A$, and $b\in\mathbb{F}\backslash A$ such that $E:=\lbrak a,b\rbrak\backslash A$ is a measurable set, but does not have Lebesgue measure. Let $\varepsilon\in\mathbb{P}$. By {\bf(LP1)}, there exist intervals $I_1,I_2,\ldots,I_\mu\subseteq\lbrak a,b\rbrak$ such that if $U=\displaystyle\bigcup_{k=1}^\mu I_k$, then $\displaystyle\int_a^b\chi_{E\triangle U}<\frac{\varepsilon}{2}$. By an argument similar to one given in the proof of $\mbox{\bf(LP2)}\Longrightarrow\mbox{\bf(LP1)}$, $E\triangle U$ has measure zero.
Without loss of generality, we assume $I_1,I_2,\ldots,I_\mu$ are pairwise disjoint, and that, for some index $K\in\{1,2,\ldots,\mu\}$, the interval $I_K$, if nonempty, intersects both $\lbrak a,b\rbrak\cap A$ and $\lbrak a,b\rbrak\backslash A$, and we further assume that all intervals $I_k$ with $k<K$ are subsets of $\lbrak a,b\rbrak\backslash A$, while all intervals $I_k$ with $k>K$ are subsets of $\lbrak a,b\rbrak\cap A$.
Let $T:=\displaystyle\bigcup_{k=1}^KI_k$. Since $E\triangle U$ has measure zero, the sets $I_K\backslash E\subseteq E\triangle U$ and $E\triangle T\subseteq E\triangle U$ also have measure zero. Furthermore, the equation
\begin{eqnarray}
\chi_E(x)=\chi_T(x)+\chi_{E\triangle T}(x),\label{LebInt4}
\end{eqnarray}
is true for all $x\in\lbrak a,b\rbrak$ except those $x\in I_K\backslash E$. Hence, \eqref{LebInt4} is true almost everywhere in $\lbrak a,b\rbrak$. Since $E\triangle T$ has measure zero, it has Lebesgue measure, so $\chi_{E\triangle T}$ has a Lebesgue integral. The other function on the right-hand side of \eqref{LebInt4}, $\chi_T$, is a step function because $T$ is a finite union of intervals, and thus, $\chi_T$ also has a Lebesgue integral, and, since \eqref{LebInt4} is true almost everywhere in $\lbrak a,b\rbrak$, by a routine argument, we find that $\chi_E$ also has a Lebesgue integral, contradicting the fact that $E$ does not have Lebesgue measure. Therefore, {\bf(LP1)}\ is false in $\mathbb{F}$.
\end{proof}
} | \section{Introduction}
\label{sec:intro}
With the increasing demand for high-quality video transmission over the Internet, video compression has become essential to efficiently transmit videos over limited bandwidth. This demand has driven the development of video compression standards such as H.265/HEVC \cite{hevc} and beyond. However, compressed videos suffer from unavoidable compression artifacts. As a result, there is growing interest in the research community in enhancing the quality of compressed videos.
Several studies have proposed methods to improve the quality of individual frames in videos \cite{yang2018enhancing,wang2017novel}, as well as methods leveraging temporal information between frames \cite{yang2019quality,guan2019mfqe,deng2020spatio,yang2020learning,wang2020multi,huo2021recurrent}. Most existing methods focus on the architecture design of the model, which typically involves (i) designing the backbone extraction module using CNNs or Transformers, (ii) designing the propagation module to effectively capture information flow between frames, and (iii) designing the enhancement module as a post-processing step to improve the quality of the output video. However, there is often little emphasis on incorporating prior knowledge about the video content (such as motion information) or about the compression algorithm itself. This represents an untapped potential for improving the overall quality of the video compression process.
In this study, we propose several methods to enhance the performance of BasicVSR++, a state-of-the-art video super-resolution method \cite{chan2021basicvsr++}. Our approach begins by examining BasicVSR++'s performance with varying numbers of input frames, taking into account the motion information of the content. As compressed video uses the HEVC low-delay configuration, the first frame (also known as the Intra frame) has significantly higher quality than the others. To take advantage of this, we train a separate network called Intra frame BasicVSR++ to improve the quality of the first frame. Finally, we introduce an adaptive mechanism that combines multiple reconstructed instances with different input sequence lengths to obtain the final enhanced output.
The experiments demonstrate that the proposed framework not only leverages the low-delay configuration of video compression but also incorporates context-adaptive video fusion to enhance the final quality of compressed videos. These results demonstrate the potential of incorporating domain-specific knowledge into deep learning models for advancing the state-of-the-art in compressed video quality enhancement.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.65]{low-delay.png}
\caption{
Low-delay configuration of HEVC with a group-of-pictures size of 4: the configuration has only one intra frame, at the 0th index, and a repeated group structure of 4 frames. Full references are only available from the 13th frame.
}
\label{fig:low_delay}
\end{figure*}
\section{Performance Analysis of BasicVSR++}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.65]{Gap_visualization2.png}
\caption{Performance variation of BasicVSR++ with respect to the number of input frames. The per-frame PSNR difference between the outputs of trimmed videos with different gaps ($90, 122, 154, 186$) and the output of the original video is shown as $\Delta$ PSNR. The first and second rows display the results with start frame $0$ and $32$, respectively.}
\label{fig:gap_visual}
\end{figure*}
BasicVSR++ \cite{chan2021basicvsr++} is a state-of-the-art video super-resolution method that enhances video quality through a combination of frame propagation and alignment techniques. While BasicVSR++ has shown impressive results in enhancing video quality, it does not take into account the unique characteristics of compressed video.
In a compressed video \cite{hevc}, a frame can be an intra or inter frame. Intra-frame compression only uses information from the current image, while inter-frame compression utilizes information from previously encoded frames to reduce redundancy. The NTIRE22 challenge encoded video using a low-delay configuration \cite{hevc}, as shown in Figure \ref{fig:low_delay}, with a group-of-pictures size of 4. This results in the compressed video having only one intra frame, of significantly higher quality than the inter frames. As a result, the quality of frames in the compressed video is highly varied, but BasicVSR++ does not take this into account. In practice, the entire video is fed as input to the model during testing, which may not be the optimal choice.
We investigate the effect of varying input video frame lengths on the performance of BasicVSR++. As shown in the first row of Fig.~\ref{fig:gap_visual}, the pre-trained network's performance varies significantly depending on the input frame length, with shorter lengths demonstrating higher performance for the first few frames.
Interestingly, we also noticed that this performance phenomenon did not occur when the input of the trimmed video started from the 32nd frame, i.e., when the intra frame is not included. This finding suggests that the network is better able to exploit the high quality of the intra frame with a smaller frame length. For later frames, our experiments demonstrated that using the full frame length performed better thanks to the temporal dependency in both backward and forward directions. In contrast, trimmed video inputs have limited backward and forward dependency, resulting in lower performance for later frames.
This observation suggests that the optimal choice of input frame length may vary depending on the temporal characteristics of the video content. The use of shorter frame lengths may be more effective for the early frames, while longer frame lengths may be more suitable for later frames that rely on both backward and forward dependencies. Additionally, the effectiveness of the backward and forward dependencies may be limited to a certain temporal range, as seen in Video 208 at shift 32. Examining the 40 test sequences (201--240) in the NTIRE22 dataset, we observe that, for the first 64 frames, enhancing video inputs trimmed to a length of 186 frames yields the best performance.
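For reproducibility, the evaluation protocol can be sketched as follows; here \texttt{enhance} (a wrapper around BasicVSR++ inference) and the ground-truth frames \texttt{gt} are placeholders for the actual implementation, and \texttt{psnr} is the standard per-frame metric:
\begin{verbatim}
import numpy as np

def psnr(ref, out, peak=255.0):
    # per-frame PSNR between a ground-truth frame and an enhanced frame
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def delta_psnr_vs_trim(frames, gt, enhance, lengths=(90, 122, 154, 186)):
    # Delta PSNR of trimmed-input outputs relative to the full-length output
    full = enhance(frames)
    base = [psnr(g, o) for g, o in zip(gt, full)]
    return {L: [psnr(g, o) - b
                for g, o, b in zip(gt[:L], enhance(frames[:L]), base[:L])]
            for L in lengths}
\end{verbatim}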
\begin{figure*}[t]
\centering
\includegraphics[scale=0.65]{Gap_visualization3-IntraFrame.png}
\caption{Performance of the fine-tuned Intra frame BasicVSR++ without the Adaptive Context-Aware Fusion mechanism.}
\label{fig:i_frame}
\end{figure*}
\section{Proposed Method}
\subsection{Intra frame BasicVSR++}
\begin{figure*}[ht]
\centering
\includegraphics[scale=0.45]{general.png}
\caption{General framework of the adaptive context-aware mechanism for video fusion. The mechanism detects static video content using the gradient of the average frame.}
\label{fig:general}
\end{figure*}
To further leverage the superior quality of the intra frame, we introduce a new network called Intra frame BasicVSR++. In order to do so, we created a new intra frame video dataset by cutting the original videos into multiple non-overlapping 30-frame segments and encoding them using HEVC with the same low-delay configuration as the NTIRE22 dataset. The resulting dataset contains videos with only one intra frame and multiple inter-frames. We then divided the compressed videos into training and testing datasets for network training. By utilizing this configuration, we ensured that the intra frame is of higher quality than the inter-frames, reflecting the reality of compressed videos.
During training, we fine-tuned the Intra frame BasicVSR++ network by using the intra frame as the first frame of each segment. This allowed the network to learn to enhance the high-quality intra frame more effectively, resulting in a more accurate and efficient network. In this way, the Intra frame BasicVSR++ network is designed to specifically target the improvement of the first frame's quality, while BasicVSR++ improves the quality of the entire video.
As shown in Fig.~\ref{fig:i_frame}, we observed that the Intra frame BasicVSR++ network improves the performance of the intra frame in most cases, such as videos 201 and 202. However, this improvement is not universal and may not always hold for sequences with high frame rates and slow motion, such as video 226. This may be due to the limited amount of information in the intra frame in such cases, and the network may struggle to extract and propagate useful features. Nonetheless, these results suggest that the Intra frame BasicVSR++ network can be an effective tool in improving the performance of compressed video enhancement for most types of content.
\subsection{Adaptive Context-Aware Fusion}
From our analysis, it is evident that BasicVSR++ benefits from adaptive input frame length and can be further improved by Intra frame BasicVSR++. However, it is also important to note that this improvement is not applicable to all types of video content. In order to address this issue, we propose a heuristic that separates cases where the frame rate is high and there is slow motion. This is achieved by comparing the gradient of the average frame with a given threshold. By using this threshold, we can determine if the video contains a high frame rate and slow motion, and take appropriate measures to optimize the final performance.
Firstly, the average frame is obtained as follows:
\begin{equation}
\bar{f} = \frac{1}{K}\sum_{i=0}^{K-1} f_{i\times m}\,, \qquad K=\left\lceil N/m \right\rceil,
\end{equation}
where $N$ is the number of frames in the video and $m$ is a temporal sampling factor chosen according to the input video frame rate. Specifically, we set $m=4$ for videos with a frame rate less than 30fps and $m=8$ for those with a frame rate greater than 30fps. Next, we compare the gradient of the average frame to a given threshold:
\begin{equation}
\nabla (\bar{f}) = ||\nabla_x (\bar{f})|| + ||\nabla_y (\bar{f})|| < \tau,
\end{equation}
where $\nabla_x$ and $\nabla_y$ denote the gradient in the horizontal and vertical directions of a given frame $f$, and $\tau$ is a threshold of value 2300. The value of $\tau$ can be normalized based on the number of pixels, as shown in Fig. \ref{fig:general}.
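A minimal sketch of this heuristic is given below, assuming grayscale frames stacked in an array of shape $(N,H,W)$; whether the gradient norms are taken as plain sums or normalized per pixel determines the scale of $\tau$:
\begin{verbatim}
import numpy as np

def is_static_slow_motion(frames, fps, tau=2300.0):
    # temporal sampling factor m depends on the frame rate (see above)
    m = 4 if fps < 30 else 8
    f_bar = frames[::m].mean(axis=0)       # average frame over f_{i*m}
    gy, gx = np.gradient(f_bar)            # vertical / horizontal gradients
    grad = np.abs(gx).sum() + np.abs(gy).sum()
    return grad < tau                      # True: high frame rate, slow motion
\end{verbatim}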
We propose a novel video fusion mechanism called Adaptive Context-Aware Fusion, as shown in Fig. \ref{fig:general}. The method involves enhancing an input video using three different approaches: full BasicVSR++, short BasicVSR++ with the first 154 frames, and the first 122 frames by Intra frame BasicVSR++. Depending on the current content of the video, an adaptive fusion technique is performed to select either the first frame from short BasicVSR++ or Intra frame BasicVSR++. For the subsequent 63 frames, we select frames from the short BasicVSR++ set, and the remaining frames are extracted from the full BasicVSR++ set.
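For clarity, the resulting frame-selection logic can be sketched as follows; the branch for the first frame encodes the rule that the Intra frame BasicVSR++ output is preferred except for static, high-frame-rate content:
\begin{verbatim}
def adaptive_fusion(full_out, short_out, intra_out, static):
    # full_out:  BasicVSR++ output for the full video
    # short_out: BasicVSR++ output for the first 154 input frames
    # intra_out: Intra frame BasicVSR++ output for the first 122 frames
    # static:    result of the static-content heuristic above
    first = short_out[0] if static else intra_out[0]
    return [first] + list(short_out[1:64]) + list(full_out[64:])
\end{verbatim}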
\subsection{Loss Function}
In order to fine-tune the BasicVSR++ and Intra frame BasicVSR++ networks, we used a weighted sum of three loss components: (i) Charbonnier loss \cite{lai2018fast}, (ii) total variation (TV) loss, and (iii) temporal gradient (TG) loss. The temporal gradient loss captures the difference between the ground truth and the output sequence in the temporal dimension: we subtract consecutive frames in each sequence to obtain a temporal gradient sequence, and compute the loss between the two resulting sequences.
The hyperparameters of the loss weights were optimized using grid search to find the best values. The final loss function is given by:
\begin{equation}
\mathcal{L}_{\mathrm{Final}} = \mathcal{L}_{\mathrm{Char}} + 10^{-3}\cdot\mathcal{L}_{\mathrm{TG}} + 10^{-4}\cdot\mathcal{L}_{\mathrm{TV}},
\end{equation}
where $\mathcal{L}_{\mathrm{Char}}$ is the Charbonnier loss, $\mathcal{L}_{\mathrm{TV}}$ is the total variation loss, and $\mathcal{L}_{\mathrm{TG}}$ is the temporal gradient loss.
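In PyTorch, the composite loss can be sketched as follows; the Charbonnier smoothing constant \texttt{eps} is an assumption, as its value is not specified here:
\begin{verbatim}
import torch

def charbonnier(x, y, eps=1e-3):           # eps value is an assumption
    return torch.mean(torch.sqrt((x - y) ** 2 + eps ** 2))

def tv_loss(x):                             # x: (T, C, H, W) output sequence
    dh = torch.mean(torch.abs(x[..., 1:, :] - x[..., :-1, :]))
    dw = torch.mean(torch.abs(x[..., :, 1:] - x[..., :, :-1]))
    return dh + dw

def tg_loss(out, gt):                       # loss between temporal gradients
    return charbonnier(out[1:] - out[:-1], gt[1:] - gt[:-1])

def final_loss(out, gt):
    return charbonnier(out, gt) + 1e-3 * tg_loss(out, gt) + 1e-4 * tv_loss(out)
\end{verbatim}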
\section{Experiments}
\subsection{Datasets}
For the NTIRE 2022 Challenge, we used the original LDV dataset \cite{yang2021ntire} that consists of 240 videos as our primary training set. To increase our training data, we also utilized the LDV 2.0 dataset, which contains an additional 90 videos. We split the LDV 2.0 videos into six sets, each containing 15 videos. Two of these sets were used as our validation and test sets, respectively. In splitting the videos, we aimed to keep the distribution of content, frame rate, and other factors as similar as possible across the sets. All videos in the LDV and LDV 2.0 datasets, as well as the splits for the NTIRE 2021 and NTIRE 2022 Challenges, are publicly available at \url{https://github.com/RenYang-home/LDV_dataset}.
\subsection{Training Details}
We employed the Adam optimizer \cite{kingma2014adam} with a learning rate of $2 \times 10^{-5}$ and utilized the Cosine Restart scheduler \cite{loshchilov2016sgdr} with a period of 10,000 iterations. To ensure a stable optimization process, we linearly increased the learning rate for the first 10\% of iterations.
\subsubsection{Fine-tuning BasicVSR++}
Due to computational limitations, we fine-tuned only the upsample layer of the pre-trained BasicVSR++ network. We found that increasing the input frame length from 30 to 60 frames led to a 0.03 dB improvement in model performance. The network was fine-tuned for 50,000 iterations.
\subsubsection{Fine-tuning Intra frame BasicVSR++}
A new dataset is created by trimming the original videos into multiple segments, each consisting of 30 frames without overlapping frames. The video segments are encoded using HEVC with the low-delay configuration, resulting in approximately 16,000 training samples. For the Intra frame BasicVSR++ network, only the segments from the first 200 videos are used to train the model, and those from the last 40 videos are used for testing.
\begin{table*}[t]
\footnotesize
\centering
\caption{The PSNR results on the test set for our method and the compared method, BasicVSR++. Values are given in dB.}
\resizebox{\textwidth}{!}{
\begin{tabular}{lcccccccccccccccc}
\toprule
Method \textbackslash Video ID & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 &Avg. \\
\midrule
BasicVSR++ & 33.73 & 32.42& 31.22 & 36.75& 32.16& 30.57 & 30.41& 33.99& 31.68 & 36.20& 27.06 & 23.93& 30.24 & 32.94& 31.20& 31.63\\
Ours (OCL-VCE) & 33.81 & 32.54 & 31.30& 36.82 & 32.28& 30.59& 30.47 & 34.07& 31.75& 36.28 & 27.08& 24.00& 30.37& 33.01& 31.25& 31.71\\
\bottomrule
\end{tabular}
}
\label{tab:res}
\end{table*}
\subsection{Ensembling with Test Time Augmentation}
In our study, we perform ensembling with test-time augmentation (TTA) by generating eight input variations through flipping and rotating the input sequences in the spatial dimension. We then use our proposed framework shown in Fig. \ref{fig:general} to enhance each variation, and post-process the corresponding output by flipping and rotating it back to its original orientation. Finally, we obtain the final output by averaging the outputs of all variations. This approach helps to reduce the impact of input variability and improve the overall performance of the model.
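A sketch of this $\times$8 self-ensemble (four rotations combined with an optional horizontal flip) is given below; \texttt{enhance} again stands in for the full framework of Fig.~\ref{fig:general}:
\begin{verbatim}
import torch

def tta_enhance(video, enhance):
    # video: (T, C, H, W); the rotations assume the model accepts both
    # H x W and W x H inputs, which holds for BasicVSR++-style models
    outputs = []
    for k in range(4):
        for flip in (False, True):
            v = torch.rot90(video, k, dims=(-2, -1))
            if flip:
                v = torch.flip(v, dims=(-1,))
            o = enhance(v)
            if flip:                        # undo flip, then undo rotation
                o = torch.flip(o, dims=(-1,))
            outputs.append(torch.rot90(o, -k, dims=(-2, -1)))
    return torch.stack(outputs).mean(dim=0)
\end{verbatim}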
\subsection{Experimental Results}
We participated in the NTIRE22 challenge as team \textbf{OCL-VCE} and submitted our results on the test set. In Table \ref{tab:res}, we present the performance of our method and BasicVSR++ on the test set. Our method achieved a higher PSNR score of 31.71 dB compared to 31.63 dB obtained by BasicVSR++.
We further evaluated our framework on the validation set, which consists of 10 videos. Using the pre-trained BasicVSR++ and Intra frame BasicVSR++ models and applying test-time augmentation (TTA), our framework achieved a PSNR score of 32.12 dB, surpassing the 31.84 dB score achieved without TTA.
In addition, without fine-tuning, our framework achieved PSNR scores of 32.02 dB (with TTA) and 31.86 dB (without TTA), improving the baseline by 0.06 dB and 0.02 dB, respectively. Our results demonstrate the effectiveness of our proposed Adaptive Context-Aware Fusion framework in enhancing the quality of low-delay compressed videos.
\section{Conclusion}
In conclusion, this paper proposes a novel method that leverages the unique characteristics of low-delay video compression algorithms to improve the quality of compressed videos using deep learning techniques. By incorporating this prior knowledge into the state-of-the-art method, BasicVSR++, we achieve a significant improvement in performance over existing methods. Our experimental results on the NTIRE22 challenge validate the effectiveness of our proposed method. This work underscores the importance of incorporating video compression knowledge into deep learning models to further enhance their performance and enable real-world applications.
{\small
\bibliographystyle{IEEEtran}
\section{Introduction}
The H.E.S.S.\xspace Galactic Plane Survey \citep[HGPS;][]{HESS_HGPS_2018} has revealed a large number of Galactic $\gamma$\xspace-ray sources in the very-high-energy (VHE; $E>\SI{100}{\GeV}$) domain.
While a number of these sources could be firmly associated with multi-wavelength counterparts, a large fraction of the sources remain without firm association.
Among the firmly identified sources, the vast majority are either pulsar wind nebulae (PWNe) or supernova remnants (SNRs), or composite systems.
In this paper, we study the unassociated source HESS~J1809$-$193\xspace, which has been discovered using early HGPS observations \citep{HESS_J1809_2007}.
Identifying the physical counterpart of HESS~J1809$-$193\xspace is particularly challenging due to the presence of several plausible associations in its vicinity.
For instance, the region harbours two energetic pulsars: PSR~J1811$-$1925 ($\dot{E}=\SI{6.4e36}{\erg\per\second}$, $d\sim \SI{5}{\kpc}$) and, noteworthy in particular, PSR~J1809$-$1917\xspace ($\dot{E}=\SI{1.8e36}{\erg\per\second}$, $d\sim \SI{3.3}{\kpc}$) \citep{Manchester2005}, which powers an X-ray PWN with an extension of $\sim$3\arcmin{} \citep{Kargaltsev2007,Anada2010,Klingler2018,Klingler2020}.
Located nearby is also a transient X-ray magnetar, XTE~J1810$-$197 \citep{Alford2016}.
On the other hand, HESS~J1809$-$193\xspace is also spatially coincident with several SNRs, most notably G011.1+00.1\xspace and G011.0$-$00.0\xspace \citep{Green2019}, as well as with molecular clouds \citep{Castelletti2016,Voisin2019}.
This leaves open the possibility to interpret the $\gamma$\xspace-ray emission as originating from high-energy electrons\footnote{In this paper, we use the term `electrons' to refer to both electrons and positrons.} -- most likely provided by one of the pulsars -- that up-scatter photons from ambient radiation fields to $\gamma$\xspace-ray energies via the Inverse Compton (IC) process (`leptonic scenario'), or as being due to interactions of high-energy cosmic-ray nuclei -- accelerated, for example, at SNR shock fronts -- within nearby molecular clouds (`hadronic scenario').
In the HESS~J1809$-$193\xspace discovery paper \citep{HESS_J1809_2007}, as well as in two follow-up studies presented shortly afterwards \citep{Komin2007,Renaud2008}, the authors have shown that the PWN surrounding PSR~J1809$-$1917\xspace can naturally explain the observed $\gamma$\xspace-ray emission in a leptonic scenario, and thus represents the most likely association.
However, \citet{Castelletti2016} and \citet{Araya2018} have subsequently put forward an interpretation in a hadronic scenario involving the SNRs and molecular clouds found within the region.
Recently, the detection of $\gamma$\xspace-ray emission from HESS~J1809$-$193\xspace with the High Altitude Water Cherenkov Observatory (HAWC) above \SI{56}{\TeV} (and quite possibly above \SI{100}{\TeV}; \citeauthor{HAWC2020} \citeyear{HAWC2020}; see also \citeauthor{Goodman2022} \citeyear{Goodman2022}) has added to the motivation to identify its origin.
In a hadronic scenario, this detection would make HESS~J1809$-$193\xspace a good `PeVatron' candidate -- that is, a source capable of accelerating cosmic-ray nuclei to PeV energies.
The identification of such PeVatrons is regarded as decisive in the quest for unveiling the origin of Galactic cosmic rays \citep[e.g.][]{Berezinskii1990,Aharonian2013,Cristofari2021}.
On the other hand, should the leptonic interpretation hold, the detection by HAWC would demonstrate that HESS~J1809$-$193\xspace is a fascinating laboratory for the study of high-energy electrons and their propagation -- and render it another extremely-high-energy $\gamma$\xspace-ray source associated with a pulsar \citep[see e.g.][]{Sudoh2021,HAWC2021a,deOnaWilhelmi2022}.
In this context, we also note the recent discovery of extended halos around several energetic pulsars \citep[e.g.][]{HAWC2017}.
While the term `halo' has frequently been adopted in the literature for many $\gamma$\xspace-ray sources associated with pulsars \citep[e.g.][]{Linden2017}, we follow here the stricter definition by \citet{Giacinti2020}, who have defined it as a region where the pulsar no longer dominates the dynamics of the interstellar medium (ISM), yet where an over-density of relativistic electrons is present.
The escape of the electrons from the PWN to the extended halo could for example be caused by an interaction of the reverse SNR shock with the pulsar wind \citep{Blondin2001,Hinton2011}.
We present here an updated study of HESS~J1809$-$193\xspace with H.E.S.S.\xspace, based on a larger data set compared to previous publications, and employing improved analysis methods.
To be able to interpret the results in a consistent manner, we complement this with a new analysis of data above \SI{1}{\GeV} from the \emph{Fermi}-LAT\xspace space telescope for the same region.
In doing so, we are able to gain new insights into the nature of HESS~J1809$-$193\xspace.
In Sect.~\ref{sec:data_analysis}, we introduce the H.E.S.S.\xspace and \emph{Fermi}-LAT\xspace data sets and analyses.
The results of the analyses are presented in Sect.~\ref{sec:results}, followed by an interpretation in the framework of leptonic and hadronic models in Sect.~\ref{sec:modelling}.
Finally, we conclude the paper in Sect.~\ref{sec:conclusion}.
\section{Data analysis}
\label{sec:data_analysis}
\subsection{H.E.S.S.\xspace data analysis}
\label{sec:hess_analysis}
H.E.S.S.\xspace is an array of five imaging atmospheric Cherenkov telescopes (IACTs), which detect the Cherenkov light produced in atmospheric air showers that are initiated by primary $\gamma$\xspace rays.
It is situated on the Southern hemisphere, in the Khomas highlands of Namibia (23\degr{}16\arcmin{}18\arcsec{}S, 16\degr{}30\arcmin{}00\arcsec{}E), at an altitude of \SI{1800}{\meter} above sea level.
The original array, referred to as \mbox{`HESS-I'}, was installed in 2000-2003 and comprised four telescopes with \SI{12}{\meter}-diameter mirrors (CT1-4), arranged in a square layout with \SI{120}{\meter} side length \citep{HESS_Crab_2006}.
The array was completed in 2012 by a fifth telescope, CT5, featuring a \SI{28}{\meter}-diameter mirror, and placed in the centre of the array \citep{Holler2015}.
With the full array, H.E.S.S.\xspace is sensitive to $\gamma$\xspace rays in the energy range between $\sim$\SI{0.1}{\TeV} and $\sim$\SI{100}{\TeV}.
\subsubsection{Data set and low-level data analysis}
\label{sec:hess_low_level_analysis}
H.E.S.S.\xspace observations on HESS~J1809$-$193\xspace have been carried out between 2004 and 2010, that is, exclusively during the HESS-I phase.
The analysis presented here is therefore restricted to the CT1-4 telescopes.
The observations are divided into `runs' of typically \SI{28}{\minute} duration.
Selecting runs for which HESS~J1809$-$193\xspace lies within $\sim$2.2\degr{} of the pointing position of the telescopes, and applying standard selection criteria for spectral studies \citep{HESS_Crab_2006}, we obtained a data set comprising 201~runs, amounting to a total observation time of \SI{93.2}{\hour}.
This represents a significant increase with respect to the previous dedicated publications on HESS~J1809$-$193\xspace, which used \SI{25}{\hour} \citep{HESS_J1809_2007}, \SI{32}{\hour} \citep{Komin2007}, and \SI{41}{\hour} \citep{Renaud2008} of data.
In the data analysis, we have selected $\gamma$\xspace ray-like events using a machine learning-based method \citep{Ohm2009}, and have reconstructed their energy and arrival direction employing a maximum-likelihood fit, in which the recorded telescope images are compared to a library of simulated image templates \citep{Parsons2014}.
We have repeated the entire analysis with an independent second analysis chain \citep{Becherini2011}, which employs different algorithms for the image calibration, event selection, and event reconstruction, obtaining compatible results.
For the subsequent high-level analysis, we converted our data to the open `GADF' format \citep{Deil2018}, and used the open-source analysis package \textsc{Gammapy}\xspace \citep{Deil2017,Deil2020} (v0.17).
Atmospheric air showers initiated by charged cosmic rays outnumber those resulting from $\gamma$\xspace rays by several orders of magnitude, and they cannot be rejected completely in the event selection without an overly severe loss of $\gamma$\xspace-ray efficiency.
The modelling of the residual background due to these events (referred to as `hadronic background' hereafter) represents one of the major challenges in any analysis of IACT data.
Most established techniques rely on an estimation of the background from source-free regions in the observed field of view of the run itself \citep[see][for a review]{Berge2007}.
We have chosen here an alternative approach, in which the residual hadronic background is provided by a background model, which we have constructed from archival H.E.S.S.\xspace observations, as detailed in \citet{Mohrmann2019}.
Together with the usage of \textsc{Gammapy}\xspace, this enabled us to carry out a 3-dimensional likelihood analysis of the data, that is, to model simultaneously the energy spectrum and spatial morphology of HESS~J1809$-$193\xspace.
The application of this analysis method to H.E.S.S.\xspace data has been validated by \citet{Mohrmann2019}.
Owing to varying observation conditions (in particular the pointing zenith angle and the atmospheric transparency), a dedicated energy threshold needs to be computed for each observation run.
We determined the thresholds by requiring that the average bias of the energy reconstruction does not exceed 10\%, and that the background model is not used below the energy at which the predicted background rate peaks \citep[for more details, see][]{Mohrmann2019}.
The resulting energy thresholds are below \SI{0.9}{\TeV} for all observations, while the lowest threshold, obtained for $\sim$10\% of the observations, is \SI{0.27}{\TeV}.
Because the performance of the system degrades at large offset angles, we furthermore imposed a maximum angle between the direction of reconstructed events and the telescope pointing position of 2.2\degr{}.
The value has been chosen such that the emission region is fully enclosed for all selected observation runs, many of which have been taken as part of the HGPS, implying that a considerable fraction exhibit relatively large (i.e.\ $>1^\circ$) offset angles with respect to the centre of the emission region.
\subsubsection{Likelihood analysis}
\label{sec:hess_likelihood_analysis}
In the likelihood analysis, the best-fit models have been obtained by minimising the quantity $-2\log(\mathcal{L})$, where $\mathcal{L}=\prod_i P(n_i\,|\,\mu_i)$, and $P(n_i\,|\,\mu_i)$ is the Poisson probability of observing $n_i$ events in bin~$i$, given a predicted number of events $\mu_i$ from the background model, and $\gamma$\xspace-ray source models if present \citep{Mattox1996}.
To compute the number of events predicted by source models, we folded the source spatial model and energy spectrum with the instrument response functions (IRFs; effective area, point spread function, and energy dispersion matrix), which we derived for every observation run from extensive Monte Carlo simulations\footnote{Technically, custom IRFs for every observation run are obtained by interpolating between IRFs generated from Monte Carlo simulations that have been carried out for a grid of observational parameters.} \citep{Bernloehr2008}.
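For orientation, the fit statistic for a single data set reduces to the following expression (a minimal standalone sketch; in practice, \textsc{Gammapy}\xspace provides this as its built-in `cash' statistic, and the model-independent $\log(n_i!)$ term is dropped):
\begin{verbatim}
import numpy as np

def fit_statistic(n, mu):
    # n:  observed counts per bin; mu: predicted counts per bin (mu > 0)
    # -2 log(Poisson likelihood), up to a model-independent constant
    n, mu = np.asarray(n, float), np.asarray(mu, float)
    return 2.0 * np.sum(mu - n * np.log(mu))
\end{verbatim}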
As spatial source models we have used 2-dimensional Gaussians that can be either symmetric or elongated, represented by the \texttt{GaussianSpatialModel} class in \textsc{Gammapy}\xspace.
As spectral models, we have used a power law (PL) of the form
\begin{linenomath*}
\begin{equation}\label{eq:pl}
\frac{\mathrm{d}N}{\mathrm{d}E} = N_0\cdot \left(\frac{E}{E_0}\right)^{-\Gamma}\,\,,
\end{equation}
\end{linenomath*}
with normalisation $N_0$, spectral index $\Gamma$, and reference energy $E_0$, as well as a power law with exponential cut-off (ECPL),
\begin{linenomath*}
\begin{equation}\label{eq:ecpl}
\frac{\mathrm{d}N}{\mathrm{d}E} = N_0\cdot \left(\frac{E}{E_0}\right)^{-\Gamma}\cdot \exp\left(-\frac{E}{E_c}\right)\,\,,
\end{equation}
\end{linenomath*}
where $E_c$ is the cut-off energy.
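For reference, the two spectral shapes translate directly into code; the sketch below is a standalone illustration (the analysis itself uses the corresponding \textsc{Gammapy}\xspace spectral model classes):
\begin{verbatim}
import numpy as np

def pl(E, N0, gamma, E0=1.0):
    # power law: dN/dE = N0 * (E/E0)**(-gamma)
    return N0 * (E / E0) ** (-gamma)

def ecpl(E, N0, gamma, Ec, E0=1.0):
    # power law with an exponential cut-off at energy Ec
    return pl(E, N0, gamma, E0) * np.exp(-E / Ec)
\end{verbatim}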
We carried out the likelihood fit in a region of interest (RoI) of 6\degr{} $\times$ 6\degr{}, centred on HESS~J1809$-$193\xspace (see Fig.~\ref{fig:sign_map_bkg_fit} in Appendix~\ref{sec:appendix_bkg_fit}).
For the binning of our data, we used spatial pixels of 0.02\degr{} $\times$ 0.02\degr{} size, and an energy binning of 16 bins per decade of energy.
Besides HESS~J1809$-$193\xspace, the RoI also contains the known $\gamma$\xspace-ray sources HESS~J1804$-$216, HESS~J1813$-$178 \citep[][]{HESS_HGPS_2018}, and HESS~J1808$-$204 \citep{HESS_J1808_2018}, which we have masked in the fit using circular exclusion regions (cf.\ Fig.~\ref{fig:sign_map_bkg_fit}).
In the first step of the analysis, we have adjusted the background model for each observation.
This background model fit is described in detail in Appendix~\ref{sec:appendix_bkg_fit}, where we also lay out the procedure for computing significance maps.
The fit result indicates that we have achieved a very good description of the hadronic background after the adjustment.
For the further analysis, we have combined the observations into six `stacked' data sets, where observations with the same energy threshold have been grouped together.
This procedure effectively combines observations with similar observing conditions; further divisions of the data would lead to too many separate data sets.
The six data sets are fitted jointly in the likelihood analysis.
Then, we have modelled the $\gamma$\xspace-ray emission of HESS~J1809$-$193\xspace by adding source components to the model prediction.
For nested\footnote{
Two models $M_0$ and $M_1$ are considered nested if the parameters of $M_0$ are a subset of those of $M_1$, and $M_1$ can be reduced to $M_0$ for a particular choice of values for its additional parameters.
}
models, the preference of one model over another one can be computed from the `test statistic', $\mathrm{TS}=-2\log(\mathcal{L}_0/\mathcal{L}_1)$, which -- in the limit of sufficient statistics, and far enough from parameter boundaries -- follows a $\chi^2$ distribution with $k$ degrees of freedom, where $k$ is the difference in the number of model parameters between the two tested models \citep{Wilks1938}.
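In code, converting a TS value into a Gaussian significance under these asymptotic assumptions amounts to the following sketch (using \texttt{scipy}; the conversion via the one-sided normal quantile is a convention, not part of the cited theorem):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2, norm

def ts_to_significance(log_l0, log_l1, k):
    # log_l0 / log_l1: max. log-likelihoods of the nested / extended model
    # k: difference in the number of free parameters
    ts = -2.0 * (log_l0 - log_l1)
    p_value = chi2.sf(ts, df=k)            # chi2 survival function (Wilks)
    return ts, norm.isf(p_value)           # one-sided Gaussian significance
\end{verbatim}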
After the model fit, it is possible to extract flux points for each fitted source component.
To do so, we re-ran the fit in narrow energy ranges, keeping all source model parameters except for the flux normalisation $\phi_0$ fixed to their best-fit values.
The best-fit normalisation found in each energy range can then be taken as the measured flux in that range, and is quoted at its centre energy (in logarithmic space).
\subsubsection{Estimation of systematic uncertainties}
\label{sec:hess_sys_error}
Despite the fact that we have computed customised IRFs for each observation run, due to necessary simplifying assumptions in their generation, these IRFs do not always describe the instrument and data-taking conditions perfectly.
Discrepancies between the assumed IRFs and the true conditions can then lead to a systematic bias in the likelihood analysis.
To assess the potential impact of mis-modelled IRFs on our fit results, we have estimated systematic uncertainties for all fit parameters.
Specifically, we have considered two effects that together dominate the systematic uncertainty on our results: a shift of the global energy scale, and uncertainties of the hadronic background model.
A shift of the energy scale may, for example, arise from a mis-modelling of the optical efficiency of the telescopes, or from variations in the transparency for Cherenkov radiation of the atmosphere.
On the other hand, the background model has been constructed from observation runs that were taken under similar, but not identical conditions, and may therefore -- despite its adjustment to the analysed observations (cf.\ Appendix~\ref{sec:appendix_bkg_fit}) -- not predict the background rate perfectly.
\begin{figure*}[th]
\centering
\includegraphics{img/sign_maps_models}
\caption{Significance maps with best-fit models.
We used an oversampling radius of $0.07^\circ$ for smoothing.
Significance values were obtained using the `Cash' statistic \citep[][see also Appendix~\ref{sec:appendix_bkg_fit}]{Cash1979}.
Circle markers and coloured dashed lines display the best-fit position and 1-$\sigma$ extent of the Gaussian model components, respectively.
(a) Pre-modelling significance map, with best-fit model components of the 1-component model and the 2-component model indicated.
(b) Residual significance map for the 1-component model.
(c) Residual significance map for the 2-component model.
In all panels, the grey dashed line marks the Galactic plane, white dashed circles show regions excluded from the analysis, and the black triangle marker denotes the position of PSR~J1809$-$1917\xspace.
}
\label{fig:sign_maps_models}
\end{figure*}
We estimated the systematic uncertainties adopting a Monte Carlo-based approach, in which we randomly varied the IRFs according to the two systematic effects mentioned above, generated random pseudo data sets based on these IRFs and the best-fit source models, and re-fitted these pseudo data sets with the original, unmodified IRFs.
The obtained spread in the fitted source model parameters then reflects their combined statistical and systematic uncertainty.
The procedure is described in detail in Appendix~\ref{sec:appendix_sys_err}, and the resulting systematic uncertainties are presented along with the analysis results in Sect.~\ref{sec:hess_results}.
We note that the two considered effects potentially do not encompass all possible sources of systematic errors in the analysis.
For a more general estimate of the systematic uncertainties of H.E.S.S.\xspace, we refer to \citet{HESS_Crab_2006}, where systematic uncertainties of 20\% on the flux normalisation and 0.1 on the source spectral index have been derived.
\subsection{\emph{Fermi}-LAT\xspace data analysis}
\label{sec:fermi_analysis}
\emph{Fermi}-LAT\xspace is a pair conversion telescope onboard the \textit{Fermi} Gamma-Ray Space Telescope and is sensitive to $\gamma$\xspace rays in the energy range from $\sim$\SI{20}{\MeV} to several hundred~GeV \citep{FermiLAT2009}.
Here, we analysed 12~years and 5~months of data, taken between August 4, 2008 and December 31, 2020.
We used the `Pass~8' IRFs \citep{Atwood2013} and selected events passing the P8R3\_SOURCE event selection (event class~128, event type~3).
Because the angular resolution of \emph{Fermi}-LAT\xspace substantially worsens below \SI{1}{\GeV}, we restricted the analysis to events above this energy.
To suppress $\gamma$\xspace rays originating from the Earth's limb, we furthermore excluded events with zenith angles above $90^\circ$.
The data were analysed using \textsc{Fermitools}\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/software}} version~2.2.0 and \textsc{Fermipy}\xspace\footnote{\url{https://fermipy.readthedocs.io}} version~1.1.5 \citep{Wood2017}.
The analysis was carried out in a region of interest (ROI) of $10^\circ\times10^\circ$, centred on the nominal position of 4FGL~J1810.3$-$1925e\xspace provided in the 4FGL-DR2 catalogue \citep{FermiLAT_4FGL_2020,FermiLAT_4FGLDR2_2020}.
The events and exposure maps were binned using spatial bins of $0.1^\circ\times0.1^\circ$ size and five bins per decade in energy.
$\gamma$\xspace ray source models were then fitted to the data with a likelihood fit as described in Section~\ref{sec:hess_likelihood_analysis}, where we used for the sources in the ROI the models provided in the 4FGL-DR2 catalogue by default.
In addition, we included standard templates for isotropic and Galactic diffuse $\gamma$\xspace-ray emission.\footnote{We used the file iso\_P8R3\_SOURCE\_V2\_v1.txt for the isotropic and the file gll\_iem\_v07.fits for the Galactic diffuse emission, respectively; see also \url{ http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}.}
In the likelihood analysis, we have fixed the parameters of all source models with a TS value smaller than~25 or with fewer than~700 predicted events.
We left free the normalisation parameter for all sources within $1^\circ$ of 4FGL~J1810.3$-$1925e\xspace, all parameters of the isotropic and Galactic diffuse model, as well as all parameters of the source models for 4FGL~J1810.3$-$1925e\xspace and 4FGL~J1811.5$-$1925\xspace, which immediately overlap with HESS~J1809$-$193\xspace.
Systematic uncertainties on the best-fit flux normalisations have been obtained by scaling up and down the effective area by 3\% and repeating the analysis\footnote{We have followed the procedure outlined at \url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/Aeff_Systematics.html}}.
\section{Results}
\label{sec:results}
\subsection{H.E.S.S.\xspace results}
\label{sec:hess_results}
\begin{figure*}
\centering
\includegraphics{img/flux_map}
\caption{Flux maps for HESS~J1809$-$193\xspace, above a $\gamma$\xspace-ray energy of \SI{0.27}{\TeV}.
(a) View of the entire emission of HESS~J1809$-$193\xspace.
(b) Zoom-in on core region.
In both panels, light blue circles show the positions of known SNRs, the black triangle marker denotes the position of PSR~J1809$-$1917\xspace, and the grey dashed line marks the Galactic plane.
The position and extent of component~A and component~B of the two-component model are displayed in green and purple, respectively.
The multi-wavelength data in panel (b) are from the JCMT \citep{Castelletti2016}, the FUGIN survey \citep{Umemoto2017}, the Mopra telescope \citep{Voisin2019}, and the Suzaku X-ray telescope \citep{Anada2010}.
The velocity interval for the FUGIN data has been adopted from \citet{Castelletti2016}.
We computed the flux maps assuming a power law-type spectrum with index $-2.2$, and have employed a Gaussian kernel with $0.07^\circ$ radius for smoothing.
}
\label{fig:flux_map}
\end{figure*}
We show in Fig.~\ref{fig:sign_maps_models}(a) the residual significance map after the fit of the hadronic background model, while the map in Fig.~\ref{fig:flux_map}(a) displays the deduced flux of $\gamma$\xspace rays above the threshold energy of \SI{0.27}{\TeV}.
Extended $\gamma$\xspace-ray emission around the position of PSR~J1809$-$1917\xspace (black triangle marker) is visible; we refer to this source as HESS~J1809$-$193\xspace.
It is striking that besides the larger-scale emission with an extent of about $1^\circ$, a core of bright emission close to -- but not fully coinciding with -- the pulsar position is present.
\begin{figure}
\centering
\includegraphics{img/sign_dist_model_fits}
\caption{Significance distributions after model fits.
The red, unfilled histogram shows the distribution for all spatial pixels (outside the exclusion regions) of the significance map in Fig.~\ref{fig:sign_maps_models}(b), while the blue, filled histogram corresponds to the map in Fig.~\ref{fig:sign_maps_models}(c).
The black, dashed line displays a Gaussian distribution with a mean $\mu=0$ and width $\sigma=1$.
}
\label{fig:sign_dist_models}
\end{figure}
First, we have attempted to model the emission using a 1-component model, which comprises a single source component, described by an elongated Gaussian spatial model and a PL spectral model.
The best-fit position and 1-$\sigma$ extent of this component are shown in blue in Fig.~\ref{fig:sign_maps_models}(a), and in Fig.~\ref{fig:sign_maps_models}(b) we show the residual significance map after subtracting the emission predicted by this model.
The residual map shows significant remaining features close to the best-fit position, indicating that the 1-component model does not provide an acceptable description of the observed emission.
This finding is confirmed by the corresponding distribution of pixel significance values, shown by the red line in Fig.~\ref{fig:sign_dist_models}, which clearly deviates from the expected distribution for a good description of the data (black, dashed line).
This is because the larger-scale emission and the bright core cannot be simultaneously modelled by a single component that is described by a Gaussian, or any other reasonably basic spatial model.
This finding holds even if we allow the extent of the Gaussian model to vary with energy, as detailed in Appendix~\ref{sec:appendix_1comp_fit_ebands}.
We have therefore adopted a 2-component model, which features in addition a second component that is described by a symmetric Gaussian spatial model and a PL spectral model.
Fig.~\ref{fig:sign_maps_models}(a) also shows the best-fit position and extent of both components of the 2-component model, while the residual significance map after the fit of this model is displayed in Fig.~\ref{fig:sign_maps_models}(c), and the corresponding distribution of significance values is shown in Fig.~\ref{fig:sign_dist_models} (blue histogram).
As is clearly visible, the 2-component model provides a much better fit to the observed data than the 1-component model (statistically, it is preferred by $13.3 \sigma$), and in fact no residual deviations except for those expected from statistical fluctuations can be made out.
In the following, we will refer to the extended component as `component~A', and to the compact component as `component~B'.
In Appendix~\ref{sec:appendix_2comp_maps_ebands}, we study the agreement between the 2-component model and the observed data as a function of energy, finding that the model provides a good fit at all energies.
Finally, we have explored in Appendix~\ref{sec:appendix_2comp_fit_ebands} how the parameters of component~A vary when the model is fitted in separate energy bands.
We find that the fitted parameters do not change significantly, and provide in Table~\ref{tab:size_comp1} the fitted extent of component~A in the four employed energy bands.
\begin{table}
\centering
\caption{Extent of H.E.S.S.\xspace component~A in energy bands.}
\label{tab:size_comp1}
\begin{tabular}{cccc}
\hline\hline
$E_\mathrm{min}-E_\mathrm{max}$ & $E_\mathrm{mean}$ & $\sigma_\mathrm{major}$ & $\sigma_\mathrm{minor}$\\
(TeV) & (TeV) & (deg) & (deg)\\
\hline
$0.27 - 0.75$ & 0.43 & $0.69\pm 0.10$ & $0.41 \pm 0.10$ \\
$0.75 - 2.1$ & 1.2 & $0.62\pm 0.05$ & $0.31 \pm 0.04$ \\
$2.1 - 5.6$ & 3.2 & $0.62\pm 0.06$ & $0.35 \pm 0.05$ \\
$>5.6$ & 9.6 & $0.61\pm 0.11$ & $0.46 \pm 0.13$ \\
\hline
\end{tabular}
\tablefoot{
$E_\mathrm{min}$ and $E_\mathrm{max}$ denote the lower and upper boundary of the energy band, respectively, $E_\mathrm{mean}$ the weighted mean energy.
$\sigma_\mathrm{major}$ and $\sigma_\mathrm{minor}$ denote the 1-sigma extent of the semi-major and semi-minor axis of the elongated Gaussian spatial model for H.E.S.S.\xspace component~A, respectively.
See Appendix~\ref{sec:appendix_2comp_fit_ebands} for further details.
}
\end{table}
In Fig.~\ref{fig:flux_map}(b), we provide a detail view of the inner region of HESS~J1809$-$193\xspace, with multi-wavelength data overlaid.
The peak of the emission (and, by that, the position of component~B), is offset by $\sim$7\arcmin{} from the position of PSR~J1809$-$1917\xspace and its surrounding X-ray nebula, indicated by the brown contour lines \citep{Anada2010}.
On the other hand, component~B lies with its centre point directly on the western edge of the SNR~G011.0$-$00.0, and is furthermore spatially coincident with dense molecular clouds as indicated by $^{12}$CO ($J$=3--2) observations by the James Clerk Maxwell Telescope \citep[JCMT;][]{Castelletti2016} and CS observations by the Mopra telescope \citep{Voisin2019}.
Moreover, the contour lines from the FUGIN~$^{12}$CO ($J$=1--0) survey \citep{Umemoto2017} illustrate that molecular gas is present throughout the region.
In Appendix~\ref{sec:appendix_fugin_map}, we provide a map of the FUGIN $^{12}$CO data with the two components of HESS~J1809$-$193\xspace overlaid.
\begin{figure}
\centering
\includegraphics{img/sed_hess}
\caption{H.E.S.S.\xspace energy spectrum results.
We show in green and purple the flux points for component~A and component~B of the H.E.S.S.\xspace analysis, respectively.
Upper limits are at 95\% confidence level.
The solid lines with shaded bands display the best-fit PL model and statistical uncertainty for each of the components.
For component~A, the dashed green line shows the best-fit ECPL model in addition.
The energy spectra are compared to published results, taken from \citet{HESS_J1809_2007}, \citet{HESS_HGPS_2018}, \citet{Zaric2021}, \citet{HAWC2020}, and \citet{Goodman2022}.
}
\label{fig:sed_hess}
\end{figure}
Finally, we show in Fig.~\ref{fig:sed_hess} the energy spectra and flux points obtained for the two components of HESS~J1809$-$193\xspace, and compare these to previously obtained results from the literature.
We note that, when comparing the fitted PL models for component~A and~B (solid lines), the spectrum of component~B appears somewhat harder than that of component~A ($\Gamma=1.98\pm 0.05$ vs. $\Gamma=2.24\pm 0.03$).
However, the flux upper limits obtained above energies of $\sim$\SI{20}{\TeV} for component~A seem to indicate the presence of a cut-off to the spectrum.
We have therefore repeated the analysis adopting an ECPL spectral model for component~A, which led to no changes in the best-fit parameter values for component~B or for the spatial model of component~A, but yielded a flatter spectrum for component~A ($\Gamma=1.90\pm 0.05$) at energies below $\sim$\SI{5}{\TeV} (dashed line in Fig.~\ref{fig:sed_hess}).
With respect to the PL spectral model, the ECPL model is preferred with a statistical significance of $\sim$8$\sigma$.
For component~B, on the other hand, we found no significant preference for a cut-off to the spectrum.
The spectrum of component~A, which dominates the total emission except at the highest energies, is well compatible with that published in \citet{HESS_HGPS_2018}, and may also be reconcilable with the high-energy emission measured with HAWC \citep{HAWC2020,Goodman2022}, although a more gradual decrease of the flux than predicted by the ECPL spectral model would be required in this case.
That the flux points from \citet{HESS_J1809_2007} indicate a lower flux compared to the one found here can be understood when considering that the flux was extracted only from within a circular region of $0.5^\circ$ radius, and thus part of the larger-scale emission has been missed.
We summarise the best-fit parameter values found for the 2-component model in Table~\ref{tab:model_pars}, along with the corresponding statistical and systematic uncertainties (the latter having been derived as described in Appendix~\ref{sec:appendix_sys_err}).
For component~A, we provide the results both for the PL spectral model and the ECPL spectral model.
\begin{table}
\centering
\caption{Best-fit parameter values for the H.E.S.S.\xspace 2-component model.
For component~A, we provide the best-fit values for the assumption of a PL spectral model and an ECPL spectral model.
}
\label{tab:model_pars}
\begin{tabular}{lc}
\hline\hline
Par. [unit] & Value \\\hline
\multicolumn{2}{c}{\rule{0pt}{1.1\normalbaselineskip} Component~A (PL spectral model) \vspace{0.05cm}}\\\hline
R.A. [deg] & $272.551\pm 0.025_\mathrm{stat}\pm 0.018_\mathrm{sys}$\\
& ($18^\mathrm{h}10^\mathrm{m}12^\mathrm{s} \pm 6^\mathrm{s}_\mathrm{stat} \pm 4^\mathrm{s}_\mathrm{sys}$)\\
Dec. [deg] & $-19.344\pm 0.023_\mathrm{stat}\pm 0.013_\mathrm{sys}$\\
& ($-19^\circ 20.6'\pm 1.4'_\mathrm{stat}\pm 0.8'_\mathrm{sys})$\\
$\sigma$ [deg] & $0.622\pm 0.032_\mathrm{stat}\pm 0.020_\mathrm{sys}$\tablefootmark{a}\\
$e$ & $0.824\pm 0.025_\mathrm{stat}$\\
$\phi$ [deg] & $50.0\pm 3.1_\mathrm{stat}$\\
$N_0\,[10^{-12}\,\mathrm{TeV}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}]$ & $8.42\pm 0.40_\mathrm{stat}\pm 1.14_\mathrm{sys}$\\
$\Gamma$ & $2.239\pm 0.027_\mathrm{stat}\pm 0.020_\mathrm{sys}$\\
$E_0$ [TeV] & 1 (fixed)\\\hline
\multicolumn{2}{c}{\rule{0pt}{1.1\normalbaselineskip} Component~A (ECPL spectral model) \vspace{0.05cm}}\\\hline
R.A. [deg] & $272.554\pm 0.025_\mathrm{stat}\pm 0.019_\mathrm{sys}$\\
& ($18^\mathrm{h}10^\mathrm{m}13^\mathrm{s} \pm 6^\mathrm{s}_\mathrm{stat} \pm 5^\mathrm{s}_\mathrm{sys}$)\\
Dec. [deg] & $-19.344\pm 0.021_\mathrm{stat}\pm 0.012_\mathrm{sys}$\\
& ($-19^\circ 20.6'\pm 1.3'_\mathrm{stat}\pm 0.7'_\mathrm{sys})$\\
$\sigma$ [deg] & $0.613\pm 0.031_\mathrm{stat}\pm 0.015_\mathrm{sys}$\tablefootmark{a}\\
$e$ & $0.820\pm 0.025_\mathrm{stat}$\\
$\phi$ [deg] & $51.3\pm 3.1_\mathrm{stat}$\\
$N_0\,[10^{-12}\,\mathrm{TeV}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}]$ & $9.05\pm 0.47_\mathrm{stat}\pm 0.91_\mathrm{sys}$\\
$\Gamma$ & $1.90\pm 0.05_\mathrm{stat}\pm 0.05_\mathrm{sys}$\\
$E_c$ [TeV] & $\left.12.7_{-2.1}^{+2.7}\right|_\mathrm{stat}\left._{-1.9}^{+2.6}\right|_\mathrm{sys}$\\
$E_0$ [TeV] & 1 (fixed)\\\hline
\multicolumn{2}{c}{\rule{0pt}{1.1\normalbaselineskip} Component~B \vspace{0.05cm}}\\\hline
R.A. [deg] & $272.400\pm 0.010_\mathrm{stat}$\\
& ($18^\mathrm{h}09^\mathrm{m}36^\mathrm{s} \pm 2.4^\mathrm{s}_\mathrm{stat}$)\\
Dec. [deg] & $-19.406\pm 0.009_\mathrm{stat}$\\
& ($-19^\circ 24.4'\pm 0.5'_\mathrm{stat})$\\
$\sigma$ [deg] & $0.0953\pm 0.0072_\mathrm{stat}\pm 0.0034_\mathrm{sys}$\\
$N_0\,[10^{-12}\,\mathrm{TeV}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}]$ & $0.95\pm 0.11_\mathrm{stat}\pm 0.011_\mathrm{sys}$\\
$\Gamma$ & $1.98\pm 0.05_\mathrm{stat}\pm 0.03_\mathrm{sys}$\\
$E_0$ [TeV] & 1 (fixed)\\\hline
\end{tabular}
\tablefoot{
$\sigma$, $e$, and $\phi$ denote the 1-$\sigma$ radius, eccentricity, and position angle of the Gaussian spatial model, respectively.
$N_0$, $\Gamma$, $E_0$, and $E_c$ are parameters of the spectral models, as defined in Eqs.~(\ref{eq:pl}) and (\ref{eq:ecpl}).
Systematic uncertainties have been derived as described in Appendix~\ref{sec:appendix_sys_err}.\\
\tablefoottext{a}{For the asymmetric component~A, $\sigma$ refers to the semi-major axis. The extents of the semi-minor axis compute to $0.353 \pm 0.029_\mathrm{stat}$ and $0.351 \pm 0.028_\mathrm{stat}$ for the PL and ECPL spectral model, respectively.}
}
\end{table}
\subsection{\emph{Fermi}-LAT\xspace results}
\label{sec:fermi_results}
The \emph{Fermi}-LAT\xspace 4FGL-DR2 catalogue \citep{FermiLAT_4FGL_2020,FermiLAT_4FGLDR2_2020} lists two sources that are located in the immediate vicinity of HESS~J1809$-$193\xspace:\footnote{We have used the 4FGL-DR2 catalogue as a basis for our analysis, but have checked that the region is modelled in the same way in the more recent 4FGL-DR3 catalogue \citep{FermiLAT_4FGLDR3_2022}.}
(i) 4FGL~J1810.3$-$1925e\xspace is modelled as an extended source (using a two-dimensional Gaussian as spatial model) and its spectrum is fitted with a log-parabola model,
\begin{linenomath*}
\begin{equation}\label{eq:logparabola}
\frac{\mathrm{d}N}{\mathrm{d}E} = N_0\cdot \left(\frac{E}{E_0}\right)^{-\alpha-\beta\log(E/E_0)}\,\,
\end{equation}
\end{linenomath*}
(with log the natural logarithm);
(ii) 4FGL~J1811.5$-$1925\xspace is modelled as a point-like source and its spectrum is fitted with a power-law model (cf.\ Eq.~\ref{eq:pl}).
Our analysis confirms the presence of both sources; that is, we were not able to obtain a satisfactory fit with only one source or with a different choice of spatial models.
In the following, we refer to the source models we obtained as J1810.3$-$1925e\xspace and J1811.5$-$1925\xspace, respectively, to distinguish them from the models provided in the 4FGL-DR2 catalogue.
The best-fit parameter values of the models are summarised in Table~\ref{tab:fermi_model_pars}.
The systematic uncertainties on all model parameters except the flux normalisation $N_0$ are negligible compared to the statistical ones, and thus not quoted in the table.
We acknowledge that our best-fit spectral model for J1810.3$-$1925e\xspace shows no significant curvature, but have decided to maintain the log-parabola model for consistency with the 4FGL-DR2 catalogue (which extended to lower energies than our analysis here).
\begin{table}
\centering
\caption{Best-fit parameter values for the \emph{Fermi}-LAT\xspace data analysis.
}
\label{tab:fermi_model_pars}
\begin{tabular}{lc}
\hline\hline
Par. [unit] & Value \\\hline
\multicolumn{2}{c}{\rule{0pt}{1.1\normalbaselineskip} J1810.3$-$1925e\xspace \vspace{0.05cm}}\\\hline
R.A. [deg] & $272.547\pm 0.033_\mathrm{stat}$\\
& ($18^\mathrm{h}10^\mathrm{m}11^\mathrm{s} \pm 8^\mathrm{s}_\mathrm{stat}$)\\
Dec. [deg] & $-19.397\pm 0.038_\mathrm{stat}$\\
& ($-19^\circ 23.8'\pm 2.3'_\mathrm{stat})$\\
$\sigma$ [deg] & $0.317\pm 0.024_\mathrm{stat}$\\
$N_0\,[10^{-9}\,\mathrm{GeV}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}]$ & $2.86\pm 0.27_\mathrm{stat}\pm 0.09_\mathrm{sys}$\\
$\alpha$ & $2.53\pm 0.11_\mathrm{stat}$\\
$\beta$ & $-0.015\pm 0.043_\mathrm{stat}$\\
$E_0$ [GeV] & 1.747 (fixed)\\\hline
\multicolumn{2}{c}{\rule{0pt}{1.1\normalbaselineskip} J1811.5$-$1925\xspace \vspace{0.05cm}}\\\hline
R.A. [deg] & $272.874\pm 0.025_\mathrm{stat}$\\
& ($18^\mathrm{h}11^\mathrm{m}30^\mathrm{s} \pm 6^\mathrm{s}_\mathrm{stat}$)\\
Dec. [deg] & $-19.410\pm 0.027_\mathrm{stat}$\\
& ($-19^\circ 24.6'\pm 1.6'_\mathrm{stat})$\\
$N_0\,[10^{-9}\,\mathrm{GeV}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}]$ & $0.016\pm 0.003_\mathrm{stat}\pm 0.001_\mathrm{sys}$\\
$\Gamma$ & $2.42\pm 0.16_\mathrm{stat}$\\
$E_0$ [GeV] & 7.48 (fixed)\\\hline
\end{tabular}
\tablefoot{
$\sigma$ is the 1-$\sigma$ radius of the Gaussian spatial model.
$N_0$, $\alpha$, $\beta$, $\Gamma$, and $E_0$ are parameters of the spectral models defined in Eqs.~(\ref{eq:logparabola}) and (\ref{eq:pl}).
}
\end{table}
Removing J1810.3$-$1925e\xspace and J1811.5$-$1925\xspace from the best-fit ROI model, we obtained the significance map shown in Fig.~\ref{fig:fermi_sign_maps}, panel~(a).
Panel~(b) displays the significance map after adding J1811.5$-$1925\xspace to the ROI model, while panel~(c) shows the map with both source models restored.
A comparison of the panels shows that the two fitted sources account for the majority of the emission around HESS~J1809$-$193\xspace.
The energy spectra extracted for J1810.3$-$1925e\xspace and J1811.5$-$1925\xspace are displayed in Fig.~\ref{fig:sed_fermi}.
The spectrum obtained for J1810.3$-$1925e\xspace is in good agreement with that of \citet{Araya2018}, considering that the region was modelled differently there (as a single source with a disk spatial model).
\begin{figure*}
\centering
\includegraphics{img/sign_maps_fermi}
\caption{
Significance maps above \SI{1}{\GeV} from the \emph{Fermi}-LAT\xspace analysis.
(a) With J1811.5$-$1925\xspace and J1810.3$-$1925e\xspace removed from the best-fit model.
(b) With J1810.3$-$1925e\xspace removed from the best-fit model.
(c) With all sources included in the model.
The light blue cross denotes the fitted position of J1811.5$-$1925\xspace, whereas the dark blue cross and dashed circle display the fitted position and 1-$\sigma$ extent of J1810.3$-$1925e\xspace.
For comparison, the components of the 2-component model fitted to the H.E.S.S.\xspace data are shown in green and purple (same as in Fig.~\ref{fig:sign_maps_models}).
The grey dashed line marks the Galactic plane, while the coloured triangle markers denote the positions of PSR~J1809$-$1917\xspace, PSR~J1811$-$1925\xspace, and XTE~J1810$-$197.
}
\label{fig:fermi_sign_maps}
\end{figure*}
The point-like source J1811.5$-$1925\xspace is positionally coincident with PSR~J1811$-$1925\xspace, which strongly suggests an association with this pulsar, as also listed in the 4FGL-DR2 catalogue.
We therefore regard its emission as unrelated to HESS~J1809$-$193\xspace.
On the other hand, the best-fit position of J1810.3$-$1925e\xspace is close to PSR~J1809$-$1917\xspace and the two H.E.S.S.\xspace source components, suggesting a connection to HESS~J1809$-$193\xspace.
In particular, the fitted position and extent are very similar to those of the extended H.E.S.S.\xspace component (component~A), as is evident from Fig.~\ref{fig:fermi_sign_maps}.
In order to further explore the connection between the \emph{Fermi}-LAT\xspace and H.E.S.S.\xspace data, we have also extracted energy spectra of the emission observed with \emph{Fermi}-LAT\xspace using the best-fit spatial models of the two H.E.S.S.\xspace components as spatial templates (removing J1810.3$-$1925e\xspace from the model but retaining J1811.5$-$1925\xspace).
The result is shown in Fig.~\ref{fig:sed_fermi_hess_comp}.
As expected from its slightly larger spatial extent, the spectrum obtained for the template of component~A lies somewhat above that of J1810.3$-$1925e\xspace.
With the template of component~B, we obtained only flux upper limits; this is, however, not surprising given \emph{Fermi}-LAT\xspace's broadband sensitivity\footnote{Broadband sensitivity curves for \emph{Fermi}-LAT\xspace are available at \url{https://www.slac.stanford.edu/exp/glast/groups/canda/lat_Performance}. We have used the curve for Galactic coordinates $l=0^\circ$ and $b=0^\circ$.} (dashed line in Fig.~\ref{fig:sed_fermi_hess_comp}).
\begin{figure}
\centering
\includegraphics{img/sed_fermi}
\caption{
\emph{Fermi}-LAT\xspace energy spectrum results for J1810.3$-$1925e\xspace and J1811.5$-$1925\xspace.
We compare our spectra to that obtained by \citet{Araya2018} for the entire region.
}
\label{fig:sed_fermi}
\end{figure}
\begin{figure}
\centering
\includegraphics{img/sed_fermi_hess_comp}
\caption{
\emph{Fermi}-LAT\xspace energy spectra obtained with H.E.S.S.\xspace model templates.
The dashed grey line shows the 10-year \emph{Fermi}-LAT\xspace broadband sensitivity.
}
\label{fig:sed_fermi_hess_comp}
\end{figure}
\section{Modelling}
\label{sec:modelling}
In this section, we present an interpretation of the observational results by means of modelling the primary cosmic-ray particle populations responsible for the observed $\gamma$\xspace-ray emission.
We investigate two scenarios: (i) that the emission detected with H.E.S.S.\xspace is entirely attributed to the PWN of PSR~J1809$-$1917\xspace, that is, of purely leptonic origin (Section~\ref{sec:pwn_scenario}); (ii) that there is an additional contribution to the emission from hadronic cosmic rays accelerated in one of the SNRs and interacting in the molecular clouds (Section~\ref{sec:hadronic_scenario}).
\subsection{Pulsar wind nebula scenario}
\label{sec:pwn_scenario}
We employed a one-zone PWN model, in which we performed a time-dependent modelling of the pulsar energy output, the ambient magnetic field, and the injected electrons, following the approach outlined in \citet{HAWC2021}.
The parameters of the model are summarised in Table~\ref{tab:pars_pwn_model}.
The input parameters consist mostly of measured properties of PSR~J1809$-$1917\xspace.
For the pulsar braking index, however, we have adopted the canonical value of $n=3$, assuming that this is more representative for the full history of the pulsar than the recent measurement of $n=23.5$, which may be affected by undetected glitches of the pulsar \citep{Parthasarathy2019,Parthasarathy2020}.
The electron injection spectrum follows a power law with index $-\alpha$ and an exponential cut-off at energy $E_c$.
Its normalisation is proportional to $\theta\times \dot{E}$, that is, coupled to the time-dependent spin-down power.
As specified in \citet{Gaensler2006} and \citet{Venter2007}, we took the time evolution of the pulsar period as $P(t)=P_0(1+t/\tau_0)^{0.5}$, of the pulsar spin-down power as $\dot{E}(t)=\dot{E}_0(1+t/\tau_0)^{-2}$, and of the magnetic field as $B(t)=B_0[1+(t/\tau_0)^{0.5}]^{-1}$, where $\tau_0=P_0/(2\dot{P}_0)$ is the initial spin-down time scale.
We then computed the non-thermal emission from the injected electrons (i.e.\ synchrotron radiation and IC emission) employing the GAMERA library \citep{Hahn2015}, which takes into account cooling losses of the electrons.
For the IC target photon fields, we have used the model by \citet{Popescu2017}\footnote{We note that the prediction of this large-scale model may not be very accurate in the specific region studied here. As our conclusions are based on order-of-magnitude estimates, however, they are unaltered even if the predicted radiation field densities are wrong by a factor of a few.}.
Finally, we have fitted the adjustable parameters of the model to the observed spectral energy distribution (SED) of HESS~J1809$-$193\xspace, where the optimisation has been carried out using a Markov chain Monte Carlo (MCMC) method implemented in the \textsc{emcee} package \citep{ForemanMackey2013}.
We note that some of the model parameters are correlated with each other or not well constrained by the available data.
Therefore, we stress that while we have carried out an optimisation of the model parameters, the obtained values should not be regarded as measurements of the corresponding quantities, but rather as one possible combination of parameter values that yield a reasonable description of the observational data.
The parameter values given in Table~\ref{tab:pars_pwn_model} are those that yielded the highest numerical probability, that is, the best fit to the data.
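As a minimal illustration, the evolution formulas above can be evaluated directly; the following Python sketch (ours, not the GAMERA-based implementation used for the actual fit) adopts the present-day pulsar parameters and the birth period $P_0$ from Table~\ref{tab:pars_pwn_model}:
\begin{verbatim}
import numpy as np

# Present-day values and assumed birth period (cf. Table of PWN parameters).
P_now, Pdot_now = 82.76e-3, 2.55e-14      # period [s] and derivative [s/s]
P0 = 50e-3                                # birth period [s]
# For n = 3, P * Pdot is constant in time, so tau0 = P0/(2*Pdot0) becomes:
tau0 = P0**2 / (2.0 * P_now * Pdot_now)

def P(t):           return P0 * np.sqrt(1.0 + t / tau0)
def Edot(t, Edot0): return Edot0 * (1.0 + t / tau0) ** -2
def B(t, B0):       return B0 / (1.0 + np.sqrt(t / tau0))

kyr = 3.156e10                            # seconds per kyr
print(f"tau0 ~ {tau0 / kyr:.1f} kyr")     # ~18.8 kyr
# Birth-epoch field implied by a present-day B ~ 4 muG at t ~ 33 kyr:
print(f"B0 ~ {4.0 * (1.0 + np.sqrt(33.0 * kyr / tau0)):.1f} muG")  # ~9.3 muG
\end{verbatim}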
\begin{table}
\centering
\caption{Parameters of the PWN model}
\label{tab:pars_pwn_model}
\begin{tabular}{ccc}
\hline\hline
Par. & Description & Value\\
\hline
\multicolumn{3}{c}{\rule{0pt}{1.1\normalbaselineskip} Input parameters \vspace{0.05cm}}\\\hline
$d$ & pulsar distance\tablefootmark{a} & \SI{3.3}{\kpc}\\
$\dot{E}$ & pulsar spin-down power\tablefootmark{a} & \SI{1.8e36}{\erg\per\second}\\
$\tau_c$ & pulsar characteristic age\tablefootmark{a} & \SI{51.4}{\kilo\year}\\
$P$ & pulsar period \tablefootmark{a} & \SI{82.76}{\milli\second}\\
$\dot{P}$ & pulsar period derivative\tablefootmark{a} & \SI{2.55e-14}{\second\per\second}\\
$n$ & pulsar braking index\tablefootmark{b} & 3\\
\hline
\multicolumn{3}{c}{\rule{0pt}{1.1\normalbaselineskip} Adjusted parameters \vspace{0.05cm}}\\\hline
$\theta$ & electron power fraction & $0.6$\\
$B$ & magnetic field & $\SI{4}{\micro\gauss}$\\
$P_0$ & pulsar birth period & $\SI{50}{\milli\second}$\\
$E_c$ & cut-off energy & $\SI{420}{\TeV}$\\
$\alpha$ & injection spectrum index & $2.0$\\
$\tau_\mathrm{young}$ & age of young e$^-$ & $\SI{1.2}{\kilo\year}$\\
$\tau_\mathrm{med}$ & age of medium-age e$^-$ & $\SI{4.7}{\kilo\year}$\\
\hline
\end{tabular}
\tablefoot{
Pulsar parameters denote present-day values unless otherwise specified.
The parameter values for the `adjusted' parameters were obtained using an MCMC method, but should be regarded as indicative values rather than precise fit results (see main text).\\
\tablefoottext{a}{Taken from \citet{Manchester2005}.}
\tablefoottext{b}{Assumed value.}
}
\end{table}
In the model, we invoked three `generations' of electrons:
(i)~`relic' electrons, which have been injected over the lifetime of the system ($\tau\approx \SI{33}{\kilo\year}$\footnote{The `true' age of the pulsar can be computed as $\tau=(P/((n-1)\cdot \dot{P}))\cdot(1-(P_0/P)^{n-1})$ \citep{Gaensler2006}.
We note that this formula depends on the unknown pulsar birth period $P_0$, for which we have used the value suggested by our model optimisation (cf.\ Table~\ref{tab:pars_pwn_model}).
Other values of $P_0$ will lead to different estimates of the pulsar age.
})
and are associated with the extended H.E.S.S.\xspace component (A);
(ii)~`medium-age' electrons, which have been injected within the last $\tau_\mathrm{med}\approx \SI{4.7}{\kilo\year}$ and are associated with the compact H.E.S.S.\xspace component (B);
(iii)~`young' electrons, which have been injected within the last $\tau_\mathrm{young}\approx \SI{1.2}{\kilo\year}$ and are associated with the X-ray nebula.
In this picture, the `relic' electrons are assumed to have escaped from the central region (which contains the X-ray PWN and the compact component~B) at some instant in the past.
For lack of evidence as to when this escape occurred, the `relic' electrons are injected from the birth of the system until the `medium-age' electrons start to be injected.
We note that, despite associating the different generations with different spatial regions, we have not performed a spatial modelling -- the association is made in terms of the SED only.
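As a quick numerical cross-check (ours, for illustration only), the age formula quoted in the footnote above can be evaluated with the values from Table~\ref{tab:pars_pwn_model}; this also makes explicit how strongly the inferred age depends on the assumed braking index:
\begin{verbatim}
# tau = P/((n-1)*Pdot) * (1 - (P0/P)**(n-1)), see footnote above.
P, Pdot, P0 = 82.76e-3, 2.55e-14, 50e-3   # s, s/s, s
kyr = 3.156e10                            # seconds per kyr

def true_age(n):
    return P / ((n - 1) * Pdot) * (1 - (P0 / P) ** (n - 1)) / kyr

print(f"n = 3    -> tau ~ {true_age(3):.1f} kyr")     # ~33 kyr, as quoted
print(f"n = 23.5 -> tau ~ {true_age(23.5):.1f} kyr")  # ~4.6 kyr
\end{verbatim}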
In addition to the already presented H.E.S.S.\xspace spectra, we used in the fit the spectrum of the X-ray nebula as measured by \citet{Anada2010} with the Suzaku satellite between 2~and \SI{10}{\keV}.
Since we associate only the most recently injected `young' electrons with the X-ray nebula, we integrated the measured flux in the immediate vicinity of the nebula only (regions 2, 3, 6, and 7 in Fig.~4 / Table~4 of \citeauthor{Anada2010} \citeyear{Anada2010}).
Additionally, we derived an upper limit (at 95\% confidence level) for the X-ray flux emitted by the `medium-age' electrons using the measured flux in regions 9--16 and applying a scaling factor that takes into account the difference in solid angle between these regions and the compact H.E.S.S.\xspace component associated with the `medium-age' electrons.
The upper limit is not used in the fit and only serves as a sanity check for the model.
We show the obtained SEDs for the three generations of electrons in Fig.~\ref{fig:model_pwn}, together with the observed data.
The model describes the spectra measured with H.E.S.S.\xspace and Suzaku well, and the predicted X-ray flux of the `medium-age' electrons does not exceed the Suzaku upper limit.
The fit yields, for example, a moderate required present-day magnetic field of $\sim$\SI{4}{\micro\gauss} and a reasonable spectral index for the injection spectrum of $\sim$2.
Furthermore, a maximum electron energy of several hundred~TeV is implied by the data.
The total predicted $\gamma$\xspace-ray spectrum is also well compatible with the total flux from HESS~J1809$-$193\xspace as measured by HAWC \citep{Goodman2022}.
The model fails, however, to explain the spectrum of the \emph{Fermi}-LAT\xspace source J1810.3$-$1925e\xspace below $\sim$\SI{10}{\GeV}.
This would require an additional IC component, emitted by electrons even older than the `relic' electrons.
In this case, however, it would be expected that the emission of J1810.3$-$1925e\xspace exhibits a larger spatial extent than that of component~A of HESS~J1809$-$193\xspace, which is not the case.
Alternatively, a hadronic component related to the SNR G011.0$-$00.0\xspace could be invoked -- this scenario will be discussed in more detail in Section~\ref{sec:hadronic_scenario}.
The offset between component~B and PSR~J1809$-$1917\xspace may be explained, for example, by the proper motion of the pulsar.
Indeed, \citet{Klingler2018,Klingler2020} have detected a northward proper motion of $\sim$\SIrange{20}{40}{\mas\per\year}, albeit not with high significance.
This would imply a travel time between the best-fit position of component~B and the current pulsar position of $\sim$\SIrange{10}{20}{\kilo\year}.
This is somewhat larger than our estimate of the age of the `medium-age' electrons associated with component~B.
However, considering that an asymmetric crushing of the PWN by the SNR reverse shock can also lead to a displacement between the PWN and the pulsar \citep{Blondin2001}, the scenario still appears feasible.
\begin{figure*}
\centering
\subfigure[]{
\includegraphics{img/gamera_model_pwn_full}
\label{fig:model_pwn_full}
}
\subfigure[]{
\includegraphics{img/gamera_model_pwn_zoom}
\label{fig:model_pwn_zoom}
}
\caption{
SED of HESS~J1809$-$193\xspace for the leptonic model.
(a) Full energy range.
(b) Zoom into high-energy regime.
SED curves for the three assumed electron generations, obtained with GAMERA, are shown with dark grey lines.
In panel (b), the light grey lines show individual solutions from the MCMC sampling, and thus give an indication of the statistical spread.
The H.E.S.S.\xspace and \emph{Fermi}-LAT\xspace data points have been derived in this work.
The Suzaku X-ray data are from \citet{Anada2010}, where the butterfly corresponds to the `young' electrons (dotted line) and the upper limit refers to the `medium-age' electrons (dashed line; see main text for details).
Shown for comparison but not used in the fit are radio data for G011.0$-$00.0\xspace \citep{Brogan2006} and data points from HAWC \citep{Goodman2022}.
}
\label{fig:model_pwn}
\end{figure*}
Having derived the expected age of the PWN system, we used our measurement of the size of the extended H.E.S.S.\xspace component to infer how fast the `relic' electrons associated with this component have diffused since their injection (see Fig.~\ref{fig:hess_comp1_size}).
We have assumed an energy-dependent diffusion coefficient
\begin{linenomath*}
\begin{equation}\label{eq:diff}
D = D_0\left(\frac{E_e}{\SI{40}{\TeV}}\right)^\delta\,,
\end{equation}
\end{linenomath*}
where $E_e$ is the electron energy, $D_0$ denotes the diffusion coefficient at a reference energy of \SI{40}{\TeV}, and $\delta$ specifies the energy dependence of the diffusion.
Using again the GAMERA library to derive the expected size of the `relic' electron component as a function of $\gamma$\xspace-ray energy, we determined the two parameters $D_0$ and $\delta$ by fitting the expected size to the observed size of component~A of HESS~J1809$-$193\xspace (cf.\ Table~\ref{tab:size_comp1}) -- noting again that the results of the fit are strongly model-dependent and should not be taken as a measurement.
The best-fit diffusion coefficient of $D_0\sim\SI{1.1e28}{\centi\meter\squared\per\second}$ appears reasonable and is of the same order of magnitude as the coefficient obtained for the Geminga halo by \citet{HAWC2017}.
On the other hand, the observed data do not provide very strong constraints for $\delta$, with both Kolmogorov scaling ($\delta=1/3$) and Bohm scaling ($\delta=1$) consistent with the observations.
While our simple estimate assumes a radially symmetric diffusion of the electrons, we note that the elongation of component~A aligns with the asymmetric extension of the X-ray PWN, possibly hinting at a particular arrangement of the magnetic field in the region.
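To give a feeling for the numbers involved, the following sketch (an order-of-magnitude cross-check of ours, not the GAMERA-based fit) evaluates Eq.~(\ref{eq:diff}) together with one common definition of the diffusion length, $r_\mathrm{diff}=\sqrt{4Dt}$, for an assumed diffusion time of $\sim$\SI{29}{\kilo\year}; it neglects cooling and the injection history:
\begin{verbatim}
import numpy as np

D0, E_ref, pc = 1.1e28, 40.0, 3.086e18   # cm^2/s at 40 TeV; cm per pc
t = 29e3 * 3.156e7                       # assumed diffusion time [s]

for delta in (1.0 / 3.0, 1.0):           # Kolmogorov vs. Bohm scaling
    for E_e in (10.0, 40.0, 100.0):      # electron energy [TeV]
        D = D0 * (E_e / E_ref) ** delta
        r = np.sqrt(4.0 * D * t) / pc
        print(f"delta={delta:.2f}  E={E_e:5.1f} TeV  r ~ {r:3.0f} pc")
\end{verbatim}
At the pulsar distance of \SI{3.3}{\kpc}, the semi-major axis of component~A ($\sim 0.62^\circ$) corresponds to roughly \SI{36}{\pc}; the naive estimates above overshoot this, as expected when cooling, which limits the effective propagation time of the highest-energy electrons, is neglected.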
Lastly, we point out that because the highest-energy `relic' electrons have cooled since they were injected, a cut-off to the corresponding $\gamma$\xspace-ray spectrum is expected to occur.
The measured cut-off energy of $\sim$\SI{13}{\TeV} for component~A of HESS~J1809$-$193\xspace is well in line with this prediction, as can be seen in Fig.~\ref{fig:model_pwn_zoom}.
\begin{figure}
\centering
\includegraphics{img/hess_comp1_size_vs_energy}
\caption{
Measured and predicted radius of the extended component of HESS~J1809$-$193\xspace (component~A).
As measured radius, we used the 1-$\sigma$ extent of the semi-major axis of the elongated Gaussian spatial model for component~A (cf.\ Table~\ref{tab:size_comp1}).
The solid blue curve has been obtained by fitting the electron diffusion parameters ($D_0, \delta$; cf.\ Eq.~\ref{eq:diff}) to the measured data points; the dashed orange and dotted green curves show results for fixed values of $\delta$ as indicated in the legend.
}
\label{fig:hess_comp1_size}
\end{figure}
We conclude that the appearance of HESS~J1809$-$193\xspace is compatible with that of a halo of old electrons (component~A) around the PWN (component~B \& X-ray emission).
We also note that in terms of its X-ray-to-TeV luminosity ratio, PSR~J1809$-$1917\xspace fits well into the population of PWN \citep{Kargaltsev2013}.
\subsection{Possible hadronic contributions}
\label{sec:hadronic_scenario}
Given the presence of SNRs and molecular clouds in the vicinity of HESS~J1809$-$193\xspace, we also need to consider the possibility that cosmic-ray nuclei accelerated at the SNR shock fronts and interacting hadronically in the molecular clouds are responsible for at least part of the observed $\gamma$\xspace-ray emission.
Indeed, a mixed leptonic/hadronic scenario seems possible in principle: while we are not aware of distance estimates for G011.1+00.1\xspace, existing distance estimates for G011.0$-$00.0\xspace of \SI{2.6}{\kpc} \citep{Bamba2003}, $\sim$\SI{3}{\kpc} \citep{Castelletti2016}, and $2.4\pm0.7\,\si{\kpc}$ \citep{Shan2018} seem broadly consistent with those for PSR~J1809$-$1917\xspace of \SI{3.7}{\kpc} \citep{Morris2002} and \SI{3.27}{\kpc} \citep{Parthasarathy2019}.
Furthermore, molecular gas is present throughout the region (cf.\ Appendix~\ref{sec:appendix_fugin_map}), and in particular the dense molecular clouds found by \citet{Castelletti2016} and \citet{Voisin2019} seem to lie at distances compatible with that of G011.0$-$00.0\xspace, thus providing the required target material for cosmic-ray interactions.
This has led \citet{Voisin2019} to propose that G011.0$-$00.0\xspace is the host SNR of PSR~J1809$-$1917\xspace.
However, while the pulsar proper motion could be compatible with this scenario, the association is not firm \citep{Klingler2018,Klingler2020}.
Although the measured spectrum of the \emph{Fermi}-LAT\xspace source J1810.3$-$1925e\xspace is comparatively soft, it could in principle be described (below $\sim$\SI{10}{\GeV}) using a hadronic model.
However, it remains unclear why the spatial model of J1810.3$-$1925e\xspace coincides with that of the extended H.E.S.S.\xspace component in this case, as we would rather expect the emission to be more compact and centred on the positions of the molecular clouds.
We also note that simultaneously modelling the emission of J1810.3$-$1925e\xspace and either of the two H.E.S.S.\xspace components in a purely hadronic scenario would require, in order to explain the transition from the steep spectrum of the former to the harder spectra of the latter, a spectral hardening in the primary cosmic-ray spectrum, for which there is no obvious explanation.
As presented in Sect.~\ref{sec:pwn_scenario}, both components of HESS~J1809$-$193\xspace can be modelled well within a leptonic scenario.
Nevertheless, we have also explored the implications of either of the components being of hadronic origin.
To this end, we have fitted a proton-proton ($pp$) model to both components, employing the \textsc{Naima} package \citep{Zabalza2015}.
The primary proton spectrum is described using an ECPL model (see Eq.~\ref{eq:ecpl}) and we have assumed a distance to the source of \SI{3}{\kpc}.
We used the wrapper class for \textsc{Naima} models implemented in \textsc{Gammapy}\xspace, so that they could be fitted directly to the H.E.S.S.\xspace data sets (as opposed to fitting them to the extracted flux points only), using the same likelihood framework as before (cf.\ Sect.~\ref{sec:hess_likelihood_analysis}).
The fit results are presented in Table~\ref{tab:pp_model_pars}, and the resulting spectra displayed in Fig.~\ref{fig:model_snr}.
The same spatial models as in the previous analysis (cf.\ Sect.~\ref{sec:hess_results}) were assumed and compatible best-fit parameters were obtained for them.
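For orientation, the essence of this setup can be reproduced with a few lines of Python using \textsc{Naima}; the sketch below is our illustration only (assuming \textsc{Naima}'s standard interface, with the best-fit values for component~A from Table~\ref{tab:pp_model_pars}), and computes the predicted $\gamma$\xspace-ray SED for a fixed target density, whereas the actual fit couples the model to the H.E.S.S.\xspace data sets via \textsc{Gammapy}\xspace:
\begin{verbatim}
import numpy as np
import astropy.units as u
from naima.models import ExponentialCutoffPowerLaw, PionDecay

# Primary proton spectrum of component A (naima's "alpha" is Gamma here).
protons = ExponentialCutoffPowerLaw(amplitude=10.0e34 / u.eV,
                                    e_0=20 * u.TeV,
                                    alpha=1.76,
                                    e_cutoff=90 * u.TeV)
pp = PionDecay(protons, nh=1.0 / u.cm**3)   # target density n = 1 cm^-3

E_gamma = np.logspace(-1, 2, 25) * u.TeV    # 0.1-100 TeV
sed = pp.sed(E_gamma, distance=3 * u.kpc)   # E^2 dN/dE at 3 kpc
\end{verbatim}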
\begin{table}
\centering
\caption{Best-fit parameter values for the hadronic $pp$ models.
}
\label{tab:pp_model_pars}
\begin{tabular}{lc}
\hline\hline
Par. [unit] & Value \\\hline
\multicolumn{2}{c}{\rule{0pt}{1.1\normalbaselineskip} Component~A \vspace{0.05cm}}\\\hline
$N_0^{p,\mathrm{A}}\,[10^{34}\,\mathrm{eV}^{-1}]$ & $10.0\pm 1.8$\\
$\Gamma^{p,\mathrm{A}}$ & $1.76\pm 0.17$\\
$E_c^{p,\mathrm{A}}$ [TeV] & $90_{-30}^{+35}$\\
$E_0^{p,\mathrm{A}}$ [TeV] & 20 (fixed)\\\hline
\multicolumn{2}{c}{\rule{0pt}{1.1\normalbaselineskip} Component~B \vspace{0.05cm}}\\\hline
$N_0^{p,\mathrm{B}}\,[10^{34}\,\mathrm{eV}^{-1}]$ & $1.0\pm 0.4$\\
$\Gamma^{p,\mathrm{B}}$ & $1.34\pm 0.45$\\
$E_c^{p,\mathrm{B}}$ [TeV] & $110_{-50}^{+135}$\\
$E_0^{p,\mathrm{B}}$ [TeV] & 20 (fixed)\\\hline
\end{tabular}
\tablefoot{
$N_0$, $\Gamma$, $E_0$, and $E_c$ are parameters of the ECPL spectral model (Eq.~\ref{eq:ecpl}), where the superscript $p,\mathrm{X}$ denotes that these are the parameters of the primary proton spectrum of component $\mathrm{X}=\{\mathrm{A},\mathrm{B}\}$, respectively.
The quoted errors represent statistical uncertainties only.
}
\end{table}
\begin{figure}
\centering
\includegraphics{img/naima_model_snr_only_with_legend}
\caption{
SED of HESS~J1809$-$193\xspace with hadronic ($pp$) models.
The H.E.S.S.\xspace and \emph{Fermi}-LAT\xspace data points have been derived in this work, the HAWC data points (shown for comparison only) are taken from \citet{Goodman2022}.
The lines show the predicted $\gamma$\xspace-ray spectra for hadronic models fitted to each of the two H.E.S.S.\xspace components, respectively.
}
\label{fig:model_snr}
\end{figure}
The $pp$ model for component~A prefers a relatively hard spectral index of $\Gamma^{p,\mathrm{A}}=1.76\pm 0.17$.
Integrating the primary spectrum above \SI{1}{\GeV} yields a total required energy of $W^{p,\mathrm{A}}\sim \num{3.2e50}\,(n/\SI{1}{\per\cubic\centi\meter})^{-1}\,\si{\erg}$, which represents -- unless high ISM densities are invoked -- a significant fraction of the canonically assumed kinetic energy released in a supernova explosion of $\sim$\SI{e51}{\erg} \citep[e.g.][]{Ginzburg1975}.
In this context, we note that while the FUGIN CO data indeed show the presence of molecular gas beyond the dense clouds discovered by \citet{Castelletti2016} and \citet{Voisin2019}, they also indicate a gradient in the gas density across the extent of HESS~J1809$-$193\xspace (see the map in Appendix~\ref{sec:appendix_fugin_map}).
This gradient is not reflected in the observed $\gamma$\xspace-ray emission, which makes the interpretation of component~A in a hadronic scenario challenging.
For component~B, our fit yields an even harder proton spectral index of $\Gamma^{p,\mathrm{B}}=1.34\pm 0.45$.
The required energy in protons above \SI{1}{\GeV} is $W^{p,\mathrm{B}}\sim \num{2.7e49}\,(n/\SI{1}{\per\cubic\centi\meter})^{-1}\,\si{\erg}$.
Even considering that only part of the cosmic rays potentially accelerated by G011.0$-$00.0\xspace will reach the dense molecular clouds, the high density of gas in the vicinity of component~B in general ($\sim$\SI{1000}{\per\cubic\centi\meter}, cf.\ Appendix~\ref{sec:appendix_fugin_map}) makes this energy seem well affordable.
An explanation of component~B of HESS~J1809$-$193\xspace in a hadronic scenario therefore appears entirely reasonable, and would furthermore explain its offset from the position of PSR~J1809$-$1917\xspace and the X-ray PWN without the need of requiring, for example, a large proper motion velocity of the pulsar.
It would appear natural in this case to associate component~B also with the highest-energy emission up to \SI{100}{\TeV} measured with HAWC.
However, as is evident from Fig.~\ref{fig:model_snr}, the fitted cut-off energy $E_c^{p,\mathrm{B}}\sim\SI{110}{\TeV}$ of the primary proton spectrum leads to too strong a cut-off in the $\gamma$\xspace-ray spectrum, leaving the highest HAWC flux points unexplained.
We have therefore repeated the fit of the hadronic models, adding to the total TS a $\chi^2$ term that quantifies the deviation between the HAWC flux points and the sum of the predicted $\gamma$\xspace-ray fluxes of both H.E.S.S.\xspace components.
In this case, we obtain slightly softer spectra ($\Gamma^{p,\mathrm{A}}=1.95\pm0.10$; $\Gamma^{p,\mathrm{B}}=1.56\pm0.22$) and slightly higher cut-off energies ($E_c^{p,\mathrm{A}}=140_{-50}^{+80}\,\si{\TeV}$; $E_c^{p,\mathrm{B}}=200_{-130}^{+420}\,\si{\TeV}$).
These values are, however, consistent within uncertainties with those obtained in the previous fit, which demonstrates that it is possible to also explain the measured HAWC flux points within a hadronic scenario (or a mixed one, in which component~A is leptonic and component~B is hadronic).
Finally, we note that while the relatively hard primary proton spectra obtained for both components of HESS~J1809$-$193\xspace are not consistent with generic predictions of diffusive shock acceleration \citep{Bell2013}, they are compatible with a scenario in which cosmic rays accelerated in a supernova remnant illuminate a nearby gas cloud \citep[e.g.][]{Gabici2009}.
On the other hand, there is also the possibility of a continuous wind of hadronic cosmic rays powered by the pulsar \citep[e.g.][]{Gallant1994,Amato2003}, which may be an interesting scenario to explore for the case of PSR~J1809$-$1917\xspace, as already noted by \citet{Voisin2019}.
\section{Conclusion}
\label{sec:conclusion}
We have presented a new analysis of the $\gamma$\xspace-ray emission from HESS~J1809$-$193\xspace, employing improved analysis techniques.
For the first time, we were able to resolve the emission into two distinct components, which we have modelled with Gaussian spatial models.
Component~A appears extended and elongated, with a 1-$\sigma$ semi-major and semi-minor axis of $\sim0.62^\circ$ and $\sim 0.35^\circ$, respectively, and exhibits a spectrum with a cut-off at $\sim$\SI{13}{\TeV}.
Superimposed, component~B appears symmetric and more compact with a 1-$\sigma$ radius of $\sim0.1^\circ$, and shows a harder spectrum with no clear cut-off.
We have interpreted the results in a leptonic scenario, in which the $\gamma$\xspace-ray emission is due to high-energy electrons provided by the energetic pulsar PSR~J1809$-$1917\xspace, which is known to power an X-ray PWN.
The model is based on three `generations' of electrons, associated with component~A, component~B, and the X-ray PWN, respectively (going from old to recently injected electrons).
The measured extent and spectrum of component~A are compatible with a halo of old electrons that have escaped the PWN.
The presence of SNRs and molecular clouds within the region suggests that (part of) the $\gamma$\xspace-ray emission could also be of hadronic origin.
Indeed, we found that both of the components of HESS~J1809$-$193\xspace can in principle be modelled within a hadronic scenario.
However, a lack of correlation between the $\gamma$\xspace-ray emission of component~A and the gas present in the region disfavours a hadronic interpretation for this component.
Conversely, for component~B, which is spatially coincident with the shell of the SNR G011.0$-$00.0\xspace and several molecular clouds, this is a viable alternative explanation.
The measurement of $\gamma$\xspace-ray emission up to \SI{100}{\TeV} with HAWC could be viewed as additional support for this interpretation.
It would, however, leave the X-ray PWN without a counterpart at TeV energies (as component~A would be associated only with electrons injected long ago), which would be unexpected when comparing with other PWN systems \citep{Kargaltsev2013}.
Our analysis of \emph{Fermi}-LAT\xspace data has confirmed the presence of an extended source, J1810.3$-$1925e\xspace, that based on its location and morphology appears to be associated with component~A of HESS~J1809$-$193\xspace.
However, the spectrum of J1810.3$-$1925e\xspace does not connect smoothly to that of component~A of HESS~J1809$-$193\xspace, implying the need for a spectral hardening around \SI{100}{\GeV}.
While our presented model is not able to describe this feature, we note that the overall shape of the SED is reminiscent of that of another well-known PWN system, Vela~X, which also exhibits a break at around \SI{100}{\GeV} \citep{Tibaldo2018}.
However, multiple distinct components have not been resolved at TeV energies for this system yet.
Furthermore, with its characteristic age of only $\sim$\SI{10}{kyr} \citep{Manchester2005} and a very low braking index of $n=1.4$ \citep{Lyne1996}, the Vela pulsar has an evolution history quite different from PSR~J1809$-$1917\xspace.
Another interesting PWN to compare to is HESS~J1825$-$137, which is the prototype of an extended ($\sim$\SI{100}{\pc} diameter) PWN that shrinks in size at high $\gamma$\xspace-ray energies \citep{HESS_J1825_2019}.
The pulsar powering HESS~J1825$-$137, PSR~B1823$-$13 (PSR~J1826$-$1334), is quite similar to PSR~J1809$-$1917\xspace in terms of spin-down power ($\dot{E}=\SI{2.8e36}{\erg\per\second}$), period ($P=\SI{101}{\milli\second}$), and distance ($d=\SI{3.6}{\kpc}$), but may be slightly younger (characteristic age $\tau_c=\SI{21.4}{\kilo\year}$) \citep{Manchester2005}.
Comparing their $\gamma$\xspace-ray PWN, HESS~J1809$-$193\xspace is somewhat less extended than HESS~J1825$-$137 and does not exhibit an energy-dependent morphology.
On the other hand, HESS~J1809$-$193\xspace seems to be composed of two distinct components, whereas HESS~J1825$-$137 can be modelled with a single component that decreases in extent with increasing energy.
This may suggest that the PWN systems have evolved differently, for example due to differences in the density of the surrounding ISM, or due to a different evolution of the corresponding pulsar (e.g.\ \citeauthor{Khangulyan2018} \citeyear{Khangulyan2018} have proposed an unusually short birth period of $P_0\sim\SI{1}{\milli\second}$ for PSR~B1823$-$13).
Finally, it is interesting that HESS~J1809$-$193\xspace shows characteristics very similar to those of HESS~J1702$-$420 \citep{HESS_J1702_2021}: both have been resolved into a compact, hard-spectrum component surrounded by an extended, softer-spectrum component.
This may in principle suggest a similar origin of the $\gamma$\xspace-ray emission, however, HESS~J1702$-$420 is a `dark' source that lacks an obvious counterpart at other wavelengths \citep[see also][]{Giunti2022}, hampering a further comparison with HESS~J1809$-$193\xspace.
While we are not able to draw definitive conclusions about the origin of the $\gamma$\xspace-ray emission of HESS~J1809$-$193\xspace, our detailed and simultaneous characterisation of its morphology and spectrum is a big step towards understanding this source.
Further observations, in particular with HAWC \citep{HAWC2017a} as well as with the upcoming Cherenkov Telescope Array \citep[CTA;][]{CTA2018} and Southern Wide-Field Gamma-Ray Observatory \citep[SWGO;][]{SWGO2019}, will be crucial in further broadening our knowledge about HESS~J1809$-$193\xspace.
\begin{acknowledgements}
The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S.\xspace is gratefully acknowledged, as is the support by
the German Ministry for Education and Research (BMBF),
the Max Planck Society,
the German Research Foundation (DFG),
the Helmholtz Association,
the Alexander von Humboldt Foundation,
the French Ministry of Higher Education, Research and Innovation,
the Centre National de la Recherche Scientifique (CNRS/IN2P3 and CNRS/INSU),
the Commissariat \`{a} l'\'{E}nergie atomique et aux \'{E}nergies alternatives (CEA),
the U.K. Science and Technology Facilities Council (STFC),
the Irish Research Council (IRC) and the Science Foundation Ireland (SFI),
the Knut and Alice Wallenberg Foundation,
the Polish Ministry of Education and Science, agreement no.~2021/WK/06,
the South African Department of Science and Technology and National Research Foundation,
the University of Namibia,
the National Commission on Research, Science \& Technology of Namibia (NCRST),
the Austrian Federal Ministry of Education, Science and Research and the Austrian Science Fund (FWF),
the Australian Research Council (ARC),
the Japan Society for the Promotion of Science,
the University of Amsterdam and
the Science Committee of Armenia grant 21AG-1C085.
We appreciate the excellent work of the technical support staff in Berlin, Zeuthen, Heidelberg, Palaiseau, Paris, Saclay, T\"{u}bingen and in Namibia in the construction and operation of the equipment.
This work benefited from services provided by the H.E.S.S.\xspace Virtual Organisation, supported by the national resource providers of the EGI Federation.
This research made use of the
\textsc{Astropy}\footnote{\url{https://www.astropy.org}} \citep{Robitaille2013,PriceWhelan2018},
\textsc{Matplotlib}\footnote{\url{https://matplotlib.org}} \citep{Hunter2007},
and
\textsc{Corner}\footnote{\url{https://corner.readthedocs.io}} \citep{ForemanMackey2016}
software packages.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{sec:introduction}
As set systems, hypergraphs appear in various contexts, such as databases, clustering, and machine learning.
A hypergraph can be represented in various ways. As a generalization of graphs, one can represent vertices by points and hyperedges by connected sets in $\mathbb R^d$ such that each set contains exactly the points of a hyperedge. It is desirable that these sets satisfy additional properties, e.g., being (strictly) convex, similar or even translates of each other.
For an introductory example, suppose we are organizing a workshop and have a list of accepted talks.
Clearly, each participant wants to quickly identify talks of their specific interest.
In order to create a clear overview, we want to find a good representation. To this end, we label each talk by several tags, e.g., \texttt{hypergraphs}, \texttt{graph drawing}, \texttt{complexity theory}, \texttt{planar graphs}, etc.
Then, we create a representation, where each tag is represented by a unit disk (or another nice geometric object of our choice) containing points representing the talks that have this tag, see \Cref{fig:Geometric-Hypergraph} for an example.
In other words, we are interested in a geometric representation of the hypergraph where the vertex set is given by the talks and tags define the hyperedges.
In this work, we investigate the complexity of deciding whether a given hypergraph has such a geometric representation.
We start with a formal definition.
\pagebreak
\subparagraph{Problem Definition.}
In a \emph{geometric representation} of a hypergraph $H=(V,E)$, each vertex $v\in V$ is associated with a point $p_v\in \mathbb{R}^d$ and each hyperedge $e\in E$ is associated with a connected set $s_e\subset \mathbb{R}^d$ such that $\{p_v\mid v\in V\}\cap s_e=\{p_v\mid v\in e\}$ for all $e\in E$.
We say that a given hypergraph $H$ is \textit{representable} by some (possibly infinite) family $\ensuremath{\mathcal{F}}\xspace$ of sets in ${\mathbb{R}^d}$, if there exist $P\subset \mathbb{R}^d$ and $S \subseteq \ensuremath{\mathcal{F}}\xspace$ such that $(P,S)$ is a geometric representation of~$H$.
For a family \ensuremath{\mathcal{F}}\xspace of geometric objects, we define \textsc{Recognition}\xspace{}(\ensuremath{\mathcal{F}}\xspace) as the problem to determine whether a given hypergraph is representable by \ensuremath{\mathcal{F}}\xspace.
\begin{figure}[tbp]
\centering
\includegraphics{figures/Geometric-Hypergraph.pdf}
\caption{An abstract hypergraph and a geometric representation with unit disks.}
\label{fig:Geometric-Hypergraph}
\end{figure}
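To make the definition concrete, the following Python sketch (a hypothetical helper of ours, not part of the formal problem statement) checks whether given unit disks form a geometric representation of a hypergraph:
\begin{verbatim}
import numpy as np

def is_representation(points, centres, hyperedges, r=1.0):
    """Each disk s_e must contain exactly the points p_v with v in e."""
    for e, c in centres.items():
        inside = {v for v, p in points.items()
                  if np.linalg.norm(np.asarray(p) - np.asarray(c)) <= r}
        if inside != set(hyperedges[e]):
            return False
    return True
\end{verbatim}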
Next, we give some definitions describing
the geometric families studied in this work.
\subparagraph{\text{Bi-curved}\xspace, \text{Difference-separable}\xspace, and \text{Computable}\xspace Convex Sets.}
We study convex sets that are \text{bi-curved}\xspace, \text{difference-separable}\xspace and \text{computable}\xspace.
While the first two properties are needed for \ensuremath{\exists\mathbb{R}}\xspace-hardness,
the last one is used to show \ensuremath{\exists\mathbb{R}}\xspace-membership.
Let $C\subset \mathbb{R}^d$ be a convex set.
We call $C$ \textit{\text{computable}\xspace} if
for any point $p\in \mathbb{R}^d$ we can decide on a real~RAM whether $p$ is contained
in $C$.
We say that $C$ is \textit{\text{bi-curved}\xspace}
if there exists a unit vector $v\in\mathbb{R}^d$, such that there are two distinct tangent hyperplanes on $C$ with normal vector $v$;
with each of these hyperplanes intersecting $C$ in a single point, and $C$ being \emph{smooth} at both of these intersection points.
Informally, a convex set is \text{bi-curved}\xspace, if its boundary has two smoothly curved parts in which the tangent hyperplanes are parallel.
Note that a convex, \text{bi-curved}\xspace set is necessarily bounded.
As a matter of fact, any strictly convex bounded set in any dimension is \text{bi-curved}\xspace. For such sets, any unit vector $v$ fulfills the conditions. As can be seen in \Cref{fig:niceShape} (left), being strictly convex is not necessary for being \text{bi-curved}\xspace.
\begin{figure}[tbh]
\centering
\includegraphics[page=12]{figures/halfspaces.pdf}
\caption{Left: two parallel tangent hyperplanes of a burger-like set proving its \text{bi-curved}\xspace{ness}. Middle: a hyperplane separating the symmetric difference of two translates of the burger-like set.
Right: two cubes in $\mathbb{R}^3$ whose symmetric difference cannot be separated by a plane.
}
\label{fig:niceShape}
\end{figure}
We call $C$ \textit{\text{difference-separable}\xspace} if
for any two translates $C_1,C_2$ of $C$, there exists a hyperplane which strictly separates $C_1\setminus C_2$ from $C_2\setminus C_1$.
Every convex set in $\mathbb{R}^2$ is \text{difference-separable}\xspace; see \Cref{fig:niceShape} (middle) for an example. For a proof of this fact we refer to \cite[Corollary 2.1.2.2]{ma2000phd}. However, in higher dimensions this is not the case:
for a counterexample, consider two $3$-cubes
as in \Cref{fig:niceShape} (right).
In higher dimensions, the \text{bi-curved}\xspace and \text{difference-separable}\xspace families include the balls and ellipsoids.
We are not aware of other natural geometric families with those two properties.
We are now ready to state our results.
\subparagraph{Results.}
Our main contribution is to revive
the study of
recognition of geometric hypergraphs.
We first consider perhaps the simplest type of geometric hypergraphs, namely those that stem from halfspaces.
It is known due to Tanenbaum, Goodrich, and Scheinerman~\cite{tanenbaum1995Halfspaces} that the
\textsc{Recognition}\xspace{} problem for geometric hypergraphs of halfspaces is \ensuremath{\text{NP}}\xspace-hard, but their proof actually implies \ensuremath{\exists\mathbb{R}}\xspace-hardness as well. We present a slightly different proof of this fact for two reasons.
Firstly, their proof lacks details
about extensions to higher dimensions.
Secondly, it is a good stepping stone towards our proof of \Cref{thm:translates}.
\begin{theorem}[Tanenbaum, Goodrich, Scheinerman~\cite{tanenbaum1995Halfspaces}]\label{thm:halfspaces}
For every $d\geq 2$, \textsc{Recognition}\xspace{}(\ensuremath{\mathcal{F}}\xspace) is \ensuremath{\exists\mathbb{R}}\xspace-complete for the family~\ensuremath{\mathcal{F}}\xspace of halfspaces in $\mathbb{R}^d$.
\end{theorem}
Next we consider families of objects that are translates of a given object.
\begin{theorem}\label{thm:translates}
For $d\geq2$, let $C\subseteq \mathbb{R}^d$ be a convex, \text{bi-curved}\xspace, \text{difference-separable}\xspace and \text{computable}\xspace set,
and let \ensuremath{\mathcal{F}}\xspace be the family of all translates of $C$.
Then \textsc{Recognition}\xspace{}(\ensuremath{\mathcal{F}}\xspace) is \ensuremath{\exists\mathbb{R}}\xspace-complete.
\end{theorem}
We note that for $d=1$, the \textsc{Recognition}\xspace{} problems of halfspaces and translates of convex sets can be solved by sorting and thus can be decided in polynomial time.
One might be under the impression that the \textsc{Recognition}\xspace{} problem
is \ensuremath{\exists\mathbb{R}}\xspace-complete for every reasonable family of geometric objects of dimension at least two.
We show that this is not the case by looking at translates of polygons.
\begin{restatable}{theorem}{polygonTranslates}\label{thm:polygons}
Let $P$ be a simple polygon with integer coordinates,
and \ensuremath{\mathcal{F}}\xspace the family of all translates of $P$.
Then \textsc{Recognition}\xspace{}(\ensuremath{\mathcal{F}}\xspace) is contained in \ensuremath{\text{NP}}\xspace.
\end{restatable}
\subparagraph{Organization.}
We give an overview over our proof techniques in \Cref{sec:sketch}. Full proofs of \Cref{thm:polygons} as well as the membership parts of \Cref{thm:halfspaces,thm:translates} are found in \Cref{sec:Membership}. We introduce the version of pseudohyperplane stretchability used in our hardness reductions in \Cref{sec:stretchability}. Full proofs of the hardness parts of \Cref{thm:halfspaces,thm:translates} can be found in \Cref{sec:halfspaces,sec:niceshapes}, respectively.
\subparagraph{Open problems.}
As mentioned above, we are not aware of interesting
families of \text{bi-curved}\xspace and \text{difference-separable}\xspace sets in higher dimensions beyond balls and ellipsoids.
The families of translates of a given polygon demonstrate that some curvature is needed in order to show \ensuremath{\exists\mathbb{R}}\xspace-hardness.
We wonder if it is sufficient for \ensuremath{\exists\mathbb{R}}\xspace-hardness to assume
curvature at only one boundary part instead of two opposite ones.
Another open question is to consider families that
include rotated copies or homothetic copies of a fixed geometric object.
Allowing for rotation, it is conceivable that \ensuremath{\exists\mathbb{R}}\xspace-hardness even holds for polygons.
\subsection{Related work}
In this section we give a concise overview over related work on the complexity class \ensuremath{\exists\mathbb{R}}\xspace, geometric intersection graphs, and on other set systems related to hypergraphs.
\subparagraph{The Existential Theory of the Reals.}
\label{par:ETR}
The complexity class \ensuremath{\exists\mathbb{R}}\xspace (pronounced as `ER' or `exists R')
is defined via its canonical complete problem \textsf{ETR}\xspace (short for \emph{Existential Theory of the Reals}) and contains all problems that polynomial-time many-one reduce to it.
In an \textsf{ETR}\xspace instance, we are given a sentence of the form
\[
\exists x_1, \ldots, x_n \in \mathbb{R} :
\varphi(x_1, \ldots, x_n),
\]
where~$\varphi$ is a well-formed and quantifier-free formula consisting of polynomial equations and inequalities in the variables and the logical connectives $\{\land, \lor, \lnot\}$.
The goal is to decide whether this sentence is true.
The complexity class \ensuremath{\exists\mathbb{R}}\xspace gains its importance from its numerous influential complete problems.
Important \ensuremath{\exists\mathbb{R}}\xspace-completeness results include the realizability of abstract order types~\cite{Mnev1988_UniversalityTheorem,Shor1991_Stretchability}, geometric linkages~\cite{Schaefer2013_Realizability}, and the recognition of geometric intersection graphs, as further discussed below.
More results concern graph drawing~\cite{dobbins_AreaUniversality_Journal,Erickson2019_CurveStraightening,Lubiw2018_DrawingInPolygonialRegion,Schaefer2021_FixedK}, the Hausdorff distance~\cite{HausDorff}, polytopes~\cite{Dobbins2019_NestedPolytopes,Richter1995_Polytopes}, Nash-equilibria~\cite{Berthelsen2019_MultiPlayerNash,Bilo2016_Nash, Bilo2017_SymmetricNash,Garg2018_MultiPlayer,Schaefer2017_FixedPointsNash},
training neural networks~\cite{Abrahamsen2021_NeuralNetworks, 2022trainFull},
matrix factorization~\cite{Chistikov2016_Matrix,Schaefer2018_TensorRank,Shitov2016_MatrixFactorizations,Shitov2017_PSMatrixFactorization,tunccel2022computational},
continuous constraint satisfaction problems~\cite{Miltzow2022_ContinuousCSP},
geometric packing~\cite{Abrahamsen2020_Framework}, the art gallery problem~\cite{Abrahamsen2018_ArtGallery,stade2022complexity}, and covering polygons with convex polygons~\cite{Abrahamsen2022_Covering}.
\subparagraph{Geometric Hypergraphs.}
Many aspects of hypergraphs with geometric representations have been studied. Hypergraphs represented by touching polygons in $\mathbb{R}^3$ have been studied by Evans et al.~\cite{evans}. Bounds on the number of hyperedges in hypergraphs representable by homothets of a fixed convex set $S$ have been established by Axenovich and Ueckerdt~\cite{axenovichHomothets}. Smorodinsky studied the chromatic number and the complexity of coloring of hypergraphs represented by various types of sets in the plane~\cite{smorodinsky}. Dey and Pach~\cite{deyPachExtremal} generalize many extremal properties of geometric graphs to hypergraphs where the hyperedges are induced simplices of some point set in $\mathbb{R}^d$. Haussler and Welzl~\cite{haussler1986epsilon} defined $\epsilon$-nets, subsets of vertices of hypergraphs called range spaces with nice properties. Such $\epsilon$-nets of geometric hypergraphs have been studied quite intensely~\cite{enets4,enets3,pachEpsilonNets2,pachEpsilonNets1}.
While there are many structural results, we are not aware of any research into the complexity of recognizing hypergraphs given by geometric representations, other than the recognition of embeddability of simplicial complexes, as we will discuss in the next paragraph.
\subparagraph{Other Representations of Hypergraphs.}
Hypergraphs are in close relation with abstract simplicial complexes. In particular, an abstract simplicial complex (complex for short) is a set system that is closed under taking subsets. A $k$-complex is a complex in which the maximum size of a set is $k$. In a geometric representation of an abstract simplicial complex $H=(V,E)$, each $\ell$-set of $E$ is represented by an $\ell$-simplex such that the simplices of any two sets intersect exactly in the simplex defined by their intersection (and are disjoint in case of an empty intersection). Note that 1-complexes are graphs and hence deciding the representability in the plane corresponds to graph planarity (which is in \ensuremath{\text{P}}\xspace). In stark contrast, Abrahamsen, Kleist and Miltzow recently showed that deciding whether a 2-complex has a geometric embedding in~$\mathbb R^3$ is \ensuremath{\exists\mathbb{R}}\xspace-complete~\cite{abrahamsenComplexes}; they also prove hardness for other dimensions.
Similarly, piecewise linear embeddings of simplicial complexes have been studied~\cite{vcadek2014computing,cadek2014polynomial,vcadek2017algorithmic,matouvsek2018embeddability,matousek2011hardness,mesmay2020embeddability,Skopenkov2020}.
\subparagraph{Recognizing Geometric Intersection Graphs.}
Given a set of geometric objects, its intersection graph has a vertex for each object, and an edge between any two intersecting objects.
The complexity of recognizing geometric intersection graphs has been studied for various geometric objects. We summarize these results in \Cref{fig:Class-Overview}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figures/Class-Overview.pdf}
\caption{Containment relations of geometric intersection graphs.
Recognition of a green class is in \ensuremath{\text{P}}\xspace, of a grey class is \ensuremath{\text{NP}}\xspace-complete, of a blue class is \ensuremath{\exists\mathbb{R}}\xspace-complete, and of a white class is unknown.}
\label{fig:Class-Overview}
\end{figure}
While intersection graphs of
circle chords (Spinnrad~\cite{spinrad1994recognition}),
unit intervals (Looges and Olariu~\cite{loogesUnitInterval})
and intervals (Booth and Lueker~\cite{boothInterval})
can be recognized in polynomial time, recognizing
string graphs (Schaefer and Sedgwick~\cite{schaefer2003recognizing})
is \ensuremath{\text{NP}}\xspace-complete. In contrast, \ensuremath{\exists\mathbb{R}}\xspace-completeness of recognizing intersection graphs has been proved for
(unit) disks by McDiarmid and M{\"u}ller \cite{mcdiarmid2013integer},
convex sets by Schaefer~\cite{schaeferConvex},
downward rays by Cardinal et al.~\cite{Cardinal2018_Intersection},
outer segments by Cardinal et al.~\cite{Cardinal2018_Intersection},
unit segments by Hoffmann et al.~\cite{unitSegER},
segments by Kratochv{\'\i}l and Matou{\v{s}}ek~\cite{Kratochvil1994_IntersectionGraphs},
$k$-polylines by Hoffmann et al.~\cite{unitSegER}, and
unit balls by Kang and M{\"u}ller~\cite{Kang2012}.
The existing research landscape indicates that recognition problems of intersection graphs are \ensuremath{\exists\mathbb{R}}\xspace-complete when
the family of objects satisfies two conditions:
Firstly, they need to be ``geometrically solid'',
i.e., not strings.
Secondly, some non-linearity must be present by
either allowing rotations, or by the objects having some curvature.
Our results indicate that this general intuition might translate to the recognition of geometric hypergraphs.
\subsection{Overview of Proof Techniques}\label{sec:sketch}
We prove containment in \ensuremath{\exists\mathbb{R}}\xspace and \ensuremath{\text{NP}}\xspace using standard arguments, providing witnesses and verification algorithms.
We prove the hardness parts of \Cref{thm:halfspaces,thm:translates} by reduction from stretchability of pseudohyperplane arrangements.
The hypergraph we build from the given arrangement differs from the one built in the proof of \Cref{thm:halfspaces} given in~\cite{tanenbaum1995Halfspaces}, since we wish to use a single construction which works nicely for both theorems.
Given a simple pseudohyperplane arrangement $\ensuremath{\mathcal{A}}\xspace$, we construct a hypergraph~$H$ as follows:
We double each pseudohyperplane by giving it a parallel \emph{twin}. In this arrangement, we place a point in every $d$-dimensional cell. These points represent the vertices of $H$. Every pseudohyperplane $\ell$ then defines a hyperedge, which contains all of the points on the same side of $\ell$ as its twin pseudohyperplane. See \Cref{fig:hypergraphConstruction} for an illustration of this construction.
Because this construction can also be performed on a hyperplane arrangement, it is straightforward to prove that if $\ensuremath{\mathcal{A}}\xspace$ is stretchable, $H$ can be represented by halfspaces. Conversely, we show that the hyperplanes bounding the halfspaces in a representation of $H$ must be a stretching of $\ensuremath{\mathcal{A}}\xspace$.
For \Cref{thm:translates}, the \text{bi-curved}\xspace{}ness of a set $C$ implies that, locally, $C$ can approximate any halfspace whose normal vector is close to the vector $v$ from the definition of \text{bi-curved}\xspace. This allows us to prove that stretchability of $\ensuremath{\mathcal{A}}\xspace$ implies representability of $H$ by translates of $C$. The property of~$C$ being \text{difference-separable}\xspace is used when reconstructing a hyperplane arrangement from a representation of $H$.
\section{Membership}
\label{sec:Membership}
In this section, we show \ensuremath{\exists\mathbb{R}}\xspace- and \ensuremath{\text{NP}}\xspace-membership.
Recall that the class \ensuremath{\text{NP}}\xspace is usually characterized by the existence of a witness and a verification algorithm.
The same characterization exists for \ensuremath{\exists\mathbb{R}}\xspace using a real verification algorithm:
in addition to binary words of polynomial length,
the witness may contain a polynomial number of real numbers.
Furthermore, in order to be able to use those real numbers, the verification algorithm is
allowed to work in the so-called real RAM model of computation,
which supports arithmetic operations with real numbers in constant time~\cite{Erickson2022_SmoothingGap}.
\subsection{Halfspaces}
Here, we show the \ensuremath{\exists\mathbb{R}}\xspace-membership part of \Cref{thm:halfspaces}.
\begin{lemma}
Fix $d\geq 1$ and let \ensuremath{\mathcal{F}}\xspace denote the family of halfspaces in $\mathbb{R}^d$. Then \textsc{Recognition}\xspace{}(\ensuremath{\mathcal{F}}\xspace) is contained in \ensuremath{\exists\mathbb{R}}\xspace.
\end{lemma}
\begin{proof}
We formulate an \textsf{ETR}\xspace formula from the hypergraph $H$ as follows.
For each vertex/point, we create variables $p = (p_1,\ldots,p_d)$ to represent the point.
Similarly, for each hyperedge/halfspace, we create variables $h= (h_1,\ldots,h_{d+1})$ to represent the coefficients of the halfspace.
Then for each point $p$ that is supposed
to be in some halfspace $h$, we create the constraint:
\[h_1p_1 + \ldots + h_dp_d \leq h_{d+1}.\]
Similarly, if $p$ is not contained in a halfspace $h$, we create
the constraint:
\[h_1p_1 + \ldots + h_dp_d > h_{d+1}.\]
This is a valid \textsf{ETR}\xspace sentence that is equivalent to
the representability of $H$.
Note that for any fixed dimension $d$ the \textsf{ETR}\xspace sentence is
of polynomial size.
\end{proof}
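For illustration, consider a toy instance (our own example, not part of the construction above): $d=1$, vertices $v_1,v_2$ and the single hyperedge $\{v_1\}$. The construction yields the \textsf{ETR}\xspace sentence
\[
\exists p^{(1)},p^{(2)},h_1,h_2\in\mathbb{R}:\ h_1p^{(1)}\le h_2\ \wedge\ h_1p^{(2)}>h_2,
\]
which is satisfied by, e.g., $h_1=1$, $h_2=0$, $p^{(1)}=-1$, $p^{(2)}=1$, matching the fact that this hypergraph is representable by halfspaces in $\mathbb{R}^1$.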
\subsection{Translates of \text{Computable}\xspace Sets}
Here, we show the \ensuremath{\exists\mathbb{R}}\xspace-membership part of \Cref{thm:translates}.
\begin{lemma}
For some $d\geq 1$, let $C\subseteq\mathbb{R}^d$ be a computable set and let \ensuremath{\mathcal{F}}\xspace be the family of all translates of $C$. Then, \textsc{Recognition}\xspace{}(\ensuremath{\mathcal{F}}\xspace) is contained in \ensuremath{\exists\mathbb{R}}\xspace.
\end{lemma}
\begin{proof}
We describe a real verification algorithm as
mentioned above.
The witness consists of the (real) coordinates of the points representing the vertices and the coefficients of the translation vectors
representing the hyperedges.
By definition of \text{computable}\xspace, a verification algorithm can efficiently check if each point is contained in the correct set.
\end{proof}
\subsection{Translates of Polygons -- Proof of \texorpdfstring{\Cref{thm:polygons}}{Theorem 3}}
Here, we show \Cref{thm:polygons}, i.e., \ensuremath{\text{NP}}\xspace-membership of \textsc{Recognition}\xspace{} of translates of some simple polygon $P$.
\polygonTranslates*
\begin{proof}
The proof uses a similar argument to the one used to show that
the problem of packing translates of polygons inside a polygon is in \ensuremath{\text{NP}}\xspace~\cite{Abrahamsen2020_Framework}.
For an illustration, consider \Cref{fig:triangulated_polygon}.
We first triangulate the convex hull of $P$, such that each edge of $P$ appears in the triangulation.
Then, a representation of a hypergraph $H$ by translates of $P$ gives rise to a certificate as follows: For each pair of a point $p$ and a translate $o$ of $P$, we specify whether $p$ lies in the convex hull of $o$, and if it does, in which triangle of the triangulation $p$ lies.
\begin{figure}[htb]
\centering
\includegraphics[page =2]{figures/triangulated_polygon.pdf}
\caption{The polygon $P$, its triangulation, and the triangle in which $p$ is contained.}
\label{fig:triangulated_polygon}
\end{figure}
Such a certificate can be tested in polynomial time: we create a linear program whose variables describe the locations of the points $p$ and the translation vectors of each translate of~$P$, and whose constraints enforce the points to lie in the triangles described by the certificate.
This linear program has a number of constraints and variables polynomial in the size of $H$, and can be thus solved in polynomial time.
The solution of this linear program gives the locations of the points and the translation vectors of the polygons. In particular, these coordinates are of polynomial size, so they could be used as a certificate directly.
\end{proof}
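To make the linear program concrete (our rendering, with hypothetical triangle data): if the certificate asserts that the point $p$ lies in the triangle $T=\{x\in\mathbb{R}^2 : \langle a_j,x\rangle\le b_j,\ j=1,2,3\}$ of the triangulation of $P$, translated by the vector $t_o$, this becomes the three linear constraints
\[
\langle a_j,\,p-t_o\rangle\le b_j\qquad (j=1,2,3),
\]
where $p$ and $t_o$ are variables and the $a_j,b_j$ are constants determined by the fixed triangulation.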
\section{Pseudohyperplane Stretchability}\label{sec:stretchability}
A \emph{pseudohyperplane arrangement}
in $\mathbb{R}^d$ is an arrangement of
\emph{pseudohyperplanes},
where a pseudohyperplane is a set homeomorphic to a hyperplane, and each intersection of pseudohyperplanes is homeomorphic to a plane of some dimension.
In the classical definition, every set of $d$ pseudohyperplanes has a non-empty intersection. Here, we consider \emph{partial pseudohyperplane arrangements (PPHA\xspace{s})},
where not necessarily every set of $\leq d$ pseudohyperplanes has a common intersection.
A PPHA\xspace is \emph{simple} if no more than $k$ pseudohyperplanes intersect in a space of dimension $d-k$, in particular, no $d+1$ pseudohyperplanes have a common intersection. We call the $0$-dimensional intersection points of $d$ pseudohyperplanes the \emph{vertices} of the arrangement.
A simple PPHA\xspace \ensuremath{\mathcal{A}}\xspace is \emph{stretchable} if there exists a hyperplane arrangement \ensuremath{\mathcal{A}}\xspace' such that each vertex in \ensuremath{\mathcal{A}}\xspace also exists in \ensuremath{\mathcal{A}}\xspace' and each (pseudo\nobreakdash-)hyperplane splits this set of vertices the same way in $\ensuremath{\mathcal{A}}\xspace$ and $\ensuremath{\mathcal{A}}\xspace'$. In other words, each vertex of \ensuremath{\mathcal{A}}\xspace lies on the correct side of each hyperplane in \ensuremath{\mathcal{A}}\xspace'.
We then call the hyperplane arrangement \ensuremath{\mathcal{A}}\xspace' a \emph{stretching} of \ensuremath{\mathcal{A}}\xspace.
The problem \textsc{$d$\nobreakdash-Stretchability}\xspace is the problem of deciding whether a simple PPHA\xspace in $\mathbb{R}^d$ is stretchable.
For $d=2$, \textsc{$d$\nobreakdash-Stretchability}\xspace contains the stretchability of simple pseudoline arrangements which is known to be \ensuremath{\exists\mathbb{R}}\xspace-hard~\cite{Mnev1988,Shor1991_Stretchability}.
It is straightforward to prove \ensuremath{\exists\mathbb{R}}\xspace-hardness for all $d\geq 2$.
\begin{theorem}\label{thm:hardnessofstretcha}
\textsc{$d$\nobreakdash-Stretchability}\xspace is \ensuremath{\exists\mathbb{R}}\xspace-hard for all $d\geq 2$.
\end{theorem}
\begin{proof}
We reduce from stretchability of simple pseudoline arrangements, which is \ensuremath{\exists\mathbb{R}}\xspace-hard as shown in~\cite{Mnev1988,Shor1991_Stretchability}.
Consider a simple pseudoline arrangement $L$ in the $x_1x_2$-plane. We consider $d-2$ pairwise orthogonal hyperplanes $h_1,\ldots,h_{d-2}$ whose common intersection is the $x_1x_2$-plane; e.g., the hyperplanes defined by $x_i=0$ for $i=3,\dots, d$. The intersection of these hyperplanes serves as a canvas in which we aim to embed $L$.
We extend each pseudoline $\ell$ of $L$ to a pseudohyperplane $h_\ell$ by extending it orthogonally to all $h_1,\ldots,h_{d-2}$, see \Cref{fig:CanvasHyperplane}.
\begin{figure}[htb]
\centering
\includegraphics{figures/CanvasHyperplane.pdf}
\caption{Extending a simple pseudoline arrangement (dashed) to a partial pseudohyperplane arrangement in $\mathbb{R}^3$. The grey hyperplane is the ``canvas'' hyperplane $h_1$.}
\label{fig:CanvasHyperplane}
\end{figure}
Clearly, the resulting pseudohyperplane arrangement \ensuremath{\mathcal{A}}\xspace can be built in polynomial time. Note that all intersection points of $d$ pseudohyperplanes in \ensuremath{\mathcal{A}}\xspace correspond to intersection points of $L$.
If $L$ is stretchable, \ensuremath{\mathcal{A}}\xspace is clearly stretchable, as the above construction can be applied to the stretched line arrangement of $L$.
If \ensuremath{\mathcal{A}}\xspace is stretchable, $L$ is stretchable, since restricting each hyperplane $h_\ell$ to the intersection of the hyperplanes $h_1,\ldots,h_{d-2}$ yields a line arrangement which is equivalent to $L$.
As we have thus reduced stretchability of simple pseudoline arrangements to \mbox{\textsc{$d$\nobreakdash-Stretchability}\xspace}, this concludes the proof.
\end{proof}
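Concretely, for $d=3$ the construction reads as follows (our explicit coordinates): the canvas hyperplane is $h_1=\{x\in\mathbb{R}^3 : x_3=0\}$, and each pseudoline $\ell$ extends to the pseudohyperplane
\[
h_\ell=\{(x_1,x_2,x_3)\in\mathbb{R}^3 : (x_1,x_2)\in\ell\},
\]
i.e., the cylinder over $\ell$ in the $x_3$-direction, as depicted in \Cref{fig:CanvasHyperplane}.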
\section{Hardness for Halfspaces -- Proof of \texorpdfstring{\Cref{thm:halfspaces}}{Theorem 1}}\label{sec:halfspaces}
\begin{proof}[Proof of \Cref{thm:halfspaces}]
We reduce from \textsc{$d$\nobreakdash-Stretchability}\xspace. Let $\ensuremath{\mathcal{A}}\xspace$ be a simple PPHA\xspace. For an example consider \Cref{fig:hypergraphConstruction}. In a first step, we insert a parallel twin $\ell'$ for each pseudohyperplane\xspace~$\ell$. The twin is close enough to $\ell$ such that $\ell$ and $\ell'$ have the same intersection pattern. Since $\ell$ and $\ell'$ are parallel, they do not intersect each other. This yields an arrangement $\ensuremath{\mathcal{A}}\xspace'$.
In a second step, we introduce a point in each $d$-dimensional cell of $\ensuremath{\mathcal{A}}\xspace'$; each point represents a vertex in our hypergraph $H$.
Lastly, we define a hyperedge for each pseudohyperplane\xspace $\ell$ of $\ensuremath{\mathcal{A}}\xspace'$: The hyperedge contains all of the points that lie on the same side of the pseudohyperplane\xspace as its twin pseudohyperplane\xspace.
Note that we define a hyperedge for every pseudohyperplane\xspace of $\ensuremath{\mathcal{A}}\xspace'$, including the twins inserted in the first step.
\begin{figure}[htb]
\centering
\includegraphics[page=1]{figures/halfspaces.pdf}\hfill
\includegraphics[page=3]{figures/halfspaces.pdf}\hfill
\includegraphics[page=4]{figures/halfspaces.pdf}
\caption{Illustration for the proof of \Cref{thm:halfspaces}. Construction of the hypergraph $H$ from a simple (partial) pseudohyperplane\xspace arrangement $\ensuremath{\mathcal{A}}\xspace$.}
\label{fig:hypergraphConstruction}
\end{figure}
It remains to show that $H$ is representable by halfspaces if and only if $\ensuremath{\mathcal{A}}\xspace$ is stretchable.
If $\ensuremath{\mathcal{A}}\xspace$ is stretchable, the construction of a representation of $H$ is straightforward: Consider a hyperplane arrangement \ensuremath{\mathcal{B}}\xspace which is a stretching of $\ensuremath{\mathcal{A}}\xspace$.
Then, for each hyperplane, we add a parallel hyperplane very close, so that their intersection patterns coincide. This results in a hyperplane arrangement $\ensuremath{\mathcal{B}}\xspace'$. We now prove that every $d$-dimensional cell of $\ensuremath{\mathcal{A}}\xspace'$ must also exist in $\ensuremath{\mathcal{B}}\xspace'$.
First, note that each such cell corresponds to a cell of $\ensuremath{\mathcal{A}}\xspace$, which has at least one vertex on its boundary. All vertices of $\ensuremath{\mathcal{A}}\xspace$ exist in $\ensuremath{\mathcal{B}}\xspace$ by definition of a stretching. Furthermore, the subarrangement of the $d$ hyperplanes in $\ensuremath{\mathcal{B}}\xspace$ intersecting in this vertex must be simple, since their intersection could not be $0$-dimensional otherwise. In the twinned hyperplane arrangement $\ensuremath{\mathcal{B}}\xspace'$, all $3^d$ of the $d$-dimensional cells incident to this vertex (a cell is given by the following choice for each of the hyperplane pairs: above both hyperplanes, between the hyperplanes, or below both hyperplanes) must exist. This proves that all $d$\nobreakdash-dimensional cells of $\ensuremath{\mathcal{A}}\xspace'$ also exist in $\ensuremath{\mathcal{B}}\xspace'$. Inserting a point in each such $d$-dimensional cell and considering the (correct) halfspaces bounded by the hyperplanes of $\ensuremath{\mathcal{B}}\xspace'$ yields a representation of $H$.
We now consider the reverse direction. Let $(P,\mathcal H)$ be a tuple of points and halfspaces representing $H$.
Let $h_{i,1}$ and $h_{i,2}$ be the two halfspaces associated with a pseudohyperplane\xspace~$\ell_i$ of $\ensuremath{\mathcal{A}}\xspace$.
Let $p_i$ denote the $(d-1)$-dimensional hyperplane bounding $h_{i,1}$. We show that the family $\{p_i\}_i$ of these hyperplanes is a stretching of $\ensuremath{\mathcal{A}}\xspace$.
For each intersection point $q$ of $d$ pseudohyperplane\xspace{s} $\ell_1,\dots \ell_d$ in $\ensuremath{\mathcal{A}}\xspace$, we consider the corresponding $2d$ pseudohyperplane\xspace{s} in $\ensuremath{\mathcal{A}}\xspace'$. The PPHA\xspace $\ensuremath{\mathcal{A}}\xspace'$ contains $3^d$ $d$-dimensional cells incident to their $2^d$ intersections; each of which contains a point.
We first show that the associated halfspaces must induce at least $3^d$ cells, one of which is bounded and represents the intersection point, see also \Cref{fig:HalfspacesStretchability}:
These $3^d$ points have pairwise distinct patterns of whether or not they are contained in each of the $2d$ halfspaces. Thus, these points need to lie in distinct cells of the arrangement of halfspaces, which proves the claim.
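For concreteness, consider the smallest case $d=1$ (an example we add here): a twin pair of halfspaces $h=\{x\le b\}$ and $h'=\{x\le b'\}$ with $b<b'$ creates $3^1=3$ cells, and points $x_1<b<x_2<b'<x_3$ realize the three distinct containment patterns
\[
x_1\in h\cap h',\qquad x_2\in h'\setminus h,\qquad x_3\notin h\cup h'.
\]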
\begin{figure}[htb]
\centering
\includegraphics[page=5]{figures/halfspaces.pdf}\hfil
\includegraphics[page=6]{figures/halfspaces.pdf}\hfil
\includegraphics[page=8]{figures/halfspaces.pdf}
\caption{Illustration for the proof of \Cref{thm:halfspaces}. Representability of $H$ implies stretchability of~\ensuremath{\mathcal{A}}\xspace.}
\label{fig:HalfspacesStretchability}
\end{figure}
Moreover, every point in $P$ belongs to exactly one of these $3^d$ cells. In particular, the central bounded cell, denoted by $c(q)$, contains exactly one point of $P$.
Now, we argue that the complete cell $c(q)$ (and thus in particular the intersection point of the hyperplanes representing $q$) lies on the correct side of each hyperplane $p$ in $\{p_i\}_i$. Note that, by construction of the hypergraph $H$, the $3^d$ points of $q$ lie on the same side of $p$. Suppose for a contradiction that $p$ intersects $c(q)$. Then there exist two unbounded cells incident to $c(q)$ which lie on different sides of $p$; these cells can be identified by translating $p$ until it intersects $c(q)$ only in the boundary. This yields a contradiction to the fact that the $3^d$ points of $q$ lie on the same side of $p$.
We conclude that each intersection point of $d$ pseudohyperplanes in \ensuremath{\mathcal{A}}\xspace also exists in the arrangement $\{p_i\}_i$ and lies on the correct side of all hyperplanes. Thus, $\{p_i\}_i$ is a stretching of \ensuremath{\mathcal{A}}\xspace and \ensuremath{\mathcal{A}}\xspace is stretchable.
\end{proof}
\section{Hardness for Convex, \text{Bi-curved}\xspace, and \text{Difference-separable}\xspace Sets -- Proof of \texorpdfstring{\Cref{thm:translates}}{Theorem 2}}\label{sec:niceshapes}
We are now going to prove the hardness part of \Cref{thm:translates}.
To this end, consider any fixed convex, \text{bi-curved}\xspace, and \text{difference-separable}\xspace set $C$ in $\mathbb{R}^d$. Note that we can assume $C$ to be full-dimensional, since otherwise each connected component would live in some lower-dimensional affine subspace, with no interaction between such components.
We use the same reduction from the problem \textsc{$d$\nobreakdash-Stretchability}\xspace as in the proof for halfspaces in the previous section and show that the constructed hypergraph $H$ is representable by translates of $C$ if and only if the given PPHA\xspace \ensuremath{\mathcal{A}}\xspace is stretchable.
\begin{lemma}\label[lemma]{lem:if}
If \ensuremath{\mathcal{A}}\xspace is stretchable, $H$ is representable by translates of $C$.
\end{lemma}
\begin{proof}
We assume that \ensuremath{\mathcal{A}}\xspace is stretchable. We already proved in the previous section that there then exists an arrangement of hyperplanes, in which we can create a twin of each hyperplane (with a tiny distance $\alpha$ between the twins), and in which we can place all the vertices of $H$ in the appropriate $d$-dimensional cells. If a vertex is placed between two twin hyperplanes, we assume it to be equidistant to them. As before, we denote this arrangement of hyperplanes and points by $\ensuremath{\mathcal{B}}\xspace'$.
Let $v$ be the unit vector certifying that $C$ is \text{bi-curved}\xspace; recall the definition in \Cref{sec:introduction}. Because $C$ is smooth at the touching points of the tangent hyperplanes with normal vector~$v$, there exists $\varepsilon>0$ such that any unit vector $w$ with $\lVert w-v \rVert_2\leq\varepsilon$ also fulfills the conditions to certify that $C$ is \text{bi-curved}\xspace.
We now assume that $\ensuremath{\mathcal{B}}\xspace'$ fulfills the following properties:
\begin{enumerate}
\item the normal vectors of all hyperplanes have distance at most $\varepsilon$ to $v$ or to $-v$;
\item every intersection point of $d$ hyperplanes, as well as every point representing a vertex of~$H$, is contained in $[-1,1]^d$.
\end{enumerate}
Both properties can be achieved by applying some affine transformation with positive determinant, thus preserving the combinatorial structure of $\ensuremath{\mathcal{B}}\xspace'$.
To represent the hyperedges of $H$, we will now use very large copies of $C$.
Note that technically we are not allowed to scale $C$, but scaling $C$ by a factor $f$ is equivalent to scaling the arrangement by a factor $1/f$. Let $C^f$ be the set $C$ scaled by factor $f$.
In order to determine the necessary scaling factor $f$, we consider the curvature of $C^f$ in all the points where the tangent hyperplanes of $C^f$ with normal vector~$w$ for~$\lVert w-v\rVert_2\leq \varepsilon$ intersect $C^f$. In each such tangent hyperplane $h$ with (unit) normal vector~$w$, we draw a $(d-1)$-ball $B$ of radius $10\sqrt{d}$ around the touching point $h\cap C^f$. Note that $10\sqrt{d}$ is larger than the length of any line segment contained in the box $[-1,1]^d$. Now,~$f$ has to be large enough such that $C^f$ contains every point $p+w\cdot \lambda$, for~$p\in B$ and $\alpha/10\leq \lambda\leq 10\sqrt{d}$.
This ensures that the boundary of $C^f$ does not curve away from the tangent hyperplane too quickly, and that~$C^f$ is ``thick''. In other words, $C^f$ locally behaves like an only very slightly curved halfspace. See \Cref{fig:largeness} for an illustration of this requirement on $C^f$.
\begin{figure}[htb]
\centering
\includegraphics{figures/largeness.pdf}
\caption{An illustration of the requirement on the scaling factor $f$. The set $C^f$ must contain the grey region.}
\label{fig:largeness}
\end{figure}
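For intuition, take $C$ to be the unit ball in $\mathbb{R}^d$ (our worked example, not needed for the proof): then $C^f$ is a ball of radius $f$, and at distance $r\le 10\sqrt{d}$ from the touching point the boundary sags below the tangent hyperplane by
\[
f-\sqrt{f^2-r^2}=\frac{r^2}{f+\sqrt{f^2-r^2}}\le\frac{100d}{f},
\]
which is at most $\alpha/10$ as soon as $f\ge 1000d/\alpha$; the depth condition $\lambda\le 10\sqrt{d}$ is immediate for such $f$.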
We now replace each hyperplane $h$ of the arrangement $\ensuremath{\mathcal{B}}\xspace'$ by a translate $C_h^f$ of $C^f$, placed such that $h$ is a tangent hyperplane of $C_h^f$, the single point $h\cap C_h^f$ lies within the box $[-1,1]^d$, and $C_h^f$ lies completely to the side of $h$ containing its twin hyperplane. It remains to prove that $C_h^f$ contains exactly those points of $\ensuremath{\mathcal{B}}\xspace'$ which are on this side of $h$. Firstly, $C_h^f$ cannot contain more points, since $C_h^f$ is a subset of the halfspace delimited by $h$ containing its twin hyperplane. Secondly, we claim that $C_h^f$ contains all these points. To see this, note that within the box $[-1,1]^d$ containing all points, the boundary of $C_h^f$ is close enough to $h$ that $C_h^f$ must contain all points between $h$ and its twin, since these points are located equidistant to the two hyperplanes. Furthermore, all points on the other side of the twin hyperplane are also contained in $C_h^f$ since within the box $[-1,1]^d$, the boundary $\delta(C_h^f)$ lies completely between $h$ and its twin hyperplane.
\end{proof}
\begin{lemma}\label[lemma]{lem:onlyif}
If the hypergraph $H$ is representable by translates of $C$, then
\ensuremath{\mathcal{A}}\xspace is stretchable.
\end{lemma}
\begin{proof}
Assume $H$ is representable.
By construction, the two translates $C_{i,r},C_{i,l}$ of $C$ corresponding to the two hyperedges of each pseudohyperplane\xspace $\ell_i$ must intersect as they contain at least one common point. We call their convex intersection the \emph{lens} of this pseudohyperplane\xspace.
For each pseudohyperplane\xspace $\ell_i$ of $\ensuremath{\mathcal{A}}\xspace$, we consider some hyperplane $p_i$ which separates $C_{i,r}\setminus C_{i,l}$ from $C_{i,l}\setminus C_{i,r}$. Such a hyperplane exists since $C$ is \text{difference-separable}\xspace.
Let $\mathcal P:=\{p_i\}_i$ be the hyperplane arrangement consisting of all these separators.
We aim to show that $\mathcal P$ is a stretching of $\ensuremath{\mathcal{A}}\xspace$.
To this end, consider $d$ pseudohyperplanes $\ell_1,\ldots,\ell_{d}$ which intersect in $\ensuremath{\mathcal{A}}\xspace$.
For an illustration consider \Cref{fig:stretcha}.
Furthermore, consider one more pseudohyperplane $\ell'$, and let $p'$, $C_r'$, $C_l'$ denote the separator hyperplane and translates of $C$ corresponding to $\ell'$. We show that the intersection $I_p:=p_1\cap \ldots\cap p_d$ is a single point and that it lies on the side of $p'$ corresponding to the side of $\ell'$ on which $I_\ell:=\ell_1\cap\ldots\cap\ell_d$ lies.
\begin{figure}[htb]
\centering
\includegraphics[page=1]{figures/thm2stretcha.pdf}
\caption{Illustration for the proof of \Cref{lem:onlyif}.
Left: pseudohyperplanes $\ell_1,\dots,\ell_d,\ell'$ in $\ensuremath{\mathcal{A}}\xspace$.
Right: corresponding hyperplanes $p_1,\dots,p_d,p'$ in $\mathcal P$.}
\label{fig:stretcha}
\end{figure}
The hyperplane $p'$ divides the space into two halfspaces $h_r$ and $h_l$ such that $C_r' \backslash C_l'\subseteq h_r$ and $C_l' \backslash C_r'\subseteq h_l$.
By construction, the two hyperedges defined for $\ell'$ cover all vertices of $H$ and the vertices in the cells around $I_\ell$ belong to only one hyperedge.
Suppose without loss of generality that these vertices only belong to the hyperedge represented by $C_l'$.
We will show that the intersection $I_p$ must then be a point in $h_l$.
We first show that the intersection $I_p$ is a point, i.e., $0$-dimensional. Consider all $2^d$ $d$-dimensional cells of \ensuremath{\mathcal{A}}\xspace around $I_\ell$. The construction of $H$ implies that each such cell contains a distinct point, and these points must all lie in distinct cells of the sub-arrangement of the involved hyperplanes $p_1,\ldots, p_d$. If $I_p$ were not a single point, this sub-arrangement would not be simple, and the hyperplanes would divide space into strictly fewer than $2^d$ cells, a contradiction.
\begin{figure}[htb]
\centering
\includegraphics[page=2]{figures/thm2stretcha.pdf}
\caption{Illustration for the proof of \Cref{lem:onlyif}. The cone $S$ must intersect $C'_l\setminus C'_r$, which contradicts $I_p$ lying in $h_r$.
}
\label{fig:stretcha2}
\end{figure}
Next we prove that $I_p$ is in $h_l$. Assume towards a contradiction that $I_p$ is in $h_r$, see also \Cref{fig:stretcha2}.
Consider the $d$ lines that are formed by the intersections of subsets of $d-1$ hyperplanes among $p_1,\ldots,p_{d}$. Each of these lines is the union of two rays beginning at $I_p$. Observe that the hyperplane $p'$ can only intersect one of the two rays forming each line.
Let $S$ be the convex cone centered at $I_p$ defined by the $d$ non-intersected rays. Observe that $S$ does not intersect $p'$, so $S$ must be fully contained in $h_r$, i.e., $S \cap h_l = \emptyset$. Note, however, that by the construction of the hypergraph, there must be a point that lies in $S \cap (C_l'\setminus C_r')\subseteq S\cap h_l$, which is a contradiction.
We conclude that $\mathcal P$ is a stretching of \ensuremath{\mathcal{A}}\xspace, and thus \ensuremath{\mathcal{A}}\xspace is stretchable.
\end{proof}
\Cref{lem:if,lem:onlyif} combined now yield hardness of \textsc{Recognition}\xspace{}(\ensuremath{\mathcal{F}}\xspace) for the family \ensuremath{\mathcal{F}}\xspace of translates of $C$.
|
{
"arxiv_id": "2302.13592",
"language": "en",
"timestamp": "2023-02-28T02:24:51",
"url": "https://arxiv.org/abs/2302.13592",
"yymm": "2302"
} | \section{Introduction}
Let $\Qal_3$ be an algebraic closure of $\Q_3$ and $G=\Gal({\Qal_3}/{\Q_3})$ its absolute Galois group.
Given an elliptic curve ${E}/{\Q_3}$, let $E[3^n]$ denote its group of $3^n$-torsion points with values in $\Qal_3$ and
\[
T_3(E)=\underset{P\mapsto 3P}{\varprojlim} E[3^n]
\]
its $3$-adic Tate module. It is a free $\Z_3$-module of rank $2$ with a continuous and linear action of $G$.
The $3$-adic representation of $G$ associated to $E$ (also called Tate module) is
\[
V_3(E)=\Q_3\otimes_{\Z_3}T_3(E).
\]
A $3$-adic representation $V$ of $G$ arises from an elliptic curve over $\Q_3$ if there exists ${E}/{\Q_3}$ such
that $V\simeq V_3(E)$. We wish to classify all $3$-adic representations arising from elliptic curves over $\Q_3$ up to isomorphism with an additional condition:
the considered elliptic curve needs to have potential good reduction, that is, it acquires good reduction over a finite extension
of $\Q_3$. Such curves have nice geometric properties which are carried over to the representations. Indeed, it is well known that the Tate module of an elliptic curve with potential good reduction is potentially crystalline.
Such representations are completely determined --- via the contravariant functor $\mathbf{D}_{\mathrm{pcris}}^{*}$ --- by their
associated filtered $(\phi,\Gal({K}/{\Q_3}))$-module, a purely semilinear object.
\vspace{\baselineskip}
Let ${E}/{\Q_3}$ be an elliptic curve acquiring good reduction over a finite Galois extension ${K}/{\Q_3}$ of maximal unramified
subfield $K_0$ such that $e=e({K}/{\Q_3})$ is minimal. Let $\D=(D,\Fil)$ be its associated filtered $(\phi,\Gal({K}/{\Q_3}))$-module,
$D_0$ the subspace of elements of $D$ fixed by $\Gal({K_0}/{\Q_3})$ and $\phi_0=\phi\restriction_{D_0}$ the $\Q_3$-linear restriction
of $\phi$. We denote by $\W(\D)$ the Weil representation associated to $\D$. It is known that $\D$ satisfies the following
properties:
\vspace{\baselineskip}
\begin{enumerate}[label= (\arabic*)]
\item $P_{\Char}(\phi_0)(X)=X^2+a_3X+3$, with $a_3\in\{-3,-2,-1,0,1,2,3\}$ (see the computation after this list)
\item $\W(\D)$ is defined over $\Q$
\item $\bigwedge^2_{K_0}\D=K_0\{-1\}$
\item $\D$ is of Hodge-Tate type $(0,1)$.
\end{enumerate}
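The range of traces in $(1)$ is forced by the Weil bound; spelling out the arithmetic, $a_3$ is an integer with
\[
\vert a_3\vert_{\infty}\le 2\sqrt{3}\approx 3.46,
\]
so that $a_3\in\{-3,-2,-1,0,1,2,3\}$ is the only possibility.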
\vspace{\baselineskip}
These conditions alone are sufficient to guarantee that a $2$-dimensional $3$-adic representation of $G$ comes from an elliptic curve
over $\Q_3$ in the case of tame potential good reduction (see~\cite{Vo2}, \S 5.4). It is not known yet whether these are
sufficient in the presence of wild potential good reduction as well; however, they are still necessary.
Starting from these conditions and imposing geometric descent data, we provide a list of isomorphism classes of
filtered $(\phi,\Gal({K}/{\Q_3}))$-modules. Then we show that every object in the list arises from an elliptic curve over $\Q_3$.
\vspace{\baselineskip}
Some of the classes described in this paper can directly be deduced from the $p\ge 5$ case when $(e,p)=1$ (see~\cite{Vo2}).
To the best of our knowledge, the case
of $\ell$-adic representations ($\ell\neq 3$) --- which is encoded in terms of unfiltered $(\phi,\Gal({K}/{\Q_3}))$-modules --- has not
been completely treated in the literature. However, some specific cases can be found (see~\cite{Co} for $e=12$).
Our genuinely new results are the cases of wild potential good reduction ($e=3,6$ and $12$), with $e=12$ being the first case
of non abelian inertia. We provide proofs in the tame case for the sake of completeness. The classification is summarized
in Table~\ref{tab:Table 1}.
\vspace{\baselineskip}
\renewcommand{\arraystretch}{1.3}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|l|c|}
\hline
$e$ & Reduction type & $K$ & Frobenius & Filtered $(\phi,\Gal({K}/{\Q_3}))$-module & \#Classes\\
\hline
$1$ & Supersingular & $\Q_3$ & $a_3=-3$ & $\Dc(1;-3;0)$ & $1$ \\
\cline{4-6}
& & & $a_3=0$ & $\Dc(1;0;0)$ & 1 \\
\cline{4-6}
& & & $a_3=3$ & $\Dc(1;3;0)$ & 1 \\
\cline{2-6}
& Ordinary & $\Q_3$ & $a_3=-2$ & $\Dc(1;-2;\alpha)$, $\alpha\in\{0,1\}$ & 2\\
\cline{4-6}
& & & $a_3=-1$ & $\Dc(1;-1;\alpha)$, $\alpha\in\{0,1\}$ & 2 \\
\cline{4-6}
& & & $a_3=1$ & $\Dc(1;1;\alpha)$, $\alpha\in\{0,1\}$ & 2 \\
\cline{4-6}
& & & $a_3=2$ & $\Dc(1;2;\alpha)$, $\alpha\in\{0,1\}$ & 2 \\
\hline
$2$ & Supersingular & $\Q_3(\sqrt{3})$ & $a_3=-3$ & $\Dc(2;-3;0)$ & 1\\
\cline{4-6}
& & & $a_3=0$ & $\Dc(2;0;0)$ & 1 \\
\cline{4-6}
& & & $a_3=3$ & $\Dc(2;3;0)$ & 1 \\
\cline{2-6}
& Ordinary & $\Q_3(\sqrt{3})$ & $a_3=-2$ & $\Dc(2;-2;\alpha)$, $\alpha\in\{0,1\}$ & 2\\
\cline{4-6}
& & & $a_3=-1$ & $\Dc(2;-1;\alpha)$, $\alpha\in\{0,1\}$ & 2 \\
\cline{4-6}
& & & $a_3=1$ & $\Dc(2;1;\alpha)$, $\alpha\in\{0,1\}$ & 2 \\
\cline{4-6}
& & & $a_3=2$ & $\Dc(2;2;\alpha)$, $\alpha\in\{0,1\}$ & 2 \\
\hline
$4$ & Supersingular & $\Q_3(\sqrt[4]{3})$ & $a_3=0$ & $\Dpc(4;0;\alpha), \alpha\in\Proj^1(\Q_3) $ & $\Proj^1(\Q_3)$ \\
\hline
$3$ & Supersingular & $\Lng(\zeta_4)$ & $a_3=0$ & $\Dpcng(3;0;\alpha), \alpha\in \mathcal{M}_3^{\mathrm{na}}$ & $\Proj^1(\Q_3)$ \\
\cline{3-6}
& & $\Lg=\Q_3(\pi)$ & $a_3=-3$ & $\Dpcg(3;-3,\mu;\pi), \mu\in\{1,2\}$ & 2 \\
\cline{4-6}
& & & $a_3=0$ & $\Dpcg(3;0,\mu;\pi), \mu\in\{-1,1\}$ & 2 \\
\cline{4-6}
& & & $a_3=3$ & $\Dpcg(3;3,\mu;\pi), \mu\in\{-2,-1\}$ & 2 \\
\hline
$6$ & Supersingular & $\Lng(\zeta_4,\sqrt{3})$ & $a_3=0$ & $\Dpcng(6;0;\alpha), \alpha\in \mathcal{M}_6^{\mathrm{na}}$ & $\Proj^1(\Q_3)$ \\
\cline{3-6}
& & $\Lg(\sqrt{3})$ & $a_3=-3$ & $\Dpcg(6;-3,\mu;\pi), \mu\in\{1,2\}$ & 2 \\
\cline{4-6}
& & & $a_3=0$ & $\Dpcg(6;0,\mu;\pi), \mu\in\{-1,1\}$ & 2 \\
\cline{4-6}
& & & $a_3=3$ & $\Dpcg(6;3,\mu;\pi), \mu\in\{-2,-1\}$ & 2 \\
\hline
$12$ & Supersingular & $K_1$ & $a_3=0$ & $\Dpc(12;0;1;\epsilon;\alpha), \alpha\in\mathcal{M}_{12}^{1,\epsilon}, \epsilon\in\{0,1\}$ & $\Proj^1(\Q_3)$ \\
\cline{3-6}
& & $K_2$ & $a_3=0$ & $\Dpc(12;0;2;\epsilon;\alpha), \alpha\in \mathcal{M}_{12}^{2,\epsilon}, \epsilon\in\{0,1\}$ & $\Proj^1(\Q_3)$ \\
\cline{3-6}
& & $K_3$ & $a_3=0$ & $\Dpc(12;0;3;\epsilon;\alpha), \alpha\in \mathcal{M}_{12}^{3,\epsilon}, \epsilon\in\{0,1\}$ & $\Proj^1(\Q_3)$ \\
\cline{3-6}
& & $K_4$ & $a_3=0$ & $\Dpc(12;0;4;\epsilon;\alpha), \alpha\in \mathcal{M}_{12}^{4,\epsilon}, \epsilon\in\{0,1\}$ & $\Proj^1(\Q_3)$ \\
\cline{3-6}
& & $K_5$ & $a_3=0$ & $\Dpc(12;0;5;\epsilon;\alpha), \alpha\in \mathcal{M}_{12}^{5,\epsilon}, \epsilon\in\{0,1\}$ & $\Proj^1(\Q_3)$ \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:Table 1}Isomorphism classes of filtered $(\phi,\Gal({K}/{\Q_3}))$-modules arising from elliptic curves over $\Q_3$ with potential good reduction.}
\end{table}
Notice the appearance of the new supersingular traces $a_3=\pm 3$, a specificity of $p=3$ (compared to $p\ge 5$) and an obvious consequence of the Weil
conjectures. One could think they should naturally appear every time the reduction is supersingular, and yet this is not the case.
The reason behind this absence lies in the automorphism group $\Aut_{\Fd_3}(\Et)$ of ${\Et}/{\Fd_3}$: it behaves differently
depending on the trace of the Frobenius. Furthermore, we now need to deal with multiple fields of good reduction.
Indeed, wild finite extensions of $\Q_3^{\mathrm{un}}$ are not unique, as opposed to the tame ones. This leads to interesting
new phenomena. The case $e=12$ is uniform: the five fields are almost indistinguishable. When $e=3$ the situation differs
between the two possible fields. The non abelian extension has only one possible value for the Frobenius and an infinity of
isomorphism classes. The abelian extension, on the other hand, realizes every supersingular value but only two classes for each.
Let us finally mention that the ordinary cases have simply disappeared when $e>2$, again a feature of elliptic curves
over $\Fd_3$. Notations for the filtered $(\phi,\Gal({K}/{\Q_3}))$-modules and their sets of parameters are detailed in the
fourth section.
\section{Theoretical background}
Let $G_{\Q_p}=\Gal({\Qal_p}/{\Q_p})$ be the absolute Galois group of $\Q_p$; we denote by $\Q_p^{\mathrm{un}}$ its maximal unramified
extension and $I_{\Q_p}=\Gal({\Qal_p}/{\Q_p}^{\mathrm{un}})$ its inertia subgroup.
\subsection{Elliptic curves}
Let ${E}/{\Q_p}$ be an elliptic curve (we refer to~\cite{Si} for anything related to elliptic curves).
One can assume, given a suitable change of coordinates, that the coefficients of a Weierstrass equation of $E$ are in $\Z_p$ and
that the valuation of its discriminant is minimal. A Weierstrass equation satisfying these two properties is called minimal.
Suppose $E$ is given by a minimal Weierstrass equation; reducing each coefficient modulo $p$, we obtain a curve ${\Et}/{\Fd_p}$.
The reduced curve need not be an elliptic curve itself, in fact it will be if and only if
$v(\Delta)=0$ (i.e. $\Delta(\Et)=\Delta\bmod p\Z_p\neq 0$). When the reduced curve is an elliptic curve we say that $E$ has
good reduction. Let ${L}/{\Q_p}$ be a finite extension of $\Q_p$ and consider $E_L=E\times_{\Q_p}L$ the extension of $E$ to $L$.
Allowing changes of coordinates defined over $L$ may give us a minimal model of $E_L$ with $v(\Delta_L)=0$, so that $E_L$ has
good reduction. When there exists such an extension we say that $E$ has potential good reduction. This property only depends
on the action of inertia, which means we can choose ${L}/{\Q_p}$ to be totally ramified, so that the reduced curve ${\Et_L}$ is defined over $\Fd_p$. Denote
by $a_p(E)=a_p(\Et_L)$ the trace of the Frobenius endomorphism acting on $\Vl(\Et_L)$
for some $\ell\neq p$. It is known that $a_p(E)$ is an integer independent of $\ell$
satisfying $\vert a_p(E)\vert_{\infty}\le 2\sqrt{p}$ as well as an invariant of the isogeny class of $\Et_L$ over $\Fd_p$
(see~\cite{Ho-Ta}). Furthermore we have the following relation:
\[
a_p(\Et_L)=p+1-\# \Et_L(\Fd_p).
\]
We say that $\Et_L$ is ordinary when $(p,a_p(\Et_L))=1$, and supersingular when $p\mid a_p(\Et_L)$.
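As a concrete illustration of this relation (a computational sketch we add; the curve models below are hypothetical examples), one can determine $a_3$ by brute-force point counting over $\Fd_3$; in characteristic $3$ one may complete the square and work with a model $y^2=x^3+a_2x^2+a_4x+a_6$:
\begin{verbatim}
# Sketch: a_p = p + 1 - #E(F_p) for p = 3, by brute-force point counting
# on a curve y^2 = x^3 + a2*x^2 + a4*x + a6 over F_3.
p = 3

def a_p(a2, a4, a6):
    count = 1                              # the point at infinity
    for x in range(p):
        rhs = (x**3 + a2*x**2 + a4*x + a6) % p
        count += sum(1 for y in range(p) if (y * y) % p == rhs)
    return p + 1 - count

print(a_p(0, 1, 0))    # y^2 = x^3 + x:     a_3 =  0 (supersingular)
print(a_p(0, -1, 1))   # y^2 = x^3 - x + 1: a_3 = -3 (supersingular)
\end{verbatim}
Both values agree with supersingular traces appearing in Table~\ref{tab:Table 1}.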
\subsection{$\ell$-adic Galois representations}
Let $p,\ell$ be distinct prime numbers. An $\ell$-adic representation of $G_{\Q_p}$
(or $\Ql[G_{\Q_p}]$-module) is a finite dimensional $\Ql$-vector space with a linear and continuous action of $G_{\Q_p}$.
We denote such an object by $(V,\rho_{\ell})$ where $V$ is a $\Ql$-vector space
and $\rho_{\ell}:G_{\Q_p}\longrightarrow \Aut_{\Ql}(V)$ the group homomorphism describing the action.
If the inertia subgroup $I_{\Q_p}$ of $G_{\Q_p}$ acts trivially on $V$ we say that the representation has good reduction.
In this case it factors through a representation of the absolute Galois group $G_{\Fd_p}$ of $\Fd_p$ and is completely determined
by it. When there exists a finite extension ${L}/{\Q_p}$ such that $I_L$ acts trivially on $V$ we say that the representation
has potential good reduction. One easily checks that having potential good reduction is equivalent to $\rho_{\ell}(I_{\Q_p})$
being finite. Let ${E}/{\Q_p}$ be an elliptic curve; the group $G_{\Q_p}$ acts on $E(\Qal_p)$ by acting on the coordinates of its
points.
Since addition is $G_{\Q_p}$-equivariant, the group of $n$-torsion points $E[n]$ of $E(\Qal_p)$ is stable by action of $G_{\Q_p}$
and we define the $\ell$-adic Tate module associated to $E$ by
\[
\Tl(E)=\varprojlim_{P\mapsto\ell P} E[\ell^n].
\]
It is a free $\Zl$-module of rank $2$ equipped with a continuous and $\Zl$-linear action of $G_{\Q_p}$.
Tensoring by $\Ql$ we get $\Vl(E)=\Ql\otimes\Tl(E)$, an $\ell$-adic representation of $G_{\Q_p}$. It is well known
that $\Vl(E)$ has (potential) good reduction if and only if $E$ has (potential) good reduction. If ${E}/{\Q_p}$ is an elliptic curve
with potential good reduction, there exists a unique finite extension $M_{E}/{\Q_p}^{\mathrm{un}}$ of minimal degree over
which $E$ acquires good reduction. We call that minimal degree the semi-stability defect of $E$, denoted by $\dst(E)$.
Consider $\Vl(E)$ the $\ell$-adic representation associated to $E$; since $E$ has potential good reduction there
exists ${L}/{\Q_p}$ finite of minimal ramification index satisfying $\rho_{E,\ell}(I_L)=\{1\}$, and it is then easy to see that
\[
M_E=\Qal_p^{\ker(\rho_{E,\ell}\restriction_{I_L})}.
\]
If ${L}/{\Q_p}$ is a finite extension with $L^{\mathrm{un}}=M_E$ then $E$ acquires good reduction over $L$ and $\dst(E)=e({L}/{\Q_p})$;
it is the minimal ramification index among all good reduction fields of $E$. It is also worth noticing that
if $L,{L^{\prime}}/{\Q_p}$ satisfy $L^{\mathrm{un}}={(L^{\prime})}^{\mathrm{un}}$, then they are interchangeable in the sense
that ${E}/{\Q_p}$ acquires good reduction over $L$ if and only if it acquires good reduction over $L^{\prime}$. Furthermore,
we know that $\dst(E)\in\{1,2,3,4,6,12\}$ and $\rho_{E,\ell}(I_{\Q_p})$ is either a cyclic group of order $1,2,3,4,6$ or the
non Abelian semi-direct product of a cyclic group of order $4$ by a group of order $3$ (see~\cite{Se1},\S 5.6). The degree of a
minimal good reduction field is thus bounded by the order of the image of inertia, and the structure of its inertia subgroup is known.
\subsection{Filtered $(\phi,\Gal({K}/{\Q_p}))$-modules}
Let ${K}/{\Q_p}$ be a finite Galois extension, $K_0$ the maximal unramified extension of $\Q_p$ inside $K$ and $G_K=\Gal(\Qal_p/K)$
its absolute Galois group. Denote by $\sigma$ the absolute Frobenius on $K_0$. A filtered
$(\phi,\Gal({K}/{\Q_p}))$-module $\D=(D,\Fil)$ is a finite dimensional $K_0$-vector space $D$ together with:
\begin{enumerate}[label= (\roman*)]
\item a $\sigma$-semilinear action of $\Gal({K}/{\Q_p})$
\item a $\sigma$-semilinear, $\Gal({K}/{\Q_p})$-equivariant and bijective Frobenius $\phi:D\tilde{\longrightarrow}D$
\item a filtration $\Fil={(\Fil^{i}D_K)}_{i\in\Z}$ on $D_K=K\otimes_{K_0}D$ by $\Gal({K}/{\Q_p})$-stable subspaces
such that $\Fil^{i}D_K=D_K$ for $i\ll 0$ and $\Fil^{i}D_K=0$ for $i\gg 0$.
\end{enumerate}
Such objects form a category we will denote by $\MF_{\phi}(G_{\Q_p})$. The morphisms are the $K_0$-linear maps $f$ commuting
to the Frobenius and the action of $\Gal({K}/{\Q_p})$ as well as preserving the filtration
(i.e. $f_K(\Fil^{i}D_K)\subseteq\Fil^{i}D^{\prime}_K$). The Tate twist $\D\{-1\}$ of $\D$ is the $K_0$-vector space $D$ with
the same action of $\Gal({K}/{\Q_p})$, $\phi\{-1\}=p\phi$ and $\Fil^i{(D\{-1\})}_K=\Fil^{i-1}D_K$. We say that $\D$ is
of Hodge-Tate type $(0,1)$ if $\Fil^{i}D_K=D_K$ for $i\le 0$, $\Fil^{i}D_K=0$ for $i\ge 2$ and $\Fil^1D_K$ is
a non trivial subspace of $D_K$. Recall the construction of $\Bcris$ and the notion of
(weak) admissibility in~\cite{Fo1}.
Let $V$ be a $p$-adic representation of $G_{\Q_p}$, one can associate to $V$ a
filtered $(\phi,\Gal({K}/{\Q_p}))$-module via the contravariant functor:
\[
\Dcris[K]:\Rep_{\Q_p}(G_{\Q_p}) \longrightarrow \MF_{\phi}(\Gal({K}/{\Q_p})): V\longmapsto \Hom_{\Q_p[G_K]}(V,\Bcris).
\]
The inequality $\dim_{K_0}\Dcris[K](V)\le\dim_{\Q_p}V$ is always verified, and we say that a representation $V$ of $G_{\Q_p}$ is
crystalline over $K$ if the equality holds. Viewing $V$ as a representation of $G_K$ by restriction, $V$ is
potentially crystalline over $K$ as a representation of $G_{\Q_p}$ if and only if it is crystalline as a representation
of $G_K$. This functor establishes an anti-equivalence of categories between the category of $p$-adic representations of
$G_{\Q_p}$ crystalline over $K$ and the category of admissible filtered $(\phi,\Gal({K}/{\Q_p}))$-modules (see~\cite{Fo1}).
The $p$-adic Tate modules of elliptic curves over $\Q_p$ with potential good reduction give rise to such representations; in fact,
the following holds:
\begin{theorem*}[\cite{Co-Io}, Thm.4.7]
Let ${E}/{\Q_p}$ be an elliptic curve,
the $p$-adic representation $V_p(E)$ is (potentially) crystalline if and only if $E$ has (potential) good reduction.
\end{theorem*}
Each filtered $(\phi,\Gal({K}/{\Q_p}))$-module has a linear object naturally attached to it, namely its Weil representation.
Recall that the Weil group of $\Qal_p$ is defined by the short exact sequence
\[
1\longrightarrow I_{\Q_p}\longrightarrow W_{\Q_p} \overset{\nu}{\longrightarrow} \Z\longrightarrow 1
\]
and we let $W_K=G_K\cap W_{\Q_p}$. To every $(\phi,\Gal({K}/{\Q_p}))$-module $\D$ we can associate
a $K_0$-vector space $\Delta$ with a continuous $K_0$-linear action of $W_{\Q_p}$ in the following way:
\[
\rho: W_{\Q_p}\longrightarrow \Aut_{K_0}(\Delta): \omega\longmapsto (\omega\bmod W_K)\phi^{-\nu(\omega)}
\]
where $\Delta=D$ as a $K_0$-vector space. The pair $\W(\D)=(\Delta,\rho)$ is called a Weil representation,
it is defined over $\Q$ if $\Tr(\rho(\omega))\in\Q$ for every $\omega\in W_{\Q_p}$.
\section{Strategy}
Let us fix a semi-stability defect $e\in\{ 1,2,3,4,6,12\}$. The first step is to determine all finite Galois extensions
with ramification index $e$ that arise as a field of good reduction of some elliptic curve defined over $\Q_3$.
We now fix ${K}/{\Q_3}$ to be one such extension. The next step is to describe the list of ($2$-dimensional) filtered
$(\phi,\Gal ({K}/{\Q_3}))$-modules $\D$ that satisfy properties $(1)-(4)$. We then show that given an elliptic curve ${E}/{\Q_3}$ with
potential good reduction over $K$, its associated filtered $(\phi,\Gal ({K}/{\Q_3}))$-module $\Dcris[K] (V_3 (E))$ is necessarily
isomorphic to one object $\D$ of our list. Finally, given an object $\D$ in the list, we need to find an elliptic curve ${E}/{\Q_3}$
such that
\[
\Dcris[K] (V_3 (E))\simeq\D.
\]
One last point requires some discussion. Given an unfiltered $2$-dimensional $(\phi,\Gal ({K}/{\Q_3}))$-module $\D$,
the set of Hodge-Tate type $(0,1)$ filtrations on $\D$ is in bijection with $\Proj^1(\Q_3)$. Indeed, the set
of $\Gal ({K}/{\Q_3})$-stable $K$-lines of $D_K=K\otimes_{K_0}D$ is in bijection with the set of $\Q_3$-lines of
\[
D_K^{\Gal({K}/{\Q_3})}=\{ x\in D_K\vert \forall g \in \Gal({K}/{\Q_3}),g.x=x\}
\]
a $2$-dimensional $\Q_3$-vector space. This means that if $\phi$ has only trivial stable subspaces, there are infinitely
many weakly admissible filtrations on $\D$. In the following we will define sets that parameterize our filtrations. This little
fact ensures that these sets will always be non-empty, even though this might not be clear at first glance.
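Explicitly, if $(f_1,f_2)$ is any basis of the $2$-dimensional $\Q_3$-vector space $D_K^{\Gal({K}/{\Q_3})}$ (a hypothetical choice, for illustration), the bijection sends $\alpha\in\Q_3$ to the $K$-line of $D_K$ generated by $\alpha f_1+f_2$, and $\infty$ to the one generated by $f_1$.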
\section{Classification}
We provide the list of admissible filtered $(\phi,\Gal({K}/{\Q_3}))$-modules satisfying our geometric conditions with ${K}/{\Q_3}$ a
minimal Galois extension of good reduction. We then show that every elliptic curve defined over $\Q_3$ with potential
good reduction has associated filtered $(\phi,\Gal({K}/{\Q_3}))$-module isomorphic to an object of the list.
\subsection{The crystalline case ($e=1$)}
We start our classification with the representations coming from elliptic curves ${E}/{\Q_3}$ with good reduction ($K=\Q_3$).
There are two distinct cases behaving differently depending on the trace of Frobenius $a_3(\Et)$ of ${\Et}/{\Fd_3}$.
\subsubsection{The supersingular case}
Let $a\in\{-3,0,3\}$ and $\alpha\in\Proj^1(\Q_3)$. We denote by $\Dc(1;a;\alpha)$ the filtered $\phi$-module
(of Hodge-Tate type $(0,1)$) defined by:
\begin{itemize}
\item $D=\Q_3 e_1\oplus\Q_3 e_2$
\item $M_B(\phi)=\begin{pmatrix} 0 & -3 \\ 1 & -a \end{pmatrix}$, where $B=(e_1,e_2)$
\item $\Fil^1D=(\alpha e_1 + e_2)\Q_3$.
\end{itemize}
Identifying $\Proj^1(\Q_3)$ with $\Q_3\sqcup\{\infty\}$, we let $\alpha e_1 +e_2=e_1$ when $\alpha=\infty$.
\begin{proposition}
Let $a\in\{-3,0,3\}$ and $\alpha\in\Proj^1(\Q_3)$. The filtered $\phi$-module $\Dc(1;a;\alpha)$ satisfies conditions $(1)-(4)$ and is admissible.
\end{proposition}
\begin{proof}
Condition $(1)$ is obvious and $(4)$ is verified by definition. Only $(2)$ and $(3)$ remain, and these are easily checked
by computation: in the basis $B$ we have $\detm(\phi)=3$, giving $(3)$, and $M_B(\phi)$ has rational entries, so all traces $\Tr(\phi^{n})$, $n\in\Z$, are rational, giving $(2)$.
\end{proof}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with good reduction such that $a_3=a_3(\Et)\in\{-3,0,3\}$ and $\D=\Dcris[\Q_3](V_3(E))$.
There exists an isomorphism of filtered $\phi$-modules between $\D$ and $\Dc(1;a_3;0)$. Moreover, if $a,b\in\{ -3,0,3\}$
then $\Dc(1;a;0)$ and $\Dc(1;b;0)$ are isomorphic if and only if $a=b$.
\end{proposition}
\begin{proof}
Let $D$ (resp. $D^{\prime}$) be the $\Q_3$-vector space associated to $\D$ (resp. $\Dc(1;a_3;0)$).
Let $B=(e_1,e_2)$ and $B^{\prime}=(e_1^{\prime},e_2^{\prime})$ be bases of $D$ and $D^{\prime}$ respectively such that
\[
M_B(\phi)=\begin{pmatrix} 0 & -3 \\ 1 & -a_3 \end{pmatrix}=M_{B^{\prime}}(\phi^{\prime}).
\]
Such a basis of $D$ always exists: since $\D$ satisfies condition $(1)$, we have $P_{\Char}(\phi)(X)=X^2+a_3X+3$, which is irreducible over $\Q_3$ for $a_3\in\{-3,0,3\}$, so every non-zero vector is cyclic for $\phi$.
A $\Q_3$-isomorphism $\eta$ between $D$ and $D^{\prime}$ is $\phi$-equivariant if and only if
\[
M_{B,B^{\prime}}(\eta)\in C(M_B(\phi)),
\]
where $C(M_B(\phi))$ denotes the centralizer of $M_B(\phi)$ in $\mathrm{GL}_2(\Q_3)$.
Notice that since
\[
C(M_B(\phi))=C(\Q_3[M_B(\phi)])
\]
and $P_{\Char}(\phi)(X)$ is irreducible, the Double Centralizer Theorem implies
\[
C(M_B(\phi))=\Q_3[M_B(\phi)]=\Q_3(M_B(\phi)).
\]
In particular, every non zero element of $\Q_3(M_B(\phi))$ is an isomorphism of $\phi$-modules between $(D,\phi)$
and $(D^{\prime},\phi^{\prime})$. Consider $\Fil^1 D=(\alpha e_1 + \beta e_2)\Q_3$, $(\alpha,\beta)\neq (0,0)$.
The matrix
\[
\begin{pmatrix} \alpha & -3\beta \\ \beta & \alpha-a_3\beta \end{pmatrix}
\]
is invertible because the homogeneous polynomial $X^2-a_3XY+3Y^2$ has only the trivial zero in ${(\Q_3)}^2$.
Let $(\lambda,\mu)\in\Q_3^2$ be the unique solution to the system of equations
\[
\begin{pmatrix} \alpha & -3\beta \\ \beta & \alpha-a_3\beta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix}
0 \\ 1
\end{pmatrix}
\]
it follows that $(\lambda,\mu)\neq (0,0)$ and
\[
\begin{pmatrix} \lambda & -3\mu \\ \mu & \lambda-a_3\mu \end{pmatrix}\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix}
0 \\ 1
\end{pmatrix}.
\]
Therefore, the map $\lambda\Id+\mu M_B(\phi)\in \Q_3[M_B(\phi)]$ defines an isomorphism of filtered $\phi$-modules
between $\D$ and $\Dc(1;a_3;0)$. One checks the last assertion by a simple computation.
\end{proof}
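The isomorphism constructed above can be verified numerically; the following Python sketch (ours, for the sample values $a_3=0$ and $(\alpha,\beta)=(1,1)$) checks that $\eta=\lambda\Id+\mu M_B(\phi)$ maps $\alpha e_1+\beta e_2$ to a generator of the line $\Q_3e_2$:
\begin{verbatim}
# Sketch: verify the isomorphism eta = lam*Id + mu*M of the proof above
# for a_3 = 0 and a sample filtration line spanned by (alpha, beta) = (1, 1).
from fractions import Fraction as F

a3 = 0
M = [[F(0), F(-3)], [F(1), F(-a3)]]           # companion matrix of X^2 + a3*X + 3

alpha, beta = F(1), F(1)
det = alpha*(alpha - a3*beta) + 3*beta*beta   # nonzero: X^2 - a3*XY + 3Y^2 has only the trivial zero
lam, mu = 3*beta/det, alpha/det               # Cramer's rule for the 2x2 system in the proof
eta = [[lam + mu*M[0][0], mu*M[0][1]],
       [mu*M[1][0], lam + mu*M[1][1]]]        # eta = lam*Id + mu*M
image = [eta[0][0]*alpha + eta[0][1]*beta,
         eta[1][0]*alpha + eta[1][1]*beta]
print(image)                                  # -> [Fraction(0, 1), Fraction(1, 1)]
\end{verbatim}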
\begin{remark}
There are $3$ isomorphism classes of filtered $\phi$-modules in the supersingular case, one for each value taken by $a$.
\end{remark}
\subsubsection{The ordinary case}
Let $a\in\{-2,-1,1,2\}$ and $\alpha\in\Proj^1(\Q_3)$. We denote by $\Dc(1;a;\alpha)$ the filtered $\phi$-module defined by:
\begin{itemize}
\item $D=\Q_3 e_1\oplus\Q_3 e_2$
\item $M_B(\phi)=\begin{pmatrix} u & 0 \\ 0 & u^{-1}3 \end{pmatrix}$, where $u\in\Z_3^{\times}$
such that $u+u^{-1}3=-a$
\item $\Fil^1D=(\alpha e_1 +e_2)\Q_3$.
\end{itemize}
\begin{proposition}
Let $a\in\{-2,-1,1,2\}$ and $\alpha\in\Proj^1(\Q_3)$. The filtered $\phi$-module $\Dc(1;a;\alpha)$ satisfies
conditions $(1)-(4)$ and is admissible for $\alpha\neq\infty$.
\end{proposition}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with good reduction such that $a_3=a_3(\Et)\in\{-2,-1,1,2\}$ and $\D=\Dcris[\Q_3](V_3(E))$.
There exists an isomorphism of filtered $\phi$-modules between $\D$ and $\Dc(1;a_3;\alpha)$ for some $\alpha\in\{0,1\}$.
Moreover, if $(\alpha,a),(\beta,b)\in\{ 0,1\}\times \{ -2,-1,1,2\}$ then $\Dc(1;a;\alpha)$ and $\Dc(1;b;\beta)$ are isomorphic
if and only if $(\alpha,a)=(\beta,b)$.
\end{proposition}
\begin{proof}
Since $\D$ is admissible, the only possible filtrations are defined by a $\Q_3$-line of the
form $\Fil^1 D=(\beta e_1+e_2)\Q_3$ for some $\beta\in\Q_3$.
Let $\alpha\in\{ 0,1\}$ and $D^{\prime}$ be the $\Q_3$-vector space associated to $\Dc(1;a_3;\alpha)$.
Let $B=(e_1,e_2)$, $B^{\prime}=(e_1^{\prime},e_2^{\prime})$ be bases of $D$ and $D^{\prime}$ respectively, such that
\[
M_B(\phi)=\begin{pmatrix} u & 0 \\ 0 & u^{-1}3 \end{pmatrix}=M_{B^{\prime}}(\phi^{\prime}),\ u\in\Z_3^{\times},\ u+u^{-1}3=-a_3.
\]
Such a basis exists because $\D$ satisfies $(1)$ and $(a_3,3)=1$; by Hensel's lemma we thus have
\[
P_{\Char}(\phi)(X)=X^2+a_3X+3=(X-u)(X-u^{-1}3)\text{ for some }u\in\Z_3^{\times}.
\]
A $\Q_3$-isomorphism $\eta$ between $D$ and $D^{\prime}$ is $\phi$-equivariant if and only if
\[
M_{B,B^{\prime}}(\eta)\in C(M_B(\phi)).
\]
This time, since $P_{\Char}(\phi)(X)$ is a product of distinct linear factors
\[
C(\Q_3[M_B(\phi)])=\Q_3[M_B(\phi)]=\left\lbrace \begin{pmatrix}\lambda & 0 \\ 0 & \mu\end{pmatrix}:\ \lambda,\mu\in\Q_3\right\rbrace\simeq\Q_3[X]/(X-u)\times\Q_3[X]/(X-u^{-1}3).
\]
If $\beta=0$, then every invertible element of $C(M_B(\phi))$ defines an isomorphism of filtered $\phi$-modules
between $\D$ and $\Dc(1;a_3;0)$. If $\beta\neq 0$, then taking $\lambda=\beta^{-1}$ and $\mu=1$ gives an isomorphism
of filtered $\phi$-modules between $\D$ and $\Dc(1;a_3;1)$.
\end{proof}
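The unit root $u$ appearing in the proof can be computed to any $3$-adic precision by Hensel lifting; a minimal Python sketch (our illustration, assuming the ordinary case $3\nmid a$; it uses the modular inverse of Python~3.8+):
\begin{verbatim}
# Sketch: Hensel-lift the unit root u of X^2 + a*X + 3 in Z_3 (ordinary case,
# so 3 does not divide a and the root mod 3 is simple).
def unit_root(a, k):
    """Return u mod 3**k with u**2 + a*u + 3 == 0 (mod 3**k), u a 3-adic unit."""
    u = (-a) % 3                          # X^2 + aX + 3 = X(X + a) mod 3
    for i in range(1, k):
        f = (u*u + a*u + 3) % 3**(i + 1)  # f(u) = 0 mod 3**i by induction
        df_inv = pow(2*u + a, -1, 3)      # f'(u) is a unit since the root is simple
        t = (-(f // 3**i) * df_inv) % 3   # next 3-adic digit of u
        u += t * 3**i
    return u

u = unit_root(-1, 5)                      # a = -1: P(X) = X^2 - X + 3
print(u, (u*u - u + 3) % 3**5)            # root mod 243; second entry is 0
\end{verbatim}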
\begin{remark}
There are $8$ isomorphism classes of filtered $\phi$-modules in the ordinary case, two for each possible value taken by $a$.
\end{remark}
\subsection{The quadratic case ($e=2$)}
Let ${E}/{\Q_3}$ be an elliptic curve with semi-stability defect $\dst(E)=2$. Since $2$ and $3$ are coprime, the only quadratic extension
of $\Q_3^{\mathrm{un}}$ is $\Q_3^{\mathrm{un}}(\sqrt{3})$. Let $K=\Q_3(\sqrt{3})$; it is a Galois extension of
degree $2$ with Galois group $\langle \tau_2\rangle$ over which $E$ acquires good reduction. As usual we denote
by $a_3=a_3(\Et)$ the trace of the Frobenius of ${\Et}/{\Fd_3}$.
\subsubsection{The supersingular case}
Let $a\in\{ -3,0,3\}$ and $\alpha\in\Proj^1(K)$. We denote by $\Dc(2;a;\alpha)$ the filtered $(\phi,\Gal({K}/{\Q_3}))$-module
defined by:
\begin{itemize}
\item $D=\Q_3 e_1\oplus\Q_3 e_2$
\item $M_B(\phi)=\begin{pmatrix} 0 & -3 \\ 1 & -a\end{pmatrix}$
\item $M_B(\tau_2)=\begin{pmatrix} -1 & 0 \\ 0 & -1\end{pmatrix}$
\item $\Fil^1D_K=(\alpha\otimes e_1 + 1\otimes e_2)K$, where $D_K=K\otimes_{K_0}D$.
\end{itemize}
\begin{proposition}
Let $a\in\{-3,0,3\}$ and $\alpha\in\Proj^1(K)$. The filtered $(\phi,\Gal({K}/{\Q_3}))$-module $\Dc(2;a;\alpha)$ satisfies
conditions $(1)-(4)$ and is admissible.
\end{proposition}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=2$ such that $a_3=a_3(\Et)\in\{-3,0,3\}$ and $\D=\Dcris[K](V_3(E))$.
There exists an isomorphism of filtered $(\phi,\Gal({K}/{\Q_3}))$-modules between $\D$ and $\Dc(2;a_3;0)$. Moreover,
if $a,b\in\{ -3,0,3\}$ then $\Dc(2;a;0)$ and $\Dc(2;b;0)$ are isomorphic if and only if $a=b$.
\end{proposition}
\begin{proof}
Let $D$ be the underlying $\Q_3$-vector space associated to $\D$ and $B=(e_1,e_2)$ a basis of $D$ such that
\[
M_B(\phi)=\begin{pmatrix} 0 & -3 \\ 1 & -a_3\end{pmatrix}.
\]
The element $\tau_2$ is seen as a $\Q_3$-automorphism of $D$ and is of order $2$. Since $\D$ satisfies conditions $(2)-(3)$,
we have $P_{\Char}(\tau_2)(X)\in\Q[X]$ and $\detm(\tau_2)=1$, so that
\[
P_{\Char}(\tau_2)(X)={(X+1)}^2
\]
(indeed, $\detm(\tau_2)=1$ excludes $(X-1)(X+1)$, and ${(X-1)}^2$ is excluded since $\tau_2$, having finite order, is diagonalizable, while $\tau_2\neq\Id$ by minimality of $e$), thus $\tau_2=-\Id$. The $K$-line $(1\otimes e_1)K$ is stable by $\tau_2$ and if $\alpha\in K$, the $K$-line
$(\alpha\otimes e_1 +1\otimes e_2)K$ is stable by $\tau_2$ if and only if $\alpha\in\Q_3$. Let $\alpha\in\Proj^1(K)$ such
that $\Fil^1 D_K=(\alpha\otimes e_1 +1\otimes e_2)K$ is the $K$-line defining the filtration of $\D$, we will
show that $\D\simeq \Dc(2;a_3;0)$. If $\alpha=0$ it is obvious. If $\alpha=\infty$, the isomorphism is given
by $\phi$ itself, i.e. $e_1\mapsto e_2$ and $e_2\mapsto -3e_1-a_3e_2$. Finally, if $\alpha\neq 0,\infty$ it is given by $e_1\mapsto (3/\alpha)e_1 + e_2$
and $e_2\mapsto -3e_1 + (3/\alpha-a_3)e_2$.
\end{proof}
\subsubsection{The ordinary case}
Let $a\in\{-2,-1,1,2\}$ and $\alpha\in\Proj^1(\Q_3)$. We denote by $\Dc(2;a;\alpha)$ the filtered $(\phi,\Gal({K}/{\Q_3}))$-module
defined by:
\begin{itemize}
\item $D=\Q_3 e_1\oplus \Q_3 e_2$,
\item $M_B(\phi)=\begin{pmatrix} u & 0\\ 0 & u^{-1}3\end{pmatrix}$ where $u\in\Z_3^{\times}$ such that $u+u^{-1}3=-a$
\item $M_B(\tau_2)=\begin{pmatrix} -1 & 0 \\ 0 & -1\end{pmatrix}$
\item $\Fil^1D_K=(\alpha\otimes e_1+1\otimes e_2)K$.
\end{itemize}
\begin{proposition}
Let $a\in\{-2,-1,1,2\}$ and $\alpha\in\Proj^1(\Q_3)$. The filtered $(\phi,\Gal({K}/{\Q_3}))$-module $\Dc(2;a;\alpha)$ satisfies
conditions $(1)-(4)$ and is admissible for $\alpha\neq\infty$.
\end{proposition}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=2$ such that $a_3=a_3(\Et)\in\{-2,-1,1,2\}$ and $\D=\Dcris[K](V_3(E))$.
There exists an isomorphism of filtered $(\phi,\Gal({K}/{\Q_3}))$-modules between $\D$ and $\Dc(2;a_3;\alpha)$ for
some $\alpha\in\{ 0,1\}$. Moreover, if $(\alpha,a),(\beta,b)\in\{0,1\}\times\{-2,-1,1,2\}$ then $\Dc(2;a;\alpha)$
and $\Dc(2;b;\beta)$ are isomorphic if and only if $(\alpha,a)=(\beta,b)$.
\end{proposition}
\begin{proof}
See the ordinary crystalline case for the description of $\phi$, and the supersingular quadratic case for the description
of $\tau_2$ and the filtration.
\end{proof}
\begin{remark}
These are exactly the twists of the corresponding crystalline cases by the ramified quadratic character associated to ${\Q_3(\sqrt{3})}/{\Q_3}$.
\end{remark}
\subsection{The quartic case ($e=4$)}
Let ${E}/{\Q_3}$ be an elliptic curve with semi-stability defect $\dst(E)=4$. Again, since $4$ and $3$ are coprime, the only quartic extension
of $\Q_3^{\mathrm{un}}$ is $\Q_3^{\mathrm{un}}(\sqrt[4]{3})$. Let us fix $\zeta_4$ a primitive fourth root of unity and $\pi_4$ a root of $X^4-3$ in $\Qal_3$.
Consider $L=\Q_3(\pi_4)$, $K=L(\zeta_4)$ its Galois closure and $K_0=\Q_3(\zeta_4)$ the maximal unramified
subextension of ${K}/{\Q_3}$. Our curve necessarily acquires good reduction over $L$. Let $\sigma\in G({K_0}/{\Q_3})$ be the absolute
Frobenius on $K_0$, $\omega\in G({K}/{\Q_3})$ a lifting of $\sigma$ fixing $L$ and $\tau_4$ a generator of $G(K/K_0)=I({K}/{\Q_3})$.
Then $G({K}/{\Q_3})=\langle \tau_4\rangle\rtimes \langle \omega \rangle$ with $\tau_4\omega =\omega\tau_4^{-1}$.
\vspace{\baselineskip}
Let $\alpha\in\Proj^1(\Q_3)$. We denote by $\Dpc(4;0;\alpha)$ the filtered $(\phi,\Gal({K}/{\Q_3}))$-module defined by:
\begin{itemize}
\item $D=K_0e_1\oplus K_0e_2$
\item $\phi (e_1)=e_2,\ \phi (e_2)=-3e_1$
\item $M_B(\tau_4)=\begin{pmatrix} \zeta_4 & 0 \\ 0 & \zeta_4^{-1}\end{pmatrix}$
\item $\omega (e_1)=e_1,\ \omega (e_2)=e_2$
\item $\Fil^1D_K=(\alpha\pi_4^{-1}\otimes e_1 + \pi_4\otimes e_2)K$.
\end{itemize}
\begin{proposition}
Let $\alpha\in\Proj^1(\Q_3)$. The filtered $(\phi,\Gal({K}/{\Q_3}))$-module $\Dpc(4;0;\alpha)$ satisfies conditions $(1)-(4)$ and is admissible.
\end{proposition}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=4$ and $\D=\Dcris[K](V_3(E))$. There exists an isomorphism of
filtered $(\phi,\Gal({K}/{\Q_3}))$-modules between $\D$ and $\Dpc(4;0;\alpha)$ for some $\alpha\in\Proj^1(\Q_3)$. Moreover if $\alpha,\beta\in\Proj^1(\Q_3)$,
then $\Dpc(4;0;\alpha)\simeq \Dpc(4;0;\beta)$ if and only if $\alpha =\beta$.
\end{proposition}
\begin{proof}
Let $D$ be the underlying $K_0$-vector space associated to $\D$; the element $\tau_4$ acts $K_0$-linearly on $D$ and
the morphism
\[
I({K}/{\Q_3})\longrightarrow\Aut_{K_0}(D)
\]
is injective by minimality of $e({K}/{\Q_3})$. We identify $\tau_4$ with its image in $\Aut_{K_0}(D)$; it is an element of order $4$.
Again, because $\D$ satisfies $(2)-(3)$ we have $\detm(\tau_4)=1$ and $P_{\Char}(\tau_4)(X)\in\Q[X]$ so that
\[
P_{\Char}(\tau_4)(X)=P_{\min}(\tau_4)(X)=X^2+1=(X-\zeta_4)(X-\zeta_4^{-1})
\]
in particular $\tau_4$ is diagonalizable over $K_0$ and has distinct eigenvalues. Let $(e_1,e_2)$ be a diagonalization basis
of $\tau_4$ over $K_0$. The relation $\tau_4\omega =\omega\tau_4^{-1}$ implies that $\omega(e_i)\in K_0e_i$, $i=1,2$.
Denote by $\omega_i=\omega\vert_{K_0e_i}$, the group $\langle\omega_i\rangle$ acts semi-linearly over $K_0e_i$.
Descent theory tells us that ${(K_0e_i)}^{\langle\omega_i\rangle}\neq\{0\}$. We can then find a basis $(e_1,e_2)$ of $D$
over $K_0$ which is fixed by $\omega$ and such that $\tau_4 (e_1)=\zeta_4e_1,\ \tau_4 (e_2)=\zeta_4^{-1}e_2$. Since $\phi$
is $\Gal({K}/{\Q_3})$-equivariant, it commutes to $\tau_4$ and $\omega$, a simple calculation shows that $\phi (e_1)\in\Q_3 e_2$
and $\phi (e_2)\in\Q_3 e_1$. Since $\detm(\phi)=3$, $\phi (e_1)=ae_2$ and $\phi (e_2)=-3a^{-1}e_1$,
necessarily $a\in\Q_3^{\times}$. That way we show that there exists a $K_0$-basis of $D$ such that
\[
\phi (e_1)=e_2,\ \phi (e_2)=-3e_1,\ \tau_4 (e_1)=\zeta_4e_1,\ \tau_4 (e_2)=\zeta_4^{-1}e_2,\ \omega (e_1)=e_1,\ \omega (e_2)=e_2
\]
We have now described the $(\phi,\Gal({K}/{\Q_3}))$-module structure on $D$. In particular, we see that $a_3=0$, i.e.\ $\Et_L$ is
supersingular, while the two other supersingular trace values $3$ and $-3$ cannot occur.
What is left is to determine the $K$-line $\Fil^1D_K$ defining the filtration: it needs to satisfy the weak admissibility
condition and to be $\Gal({K}/{\Q_3})$-stable. Since $\phi$ does not have any stable subspaces, weak admissibility is immediate.
The $K$-line $(1\otimes e_1)K$ is stable under the action of $\Gal({K}/{\Q_3})$; it corresponds to $\alpha=\infty$. Let $\beta\in K$
and $\Fil^1D_K=(\beta\otimes e_1+1\otimes e_2)K$. One easily shows that $\Fil^1 D_K$ is stable by $\omega$ if and
only if $\beta\in L$, and by $\tau_4$ if and only if $\pi_4^2\beta\in K_0$. Then $\Fil^1 D_K$ is stable under the
action of $\Gal({K}/{\Q_3})$ if and only if $\pi_4^2\beta\in L\cap K_0=\Q_3$. Letting $\alpha=\pi_4^2\beta\in\Q_3$,
we can rewrite the $K$-line defining the filtration as
\[
\Fil^1 D_K=(\alpha \pi_4^{-1}\otimes e_1 + \pi_4\otimes e_2)K,
\]
and it is then clear that $\D\simeq\Dpc(4;0;\alpha)$.
Let $\alpha,\beta\in\Proj^1(\Q_3)$ and consider the following filtered $(\phi,\Gal({K}/{\Q_3}))$-modules:
$\D=\Dpc(4;0;\alpha)$, $\D^{\prime}=\Dpc(4;0;\beta)$, and let $B=(e_1,e_2)$, $B^{\prime}=(e_1^{\prime},e_2^{\prime})$
be $K_0$-bases of their respective underlying $K_0$-vector spaces $D$ and $D^{\prime}$. Let $\psi:\D\longrightarrow \D^{\prime}$
be a nonzero morphism of filtered $(\phi,\Gal({K}/{\Q_3}))$-modules. Let $D_0=D^{\langle\omega\rangle}$
and $D_0^{\prime}={(D^{\prime})}^{\langle\omega^{\prime}\rangle}$. The relation $\psi\circ\omega = \omega^{\prime}\circ\psi$
implies $\psi(D_0)\subseteq D_0^{\prime}=\Q_3 e_1^{\prime}\oplus\Q_3 e_2^{\prime}$.
Moreover, $\psi\circ\tau_4 =\tau_4^{\prime}\circ\psi$ implies $\psi (e_i)\in K_0 e_i^{\prime}$, $i=1,2$.
Then there exists $a,d\in\Q_3$ such that $\psi (e_1)=ae_1^{\prime}$ and $\psi (e_2)=de_2^{\prime}$.
Finally, $\psi\circ\phi =\phi^{\prime}\circ\psi$ leads to $a=d$. Denoting by $\psi_K$ the $K$-linear extension
of $\psi$, we see that $\psi_K(\Fil^1D_K)\subseteq\Fil^1D_K^{\prime}$ if and only if $\alpha=\beta$.
\end{proof}
\subsection{The cubic case ($e=3$)}
Let ${E}/{\Q_3}$ be an elliptic curve with semi-stability defect $\dst(E)=3$. There are exactly $9$ totally ramified extensions
of degree $3$ of $\Q_3$ (see~\cite{LMFDB}). Since $E$ acquires good reduction over a degree $3$ Galois extension
of $\Q_3^{\mathrm{un}}$, we are interested in the ones that keep their ramification index after Galois closure.
Indeed, if $e({F}/{\Q_3})=3$ but $e({F^{\Gal}}/{\Q_3})>3$, then $[{(F^{\Gal})}^{\mathrm{un}}:\Q_3^{\mathrm{un}}]>3$ is not minimal.
There are only $4$ such extensions; among these, $3$ are Abelian and the last one has a Galois closure of degree $6$ with
Galois group isomorphic to $S_3$. One easily shows (using the Kronecker-Weber Theorem) that the three considered Abelian
extensions are exactly the degree $3$ totally ramified subextensions of $\Q_3(\zeta_{13},\zeta_9+\zeta_9^{-1})$, so they have the same compositum with $\Q_3^{\mathrm{un}}$, namely $\Q_3^{\mathrm{un}}(\zeta_9+\zeta_9^{-1})$, and are therefore interchangeable. This is not the case for the non Abelian extension. Let $\Lg=\Q_3(\zeta_9+\zeta_9^{-1})$ (resp.\ $\Lng=\Q_3(X^3-3X^2+6)$) be a minimal field of good reduction for ${E}/{\Q_3}$ in the Abelian (resp.\ non Abelian) case.
\begin{proposition}
Let $E,{E^{\prime}}/{\Q_3}$ be elliptic curves with $\dst(E)=\dst(E^{\prime})=3$ and consider their respective
Weil-Deligne types $\tau_E$ and $\tau_{E^{\prime}}$. We have the following equivalence:
\[
\tau_E\simeq_{\mathbb{C}}\tau_{E^{\prime}}\Leftrightarrow M_E=M_{E^{\prime}}.
\]
\end{proposition}
\begin{proof}
The left-to-right implication is obvious
since $M_E={(\Q_3^{\mathrm{un}})}^{\ker(\tau_E)}$, $M_{E^{\prime}}={(\Q_3^{\mathrm{un}})}^{\ker(\tau_{E^{\prime}})}$ and two
isomorphic representations share the same kernel. If $M_E=M_{E^{\prime}}$ then $\ker(\tau_E)=\ker(\tau_{E^{\prime}})$ and
both types factor through faithful irreducible representations of $\Gal(M_{E}/{\Q_3}^{\mathrm{un}})\simeq\Z/3\Z$ defined over $\Q$,
which are necessarily isomorphic.
\end{proof}
Using Table~1 of~\cite{DFV} we see that there are only two isomorphism classes of Weil-Deligne types for $p=e=3$ so
that $\Lg\Q_3^{\mathrm{un}}$ and $\Lng\Q_3^{\mathrm{un}}$ are indeed distinct.
\subsubsection{The non Abelian case}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=3$ acquiring good reduction over $\Lng$, and let $K=\Lng(\zeta_4)$ be its Galois closure.
Denote by $K_0$ the maximal unramified subextension of ${K}/{\Q_3}$ and by $\sigma\in \Gal({K_0}/{\Q_3})$ the absolute Frobenius.
Let $\omega\in \Gal({K}/{\Q_3})$ be a lifting of $\sigma$ fixing $\Lng$ and $\tau_3$ a generator of $\Gal(K/K_0)=I({K}/{\Q_3})$.
Then, $\Gal({K}/{\Q_3})=\langle \tau_3\rangle\rtimes \langle \omega \rangle$ with $\tau_3\omega =\omega\tau_3^{-1}$
(the unique non trivial semi-direct product).
Let $\alpha\in\mathcal{M}_3^{\mathrm{na}}:=\{\alpha\in \Lng\vert \tau_3(\alpha)=(3\zeta_4+\alpha)/(1+\zeta_4\alpha)\}$.
We denote by $\Dpcng(3;0;\alpha)$ the filtered $(\phi,\Gal({K}/{\Q_3}))$-module defined by the following data (a symbolic check is sketched after the list):
\begin{itemize}
\item $D=K_0e_1\oplus K_0e_2$
\item $\phi (e_1)=e_2,\ \phi (e_2)=-3e_1$
\item $M_B(\tau_3)=\begin{pmatrix} -\frac{1}{2} & \frac{3}{2}\zeta_4 \\ \frac{1}{2}\zeta_4 & -\frac{1}{2}\end{pmatrix}$
\item $\omega (e_1)=e_1,\ \omega (e_2)=e_2$
\item $\Fil^1D_K=(\alpha\otimes e_1 + 1\otimes e_2)K$.
\end{itemize}
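As before, these matrix relations can be checked symbolically; a minimal SymPy sketch (illustrative only, with $\zeta_4$ realized as the complex unit) verifies that $M_B(\tau_3)$ has order $3$ with minimal polynomial $X^2+X+1$, that the $\sigma$-semilinear $\phi$ commutes with $\tau_3$, and that the relation $\tau_3\omega=\omega\tau_3^{-1}$ holds, which with $M_B(\omega)=\mathrm{Id}$ reads $\sigma(M_B(\tau_3)^{-1})=M_B(\tau_3)$:
\begin{verbatim}
# Illustrative check of the defining relations of Dpcng(3;0;alpha).
from sympy import Matrix, I, Rational, eye, zeros, conjugate

h     = Rational(1, 2)
tau3  = Matrix([[-h, 3*h*I], [h*I, -h]])
phi   = Matrix([[0, -3], [1, 0]])
sigma = lambda M: M.applyfunc(conjugate) # zeta_4 -> -zeta_4

assert tau3**3 == eye(2) and tau3 != eye(2)    # tau_3 has order 3
assert tau3**2 + tau3 + eye(2) == zeros(2, 2)  # X^2 + X + 1 kills tau_3
assert phi*sigma(tau3) == tau3*phi             # phi tau_3 = tau_3 phi
assert sigma(tau3.inv()) == tau3               # tau_3 omega = omega tau_3^{-1}
\end{verbatim}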
\begin{proposition}
Let $\alpha\in\mathcal{M}_3^{\mathrm{na}}$. The filtered $(\phi,\Gal({K}/{\Q_3}))$-module $\Dpcng(3;0;\alpha)$ satisfies
conditions $(1)-(4)$ and is admissible.
\end{proposition}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=3$ acquiring good reduction over $\Lng$ and $\D=\Dcris[K](V_3(E))$.
There exists $\alpha\in\mathcal{M}_3^{\mathrm{na}}$ such that $\D$ and $\Dpcng(3;0;\alpha)$ are isomorphic as
filtered $(\phi,\Gal({K}/{\Q_3}))$-modules. Moreover, if $\alpha,\beta\in\mathcal{M}_3^{\mathrm{na}}$,
then $\Dpcng(3;0;\alpha)\simeq \Dpcng(3;0;\beta)$ if and only if $\alpha =\beta$.
\end{proposition}
\begin{proof}
Denote by $D$ the underlying $K_0$-vector space associated to $\D$; the element $\tau_3$ acts $K_0$-linearly on $D$ and
the morphism
\[
I({K}/{\Q_3})\longrightarrow\Aut_{K_0}(D)
\]
is injective by minimality of $e({K}/{\Q_3})$. We identify $\tau_3$ with its image in $\Aut_{K_0}(D)$; it is an element of order $3$.
Since $\zeta_3\notin K_0$,
\[
P_{\Char}(\tau_3)(X)=P_{\min}(\tau_3)(X)=X^2+X+1.
\]
Let $B=(e_1,e_2)$ be a $K_0$-basis of $D$ fixed by $\omega$ such that $\phi(e_1)=e_2$ and $\phi(e_2)=-3e_1-a_3e_2$.
Such a basis always exists since $\omega$ acts semi-linearly over $D$ and $\phi\omega=\omega\phi$.
Let $\lambda,\mu,\lambda^{\prime},\mu^{\prime}\in K_0$ such that
\[
M_B(\tau_3)=\begin{pmatrix} \lambda & \mu^{\prime} \\ \mu & \lambda^{\prime}\end{pmatrix}.
\]
We already know that $\lambda^{\prime}=-\lambda-1$ and $\mu^{\prime}=P(\lambda)/(-\mu)$ where $P=P_{\Char}(\tau_3)$.
The relations $\tau_3\omega=\omega\tau_3^{-1}$ and $\tau_3\phi=\phi\tau_3$ imply that $a_3=0$,
$\sigma(\lambda)=-\lambda-1$, $\sigma(\mu)=-\mu$ and $P(\lambda)/(-\mu)=3\mu$.
In conclusion:
\begin{itemize}
\item $\phi(e_1)=e_2,\ \phi(e_2)=-3e_1$
\item $\omega(e_1)=e_1,\ \omega(e_2)=e_2$
\item $M_B(\tau_3)=\begin{pmatrix} \lambda & 3\mu \\ \mu & -\lambda-1\end{pmatrix}$,$\ \lambda\in -\frac{1}{2}+\Q_3\zeta_4,\ \mu\in\Q_3^{\times}\zeta_4,\text{ and } P(\lambda)+3\mu^2=0$.
\end{itemize}
Let
\[
M=\begin{pmatrix} \lambda +\frac{1}{2} & -3\mu -\frac{3}{2}\zeta_4 \\ \mu -\frac{1}{2}\zeta_4 & \lambda +\frac{1}{2}\end{pmatrix}
\]
Clearly $\det(M)=0$; let $(a,b)\in{\ker(M)}^{G({K}/{\Q_3})}\subseteq K_0^2$ be a nonzero element. Then
\[
B^{\prime}=(e_1^{\prime},e_2^{\prime})=(ae_1+be_2, -3be_1+ae_2)
\]
is a $K_0$-basis of $D$ such that
\begin{itemize}
\item $\phi(e_1^{\prime})=e_2^{\prime},\ \phi(e_2^{\prime})=-3e_1^{\prime}$
\item $\omega(e_1^{\prime})=e_1^{\prime},\ \omega(e_2^{\prime})=e_2^{\prime}$
\item $M_{B^{\prime}}(\tau_3)=\begin{pmatrix} -\frac{1}{2} & \frac{3}{2}\zeta_4 \\ \frac{1}{2}\zeta_4 & -\frac{1}{2}\end{pmatrix}$.
\end{itemize}
Again, we denote by $(e_1,e_2)$ such a basis.
One easily checks that $(1\otimes e_1)K$ and $(1\otimes e_2)K$ are not stable by $\tau_3$.
Let $\alpha\in K^{\times}$ and $\Fil^1 D_K =(\alpha\otimes e_1 + 1\otimes e_2)K$.
A simple calculation shows that such a $K$-line is stable by the action of $\Gal({K}/{\Q_3})$ if and only if $\alpha\in\Lng$ and
$\tau_3(\alpha)=(3\zeta_4+\alpha)/(1+\zeta_4\alpha)$.
Let $B,B^{\prime}$ be $K_0$-bases of $\D=\Dpcng(3;0;\alpha)$ and $\D^{\prime}=\Dpcng(3;0;\beta)$ respectively.
One easily shows that an isomorphism $\eta$ of $(\phi,\Gal({K}/{\Q_3}))$-modules between $\D$ and $\D^{\prime}$ is of the form
\[
M_{B,B^{\prime}}(\eta)=\begin{pmatrix} a & 0 \\ 0 & a\end{pmatrix},\ a\in\Q_3^{\times}.
\]
Denoting by $\eta_K:D_K\longrightarrow D_K^{\prime}$ the $K$-linear extension of $\eta$, it is then clear that
\[
\eta_K((\alpha\otimes e_1 +1\otimes e_2)K)\subseteq (\beta\otimes e_1 +1\otimes e_2)K \Leftrightarrow \alpha=\beta .
\]
\end{proof}
\subsubsection{The Abelian case}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=3$ acquiring good reduction over a degree $3$ Abelian extension of $\Q_3$.
There are only $4$ such extensions, among them the unique unramified one. These are exactly the degree $3$ sub-extensions
of $\Q_3(\zeta_{13},\zeta_9+\zeta_9^{-1})$, hence they share the same maximal unramified extension inside $\Qal_3$.
Let $K=\Lg$ be one of the $3$ ramified ones; its Galois group $\Gal({K}/{\Q_3})=I({K}/{\Q_3})=\langle \tau_3\rangle$ is cyclic of order $3$.
Let $\alpha\in\mathcal{M}_3^{\mathrm{a}}:=\{ \alpha\in \Lg\vert\tau_3(\alpha)=(\alpha -1)/\alpha\}$ and
\[
(a,\mu)\in(\{-3\}\times\{1,2\})\sqcup (\{0\}\times\{-1,1\})\sqcup (\{3\}\times\{-2,-1\}).
\]
We denote by $\Dpcg(3;a,\mu;\alpha)$ the filtered $(\phi,\Gal({K}/{\Q_3}))$-module defined by:
\begin{itemize}
\item $D=\Q_3 e_1\oplus \Q_3 e_2$,
\item $M_B(\tau_3)=\begin{pmatrix} 0 & -1 \\ 1 & -1\end{pmatrix}$
\item $a=-3$: $\mu=1$: $M_B(\phi)=\begin{pmatrix} 1 & 1 \\ -1 & 2\end{pmatrix}$,$\ $ $\mu=2$: $M_B(\phi)=\begin{pmatrix} 2 & -1 \\ 1 & 1\end{pmatrix}$
\item $a=0$: $\mu=1$: $M_B(\phi)=\begin{pmatrix} 1 & -2 \\ 2 & -1\end{pmatrix}$,$\ $ $\mu=-1$: $M_B(\phi)=\begin{pmatrix} -1 & 2 \\ -2 & 1\end{pmatrix}$
\item $a=3$: $\mu=-1$: $M_B(\phi)=\begin{pmatrix} -1 & -1 \\ 1 & -2\end{pmatrix}$,$\ $ $\mu=-2$: $M_B(\phi)=\begin{pmatrix} -2 & 1 \\ -1 & -1\end{pmatrix}$
\item $\Fil^1D_K=(\alpha\otimes e_1+1\otimes e_2)K$.
\end{itemize}
\begin{proposition}
Let $\alpha\in\mathcal{M}_3^{\mathrm{a}}$. The filtered $(\phi,\Gal({K}/{\Q_3}))$-module $\Dpcg(3;a,\mu;\alpha)$
satisfies conditions $(1)-(4)$ and is admissible.
\end{proposition}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=3$ acquiring good reduction over $K$ and $\D=\Dcris[K](V_3(E))$.
There exists $\alpha\in\mathcal{M}_3^{\mathrm{a}}$
and $(a,\mu)\in(\{-3\}\times\{1,2\})\sqcup (\{0\}\times\{-1,1\})\sqcup (\{3\}\times\{-2,-1\})$ such that $\D$
and $\Dpcg(3;a,\mu;\alpha)$ are isomorphic as filtered $(\phi,\Gal({K}/{\Q_3}))$-modules.
Moreover, if $\alpha,\beta\in\mathcal{M}_3^{\mathrm{a}}$
and $(a,\mu),(b,\nu)\in(\{-3\}\times\{1,2\})\sqcup(\{0\}\times\{-1,1\})\sqcup(\{3\}\times\{-2,-1\})$,
then $\Dpcg(3;a,\mu;\alpha)\simeq \Dpcg(3;b,\nu;\beta)$ if and only if $(a,\mu) =(b,\nu)$.
\end{proposition}
\begin{proof}
Denote by $D$ the underlying $\Q_3$-vector space associated to $\D$; the element $\tau_3$ acts $\Q_3$-linearly on $D$ and
the natural morphism
\[
I({K}/{\Q_3})\longrightarrow\Aut_{\Q_3}(D)
\]
is injective by minimality of $e({K}/{\Q_3})$. We identify $\tau_3$ with its image in $\Aut_{\Q_3}(D)$; it is an element of order $3$.
Since $\zeta_3\notin \Q_3$,
\[
P_{\Char}(\tau_3)(X)=P_{\min}(\tau_3)(X)=X^2+X+1.
\]
Let $B=(e_1,e_2)$ be a $\Q_3$-basis of $D$ such that:
\[
M_B(\tau_3)=\begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}.
\]
Since $P_{\Char}(\phi)(X)=X^2+a_3X+3$ and $\phi\tau_3=\tau_3\phi$, we have
\[
M_B(\phi)=\begin{pmatrix} \lambda & -2\lambda-a_3 \\ 2\lambda+a_3 & -\lambda-a_3 \end{pmatrix},\ \lambda\in\Q_3
\]
with $\det(\phi)=3\lambda^2+3\lambda a_3+a_3^2=3$, i.e.\ $\lambda$ is a root of $3X^2+3a_3X+a_3^2-3$. But this polynomial
has roots in $\Q_3$ if and only if $3\mid a_3$, which together with the Hasse bound $\vert a_3\vert\leq 2\sqrt{3}$ gives $a_3\in\{-3,0,3\}$. Considering every possible value of $a_3$ we obtain (see the sketch after this list):
\begin{itemize}
\item if $a_3=0$, $\lambda$ is a root of $X^2-1$ i.e. $\lambda\in\{-1,1\}$
\item if $a_3=3$, $\lambda$ is a root of $X^2+3X+2$ i.e. $\lambda\in\{-2,-1\}$
\item if $a_3=-3$, $\lambda$ is a root of $X^2-3X+2$ i.e. $\lambda\in\{1,2\}$.
\end{itemize}
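The three cases can be recovered at once with a short SymPy computation (illustrative only):
\begin{verbatim}
# Roots of 3 X^2 + 3 a3 X + a3^2 - 3 for the admissible traces.
from sympy import symbols, solve

X = symbols('X')
for a3 in (-3, 0, 3):
    print(a3, solve(3*X**2 + 3*a3*X + a3**2 - 3, X))
# roots: {1, 2}, {-1, 1}, {-2, -1} respectively; when 3 does not
# divide a3 the discriminant 3*(12 - a3**2) has odd 3-adic valuation,
# hence the polynomial has no root in Q_3
\end{verbatim}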
Observe that $(1\otimes e_1)K$ and $(1\otimes e_2)K$ are not stable under the action of $G({K}/{\Q_3})$. Let $\alpha\in K^{\times}$;
the $K$-line $(\alpha\otimes e_1+1\otimes e_2)K$ is stable by $\tau_3$ if and only if $\tau_3(\alpha)=(\alpha -1)/\alpha$.
Hence $\D\simeq\Dpcg(3;a_3,\lambda;\alpha)$ with $\alpha$ and $(a_3,\lambda)$ satisfying the desired conditions.
Let $\alpha,\beta\in\mathcal{M}_3^{\mathrm{a}}$
and $(a,\mu),(b,\nu)\in(\{-3\}\times\{1,2\})\sqcup(\{0\}\times\{-1,1\})\sqcup(\{3\}\times\{-2,-1\})$.
Consider $\D=\Dpcg(3;a,\mu;\alpha)$ and $\D^{\prime}=\Dpcg(3;b,\nu;\beta)$; we will first show that their
underlying $(\phi,\Gal({K}/{\Q_3}))$-modules are not isomorphic unless $(a,\mu)=(b,\nu)$. Let $B$ and $B^{\prime}$ be $\Q_3$-bases of $D$ and $D^{\prime}$
respectively. A morphism $\eta:D\longrightarrow D^{\prime}$ commuting with $\tau_3$ and $\phi$ must be of the form
\[
M_{B,B^{\prime}}(\eta)=\begin{pmatrix} c & -d \\ d & c-d\end{pmatrix}
\]
where $(c,d)\in\Q_3^2$ is in the kernel of the following linear map
\[
\begin{pmatrix} \mu -\nu & 2(\nu-\mu)+b-a \\ \nu-\mu +b-a & \nu-\mu\end{pmatrix}.
\]
\]
The determinant of this matrix is $-(3{(\nu-\mu)}^2+3(\nu-\mu)(b-a)+{(b-a)}^2)$. There exists $(c,d)\neq (0,0)$ in
the kernel if and only if $(\nu-\mu)$ is a root of
\[
3X^2+3(b-a)X+{(b-a)}^2.
\]
But such a polynomial has roots in $\Q_3$ if and only if $a=b$, in which case both roots are zero. This shows that if $\D$
and $\D^{\prime}$ are isomorphic as $(\phi,\Gal({K}/{\Q_3}))$-modules then $\mu=\nu$ and $a=b$. Now suppose
that $(a,\mu)=(b,\nu)$ and let $\Fil^1 D_K=(\alpha\otimes e_1+1\otimes e_2)K$
and $\Fil^1 D^{\prime}_K=(\beta\otimes e_1+1\otimes e_2)K$, the $K$-lines defining the filtrations on $\D$
and $\D^{\prime}$. If $\alpha=\beta$ taking $c=1$ and $d=0$ gives us an obvious isomorphism. In the other case,
we see that $(\alpha\beta -\beta +1)/(\alpha-\beta)\in\Q_3$ and taking some $d\neq 0$
and $c=d(\alpha\beta -\beta +1)/(\alpha-\beta)$ gives us the desired isomorphism.
\end{proof}
\begin{remark}
We observe two differences with the non Abelian case: the supersingular traces $3$ and $-3$ do appear and for each trace value
there are two isomorphism classes of $(\phi,\Gal({K}/{\Q_3}))$-modules (not considering the filtration). We will explain the absence
of these traces in the last section. As for the two isomorphism classes, observe that they are unramified twists of each other.
\end{remark}
\subsection{The sextic case ($e=6$)}
This section can be summarized by the following result: if ${E}/{\Q_3}$ has a semi-stability defect of $3$ then its
quadratic twist ${E^{\prime}}/{\Q_3}$ by the character associated to $\sqrt{3}$ has a semi-stability defect of $6$, and vice versa.
Consequently, if ${F}/{\Q_3}$ is a field of good reduction for $E$, then $F(\sqrt{3})$ is a field of good reduction for $E^{\prime}$.
\subsubsection{The non Abelian case}
Let ${E}/{\Q_3}$ with $\dst(E)=6$ acquiring good reduction over $\Lng(\sqrt{3})$ and let $K=\Lng(\sqrt{3},\zeta_4)$ be its
Galois closure, with $K_0=\Q_3(\zeta_4)$ the maximal unramified subextension of ${K}/{\Q_3}$. We have $\Gal({K}/{\Q_3})=(\langle\tau_3\rangle\times\langle\tau_2\rangle)\rtimes\langle\omega\rangle$
and $I({K}/{\Q_3})=\langle\tau_3\rangle\times\langle\tau_2\rangle$ is cyclic of order $6$.
Let $\alpha\in\mathcal{M}_6^{\mathrm{na}}=\{ \alpha\in \Lng(\sqrt{3})\vert \tau_3(\alpha)=(3\zeta_4+\alpha)/(1+\zeta_4\alpha)\}$.
We denote by $\Dpcng(6;0;\alpha)$ the filtered $(\phi,\Gal({K}/{\Q_3}))$-module defined by:
\begin{itemize}
\item $D=K_0e_1\oplus K_0e_2$
\item $\phi (e_1)=e_2,\ \phi (e_2)=-3e_1$
\item $M_B(\tau_3)=\begin{pmatrix} -\frac{1}{2} & \frac{3}{2}\zeta_4 \\ \frac{1}{2}\zeta_4 & -\frac{1}{2}\end{pmatrix}$
\item $M_B(\tau_2)=\begin{pmatrix} -1 & 0 \\ 0 & -1\end{pmatrix}$
\item $\omega (e_1)=e_1,\ \omega (e_2)=e_2$
\item $\Fil^1D_K=(\alpha\otimes e_1 + 1\otimes e_2)K$.
\end{itemize}
\begin{proposition}
Let $\alpha\in\mathcal{M}_6^{\mathrm{na}}$. The filtered $(\phi,\Gal({K}/{\Q_3}))$-module $\Dpcng(6;0;\alpha)$ satisfies
conditions $(1)-(4)$ and is admissible.
\end{proposition}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=6$ acquiring good reduction over $\Lng(\sqrt{3})$ and $\D=\Dcris[K](V_3(E))$.
There exists $\alpha\in\mathcal{M}_6^{\mathrm{na}}$ such that $\D$ and $\Dpcng(6;0;\alpha)$ are isomorphic as
filtered $(\phi,\Gal({K}/{\Q_3}))$-modules. Moreover if $\alpha,\beta\in\mathcal{M}_6^{\mathrm{na}}$,
then $\Dpcng(6;0;\alpha)\simeq \Dpcng(6;0;\beta)$ if and only if $\alpha =\beta$.
\end{proposition}
\begin{proof}
Similar to the cubic non Abelian case using the natural injection
\[
I({K}/{\Q_3})=\langle\tau_2\rangle\times\langle\tau_3\rangle\hookrightarrow\Aut_{K_0}(D).
\]
\end{proof}
\subsubsection{The Abelian case}
Let ${E}/{\Q_3}$ with $\dst(E)=6$ acquiring good reduction over $K=\Lg(\sqrt{3})$. Then
$\Gal({K}/{\Q_3})=I({K}/{\Q_3})=\langle\tau_3\rangle\times\langle\tau_2\rangle$ is cyclic of order $6$.
Let $\alpha\in\mathcal{M}_6^{\mathrm{a}}=\{ \alpha\in \Lg(\sqrt{3})\vert \tau_3(\alpha)=(\alpha-1)/\alpha\}$
and
\[
(a,\mu)\in(\{-3\}\times\{1,2\})\sqcup (\{0\}\times\{-1,1\})\sqcup (\{3\}\times\{-2,-1\}).
\]
We denote by $\Dpcg(6;a,\mu;\alpha)$ the filtered $(\phi,\Gal({K}/{\Q_3}))$-module defined by:
\begin{itemize}
\item $D=\Q_3 e_1\oplus \Q_3 e_2$
\item $M_B(\tau_3)=\begin{pmatrix} 0 & -1 \\ 1 & -1\end{pmatrix}$
\item $M_B(\tau_2)=\begin{pmatrix} -1 & 0 \\ 0 & -1\end{pmatrix}$
\item $a=-3$: $\mu=1$: $M_B(\phi)=\begin{pmatrix} 1 & 1 \\ -1 & 2\end{pmatrix}$,$\ $ $\mu=2$: $M_B(\phi)=\begin{pmatrix} 2 & -1 \\ 1 & 1\end{pmatrix}$
\item $a=0$: $\mu=1$: $M_B(\phi)=\begin{pmatrix} 1 & -2 \\ 2 & -1\end{pmatrix}$,$\ $ $\mu=-1$: $M_B(\phi)=\begin{pmatrix} -1 & 2 \\ -2 & 1\end{pmatrix}$
\item $a=3$: $\mu=-1$: $M_B(\phi)=\begin{pmatrix} -1 & -1 \\ 1 & -2\end{pmatrix}$,$\ $ $\mu=-2$: $M_B(\phi)=\begin{pmatrix} -2 & 1 \\ -1 & -1\end{pmatrix}$
\item $\Fil^1D_K=(\alpha\otimes e_1+1\otimes e_2)K$.
\end{itemize}
\begin{proposition}
Let $\alpha\in\mathcal{M}_6^{\mathrm{a}}$. The filtered $(\phi,\Gal({K}/{\Q_3}))$-module $\Dpcg(6;a,\mu;\alpha)$ satisfies
conditions $(1)-(4)$ and is admissible.
\end{proposition}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=6$ acquiring good reduction over $K$ and $\D=\Dcris[K](V_3(E))$.
There exists $\alpha\in\mathcal{M}_6^{\mathrm{a}}$
and $(a,\mu)\in(\{-3\}\times\{1,2\})\sqcup (\{0\}\times\{-1,1\})\sqcup (\{3\}\times\{-2,-1\})$ such
that $\D$ and $\Dpcg(6;a,\mu;\alpha)$ are isomorphic as filtered $(\phi,\Gal({K}/{\Q_3}))$-modules.
Moreover if $\alpha,\beta\in\mathcal{M}_6^{\mathrm{a}}$
and $(a,\mu),(b,\nu)\in(\{-3\}\times\{1,2\})\sqcup(\{0\}\times\{-1,1\})\sqcup(\{3\}\times\{-2,-1\})$,
then $\Dpcg(6;a,\mu;\alpha)\simeq \Dpcg(6;b,\nu;\beta)$ if and only if $(a,\mu) =(b,\nu)$.
\end{proposition}
\begin{proof}
Similar to the cubic Abelian case using the following injection
\[
I({K}/{\Q_3})=\langle\tau_2\rangle\times\langle\tau_3\rangle\hookrightarrow\Aut_{\Q_3}(D).
\]
\end{proof}
\subsection{The dodecic case ($e=12$)}
If an elliptic curve ${E}/{\Q_3}$ has a semi-stability defect $\dst(E)=12$, then its minimal field of good reduction
has its Galois closure ${K}/{\Q_3}$ satisfying:
\[
\left\{
\begin{array}{ll}
\Gal({K}/{\Q_3})\simeq\Z/3\Z\rtimes D_4\simeq (\Z/3\Z\rtimes \Z/4\Z)\rtimes \Z/2\Z \\
I({K}/{\Q_3})\simeq\Z/3\Z\rtimes\Z/4\Z.
\end{array}
\right.
\]
More precisely:
\[
\left\{
\begin{array}{ll}
\Gal({K}/{\Q_3})=(\langle \tau_3\rangle\rtimes\langle\tau_4\rangle)\rtimes\langle\omega\rangle \\
I({K}/{\Q_3})=G(K/K_0)=\langle \tau_3\rangle\rtimes\langle\tau_4\rangle
\end{array}
\right.
\]
with relations:
\[
\left\{
\begin{array}{ll}
\text{ord}(\tau_4)=4, \text{ord}(\tau_3)=3, \text{ord}(\omega)=2\\
\tau_4\tau_3\tau_4^{-1}=\tau_3^2=\tau_3^{-1}\\
\omega\tau_4\omega=\tau_4^{-1}\\
\tau_3\omega=\omega\tau_3 .
\end{array}
\right.
\]
(see Rem.~3.1 of~\cite{Co}). Looking at~\cite{LMFDB} we see that there are exactly $10$ such fields, namely:
\begin{itemize}
\item $K_1=\Q_3(X^{12} + 3X^4 + 3)$
\item $K_2=\Q_3(X^{12} - 3X^{11} - 3X^9 + 3X^7 - 3X^4 - 3)$
\item $K_3=\Q_3(X^{12}+3)$
\item $K_4=\Q_3(X^{12} + 9X^{10} + 9X^9 - 9X^8 + 6X^6 + 9X^5 - 9X^4 - 3X^3 + 9X^2 - 9X - 12)$
\item $K_5=\Q_3(X^{12} + 9X^{11} + 9X^{10} + 9X^9 + 9X^8 - 9X^7 - 12X^6 - 9X^2 - 3)$
\item $K_6=\Q_3(X^{12} + 3X^{10} - 3X^9 - 3X^7 + 3X^6 + 3X^5 + 3X^4 + 3X^3 - 3)$
\item $K_7=\Q_3(X^{12} - 3X^{11} - 3X^{10} + 3X^9 + 3X^5 - 3X^4 + 3X^3 + 3)$
\item $K_8=\Q_3(X^{12} - 9X^{11} + 9X^9 - 9X^8 + 9X^7 - 12X^6 + 3X^3 + 9X^2 + 9X - 12)$
\item $K_9=\Q_3(X^{12} + 9X^{11} + 9X^{10} - 3X^9 - 9X^8 - 9X^7 + 3X^6 + 9X^5 - 9X^4 + 6X^3 - 9X^2 - 9X + 12)$
\item $K_{10}=\Q_3(X^{12} - 9X^{11} + 6X^9 + 9X^8 + 3X^6 + 9X^5 + 9X^4 + 3X^3 - 9X^2 + 9X + 3)$.
\end{itemize}
Looking at their respective Galois lattices we observe that $K_i^{\mathrm{un}}=K_j^{\mathrm{un}}$
if and only if $i\equiv j\bmod 5$, so that there are effectively $5$ fields of good reduction. Furthermore,
every one of these $5$ fields appears as the field of good reduction of some elliptic curve (see Thm.~17,~(7) of~\cite{Fr-Kr}).
For $i=1,\ldots,10$ we let $L_i$ be the maximal totally ramified sub-extension of $K_i$, so that $K_i=L_i(\zeta_4)$.
Let $\alpha\in\mathcal{M}_{12}^{i,\epsilon}=\{\alpha\in L_i\vert \tau_4(\alpha)=-\alpha\text{ and } \tau_3(\alpha)=(\alpha+{(-1)}^{\epsilon+1}3)/(1+{(-1)}^{\epsilon}\alpha)\}$
for some $i\in\{1,\ldots,5\}$ and $\epsilon\in\{ 0,1\}$. We let $K_0=\Q_3(\zeta_4)$ be the maximal unramified extension
of $\Q_3$ in $K_i$, which is independent of $i$. We denote by $\Dpc(12;0;i;\epsilon;\alpha)$ the
filtered $(\phi,\Gal({K_i}/{\Q_3}))$-module defined by the following data (a symbolic check of the inertia relations is sketched after the list):
\begin{itemize}
\item $D=K_0e_1\oplus K_0e_2$
\item $M_B(\tau_4)=\begin{pmatrix} \zeta_4 & 0 \\ 0 & \zeta_4^{-1}\end{pmatrix}$
\item $M_B(\tau_3)=\begin{pmatrix} -\frac{1}{2} & \frac{{(-1)}^{\epsilon+1}3}{2} \\ \frac{{(-1)}^{\epsilon}}{2} & -\frac{1}{2}\end{pmatrix}$
\item $\phi(e_1)=e_2;\ \phi(e_2)=-3e_1$
\item $\omega(e_1)=e_1;\ \omega(e_2)=e_2$
\item $\Fil^1D_{K_i}=(\alpha\otimes e_1 + 1\otimes e_2)K_i$.
\end{itemize}
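The inertia relations can be verified directly on these matrices; the following SymPy sketch (illustrative only, with $\zeta_4$ realized as the complex unit) checks, for both values of $\epsilon$, that $M_B(\tau_4)$ has order $4$, that $M_B(\tau_3)$ has order $3$, and that $M_B(\tau_4)M_B(\tau_3)M_B(\tau_4)^{-1}=M_B(\tau_3)^{2}$:
\begin{verbatim}
# Illustrative check of the relations in I(K_i/Q_3) for both epsilon.
from sympy import Matrix, I, Rational, eye

h    = Rational(1, 2)
tau4 = Matrix([[I, 0], [0, -I]])
for eps in (0, 1):
    tau3 = Matrix([[-h, (-1)**(eps + 1)*3*h],
                   [(-1)**eps*h, -h]])
    assert tau4**4 == eye(2) and tau3**3 == eye(2)  # orders 4 and 3
    assert tau4*tau3*tau4.inv() == tau3**2  # tau_4 tau_3 tau_4^{-1} = tau_3^2
\end{verbatim}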
\begin{proposition}
Let $i\in\{1,\ldots,5\}$, $\epsilon\in\{0,1\}$ and $\alpha\in\mathcal{M}_{12}^{i,\epsilon}$. The filtered $(\phi,\Gal({K_i}/{\Q_3}))$-module $\Dpc(12;0;i;\epsilon;\alpha)$
satisfies conditions $(1)-(4)$ and is admissible.
\end{proposition}
\begin{proposition}
Let ${E}/{\Q_3}$ be an elliptic curve with $\dst(E)=12$ acquiring good reduction over $K_i$ for some $i\in\{1,\ldots,5\}$
and $\D=\Dcris[K_i](V_3(E))$.
There exist $\epsilon\in\{0,1\}$ and $\alpha\in\mathcal{M}_{12}^{i,\epsilon}$ such that $\D$ and $\Dpc(12;0;i;\epsilon;\alpha)$
are isomorphic as filtered $(\phi,\Gal({K_i}/{\Q_3}))$-modules. Moreover, if $\epsilon,\epsilon^{\prime}\in\{0,1\}$
and $\alpha\in\mathcal{M}_{12}^{i,\epsilon}$, $\beta\in \mathcal{M}_{12}^{i,\epsilon^{\prime}}$,
then $\Dpc(12;0;i;\epsilon;\alpha)\simeq \Dpc(12;0;i;\epsilon^{\prime};\beta)$
if and only if $(\alpha,\epsilon) =(\beta,\epsilon^{\prime})$.
\end{proposition}
\begin{proof}
Let $D$ be the underlying $K_0$-vector space associated to $\D$. As usual, the inertia subgroup of ${K_i}/{\Q_3}$ injects
into $\Aut_{K_0}(D)$ and we identify $\tau_4$ and $\tau_3$ with their respective images. As in the quartic case we show that there is
a $K_0$-basis $B=(e_1,e_2)$ of $D$ such that:
\[
\left\{
\begin{array}{lll}
M_B(\tau_4)=\begin{pmatrix} \zeta_4 & 0 \\ 0 & \zeta_4^{-1}\end{pmatrix}\\
\omega(e_1)=e_1,\ \omega(e_2)=e_2 \\
\phi(e_1)=e_2,\ \phi(e_2)=-3e_1 .
\end{array}
\right.
\]
The relations between $\tau_3$ and $\tau_4,\omega$ and $\phi$ imply that there is some $\epsilon^{\prime}\in\{ 0,1\}$ such that
\[
M_B(\tau_3)=\begin{pmatrix} -\frac{1}{2} & \frac{3{(-1)}^{\epsilon^{\prime}+1}}{2}\\ \frac{{(-1)}^{\epsilon^{\prime}}}{2} & -\frac{1}{2}\end{pmatrix}.
\]
A simple calculation shows that the $K_i$-lines of $D_{K_i}=K_i\otimes_{K_0}D$ stable under the action of $\Gal({K_i}/{\Q_3})$ are of the
form
\[
(\alpha\otimes e_1 + 1\otimes e_2)K_i
\]
with $\alpha\in L_i$ satisfying the desired conditions.
Let $\epsilon,\epsilon^{\prime}\in\{0,1\}$ and $\alpha\in \mathcal{M}_{12}^{i,\epsilon}$, $\beta\in \mathcal{M}_{12}^{i,\epsilon^{\prime}}$.
Looking only at their underlying $(\phi,\Gal({K_i}/{\Q_3}))$-modules, we see that $\Dpc(12;0;i;\epsilon;\alpha)$
and $\Dpc(12;0;i;\epsilon^{\prime};\beta)$ are isomorphic if and only if $\epsilon=\epsilon^{\prime}$.
Now supposing $\epsilon=\epsilon^{\prime}$ and adding the filtration, we check that a morphism
between $\Dpc(12;0;i;\epsilon;\alpha)$ and $\Dpc(12;0;i;\epsilon;\beta)$ must be of the form $\lambda\Id$
with $\lambda\in\Q_3^{\times}$, so that necessarily $\alpha=\beta$.
\end{proof}
\begin{remark}
As in the cubic Abelian case, observe that $\Dpc(12;0;i;0;\alpha)$ and $\Dpc(12;0;i;1;\alpha)$ are unramified twists of
each other as $(\phi,\Gal({K_i}/{\Q_3}))$-modules.
\end{remark}
\section{Elliptic curves with given Tate module}
\subsection{Minimal Galois pairs}
We refer to~\cite{Vo2} for the definition and properties of Galois pairs.
\begin{proposition}\label{minpairs}
Every (unfiltered) $(\phi,\Gal({K}/{\Q_3}))$-module appearing in Table~\ref{tab:Table 1} comes from a minimal Galois pair for ${K}/{\Q_3}$.
\end{proposition}
\begin{proof}
We only treat the wild cases that aren't quadratic twists, i.e.\ the cubic and dodecic ones. Let us denote
$\Et=\Et_0\times_{\Fd_3}\Fd_9$. The minimal Galois pairs are given in the following table:
\renewcommand{\arraystretch}{1.3}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|l|l|}
\hline
Ram.\ ind. & $K$ & Trace & Minimal Galois pair & $(\phi,\Gal({K}/{\Q_3}))$-module\\
\hline
$e=3$ & $\Lng(\zeta_4)$ & $a_3=0$ & $\Et_0: y^2=x^3+x$ & $\Dpcng(3;0)$ \\
& & & $\Gamma=\langle\tau\rangle$, the unique $3$-Sylow of $\Aut_{\Fd_9}(\Et)$ &\\
& & & $\nu:\tau_3\mapsto\tau, \omega\mapsto f_{\sigma}$ &\\
\cline{2-5}
& $\Lg$ & $a_3=-3$ & $\Et_0: y^2=x^3-x+1$ & $\Dpcg(3;-3)$ \\
& & & $\Gamma=\langle\tau\rangle$, the unique $3$-Sylow of $\Aut_{\Fd_3}(\Et_0)$ &\\
& & & $\nu:\tau_3\mapsto\tau$ &\\
\cline{3-5}
& & $a_3=0$ & $\Et_0: y^2=x^3-x$ & $\Dpcg(3;0)$\\
& & & $\Gamma=\langle\tau\rangle$, the unique $3$-Sylow of $\Aut_{\Fd_3}(\Et_0)$ &\\
& & & $\nu:\tau_3\mapsto\tau$ &\\
\cline{3-5}
& & $a_3=3$ & $\Et_0: y^2=x^3-x-1$ & $\Dpcg(3;3)$ \\
& & & $\Gamma=\langle\tau\rangle$, the unique $3$-Sylow of $\Aut_{\Fd_3}(\Et_0)$ &\\
& & & $\nu:\tau_3\mapsto\tau$ &\\
\hline
$e=12$ & $K_i$ & $a_3=0$ & $\Et_0: y^2=x^3+x$ & $\Dpc(12;0;i;1)$ \\
& & & $\Gamma=\Aut_{\Fd_9}(\Et)\simeq I({K}/{\Q_3})$ & \\
& & & $\nu:\omega\mapsto f_{\sigma}$ & \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:Table 2}Minimal Galois pairs for $e=3$ and $e=12$.}
\end{table}
It is not hard to see that $\nu$ is injective and that the field of definition of $\Gamma$
is minimal. Every one of these objects gives rise to a $(\phi,\Gal({K}/{\Q_3}))$-module, which is necessarily in our list by
construction (it has the right Frobenius and Galois action). Notice that, except for the non Abelian cubic case,
there are always two isomorphism classes of $(\phi,\Gal({K}/{\Q_3}))$-modules. We only showed that one of them comes from
a Galois pair, but in fact both do since they are unramified twists of each other.
\end{proof}
\begin{remark}
When $a_3(\Et_0)=\pm3$, a Galois pair for ${K}/{\Q_3}$ with $K=\Lng(\zeta_4)$ is never minimal because $\Aut_{\Fd_9}(\Et)$ is too small
compared to $\Gal({K}/{\Q_3})$. This gives another way to see why those traces are absent from our list in that case.
\end{remark}
\subsection{A complete classification}
To every $3$-adic potentially crystalline representation $V$ of $\Gal({\Qal_3}/{\Q_3})$ corresponds a weakly admissible
filtered $(\phi,\Gal({K}/{\Q_3}))$-module $\Dcris[K](V)$, and this association is functorial in a fully faithful way. In this section,
we show that every object described in Table~\ref{tab:Table 1} comes from an elliptic curve over $\Q_3$ with potential
good reduction. It turns out that we can use the same tools and ingredients as M.~Volkov in her treatment of the
tame case (see~\cite{Vo2}).
\begin{theorem*}
Let $\D$ be one of the filtered $(\phi,\Gal({K}/{\Q_3}))$-modules in Table~\ref{tab:Table 1}. There exists ${E}/{\Q_3}$ with potential
good reduction such that $\D\simeq\Dcris[K](V_3(E))$.
\end{theorem*}
\begin{proof}
We give a sketch of a proof, following the arguments of Thm.~5.7 in~\cite{Vo2}. The $\phi_0$-module $D_0$ comes
from an elliptic curve ${\Et_0}/{\Fd_3}$ with the right Frobenius (via the Dieudonné module of its $p$-divisible group).
Let $\Et=\Et_0\times_{\Fd_3}k$. Since $\D=\Dcris[K](V)$ for some crystalline representation $V$ of $\Gal(\Qal_3/K)$
with Hodge-Tate weights $(0,1)$, there exists a $p$-divisible group $\mathcal{G}/\Oc_K$ lifting $\Et(p)/k$ with Tate module
isomorphic to $V$ (see Thm.~5.3.2 of~\cite{Br}).
The triple $(\mathcal{G},\Et(p),\tilde{\mathcal{G}}\tilde{\longrightarrow}\Et(p))$ determines (using the Serre-Tate theorem)
an elliptic curve $E/K$ with good reduction (i.e.\ an elliptic scheme over $\mathcal{O}_K$) such that $V_3(E)\simeq V$.
Finally, a minimal Galois pair $(\Et_0,\Gamma,\nu)$ for ${K}/{\Q_3}$ (which always exists in the tame case
by Thm.~4.11 of~\cite{Vo2} and in the wild case by Prop.~\ref{minpairs}) furnishes the necessary descent datum
to obtain ${E_0}/{\Q_3}$ such that $E=E_0\times_{\Q_3}K$ and $V\simeq V_3(E_0)$.
\end{proof}
|
{
"arxiv_id": "2302.13712",
"language": "en",
"timestamp": "2023-02-28T02:28:07",
"url": "https://arxiv.org/abs/2302.13712",
"yymm": "2302"
} |
\section{Introduction}
Systems displaying multiple forms of long-range order in their ground state have always fascinated physicists for their potential to exhibit a complex phase diagram. Different from simpler systems, they can host multiple phase transitions and reveal new intermediate phases between the ground state and the high-temperature phase.
Apart from multi-component systems, such as multiband superconductors or bosonic mixtures, single-component systems can also present a similar scenario.
The two-dimensional (2D) Coulomb gas (CG) model is a paradigmatic example.
The 2D CG is an effective model for superconducting (SC) and superfluid vortices which, in two dimensions, are equivalent to logarithmically interacting charges.
In the limit of small vortex fugacity, the model undergoes a Berezinskii-Kosterlitz-Thouless (BKT)~\cite{berezinskyDestructionLongrangeOrder1972,kosterlitzOrderingMetastabilityPhase1973, kosterlitzCriticalPropertiesTwodimensional1974} transition separating a low-temperature phase, where vortices and antivortices are tightly bound in pairs, from a high-temperature phase where free vortices proliferate and lead to a discontinuous vanishing of the condensate phase rigidity.
As the vortex fugacity $g$ increases above a critical value $g_c$, however, the low-temperature phase of the system undergoes a first-order phase transition from a vortex-vacuum superfluid to a vortex-antivortex superfluid crystal, which additionally breaks the discrete $Z_2$ symmetry associated with the two energetically equivalent checkerboard configurations of the lattice.
As a result, in this regime the ground state exhibits two coexisting orders: a quasi-long-range order of the superfluid phase, characterized by a finite superfluid stiffness $J_s$, and a long-range positional order, characterized by a finite Ising order parameter for the staggered vorticity $M_{stag}$.
Establishing how such a vortex supersolid melts into the disordered high-temperature phase has been a topic of great interest.
The phase diagram of the 2D Coulomb gas at large vortex fugacity has been extensively investigated both for discrete lattice models ~\cite{leeNewCriticalBehavior1990, leeDenseTwodimensionalClassical1991, leePhaseTransitionsClassical1992} and in the continuum limit~\cite{lidmarMonteCarloSimulation1997}. In the presence of a discrete underlying grid, it was shown~\cite{leeNewCriticalBehavior1990, leeDenseTwodimensionalClassical1991, leePhaseTransitionsClassical1992} that at large vortex fugacity, the system undergoes two distinct phase transitions with an intermediate non-superfluid phase where the discrete $Z_2$ symmetry is spontaneously broken.
Addressing this problem within a 2D XY model has proven to be much more challenging.
A ground state formed by a $Z_2$ vortex supersolid can be realized, in this model, by applying a uniform transverse magnetic field to the system with half a magnetic flux quantum crossing each plaquette of the spin lattice. The resulting model is the well-known fully frustrated XY (FFXY) model. Over the years this has been the subject of extensive theoretical debate, with a series of conflicting analytical and numerical results about the number of phase transitions and their nature~\cite{teitelTwoDimensionalFullyFrustrated2013}. Olsson~\cite{olssonTwoPhaseTransitions1995} finally demonstrated numerically the presence of two distinct phase transitions lying very close together, with the BKT critical temperature, $T_{BKT}$, slightly smaller than the Ising critical temperature, $T_I$, associated with the vanishing of $M_{stag}$.
The theoretical argument for the observed splitting was afterwards provided by Korshunov~\cite{korshunovKinkPairsUnbinding2002}. The continuous nature of the Ising transition ensures that, when approaching $T_I$ from below, the proliferation of Ising domain walls with a net polarization continuously decreases both $M_{stag}$ and $J_s$. Hence, there are in general two possible scenarios that describe the melting of a ground state with coexisting superfluidity and staggered vortex structures: 1) the system exhibits a preemptive first-order phase transition with $J_s$ and $M_{stag}$ vanishing discontinuously at the same critical temperature; 2) the system undergoes two phase transitions with $T_{BKT}<T_I$. Indeed, as soon as domain-wall excitations reduce $J_s$ below the BKT critical value $J_s(T_{BKT})= 2T_{BKT}/\pi$, vortex-antivortex pairs unbind and $J_s$ drops discontinuously to zero.
The FFXY model exhibits the second scenario, as confirmed also by more recent numerical studies~\cite{hasenbuschTwodimensionalXYModel2005, okumuraSpinchiralityDecouplingCritical2011}.
Yet, although the ground state of the FFXY model shares the same orders and symmetries as that of the 2D CG model at large vortex fugacity $g$, neither the FFXY nor the classical XY model allows for a systematic study of the phase diagram as a function of $g$.
The XY model is, indeed, a single-coupling model where the value of the vortex fugacity cannot be tuned independently but is rather fixed by the value of the spin-exchange coupling $J$.
In the present work, we face this challenge by studying the phase diagram of the modified XY model that we introduced in a previous work~\cite{maccariInterplaySpinWaves2020}, where the vortex fugacity can be tuned independently and in a direct way without changing the relevant interactions at play~\cite{duranVortexLatticeTwodimensional2020}. By employing large-scale Monte Carlo simulations we assess the phase diagram of the model and show that the system undergoes a single first-order phase transition with $T_{BKT}=T_I$ for a finite range of values of the vortex fugacity $g_c<g<g^*$, while for $g>g^*$ the two phase transitions split apart with $T_{BKT}<T_I$. The quantitative numerical characterisation of a BKT transition at large but finite vortex fugacity, which goes beyond the traditional BKT picture with a line of fixed points at zero fugacity, is relevant in numerous physical systems,
including two-dimensional Kondo lattices~\cite{mizukamiExtremelyStrongcouplingSuperconductivity2011, Balatsky_heavyfermion2012}, and recently in the description of the metal-insulator transition in disordered 2D materials \cite{karcher2023}. In thin superconducting films, a finite density of vortex-antivortex pairs can be induced at low temperatures by spatially correlated-disorder~\cite{maccariBroadeningBerezinskiiKosterlitzThoulessTransition2017, ilariamaccariBKTUniversalityClass2018}, while stable configurations of vortex supersolids can be realized via magnetic pinning arrays~\cite{milosevicVortexAntivortexLatticesSuperconducting2004, milosevicVortexantivortexLatticesSuperconducting2005} or superconductor/ferromagnet hybrid structures~\cite{bobbaVortexantivortexCoexistenceNbbased2014}. The formation and melting of a vortex-antivortex lattice in superfluid $^4$He films can be observed by the presence of a transverse mode that can exist only in the crystalline phase, and the vortex fugacity can be tuned by additional $^3$He atoms \cite{zhang1993}. More recent realisations include ultracold fermionic gases \cite{botelho2006} and polariton fluids \cite{hivet2014}.
High vortex fugacities may also emerge in long-range interacting systems. Indeed, generic power-law couplings $1/r^{\alpha}$ may disrupt the BKT in $d=2$ by increasing the vortex fugacity\,\cite{giachetti2021berezinskii,giachetti2022berezinskii}. It is worth noting that $1/r^{2}$ interactions induce BKT scaling also in several $d=1$ models\,\cite{kosterlitz1976phase,cardy1981one}.
\section{The model}
The model studied in this work is a modified version of the original XY model with an extra potential term added to tune the vortex fugacity independently from the ferromagnetic coupling $J$.
The Hamiltonian of the modified XY model, introduced in our previous work~\cite{maccariInterplaySpinWaves2020}, reads:
\begin{equation}
H_{XY}^{{\mu}}= -J\sum_{i, \nu=\hat{x},\hat{y}} \cos(\theta_i- \theta_{i+\nu})- {\mu} \sum_{i} \big( I_{P_i} \big)^2,
\label{H_2}
\end{equation}
with $I_{P_i}$ the spin current circulating around a unit plaquette $P_i$ of area $a^2=1$,
\begin{equation}
I_{P_i}=\sin(\theta_i - \theta_{i+\hat{x}}) + \sin(\theta_{i+\hat{x}}- \theta_{i+\hat{x}+\hat{y}})+ \sin(\theta_{i+\hat{x}+\hat{y}} - \theta_{i+\hat{y}}) + \sin(\theta_{i+\hat{y}} - \theta_{i}).
\label{current}
\end{equation}
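For concreteness, Eqs.~\eqref{H_2} and~\eqref{current} admit a direct vectorized implementation on a periodic $L\times L$ grid. The following Python/NumPy sketch is purely illustrative; the array layout (axis $0$ along $\hat{x}$, axis $1$ along $\hat{y}$) and the function names are our own choices:
\begin{verbatim}
import numpy as np

def plaquette_current(theta):
    # spin current I_P around every unit plaquette (periodic lattice)
    tx  = np.roll(theta, -1, axis=0)  # theta_{i+x}
    ty  = np.roll(theta, -1, axis=1)  # theta_{i+y}
    txy = np.roll(tx, -1, axis=1)     # theta_{i+x+y}
    return (np.sin(theta - tx) + np.sin(tx - txy)
            + np.sin(txy - ty) + np.sin(ty - theta))

def energy(theta, J=1.0, mu=0.0):
    # H = -J sum cos(nearest-neighbour differences) - mu sum I_P^2
    e_xy = -J * (np.cos(theta - np.roll(theta, -1, axis=0)).sum()
                 + np.cos(theta - np.roll(theta, -1, axis=1)).sum())
    return e_xy - mu * (plaquette_current(theta)**2).sum()
\end{verbatim}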
For $\mu=0$, Eq.\eqref{H_2} reduces to the classical XY model, where the value of the vortex fugacity is fixed by the bare spin stiffness $J$. On the other hand, a nonzero $\mu$ allows one to tune $g$ independently, favouring ($\mu>0$) or disfavouring ($\mu<0$) vortex nucleation in the system. Thus, by increasing $\mu>0$, the value of the vortex-core energy $\mu_v\propto -\mu$ decreases and, in turn, the value of the vortex fugacity $g=2\pi e^{-\beta \mu_v}$ increases.
The energy-entropy balance for the proliferation of free vortices suggests that the BKT critical temperature decreases as the value of $\mu$ increases. At the same time, it is also apparent that there exists a critical value $\mu=\mu_c$ at which the ground state of the system undergoes a first-order phase transition from a superfluid with vanishing vortex density $\rho_v(T\to 0)\to 0$ (``vortex vacuum'') to a vortex-antivortex superfluid crystal with $\rho_v(T\to 0)\to 1$~\cite{leeDenseTwodimensionalClassical1991}.
While in our previous work~\cite{maccariInterplaySpinWaves2020} we focused on the regime $\mu<\mu_c$,
here we investigate the phase diagram of the model \eqref{H_2} for $\mu>\mu_c$, whose ground state is a $Z_2$ vortex supersolid. As a function of $\mu$, we will determine the two critical temperatures: $T_{BKT}$, at which a superfluid quasi-condensate forms, and $T_{I}$, at which a charge-ordered state forms, described by a real $Z_2$ order parameter
associated with the two possible staggered magnetizations of the vortex-antivortex lattice.
This systematic investigation will enable us to assess the phase diagram of the system and to establish, for each value of $\mu$, whether the system displays two separate phase transitions, or a single preemptive first-order phase transition where both the superfluid stiffness $J_s$ and the staggered magnetization $M_{stag}$ jump discontinuously to zero at the same critical temperature $T_{BKT}= T_{I}$.
\section{Monte Carlo simulations}
We assess the phase diagram of the model \eqref{H_2} in the regime $\mu>\mu_c$ via large-scale Monte Carlo (MC) simulations. This allows us to properly account for the non-trivial interactions between the different topological phase excitations at play, which include vortices, Ising-like domain walls between the two possible values of $M_{stag}$, and kink-antikink excitations along the domain walls~\cite{olssonKinkantikinkUnbindingTransition2005}.
We studied the model \eqref{H_2} on a discrete square grid of spacing $a=1$ and size $N= L\times L$, for different values of the linear size $L$.
Details of our MC simulations can be found in the Supplementary Materials.
To assess the value of the BKT critical temperature, we computed the superfluid stiffness $J^{\nu}_s$, which measures the response of the system to a phase twist $\Delta_{\nu}$ along a given direction $\nu$. This can be thought of in terms of twisted boundary conditions, $\theta_{i + L\hat{\nu} }= \theta_i + \Delta_{\nu}$, reabsorbed via a gauge transformation in a new set of variables $\theta_i^{'} = \theta_i - r_{i, \nu} \Delta_{\nu}/L$ with periodic boundary conditions, at the price of a uniform gauge field $A_{\nu}=\Delta_{\nu}/L$. For a superconducting film, this corresponds to the response to a transverse gauge field $\bf{A}$ and signals the onset of perfect diamagnetism, i.e., the well-known Meissner effect.
$J_s$ is defined as:
\begin{equation}
J_{s}^{\nu} \equiv \frac{1}{L^2}\frac{\partial^2 F(A_{\nu})}{\partial A_{\nu}^2} \Big|_{A_{\nu}=0}
\label{Js_def}
\end{equation}
and has two contributions
\begin{align}
\label{eq:Jsx}
J_s^{\nu} = J_d^{\nu} - J_p^{\nu},
\end{align}
the diamagnetic ($J_d^{\nu}$) and the paramagnetic ($J_p^{\nu}$) response functions
\begin{align}
\label{Jd}
J_d^{\nu} &= \frac{1}{L^2} \Big[ \Bigl\langle \frac{\partial^2 H}{\partial A_{\nu}^2}\Big|_{0}\Bigr\rangle \Big], \\
\label{jp}
J_{p}^{\nu} &= \frac{\beta}{L^2} \Big[ \Bigl\langle \Bigl(\frac{\partial H}{\partial A_{\nu}}\Big|_{0} \Bigr)^2\Bigr\rangle -\Bigl\langle \frac{\partial H}{\partial A_{\nu}}\Big|_{0} \Bigr\rangle ^2 \Big],
\end{align}
where $\langle \dots \rangle$ stands for the thermal average over the MC steps.
The explicit expressions of $J_d^{\nu}$ and $J_p^{\nu}$ are reported in the Appendix of~\cite{maccariInterplaySpinWaves2020}.
In this work, we have computed the superfluid response along $\nu \equiv \hat{x}$ and in what follows we will simply refer to $J_s \equiv J_s^{\hat{x}}$.
When increasing the temperature below $T_{BKT}$, the superfluid stiffness continuously decreases mainly due to the presence of non-topological phase excitations, such as spin waves and domain walls with a net polarization~\cite{korshunovKinkPairsUnbinding2002}. As soon as $T_{BKT}$ is reached, the proliferation of free vortices becomes entropically favoured and $J_s$ discontinuously jumps to zero.
According to the Nelson-Kosterlitz criterion~\cite{nelsonUniversalJumpSuperfluid1977}, at the critical point $J_s$ and $T_{BKT}$ are linked via the universal relation: $J_s(T_{BKT})=2 T_{BKT}/\pi$, which ultimately allows for the determination of the critical temperature.
In this work, we assess the value of $T_{BKT}$ by the BKT finite-size scaling of the superfluid stiffness~\cite{weberMonteCarloDetermination1988}:
\begin{equation}
J_s(\infty, T_{BKT})=\frac{J_s(L, T_{BKT})}{1 + (2\log(L/L_0))^{-1}},
\label{scaling_BKT}
\end{equation}
where $L_0$ is chosen to give the best crossing point at finite temperature (see also Supplementary Materials S2). The BKT finite-size scaling of $J_s$ for $\mu=0.3 > \mu_c$ is reported in Fig.~\ref{mu0.3}(a), where we found $L_0=10.5$.
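In practice, the scaling analysis amounts to rescaling the finite-size curves $J_s(L,T)$ according to Eq.~\eqref{scaling_BKT} and tuning $L_0$ so that all curves cross the $2T/\pi$ line at the same temperature. A schematic implementation is sketched below; it assumes measured stiffness curves on a common temperature grid with a single sign change of $J_s-2T/\pi$, and the helper names are ours. Minimizing the returned spread over $L_0$ with any one-dimensional optimizer yields the optimal $L_0$ and the common crossing temperature, i.e.\ the estimate of $T_{BKT}$:
\begin{verbatim}
import numpy as np

def rescaled_stiffness(js, L, L0):
    # finite-size rescaling J_s(L) / (1 + 1/(2 ln(L/L0)))
    return np.asarray(js) / (1.0 + 1.0 / (2.0 * np.log(L / L0)))

def crossing_spread(L0, temps, js_of_L):
    # spread of the temperatures where each rescaled J_s(L, T) curve
    # crosses the 2T/pi line; the optimal L0 minimizes this spread
    t_cross = []
    for L, js in js_of_L.items():
        f = rescaled_stiffness(js, L, L0) - 2.0 * np.asarray(temps) / np.pi
        k = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0][0]
        # linear interpolation of the crossing temperature
        t_cross.append(temps[k] - f[k]*(temps[k+1] - temps[k])/(f[k+1] - f[k]))
    return np.ptp(t_cross)
\end{verbatim}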
On the other hand, in order to assess the $Z_2$ Ising critical temperature $T_I$ associated with the melting of the vortex-antivortex crystal, we define a vortex order parameter, the \emph{staggered magnetization}:
\begin{equation}
M_{stag} \equiv \frac{1}{L^2}\sum_{i} (-1)^{x_i + y_i} q_i,
\label{mstagg}
\end{equation}
where $i$ labels the unitary plaquette of the spin lattice located at $(x_i, y_i)$. The vortex charge $q_i$ takes the values $q_i=+1,-1,0$, respectively, if a vortex, an antivortex, or no vortex is located at the centre of the $i$-th plaquette. A vortex-antivortex crystal is characterised by $\langle M_{stag} \rangle = \pm 1$, according to the two possible equivalent configurations of the vortex-antivortex checkerboard.
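Numerically, the vortex charges entering Eq.~\eqref{mstagg} can be extracted from a spin configuration by summing the four bond phase differences around each plaquette, each wrapped to $(-\pi,\pi]$, so that the sum equals $2\pi q_i$. A NumPy sketch (our own implementation choice, not unique):
\begin{verbatim}
import numpy as np

def vortex_charges(theta):
    # winding number q_i of each plaquette: the four wrapped bond
    # differences sum to 2*pi*q_i with q_i in {-1, 0, +1}
    wrap = lambda d: np.angle(np.exp(1j * d))  # wrap to (-pi, pi]
    tx  = np.roll(theta, -1, axis=0)
    ty  = np.roll(theta, -1, axis=1)
    txy = np.roll(tx, -1, axis=1)
    w = wrap(tx - theta) + wrap(txy - tx) + wrap(ty - txy) + wrap(theta - ty)
    return np.rint(w / (2.0 * np.pi)).astype(int)

def m_stag(theta):
    # staggered magnetization, normalized per plaquette
    q = vortex_charges(theta)
    sx, sy = np.indices(q.shape)
    return ((-1.0)**(sx + sy) * q).sum() / q.size
\end{verbatim}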
To determine the value of $T_I$, we analyse the finite-size scaling of the Binder cumulant $U_{stag}$ associated with the staggered magnetization:
\begin{equation}
U_{stag}=\frac{\langle M_{stag}^4\rangle}{3 \langle M_{stag}^2\rangle ^2}.
\label{ustagg}
\end{equation}
In the high-temperature limit the Binder cumulant approaches $U_{stag}(T \gg T_I) \to 1$ and in the low-temperature limit $U_{stag}(T \ll T_I) \to 1/3$, while at the critical point it is expected to assume a universal value independent of the system size~\cite{binderCriticalPropertiesMonte1981}.
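From the MC time series of $M_{stag}$, the cumulant of Eq.~\eqref{ustagg} is a one-line estimate (sketch; error bars would be obtained by the bootstrap resampling used elsewhere in this work):
\begin{verbatim}
import numpy as np

def binder_cumulant(m):
    # U_stag = <M^4> / (3 <M^2>^2) from MC samples of M_stag; it tends
    # to 1 in the disordered phase and to 1/3 deep in the ordered one
    m = np.asarray(m)
    return np.mean(m**4) / (3.0 * np.mean(m**2)**2)
\end{verbatim}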
The finite-size scaling of the Binder cumulant is reported in Fig.\ref{mu0.3}(b) for $\mu=0.3$.
At this value of the vortex chemical potential $\mu=0.3$, we found two distinct and yet very close critical temperatures with $T_{BKT}=2.0040 \pm 0.0003 $ slightly smaller than $T_I=2.01595\pm 0.00004$.
As a further numerical confirmation of the splitting between the two phase transitions, we follow the scheme proposed by Olsson~\cite{olssonTwoPhaseTransitions1995}: one extracts a set of temperatures $T_{L}$ for different system sizes $L$, defined as the temperatures where the superfluid stiffness crosses the $2T/\pi$ BKT critical line, i.e., $J_s(T_L, L)=2T_L/\pi$.
By increasing the size $L$, $T_L$ decreases and approaches the thermodynamic limit $T_{L\to \infty}\to T_{BKT}$ from above. If the two phase transitions are separated with $T_{BKT}<T_I$, the value of the staggered magnetization $\langle M_{stag}(T_L, L)\rangle$ at $T_L$ should increase with increasing system size $L$ and eventually reach a nonzero value in the thermodynamic limit. This is precisely what we observe in this case, as reported in Fig.~\ref{mu0.3}(c). At the temperatures $T_L$, indicated by a dashed vertical line, the value of $\langle M_{stag}(T_L, L)\rangle$ increases, confirming that $T_{BKT}<T_I$.
To establish the phase diagram of the model \eqref{H_2}, we repeated the same analysis for different values of $\mu$.
\begin{figure}[t!]
\centering
\includegraphics[width= 1.1\linewidth]{{Fig_mu0.3}.png}
\caption{Monte Carlo results for the case $\mu=0.3$. (a) Determination of $T_{BKT}$ from finite-size scaling of the superfluid stiffness $J_s$ renormalized according to the BKT scaling Eq.~\eqref{scaling_BKT} for $L=64, 96, 128, 192, 256$ (from top to bottom). The best crossing point is obtained with $L_0=10.5$. As expected, it lies on the $2T/\pi$ critical line (continuous black line). The dotted line indicates the extracted BKT critical temperature. (b) Determination of the Ising critical temperature $T_I$ from finite-size scaling of the Binder cumulant $U_{stag}$ defined in Eq.\eqref{ustagg}. The crossing point locates $T_I$, indicated here with a dashed-dotted line. (c) Olsson's plot~\cite{olssonTwoPhaseTransitions1995} for different values of the system size $L$. At the BKT critical point, while the superfluid stiffness jumps from $J_s(T_{BKT}^-)=2T_{BKT}^-/\pi$ to $J_s(T_{BKT}^+)=0$, the staggered magnetization is observed to increase with $L$ and reaches a finite value in the thermodynamic limit. This is an additional confirmation that $T_{BKT}<T_I$ in this case.
The error bars are computed via a standard bootstrapping resampling method. Where not visible, the error bars are smaller than the point symbols.}
\label{mu0.3}
\end{figure}
When approaching the critical value $\mu_c$ below which the ground state is a vortex-vacuum superfluid, we find that the separation between the two phase transitions shrinks until they eventually merge into a single first-order phase transition at $\mu=\mu^*>\mu_c$. In particular, while down to $\mu=0.2$ (see Figs.~S2-S5 of the Supplementary Materials) we still find evidence of a splitting between the two transitions, at $\mu=0.175$ our numerical simulations suggest that the system undergoes a single first-order phase transition.
The numerical evidence for a single first-order transition is threefold. The first indications in this sense are the failure of the BKT scaling Eq.~\eqref{scaling_BKT} for the superfluid stiffness (see Fig.~S6(a)) and the pronounced peaks in the Binder cumulant in the proximity of the critical point (see Fig.~S6(b))~\cite{vollmayrFiniteSizeEffects1993}.
Second, an unambiguous demonstration of a first-order phase transition at $\mu=0.175$ is provided by the presence of two peaks in the energy-density distribution $P(E/N)$ at the critical point. As reported in Fig.~\ref{PE}, at $\mu=0.175$ the minimum value $P(E_{min}/N)$ of the distribution between the two peaks vanishes with increasing system size $L$ (see Fig.~\ref{PE}(a)), very differently from the case $\mu=0.2$, where $P(E_{min}/N)$ increases with $L$ (see Fig.~\ref{PE}(b)).
\begin{figure}[h!]
\centering
\includegraphics[width= 0.9\linewidth]{{PE_mu0.175_0.2}.pdf}
\caption{Evidence for a first-order transition: the energy-density distribution $P(E/N)$ is shown for different system sizes $L$ ($N=L^2$) at the temperature corresponding to the specific-heat peak. While for (a) $\mu=0.175$ there are two peaks indicating a first-order transition, for (b) $\mu=0.2$ a single peak emerges in the thermodynamic limit, consistent with a continuous second-order transition.
}
\label{PE}
\end{figure}
Third, for a more quantitative analysis of the order of the transition, we looked at the finite-size scaling of the maximum value $C_v^{max}$ of the specific heat at the critical temperature. The specific heat $C_v$ is defined as:
\begin{equation}
C_v= \frac{1}{T^2 L^2} \left( \langle E^2 \rangle - \langle E \rangle^2 \right),
\end{equation}
where $E$ is the total energy of the system.
For a second-order phase transition, $C_v^{max}$ scales as $C_v^{max} \propto L^{2/\nu -d}$, where $d=2$ is the spatial dimension of the system and $\nu$ is the correlation-length exponent; for the 2D Ising universality class $\nu=1$, so that $y=0$ is expected up to logarithmic corrections.
Conversely, when the transition is first order, the specific-heat peak of the two-dimensional Ising model scales with the volume of the system~\cite{vollmayrFiniteSizeEffects1993}, i.e., $C_v^{max} \propto L^{d}$. For $\mu=0.175, 0.2, 0.3$, we have extracted the value of $C_v^{max}$ at different system sizes $L$ and derived the exponent $y$ in $C_v^{max} \propto L^{y}$ via a linear fit of the data in a log-log plot (see Fig.~\ref{cvmax}).
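Operationally, this analysis reduces to two small steps: estimating $C_v$ from the energy fluctuations and fitting the peak heights in log-log variables. A minimal sketch (function names ours; \texttt{cv\_max} stands for the measured peak heights):
\begin{verbatim}
import numpy as np

def specific_heat(E, T, L):
    # C_v = (<E^2> - <E>^2) / (T^2 L^2) from samples of the total energy
    E = np.asarray(E)
    return (np.mean(E**2) - np.mean(E)**2) / (T**2 * L**2)

def peak_exponent(Ls, cv_max):
    # least-squares fit of C_v^max ~ L^y on a log-log scale: y ~ 0 is
    # consistent with 2D Ising scaling, y ~ d = 2 with a first-order one
    y, _ = np.polyfit(np.log(Ls), np.log(cv_max), 1)
    return y
\end{verbatim}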
For $\mu=0.3$, this analysis yields $y=0.2\pm0.01$ (see Fig.~\ref{cvmax}(a)), in good agreement with the value $y=0$
expected in two spatial dimensions for an Ising-like second-order phase transition. For smaller $\mu$, instead, we observe a more divergent behaviour, with $y=1.21 \pm 0.02$ at $\mu=0.2$ (see Fig.~\ref{cvmax}(b)) and, ultimately, $y=1.93\pm0.02$ at $\mu=0.175$, which is consistent with a first-order phase transition (see Fig.~\ref{cvmax}(c)).
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Cvmax_exp.pdf}
\caption{Finite-size scaling analysis of the specific-heat peak $C_v^{max}$ at (a) $\mu=0.3$; (b) $\mu=0.2$; (c) $\mu=0.175$. The points in the three panels correspond to the linear sizes $L=96, 128, 192, 256$. }
\label{cvmax}
\end{figure}
Taken together, these findings consistently indicate the presence of a critical value $0.175\leq \mu^*<0.2$ at which the two phase transitions merge into a single first-order transition. At the same time, they also suggest the presence of a tricritical point $0.175\leq \mu_{\rm tric}<0.2$ at which the $Z_2$ second-order Ising transition becomes first order. Our data seem to indicate that for the modified XY model $\mu_{\rm tric}\equiv \mu^*$. At present, however, we cannot rule out the possibility that, although they are very close, $\mu_{\rm tric}>\mu^*$.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{phase_diag.pdf}
\caption{(a) $\mu-T$ phase diagram of the model \eqref{H_2}. The light-blue and green areas indicate the two possible low-temperature states of the system. For $\mu \leq 0.14$ this is a vortex-vacuum superconducting state (green area), while for $\mu \geq 0.145$ it turns into a vortex-antivortex superconducting crystal (light-blue area).
The BKT critical points (green triangles) in the region $\mu<\mu_c$ of the phase diagram are those derived in our previous work~\cite{maccariInterplaySpinWaves2020} and separate the vortex-vacuum SC state from the disordered high-temperature state. In the region $\mu_c<\mu<\mu^*$ the system undergoes a single first-order transition (red dots) from a vortex-antivortex SC crystal to a disordered non-SC state. Finally, for $\mu>\mu^*$ the two phase transitions separate and an intermediate non-SC state with a finite $Z_2$ crystalline order appears. To highlight the splitting of the two critical temperatures in this region of the phase diagram, in panel (b) we report the value of $T_I-T_{BKT}$ as a function of the chemical potential $\mu$. A linear fit of the data (dashed grey line) gives an estimate $\mu^*=0.192$ of the value at which the two phase transitions merge.
The error bars are computed via a standard bootstrapping resampling method. Where not visible, the error bars are smaller than the point symbols.}
\label{phasediag}
\end{figure}
The complete phase diagram of the model \eqref{H_2} is shown in Fig.~\ref{phasediag}(a). For $\mu<\mu_c$ the BKT critical temperatures are those derived in our previous work~\cite{maccariInterplaySpinWaves2020}.
In the regime $\mu_c< \mu \leq 0.175$, the critical temperatures of the first-order phase transition have been computed by a finite-size scaling analysis of the temperatures corresponding to the specific-heat peak $C_v^{max}(T_c, L)$ (see Fig.~S7).
According to Fig.~\ref{phasediag}(a), for $\mu<\mu_c$ the system exhibits a single BKT phase transition from a quasi-long-range ordered superconducting state to a disordered one. By increasing the value of $\mu$ at low temperatures, the vortex fugacity increases until, at $0.14<\mu_c < 0.145$, the system undergoes a first-order phase transition~\cite{leeDenseTwodimensionalClassical1991} from a vortex-vacuum superconductor to a vortex supersolid which additionally breaks the $Z_2$ discrete symmetry associated with the two possible vortex-antivortex crystal configurations.
By increasing the chemical potential above the critical value $\mu_c$, we find that up to a value of $\mu^*>\mu_c$ there exists a single first-order transition line separating the vortex-antivortex SC crystal from the high-temperature disordered state. For $\mu>\mu^*$, instead, the two phase transitions split apart with $T_{BKT} <T_I$. In this regime, a new intermediate phase appears where the system is a non-superconducting vortex-antivortex crystal spontaneously breaking the $Z_2$ symmetry associated with the charge ordering.
Differently from its 2D Coulomb gas counterpart~\cite{leeNewCriticalBehavior1990}, however, the region of the phase diagram hosting this new phase is quite small and the two transitions remain close for all values of $\mu$ studied. Nonetheless, the splitting between the two transitions, $\Delta T_c= T_I - T_{BKT}$, increases almost linearly with $\mu$ (see Fig.~\ref{phasediag}(b)). Via a linear fit of $\Delta T_c$ vs $\mu$, we also extracted an estimate of the value $\mu^*$ at which the two transitions merge. The obtained value $\mu^*=0.192$ is consistent with the analysis reported above.
\section{Conclusions}
In this study, we conducted a comprehensive numerical investigation of the modified XY model by introducing a plaquette term to control the fugacity of vortices. Our findings reveal that as the vortex fugacity increases, the low-temperature superfluid BKT state turns into a vortex supersolid
with finite superconducting density and charge ordering.
At low temperatures, this state emerges from the superconducting vacuum via a first-order phase transition. However, as the temperature increases, a complex phase diagram emerges. At temperatures $T\lesssim 1$ and chemical potential $\mu\leq0.14$, a BKT transition line branches out of the first-order line, and vortex unbinding destroys the superconducting order. The transition line separating this new disordered state from the superconducting crystal remains first order up to $\mu^{*}\approx \mu_{\rm tric}$, while for larger $\mu$ an increasing temperature leads to the vanishing of superfluid order via the BKT mechanism, followed by the melting of the normal vortex-antivortex crystal into the disordered state via an Ising-like second-order line, as shown in Fig.\,\ref{phasediag}.
Our results are consistent with the analysis conducted in Ref.\,\cite{leeDenseTwodimensionalClassical1991} for the two-dimensional Coulomb gas, but two important differences stand out:
\begin{enumerate}
\item First, the area between the two transition lines separating the superconducting crystal from the normal crystal and the disordered state is extremely small and grows only linearly with increasing chemical potential.
\item Second, the branching point of the second BKT line coincides within our numerical precision with the tricritical point $\mu_{\rm tric}$, where the first-order line meets the second-order Ising transition.
\end{enumerate}
These differences may be attributed to the intrinsic differences between the two Hamiltonians, particularly to the fact that the topological excitations, i.e., the vortices, are coupled to the low-energy spin waves in the XY model, while this interaction is neglected in the Coulomb gas representation of the problem. Additionally, while our study focuses primarily on the superfluid stiffness $J_s$, Ref.\,\cite{leeDenseTwodimensionalClassical1991} characterizes the superconductor by the inverse dielectric constant. These two quantities are closely related in the traditional XY model with $\mu=0$, but the same relation does not hold in this study, where the plaquette term in the Hamiltonian \eqref{H_2} gives an explicit contribution to the superfluid stiffness.
In conclusion, resolving the nature of the unconventional tricritical point, where the first- and second-order lines meet with the infinite-order BKT line, requires the derivation of an improved BKT flow equation that can capture the mechanism of defect unbinding at finite fugacity. Such a theoretical framework should be able to capture both BKT scaling and the second-order transition line within the same formalism, and its development represents the most significant future direction of this work.
\section*{Acknowledgements}
The simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Center at Link\"oping, Sweden. I.M. acknowledges the Carl Trygger foundation through grant number CTS 20:75. This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project-ID 273811115 (SFB1225 ISOQUANT) and under Germany’s Excellence Strategy EXC2181/1-390900948 (the Heidelberg STRUCTURES Excellence Cluster).
\section*{References}
\bibliographystyle{iopart-num}
\section{EFFECT OF END-FUNCTIONAL GROUPS ON OSMOTIC PRESSURE}
We demonstrate that the difference in the end-functional groups is negligible for the osmotic pressure $\Pi$.
Figure~\ref{fig:end}(a) shows $\Pi=\Pi(c)$ for four-arm star polymer (PEG) solutions with maleimide (MA, green-filled diamonds) and hydroxy (OH, green open diamonds) end-functional groups with $M=40$ kg$/$mol.
The variation in $\Pi$ is within experimental accuracy.
\\
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Sup1.pdf}
\caption{
(a) Polymer mass concentration ($c$) dependence of the osmotic pressure ($\Pi$) in the four-arm star PEG solutions with different end-functional groups, MA (green-filled diamonds) and OH (green-open diamonds), with $M=40$ kg$/$mol.
The data of green-filled diamonds are the same as those in Figs.~1 and 2 in the main text.
(b) Time course of the weight-swelling ratio $W/W_{0}$ from the as-prepared state ($W_0$) to the equilibrium state ($W$) of PEG solutions in membrane osmometry.
For the linear PEG, we set $c_\mathrm{ext}=60$ g$/$L for $c_{0}=40$ g$/$L with $M=20$ kg$/$mol (gray triangles).
For the three-arm star PEG, we set $c_\mathrm{ext}=55$ g$/$L for $c_{0}=30$ g$/$L with $M=20$ kg$/$mol (yellow pentagons).
For the four-arm star PEG, we set $c_\mathrm{ext}=90$ g$/$L for $c_{0}=40$ g$/$L with $M=10$ kg$/$mol (green squares) and $c_\mathrm{ext}=80$ g$/$L for $c_{0}=60$ g$/$L with $M=40$ kg$/$mol (green diamonds).
For the eight-arm star PEG, we set $c_\mathrm{ext}=140$ g$/$L for $c_{0}=130$ g$/$L with $M=40$ kg$/$mol (red circles).
}
\label{fig:end}
\end{figure}
\section{VERIFICATION OF REACHING EQUILIBRIUM}
To ensure that each solution sample reached equilibrium in membrane osmometry, we show the time course of the weight-swelling ratio $W/W_{0}$ for the linear, three-arm, four-arm, and eight-arm star polymer (PEG) solutions in Fig.~\ref{fig:end}(b).
Approximately one week was required to achieve an equilibrium state (horizontal black lines).
Thus, we determined the equilibrium state as the point at which $W/W_{0}$ remained constant for two to three days.
\\
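A minimal sketch of this plateau criterion is given below, with a synthetic swelling curve standing in for the measured time courses of Fig.~\ref{fig:end}(b):
\begin{verbatim}
import numpy as np

# Synthetic swelling curve; equilibrium is declared once W/W0 stays
# within a small tolerance over a window of about two days.
t = np.arange(0, 14, 0.5)                 # time in days
w = 1 + 0.2 * (1 - np.exp(-t / 2.0))      # synthetic W/W0
tol, window = 0.002, 5                    # 5 samples ~ 2 days

for i in range(len(w) - window):
    if np.ptp(w[i:i + window]) < tol:
        print(f"equilibrium reached at day {t[i]:.1f}")
        break
\end{verbatim}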
\begin{table}[t!]
\caption{Data of the osmotic pressure ($\Pi$) for linear, three-arm, four-arm, and eight-arm star poly(ethylene glycol) in aqueous solutions at $298$~K measured in this study.}
\begin{ruledtabular}
\begin{tabular}{D{.}{.}{-1}D{.}{.}{3}D{.}{.}{-1}D{.}{.}{3}D{.}{.}{-1}D{.}{.}{3}}
\multicolumn{1}{c}{$c$} &
\multicolumn{1}{c}{$\Pi$} &
\multicolumn{1}{c}{$c$} &
\multicolumn{1}{c}{$\Pi$} &
\multicolumn{1}{c}{$c$} &
\multicolumn{1}{c}{$\Pi$}\\
\multicolumn{1}{c}{(kg$/$m$^{3}$)} &
\multicolumn{1}{c}{(kPa)} &
\multicolumn{1}{c}{(kg$/$m$^{3}$)} &
\multicolumn{1}{c}{(kPa)} &
\multicolumn{1}{c}{(kg$/$m$^{3}$)} &
\multicolumn{1}{c}{(kPa)}\\
\hline
\multicolumn{6}{c}{\textbf{Linear with $M=20$ kg$/$mol}}\\
16.4 & 3.45 & 45.3 & 27.18 & 73.3 & 66.70\\
18.4 & 4.51 & 54.0 & 34.98 & 82.8 & 80.38\\
20.0 & 5.76 & 55.5 & 34.98 & 83.3 & 54.64\\
22.6 & 7.20 & 57.8 & 44.10 & 83.6 & 80.38\\
30.7 & 10.71 & 58.2 & 44.10 & 95.9 & 95.74\\
37.9 & 15.15 & 69.7 & 66.70 & 97.5 & 112.99\\
39.2 & 15.15 & 70.3 & 54.64 & 99.9 & 95.74\\
43.6 & 20.60 & 72.1 & 54.64 & 109.2 & 112.99\\
\hline
\multicolumn{6}{c}{\textbf{Three-arm with $M=20$ kg$/$mol}}\\
19.7 & 5.76 & 45.3 & 20.60 & 76.7 & 54.64\\
24.3 & 7.20 & 45.8 & 20.60 & 86.2 & 66.70\\
27.0 & 8.85 & 56.2 & 27.18 & 90.1 & 80.38\\
33.4 & 12.81 & 69.2 & 34.98 & 96.5 & 95.77\\
40.2 & 15.15 & 73.7 & 44.10 & & \\
\hline
\multicolumn{6}{c}{\textbf{Four-arm with $M=10$ kg$/$mol (Ref.~\cite{yasuda2020universal})}}\\
19.7 & 7.20 & 43.5 & 23.75 & 83.5 & 66.70\\
25.5 & 8.85 & 50.2 & 27.18 & 88.6 & 66.70\\
25.8 & 10.71 & 50.8 & 27.18 & 96.8 & 80.38\\
26.9 & 10.71 & 56.6 & 34.98 & 98.3 & 80.38\\
28.5 & 12.81 & 58.5 & 34.98 & 102.7 & 95.77\\
31.4 & 15.15 & 63.8 & 44.10 & 107.4 & 95.77\\
36.0 & 15.15 & 64.3 & 44.10 & 108.6 & 95.77\\
37.4 & 17.74 & 75.0 & 54.64 & 121.4 & 112.99\\
40.1 & 20.60 & 76.0 & 54.64 & & \\
\hline
\multicolumn{6}{c}{\textbf{Four-arm with $M=40$ kg$/$mol (Ref.~\cite{yasuda2020universal})}}\\
15.9 & 2.55 & 34.6 & 10.71 & 60.1 & 27.18\\
18.4 & 3.45 & 37.9 & 10.71 & 74.5 & 39.37\\
22.3 & 4.51 & 45.0 & 15.15 & 81.0 & 49.18\\
29.3 & 7.20 & 45.5 & 15.15 & 90.4 & 60.70\\
31.1 & 7.20 & 50.1 & 20.60 & 99.1 & 80.38\\
34.3 & 10.71 & 50.7 & 20.60 & & \\
\hline
\multicolumn{6}{c}{\textbf{Eight-arm with $M=40$ kg$/$mol}}\\
18.0 & 2.55 & 49.4 & 10.71 & 109.2 & 66.70\\
20.2 & 3.45 & 52.1 & 15.15 & 109.9 & 66.70\\
22.8 & 4.51 & 55.8 & 17.74 & 111.2 & 54.64\\
29.4 & 4.51 & 72.4 & 20.60 & 117.0 & 80.38\\
30.1 & 4.51 & 77.1 & 27.18 & 121.1 & 80.38\\
33.3 & 7.20 & 79.5 & 27.18 & 122.3 & 66.70\\
35.2 & 7.20 & 81.0 & 27.18 & 123.4 & 95.77\\
35.8 & 7.20 & 82.8 & 34.98 & 123.6 & 95.77\\
36.9 & 4.51 & 97.1 & 34.98 & 125.7 & 80.38\\
42.8 & 10.71 & 100.9 & 44.99 & 137.2 & 113.00\\
43.3 & 7.20 & 103.7 & 54.64 & 157.0 & 113.00\\
46.2 & 12.81 & 105.7 & 44.99 & & \\
46.2 & 12.81 & 106.3 & 66.70 & & \\
\end{tabular}
\end{ruledtabular}
\label{table:Pos}
\end{table}
\section{DATA OF OSMOTIC PRESSURE OF LINEAR AND STAR PEG IN AQUEOUS SOLUTIONS}
All experimental data of the osmotic pressure $\Pi$ at various polymer mass concentrations $c$ measured via membrane osmometry in this study are listed in Table~\ref{table:Pos}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Sup2.pdf}
\caption{(a) Evaluation of the dimensionless virial ratio $\gamma$.
Each solid line is obtained from the one-parameter least-squares fit of $\gamma$ with Eq.~(\ref{eq:virial}) to the data for each sample, using $c^*$ estimated in Fig.~2(d) in the main text.
According to Eq.~(\ref{eq:virial}), the slope of each best-fit line gives $\gamma$ for each arm number $f$.
(b) Dimensionless virial ratio $\gamma$ for various $f$.
The error bar indicates the standard error of the least-squares fit.
The dashed line represents $\gamma\approx0.25$ for the linear polymer solutions~\cite{flory1953principles}.
}
\label{fig:third}
\end{figure}
\section{THIRD VIRIAL COEFFICIENT OF STAR POLYMER SOLUTIONS}
We evaluated the third virial coefficient in linear and star polymer (PEG) solutions based on the concentration ($c$) dependence of osmotic pressure $\Pi$.
In the universal EOS given by Eq.~(1) in the main text, the third virial coefficient $A_{3}$ corresponds to the dimensionless virial ratio $\gamma\equiv A_{3}/(A_{2}^{2}M)$ defined in Eq.~(2) in the main text.
From the virial expansion up to the third order [Eq.~(2) in the main text], we have
\begin{equation}
\left( \frac{\Pi}{c} - \frac{RT}{M} \right) \frac{1}{c}=
\frac{RT}{Mc^{*}}
\left(1 + \frac{c\gamma}{c^{*}}
\right).
\label{eq:virial}
\end{equation}
Using $M$ for each sample and $c^{*}$ evaluated in Fig.~2(d) in the main text at $298$~K, we show the one-parameter least-squares fits to the $c$ dependence of $(\Pi/c - RT/M)/c$ in Fig.~\ref{fig:third}(a).
The slopes of the best-fit lines in Fig.~\ref{fig:third}(a) give $\gamma$ for each sample.
\\
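For illustration, a minimal sketch of this one-parameter fit is given below; the data points are placeholders generated from Eq.~(\ref{eq:virial}) itself rather than the measured values of Table~\ref{table:Pos}:
\begin{verbatim}
import numpy as np

# Placeholder inputs: R, T, M, and the overlap concentration c*.
R, T, M, c_star = 8.314, 298.0, 20.0, 50.0  # J/(mol K), K, kg/mol, kg/m^3
c  = np.array([20.0, 40.0, 60.0, 80.0, 100.0])           # kg/m^3
Pi = np.array([3.57e3, 9.71e3, 19.0e3, 32.1e3, 49.6e3])  # Pa

y = (Pi / c - R * T / M) / c
# Model: y = (R T / (M c*)) (1 + gamma c / c*), linear in gamma
A = (R * T / (M * c_star)) * (c / c_star)
gamma = np.sum(A * (y - R * T / (M * c_star))) / np.sum(A * A)
print(f"gamma = {gamma:.2f}")   # -> about 0.25 for these inputs
\end{verbatim}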
For the linear, three-arm, four-arm, and eight-arm star polymer solutions, we have $\gamma=0.25(4)$, $\gamma=0.27(6)$, $\gamma=0.20(8)$ (with $M=10$ kg$/$mol) and $\gamma=0.26(3)$ (with $M=40$ kg$/$mol), and $\gamma=0.25(8)$, respectively.
Numbers in parentheses represent standard errors.
For the linear polymer solutions, the obtained $\gamma$ is consistent with $\gamma\approx0.25$~\cite{flory1953principles}.
Although the error bounds for the three-, four-, and eight-arm star polymer solutions are not small, the obtained $\gamma$ agrees well with $\gamma \approx 0.25$ for linear polymer solutions~\cite{flory1953principles}.
This is consistent with the finding that star polymer solutions with up to eight arms are described by the universal EOS~(Fig.~1 in the main text).
\\
\section{PREVIOUS RESEARCH FOR DILUTE LINEAR AND STAR POLYMERS IN GOOD SOLVENT}
In Figs.~3(b) and 4(a) and (b) in the main text, we exhibit the dilute solution properties of $R_{g}$, $A_{2}$, and $\Psi^*$ with various $M$, $T$, and types of polymer and solvent for linear and star polymers in good solvents.
All the data were measured through the light scattering reported in Refs.~\cite{sato1987second,nakamura1991third,miyaki1978excluded,okumoto1998excluded,okumoto2000excluded,yamamoto1971more,fukuda1974solution,roovers1980hydrodynamic,roovers1986linear,douglas1990characterization,roovers1974preparation,bauer1989chain,khasat1988dilute,huber1984dynamic,roovers1983analysis}.\\
For Figs.~2(b) and 3(a) in the main text, we used the
linear (black upward triangles~\cite{sato1987second,nakamura1991third,miyaki1978excluded}),
four-arm (green rightward triangles~\cite{okumoto1998excluded}), and
six-arm (blue downward triangles~\cite{okumoto2000excluded})
poly(styrene) (PS) in benzene (Bz) at $298$~K.\\
For the main panel and the inset in Fig.~3(b) in the main text, we used data from
linear PS in Bz at $298$~K (upward triangles~\cite{sato1987second,nakamura1991third,miyaki1978excluded}),
linear PS in Bz at $303$~K (lower left triangles~\cite{fukuda1974solution} and upper left triangles~\cite{yamamoto1971more}),
linear PS in toluene (TL) at $303$~K (black upper left triangles~\cite{yamamoto1971more}),
linear PS in TL at $308$~K (squares~\cite{douglas1990characterization,roovers1980hydrodynamic}),
linear poly(butadiene) (PB) in cyclohexane (CH) at $308$~K (squares~\cite{douglas1990characterization,roovers1986linear}),
three-arm star PS in TL at $293$~K (lower-right triangles~\cite{huber1984dynamic}),
three-arm star PS in TL at $308$~K (upper-right triangles~\cite{khasat1988dilute}),
four-arm star PS in Bz at $298$~K (rightward triangles~\cite{okumoto1998excluded}),
four-arm star PB in CH at $298$~K (squares~\cite{douglas1990characterization}),
six-arm star PS in Bz at $298$~K (downward triangles~\cite{okumoto2000excluded}),
six-arm star PB in CH at $298$~K (squares~\cite{douglas1990characterization,roovers1974preparation}),
eight-arm star poly(isoprene) (PI) in CH at $296$~K (diamonds~\cite{bauer1989chain}),
ten-arm star PI in CH at $296$~K (diamonds~\cite{bauer1989chain}),
twelve-arm star PI in CH at $296$~K (diamonds~\cite{bauer1989chain}),
twelve-arm star PS in TL at $308$~K (leftward triangles~\cite{roovers1983analysis}),
eighteen-arm star PB in CH at $298$~K (squares~\cite{douglas1990characterization}),
eighteen-arm star PI in CH at $296$~K (diamonds~\cite{bauer1989chain}),
eighteen-arm star PS in TL at $298$~K (leftward triangles~\cite{roovers1983analysis}),
twenty-two-arm star PI in CH at $296$~K (diamonds~\cite{bauer1989chain}),
twenty-six-arm star PI in CH at $296$~K (diamonds~\cite{bauer1989chain}),
thirty-two-arm star PI in CH at $296$~K (diamonds~\cite{bauer1989chain}), and
fifty-six-arm star PI in CH at $296$~K (diamonds~\cite{bauer1989chain}).
\end{document}
\section{Introduction}
Projective, or von Neumann, measurements collapse the observed quantum system on eigenstates of a Hermitian operator, while more general measurements, described by positive operator-valued measures, collapse the system on states from an overcomplete set~\cite{wiseman_quantum_2009}.
A gradual transition between the two situations can be realized in continuous measurements using meter fields, a canonical example of which is an optical interferometric measurement of the position of a harmonic oscillator~\cite{braginsky_quantum_1992}.
Position measurements are associated with mechanical resonators~\cite{purdy_observation_2013}, collective atomic spins~\cite{hammerer_quantum_2010,grangier_back-action-induced_1994}, ferromagnetic solid-state media~\cite{graf_cavity_2018}, single molecules~\cite{roelli_molecular_2016}, or density waves in liquids~\cite{shkarin_quantum_2019}, that are linearly probed by traveling optical or microwave fields.
The boundary between generalized and von Neumann measurements occurs when the measurement rate becomes comparable to the oscillation frequency~\cite{meng_mechanical_2020}.
When the rate is slower than the oscillation, measurements with the meter in the vacuum input state project the oscillator on coherent states.
When the rate is faster than the oscillation, measurements project the oscillator on position-squeezed states.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{intro.pdf}
\caption{\small{a) An optical probe spatially shaped in a square tophat beam travels through an atomic ensemble with the total spin $\bm{J}$ in a magnetic field $B$, and is detected using balanced polarization homodyning. The detected quadrature is selected using the $\lambda/2$ and $\lambda/4$ waveplates. The total spin is oriented by the repump beam traveling along $x$. PBS: polarization beam splitter.
b) The polarization angle $\beta$ of the probe as a meter for the spin projection $\hat{J}_z$. c) A photograph of an anti-spin-relaxation coated cell. The channel with probed atoms is indicated by the blue rectangle. d) The orange curves show power spectral densities (PSD) of homodyne signals recorded at $\Delta/(2\pi) = \SI{7}{GHz}$ at different quadratures. The trace showing the largest squeezing is highlighted by the blue curve. The black curve is the theoretical prediction based on the global fit including all quadratures (see the SI). The gray curve is the shot-noise level. The red curve is the theoretical optimum-quadrature squeezing spectrum.}}
\label{fig:intro}
\end{figure}
In addition to the oscillator state, the rate of position measurement affects the output state of the meter field~\cite{meng_mechanical_2020}.
The quadratures of the meter are correlated, and their fluctuations can be below the vacuum level~\cite{fabre_quantum-noise_1994,mancini_quantum_1994}.
In the slow measurement regime, the correlations and the associated squeezing exist in a narrow frequency band near the resonance, and have a strong frequency dependence due to the time-averaged response of the oscillator to the measurement backaction.
When the measurement is faster than the oscillation, the correlations and squeezing are broadband and frequency-independent at low frequencies, where the oscillator responds to the backaction instantaneously.
The detection of squeezing means observing the backaction-driven motion of the oscillator at frequencies much lower than the resonance, which is a necessary condition for position squeezing~\cite{meng_mechanical_2020}.
The squeezing of the meter light is both a valuable quantum resource and a figure of merit for the purity of the light-oscillator interaction.
Here, in the slow measurement regime, we realize a measurement of a collective spin of a room-temperature atomic ensemble at a rate fifteen times higher than the rate of thermal decoherence.
The generated squeezing of the meter light reaches $11.5^{+2.5}_{-1.5}\,\t{dB}$ at the output of the cell, exceeding the squeezing demonstrated previously using collective atomic spins~\cite{mccormick_strong_2007,boyer_entangled_2008,thomas_entanglement_2020}, optomechanical cavities~\cite{brooks_non-classical_2012,safavi-naeini_squeezed_2013,purdy_strong_2013}, levitated nanoparticles~\cite{magrini_squeezed_2022,militaru_ponderomotive_2022},
and compact on-chip sources utilizing material nonlinearity~\cite{zhang_squeezed_2021},
while approaching the results achievable using bulk nonlinear crystals~\cite{vahlbruch_detection_2016}.
In the fast-measurement regime, we detect broadband squeezing in a bandwidth of several MHz while keeping the backaction-imprecision product~\cite{clerk_introduction_2010} within $\SI{20}{\percent}$ from the value saturating the Heisenberg uncertainty relation.
These results enable new regimes for sensing surpassing the standard quantum limit~\cite{vyatchanin_quantum_1993,mason_continuous_2019},
tests of uncertainty relations for past quantum states~\cite{tsang_optimal_2009,bao_retrodiction_2020},
quantum control of material oscillators~\cite{wilson_measurement-based_2015,rossi_measurement-based_2018,tebbenjohanns_quantum_2021,magrini_real-time_2021},
and links between collective spins and other material systems~\cite{moller_quantum_2017,thomas_entanglement_2020,karg_light-mediated_2020,schmid_coherent_2022}.
\section{Measurements of spin oscillators}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{quad_sweep.pdf}
\caption{\small{a) Homodyne signal PSDs at $\Delta/(2\pi) = \SI{3}{GHz}$ and different detection angles $\phi$ indicated in the figure. The points are experimental data. The green and orange traces are obtained close to $\hat{P}_\t{L}$ and $\hat{X}_\t{L}$, respectively, and the olive, blue and purple---at intermediate quadratures. The gray points show the shot-noise level. The black curves are theoretical predictions based on the global fit including the spectra at 15 quadratures (see the SI). The red curve is the optimum-quadrature squeezing spectrum predicted by the single-oscillator model.
b) The spectra of classically driven motion of the collective spin.
The eight peaks visible at low probe powers correspond to bare oscillator modes due to the transitions between adjacent $m_F$ levels. Their frequencies are determined by the linear and quadratic Zeeman energies, and magnitudes are determined by the macroscopic populations of the $m_F$ levels as shown in the inset.
The spectra at high powers expose the hybridized oscillator modes.
}}
\label{fig:quadsweep}
\end{figure*}
Linearly polarized light traveling through an oriented atomic medium (as illustrated in \figref{fig:intro}a-b) continuously measures the projection of the total spin on the propagation direction, $\hat{J}_z$, via polarization rotation. This measurement acts back on the spin via quantum fluctuations of ponderomotive torque. When the input light is in a strong coherent state, and the spin satisfies the Holstein-Primakoff approximation~\cite{arecchi_atomic_1972}, the process can be described in terms of linearly coupled pairs of canonically conjugate position and momentum variables. The canonical variables of the spin, $\hat{X}_\t{S}$ and $\hat{P}_\t{S}$, are the normalized projections defined as $\hat{X}_\t{S}=\hat{J}_z/\sqrt{\smash[b]{\hbar\langle J_x\rangle}}$ and $\hat{P}_\t{S}=-\hat{J}_y/\sqrt{\smash[b]{\hbar\langle J_x\rangle}}$, which satisfy the commutation relation $[\hat{X}_\t{S},\hat{P}_\t{S}] = i$. The variables of the light, $\hat{X}_\t{L}$ and $\hat{P}_\t{L}$, are the quadratures proportional to the amplitude and phase differences between the circularly polarized components, respectively. Their commutator is $[\hat{X}_\t{L}(t),\hat{P}_\t{L}(t')]=(i/2) \delta(t-t')$. The Heisenberg uncertainty principle constrains the two-sided spectral densities of the imprecision in the $\hat{P}_\t{L}$-quadrature measurements, $S_\t{imp}$, and the measurement backaction, $S_\t{BA}$, as $\sqrt{S_\t{imp}\, S_\t{BA}}\ge\hbar/2$ (see Ref.~\cite{clerk_introduction_2010} and the SI). This uncertainty relation is saturated if the detection efficiency is perfect and there is no excess measurement noise.
When the ensemble is probed far-detuned from optical transitions, the total spin couples to the probe via the position-measurement Hamiltonian $\hat{H}_\t{int}=-2\hbar\sqrt{\Gamma}\,\hat{X}_\t{L}\hat{X}_\t{S}$, and modifies the probe variables according to the input-output relations~\cite{hammerer_teleportation_2005,thomas_entanglement_2020}
\begin{align}
&\hat{P}_\t{L}^\t{out}(t)=\hat{P}_\t{L}^\t{in}(t)+\sqrt{\Gamma}\, \hat{X}_\t{S}(t),
&\hat{X}_\t{L}^\t{out}(t)=\hat{X}_\t{L}^\t{in}(t),
\end{align}
where $\Gamma$ is the measurement rate proportional to the optical power. The measurement backaction force is $\hat{F}_\t{QBA}=2\sqrt{\Gamma} \hat{X}_\t{L}^\t{in}$. The response of the spin to the measurement backaction in this situation is described by the Fourier-domain susceptibility $\chi[\Omega]=\Omega_\t{S}/(\Omega_\t{S}^2-\Omega^2-i\Omega\gamma_0)$, where $\Omega_\t{S}$ is the resonance Larmor frequency and $\gamma_0$ is the intrinsic decay rate.
The response induces correlations between $\hat{X}_\t{L}^\t{out}$ and $\hat{P}_\t{L}^\t{out}$ that can be observed by detecting intermediate quadratures of light,
$\hat{Q}^\phi_\t{L}=\sin(\phi)\hat{X}_\t{L}^\t{out}+\cos(\phi)\hat{P}_\t{L}^\t{out}$.
The two-sided spectra of those quadratures, detected by a homodyne with efficiency $\eta$, are given by
\begin{multline}\label{eq:Sphi}
S_\phi[\Omega]=1/4+(\eta\Gamma/2)\,\t{Re}\left(\chi[\Omega]\right)\sin(2\phi)\\
+\eta\Gamma(\Gamma+\gamma_\t{th})|\chi[\Omega]|^2\cos(\phi)^2,
\end{multline}
where $\gamma_\t{th}=(2\,n_\t{th}+1)\gamma_0$ is the thermal decoherence rate. The term $\propto \cos(\phi)^2$ is due to the spin oscillator motion, and the term $\propto \sin(2\phi)$ is due to the cross-correlation between $\hat{X}_\t{S}$ and $\hat{X}_\t{L}^\t{out}$. Negative cross-correlation can squeeze $S_\phi[\Omega]$ below the vacuum level of 1/4.
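For illustration, a minimal numerical sketch of \eqref{eq:Sphi} is given below; the parameter values are of the order of those inferred later in the text and are assumed here purely for illustration:
\begin{verbatim}
import numpy as np

# Homodyne spectra S_phi[Omega] for a single spin oscillator;
# illustrative parameters only.
Omega_S  = 2 * np.pi * 1.0e6     # Larmor frequency, rad/s
gamma_0  = 2 * np.pi * 1.0e3     # intrinsic decay rate
Gamma    = 2 * np.pi * 13e3      # measurement rate
gamma_th = Gamma / 11            # thermal decoherence, C_q = 11
eta      = 0.91                  # detection efficiency

Omega = 2 * np.pi * np.linspace(0.9e6, 1.1e6, 2001)
chi = Omega_S / (Omega_S**2 - Omega**2 - 1j * Omega * gamma_0)

def S_phi(phi):
    return (0.25
            + 0.5 * eta * Gamma * chi.real * np.sin(2 * phi)
            + eta * Gamma * (Gamma + gamma_th)
              * np.abs(chi)**2 * np.cos(phi)**2)

S_min = np.min([S_phi(p) for p in np.linspace(0, np.pi, 181)], axis=0)
print(f"squeezing: {10 * np.log10(S_min.min() / 0.25):.1f} dB")
\end{verbatim}
Scanning the detection quadrature $\phi$ and taking the lower envelope of the spectra reproduces the qualitative shape of the optimum-quadrature squeezing spectrum.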
In a more general situation, the internal dynamics of the collective spin are those of $2F$ harmonic oscillators, where $F$ is the ground-state angular momentum number of the atomic species. Their annihilation operators,
\begin{equation}\label{eq:bmdef}
\hat{b}_m=\frac{1}{\sqrt{\Delta N_m}}\sum_{j=1}^N \ket{m+1}_j\bra{m}_j,
\end{equation}
are introduced using the multilevel Holstein-Primakoff approximation~\cite{kurucz_multilevel_2010}.
In \eqref{eq:bmdef}, $m$ is the projection quantum number of the single-atom angular momentum on the $x$ axis, $\ket{m+1}_j\bra{m}_j$ are the jump operators between the states $\ket{m}_j$ and $\ket{m+1}_j$ of the individual atoms, and $\Delta N_m=N_{m+1}-N_m$ are the differences in the mean numbers of atoms in the corresponding states. The frequencies of the oscillators are the energy differences between $\ket{m}_j$ and $\ket{m+1}_j$, controlled by an external static magnetic field. The oscillator-light interaction is described by the Hamiltonian
\begin{equation}\label{eq:HamGeneral}
\hat{H}_\mathrm{int} = -2\hbar\sum_{m=-F}^{F-1}\sqrt{\Gamma_{m}} \left(\hat{X}_{m}\hat{X}_\mathrm{L}+\zeta_m\hat{P}_m\hat{P}_\mathrm{L}\right),
\end{equation}
where the quadratures of the modes satisfy $[\hat{X}_{m},\hat{P}_m]=i$, $\Gamma_{m}$ are the measurement rates, and $\zeta_m=\zeta(2m+1)/7$ determine the strengths of dynamical backaction. The common factor $\zeta$ is a function of the optical detuning $\Delta$ and the level structure.
The deviation of the interaction Hamiltonian~(\ref{eq:HamGeneral}) from that of pure position measurement, $\zeta=0$, results in dynamical-backaction damping with rates $\gamma_{\t{DBA},m}=2\zeta_m\Gamma_m$, and increases the quantum backaction-imprecision product by an amount proportional to $\zeta^2$ (see the SI), which is small in all our experiments.
The oscillators experience thermal decoherence due to the spontaneous scattering and the collisions of atoms. The thermal occupancy of the intrinsic damping bath is $n_\t{th}=N_m/\Delta N_m$, experimentally found to be independent of $m$.
The multimode structure can affect the response of the spin to the measurement backaction at frequencies close to $\Omega_\t{S}$, while far away from $\Omega_\t{S}$ the spin acts as a single oscillator with $\hat{X}_\t{S}=\sum_m \sqrt{\Gamma_{m}/\Gamma}\hat{X}_{m}$ that is measured at the total rate $\Gamma=\sum_m \Gamma_{m}$ and experiences decoherence at the rate $\gamma_\t{th}=\sum_m \gamma_{\t{th},m}\Gamma_{m}/\Gamma$, where $\gamma_{\t{th},m}$ are the individual decoherence rates of the modes. The quantum cooperativities for the individual modes are defined as the ratios of the measurement and decoherence rates. For the total spin, the cooperativity is $\mathcal{C}_\t{q}=\Gamma/\gamma_\t{th}$.
\section{Experiment}
An ensemble of $N\approx 2\times10^{10}$ cesium-133 atoms at \SI{52}{\degree C} is contained in the \SI{1}{mm}$\times$\SI{1}{mm}$\times$\SI{4}{cm} channel of a glass chip, shown in \figref{fig:intro}c. The channel is coated with paraffin to reduce the spin decoherence from wall collisions~\cite{balabas_polarized_2010}, and is positioned in a homogeneous magnetic field directed along the $x$ axis (\figref{fig:intro}a). The ensemble is continuously probed by a $y-$polarized laser beam propagating in the $z$ direction that has the wavelength \SI{852.3}{nm}, blue-detuned from the $F=4\to F'=5$ transition of the D2 line by $\Delta/(2\pi)=0.7-\SI{7}{GHz}$. The ensemble is also continuously repumped using circularly polarized light resonant with the $F=3\to F'=2$ transition of the D2 line. The combination of spontaneous scattering of probe photons and repumping maintains a steady-state distribution of atoms over the magnetic sublevels of the $F=4$ ground state, which has the macroscopic spin orientation along the magnetic field with polarization $\langle\hat{J}_x\rangle/(NF)\approx0.78$. The steady-state populations are independent of the probe power in our regime, and correspond to the occupancy of the thermal bath $n_\t{th}= 0.9\pm 0.1$. The resonance frequencies of the oscillators are set by the Larmor frequency and split by $0-\SI{40}{kHz}$ in different regimes by the quadratic Zeeman and tensor Stark effects. The Larmor frequency can be positive or negative depending on the orientation of the magnetic field, setting the signs of the effective oscillator masses. We work in the negative-mass configuration \cite{moller_quantum_2017}, but the effects that we observe, in particular the squeezing levels, do not change upon the reversal of the sign of mass (see the SI). The output light is detected using balanced polarization homodyning, which enables shot-noise-limited detection at frequencies down to \SI{10}{kHz}.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{broadband.pdf}
\caption{\small{
a-b) Homodyne signal PSDs at $\Delta/(2\pi)=\SI{0.7}{GHz}$. The gray curves show the experimental shot-noise levels, and the red curves are the theoretical optimum-quadrature squeezing spectra derived from~\eqref{eq:Sphi}.
a) Spectra for $|\Omega_\t{S}|/(2\pi)=\SI{1.09}{MHz}$ and $\SI{1.79}{MHz}$.
The orange and blue curves are measurements with the quadrature angle set to detect $\hat{P}_\t{L}$ and a quadrature $\phi$ close to $\hat{X}_\t{L}$, respectively.
LO: local oscillator, th: theoretical.
b) Orange curves show homodyne spectra recorded at $|\Omega_\t{S}|/(2\pi)=\SI{5}{MHz}$ and at different quadratures $\phi$. The trace with the largest squeezing is highlighted by the blue curve. The black curve is the theoretical prediction based on the global fit including all quadratures (see the SI).
c) The spectra taken at the $\hat{P}_\t{L}$ quadrature when the probe beam is Gaussian (blue curve) and tophat (orange curve). The gray curve is the shot noise. The inset shows the beam intensity distributions over the \SI{1}{mm}$\times$\SI{1}{mm} channel cross section recorded without the cell.}}
\label{fig:broadband}
\end{figure*}
\section{Results}
In \figref{fig:intro}d, we present homodyne spectra recorded at the optical detuning $\Delta/(2\pi)=\SI{7}{GHz}$ over a range of detection quadratures $\phi$. In this measurement, dynamical backaction effects are small ($\zeta\approx 0.01$), and the probed spin behaves as a single oscillator subjected to position measurements. The data in \figref{fig:intro}d shows squeezing down to \SI{7.5}{dB}, attained by the highlighted blue trace. From a global fit of the spectra at all quadratures, we infer the measurement rate $\Gamma/(2\pi)=\SI{13}{kHz}$ and the quantum cooperativity $\mathcal{C}_\t{q}=11$. The measurement rate can be verified directly from \figref{fig:intro}d via the width $\Delta\Omega$ of the frequency band over which squeezing is present in any of the traces, which in the backaction-dominated regime is $\Delta\Omega \sim\Gamma$. The envelope of the traces in \figref{fig:intro}d is described by the spectrum given by \eqref{eq:Sphi} minimized over the detection quadrature at each frequency. Neglecting the imaginary part of the response, the optimum-quadrature spectrum is given by
\begin{equation}\label{eq:Sopt}
S_\t{min}[\Omega]=\frac{1}{4}-\frac{\eta}{2}\frac{\Gamma}{\Gamma+\gamma_\t{th}}\,D\left(\frac{\Omega-\Omega_\t{S}}{\Gamma+\gamma_\t{th}}\right),
\end{equation}
where $D(x)= 1/\left(1+\sqrt{1+4x^2}\right)$. The red curve plotted in \figref{fig:intro}d additionally accounts for 0.7 shot noise units of excess $\hat{P}_\t{L}$-quadrature noise from the thermal motion of fast-decaying spin modes (see \secref{sec:broadModes}).
This noise is the main limitation for the backaction-imprecision product in this measurement, which equals $1.5\times(\hbar/2)$.
Due to the scaling $\Gamma\propto 1/\Delta^2$, higher measurement rates are achievable with the probe laser tuned closer to the atomic transition. In \figref{fig:quadsweep}a we present data obtained at the optical detuning of \SI{3}{GHz} using \SI{8.4}{mW} of probe power. In this measurement $\zeta=0.054$, in which case the dynamical backaction results in optical damping and hybridization of the oscillator modes, as well as optical squeezing in the $\hat{X}_\t{L}$-quadrature (see the green trace in \figref{fig:quadsweep}a). Since the thermal decoherence of the oscillators is due to baths at a temperature close to zero, the optical damping improves the maximum magnitude of squeezing by about \SI{0.5}{dB}.
The minimum noise shown by the blue trace in \figref{fig:quadsweep}a is $8.5^{+0.1}_{-0.1}\,\t{dB}$ below the shot noise level.
The overall detection efficiency of our setup is $\eta=\SI[separate-uncertainty=true]{91\pm 3}{\percent}$, and the transmission loss at the exit window of the cell is \SI{1.6}{\percent}, which means that the magnitude of the squeezing at the exit of the cell is $11.5^{+2.5}_{-1.5}\,\t{dB}$.
The backaction-imprecision product in this measurement is $1.9\times(\hbar/2)$, which is higher than in the measurement at \SI{7}{GHz} detuning due to the higher excess $\hat{P}_\t{L}$-quadrature noise (two shot noise units).
The experimental spectra in \figref{fig:quadsweep}a can be understood as arising from the coupled dynamics of two nearly-degenerate bright modes of the spin, which we refer to as modes $a$ and $b$.
To extract their effective parameters, we globally fit the set of spectra recorded over an extended range of quadrature angles (see the SI). We find the total measurement rate to be $\Gamma/(2\pi)=\SI{52}{kHz}$, the individual quantum cooperativities to be $\mathcal{C}_\t{q}^a=12$ and $\mathcal{C}_\t{q}^b=4$, and the total cooperativity to be $\mathcal{C}_\t{q}=15$.
The lower envelope of the experimental traces is in agreement with the optimum-quadrature spectrum predicted by the single-oscillator model using the same $\Gamma$ and $\mathcal{C}_\t{q}$.
The bright modes $a$ and $b$ emerge due to the coupling of the individual spin oscillators via the common reservoir of the probe optical modes, with coupling rates proportional to $\zeta_m$ and $\Gamma_m$. To illustrate this effect, we set the laser detuning to \SI{0.7}{GHz}, where the dynamical backaction coefficient is larger, $\zeta=0.18$, and excite the oscillators with classical white noise applied via a magnetic field.
The spectra of the $P$ quadrature of the output light at different probe powers are shown in \figref{fig:quadsweep}b.
At the lowest power, the eight bare spin oscillators due to the transitions between adjacent $m_F$ levels are individually resolved.
As the probe power is increased, the resonances first merge into two (the $a$ and $b$ modes) and then into three. The macroscopic occupancies of different $m_F$ levels in the atomic ensemble remain the same at all powers, as we check separately, which means that the change in the output spectrum is due solely to the coupled dynamics of the collective oscillators.
At the detuning of \SI{0.7}{GHz} from the optical transition, the measurement rate of the spin motion can be as high as the oscillation frequency.
Around the Larmor resonance, within a frequency band of approximately one hundred kHz, the coupling between the individual spin oscillators is pronounced; at frequencies much lower than the resonance, however, the spin behaves as a single oscillator, and the quantum measurement backaction manifests itself via broadband squeezing of light.
In \figref{fig:broadband}a, we present spectra recorded using \SI{12.8}{mW} of optical probe power at two resonance frequencies, \SI{1.09}{MHz} and \SI{1.79}{MHz}, in which the bandwidth of low-frequency squeezing extends down to \SI{30}{kHz}.
The minimum noise levels of the homodyne signals (\SI{6.5}{dB} below the shot noise for the \SI{1.09}{MHz} data) are consistent with the quantum cooperativity $\mathcal{C}_\t{q}=8$.
The measurement rate can be estimated from the signal-to-shot-noise ratio on the $P$ quadrature in \figref{fig:broadband}a using the formula
\begin{equation}
S_{\phi=0}[0]=1/4+\eta \left(\Gamma/\Omega_\t{S}\right)^2,
\end{equation}
which yields $\Gamma/(2\pi)\approx\SI{2}{MHz}$, a value higher than the resonance frequencies.
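A minimal sketch of this inversion is given below; the plateau-to-shot-noise ratio $r$ is a hypothetical placeholder of the kind read off a trace like those in \figref{fig:broadband}a:
\begin{verbatim}
import numpy as np

# Invert S_{phi=0}[0] = 1/4 + eta (Gamma/Omega_S)^2 for Gamma, with
# r the low-frequency plateau in units of the shot noise (1/4).
eta, r = 0.91, 13.3                 # placeholder plateau ratio
Omega_S = 2 * np.pi * 1.09e6
Gamma = Omega_S * np.sqrt((r - 1) / (4 * eta))
print(f"Gamma/2pi = {Gamma / (2 * np.pi) / 1e6:.2f} MHz")  # ~2 MHz
\end{verbatim}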
To further corroborate the measurement rate, we perform a quadrature sweep with the resonance frequency set to \SI{5}{MHz} and using \SI{10.2}{mW} of probe power (\figref{fig:broadband}b). From fitting this data, we find $\Gamma/(2\pi)=\SI{1.77}{MHz}$, which is consistent within ten percent with the previous estimate corrected for the difference in the probe powers.
Theoretically, the optimum-quadrature noise levels should saturate as the Fourier frequency approaches zero, to a value around $0.22$ shot-noise units for the \SI{1.09}{MHz} data in \figref{fig:broadband}a, while experimental noise levels increase at low frequencies due to excess noise from the atomic ensemble.
The backaction-imprecision product for the measurements in \figref{fig:broadband}a is below $1.2\times(\hbar/2)$ at frequencies higher than \SI{100}{kHz}. This value is closer to saturating the Heisenberg uncertainty relation than the values in the slow-measurement experiments, because the fast-decaying modes are in the backaction-dominated regime, and do not contribute excess thermal noise. The limiting factors for the product in this case are the dynamical backaction and detection inefficiency.
\section{Fast-decaying modes}\label{sec:broadModes}
In addition to the collective oscillators described by the annihilation operators from \eqref{eq:bmdef}, in which all atoms contribute equally, there are other modes of the spin in our system~\cite{shaham_quantum_2020,tang_spin-noise_2020}. The resonance frequencies of these modes coincide with $\Omega_\t{S}$, but their decay rates are limited by the rate of atoms flying through the probe field ($\gamma_{0,\t{flight}}/(2\pi)\approx \SI{300}{kHz}$) rather than collisions with the walls and other atoms ($\gamma_{0,\t{coll}}/(2\pi)\approx\SI{200}{Hz}$).
The annihilation operators of these modes are
\begin{equation}
\hat{b}'_{m}=\frac{1}{\sqrt{\Delta N_m\, \langle\Delta g(t)^2\rangle_c}}\sum_{j=1}^N \Delta g_j(t)\, \ket{m}_j\bra{m+1}_j,
\end{equation}
where $g_j(t)$ are the coupling rates between the optical probe and the individual atoms (see the SI) and $\langle\Delta g^2\rangle_c$ is the squared deviation of the coupling from the mean averaged over classical trajectories, assumed to be the same for all atoms.
The measurement rate of the fast-decaying modes is $\propto \langle\Delta g^2\rangle_c$, while the measurement rate of the slow-decaying modes is $\propto \langle g\rangle_c^2$.
An enabling feature of our experiment is the high 3D uniformity of the optical probe field, achieved using a tophat beam configuration, which reduces $\langle\Delta g^2\rangle_c$ and thus the readout of the fast-decaying modes.
In \figref{fig:broadband}c, we compare the spectra recorded at the $\hat{P}_\t{L}$-quadrature using a tophat and a wide Gaussian probe beam with equal optical powers in the slow-measurement regime. The thermal noise contributed by the fast-decaying modes is reduced from 1 to $0.3$ shot-noise units on resonance upon switching from the Gaussian to the tophat probe.
The absolute non-uniformity of the coupling~\cite{borregaard_scalable_2016,dideriksen_room-temperature_2021} for the tophat beam is estimated to be $\langle\Delta g^2\rangle_c/\langle g\rangle^2_c=0.6$ based on camera imaging.
\section{Outlook}
Continuous measurements that combine high measurement rate, quantum cooperativity, and detection efficiency can be used for single-shot generation of spin-squeezed states and quantum state tomography~\cite{vanner_pulsed_2011}.
The entanglement link between the material spin and traveling light entailed by the squeezing enables quantum-coherent coupling of spins with other material systems~\cite{thomas_entanglement_2020,karg_light-mediated_2020}.
While the backaction-imprecision product in all our measurements is already within a factor of two from the Heisenberg bound, it can be further improved by optimizing the probe power for measurements of the $\hat{P}_\t{L}$-quadrature. Our measurements were optimized for quadratures intermediate between $\hat{X}_\t{L}$ and $\hat{P}_\t{L}$ (i.e., for ``variational'' readout~\cite{vyatchanin_quantum_1993}), which can yield superior results~\cite{habibi_quantum_2016} in quantum sensing and control.
This work also establishes room-temperature atomic spin oscillators as a practical platform for engineering quantum light with high levels of squeezing, which is a basic resource for interferometric sensing and optical quantum information processing~\cite{zhang_squeezed_2021}.
The highest demonstrated squeezing, reaching \SI{8.5}{dB} at the detection, is narrowband, but its frequency can be tuned by the magnetic field without degrading the level within the range of approximately $0.8-\SI{5}{MHz}$ in our experiments.
\section{Acknowledgements}
The authors thank Micha\l{} Parniak, Jörg Müller, Rebecca Schmieg, and Ivan Galinskiy for general help and useful discussions.
This work was supported by the European Research Council (ERC) under the Horizon 2020 (grant agreement No 787520) and by VILLUM FONDEN under a Villum Investigator Grant no. 25880.
SF acknowledges funding from the European Union’s Horizon 2020 research program under the Marie Sklodowska-Curie grant agreement No. 847523 ``INTERACTIONS".
\section{Experimental setup}
\label{app:exp}
\begin{figure}[h]
\includegraphics[width=0.6\textwidth]{Atomic_setup.pdf}
\caption{Experimental setup. A linearly polarized light probe is spatially shaped as a square-tophat beam. The probe interacts with an optically polarized ensemble of Cesium atoms located in a glass chip. The macroscopic atomic polarization $J_x$ is oriented along the magnetic field $B$. The optical probe and the atomic ensemble interact via Faraday interaction in the dispersive regime. The output probe light is detected using a polarization self-homodyning setup. PBS: Polarizing beamsplitter. $\lambda/2$: Half wave plate. $\lambda/4$: Quarter wave plate. Beam shaper: Gaussian-to-tophat beam-shaping lens.}
\label{fig:si:setup}
\end{figure}
A detailed schematic of our experimental setup is presented in \figref{fig:si:setup}.
The probed cesium-133 atoms are located in a channel of a glass chip with \SI{1}{mm}$\times$\SI{1}{mm} cross-section and \SI{40}{mm} length. The chip is enclosed in a glass cell with an attached stem that contains a piece of cesium metal serving as a reservoir of atoms. The cell interior is coated with an anti-spin-relaxation paraffin coating to decrease the decoherence due to the collisions of atoms with walls.
The cell is heated to $\SI[separate-uncertainty=true]{52\pm2}{^{\circ}C}$ and placed in a stationary homogeneous magnetic field directed along the $x$ axis, which is created by a pair of rectangular coils parallel to the $yz$ plane. An additional time-dependent magnetic field directed along the $y$ axis can be created using another pair of coils parallel to the $xz$ plane, which has the effect of applying a classical force to the atomic oscillator. The cell and the entire set of coils are enclosed in a multi-layer magnetic shield, including $\mu$-metal layers to eliminate the magnetic field of the Earth and an aluminum layer to protect the spins from external high-frequency magnetic noise.
The atoms interact with two light beams: the probe, which is linearly polarized and propagates along the channel, and the repump, which is circularly polarized and propagates perpendicular to the channel, along the $x$ axis. Both light beams have wavelengths around \SI{852.3}{nm}, close to the D2 transition from the ground state of Cs.
The ground state of Cs is split into two hyperfine levels, with the magnetic momentum numbers $F=3$ and $F=4$, and each hyperfine level is further split into $(2F+1)$ magnetic sublevels.
The repump beam is produced by a diode laser and has a power in the range of $8-\SI{10}{mW}$.
It is blue-detuned by \SI{80}{MHz} from the $F=3\to F'=2$ transition of the D2 line, and resonant with all transitions $F=3\to F'=2,3,4$ within the Doppler linewidth, where the primes denote electronically excited states.
The cross-section of the chip channel containing atoms is chosen to be square to avoid lensing of the repump beam. In order to uniformly illuminate the elongated channel, the repump beam is shaped by a combination of a Powell lens and a cylindrical collimating lens. The repump transfers all atoms to $F=4$ level, and simultaneously creates macroscopic spin orientation in the ensemble because of its circular polarization. The chirality of the polarization, $\sigma_{+}$ or $\sigma_{-}$, determines the sign of the mass of the oscillator~\cite{moller_quantum_2017}. Our experiments are done with a negative-mass oscillator, but the results, including the observed levels of squeezing, are largely independent of the sign of the mass (see \secref{sec:si:massFlip}).
The probe beam is blue-detuned by $0.7-\SI{7}{GHz}$ from the $F=4\to F'=5$ transition; it is produced by a Ti:Sa laser and has a power of up to \SI{13}{mW}. The probe interacts with the ensemble in the dispersive regime, but the residual spontaneous scattering of photons from it contributes to the spin decoherence.
The linear polarization of the probe is set along the $y$ axis to maximize the optical damping by the dynamical backaction (which nevertheless remains small), and simultaneously decouple the spin from the classical intensity fluctuations. The small amount of optical damping in our experiments improves the maximum observed level of squeezing (see \secref{sec:si:optDamping}).
The decoherence rate due to the spontaneous scattering is proportional to the probe power, and is the primary limitation for the achievable quantum cooperativity in our work.
The distribution of the atoms among the magnetic sublevels is determined by the interplay of the spontaneous scattering processes due to the probe and the repump beam, and is independent of the probe power and detuning within our range of parameters.
After the interaction with the atomic ensemble, the relevant quadratures of the probe beam are detected using polarization homodyning. The quadrature angle is selected using a combination of a quarter waveplate and a half waveplate. A key advantage of the polarization homodyning method is the perfect spatial overlap between the detected modes of light and the local oscillator. The electronic noise floor of the photodetector is typically about \SI{30}{dB} below the shot noise level and hence is negligible.
The maximum narrowband squeezing of light observed in the regime when $\Gamma\ll|\Omega_S|$ is approximately independent of the Larmor frequency within the range of Larmor frequencies between \SI{0.8}{MHz} and \SI{5}{MHz}. At low frequencies, the limitation is due to classical noises acting on the spins, and at high frequencies due to the inhomogeneity of the magnetic field, which could be straightforwardly improved.
In order to minimize the coupling to the fast-decaying modes of the spin ensemble (see \secref{sec:si:spinOscillators}), the probe beam is shaped into a square tophat beam using a high-transmission beam shaping lens (Topag GTH-3.6-1.75FA), and an additional system of regular spherical lenses described in \secref{sec:si:tophat}. The resulting beam has a supergaussian intensity cross section $I(x,y)\propto \exp(-2(x/w_x)^{2n}-2(y/w_y)^{2n})$ with $n\approx 3.2$ and $2w_x\approx2w_y\approx\SI{0.84}{mm}$, which change negligibly in the $z$ direction over the length of the cell channel.
The on-resonance extraneous thermal noise in the slow-measurement regime was experimentally found to be lower by a factor of 3.6 for the tophat beam probe compared to the Gaussian beam probe with the maximum width allowed by the cell channel.
The transmission of the probe beam through the cell reaches 96.8$\%$, limited by the reflection and scattering of light upon hitting the cell windows, with the loss of light due to the clipping of the beam being negligible.
In order to infer the generated level of squeezing from the detected one, we assume that the transmission loss is contributed equally by the input and the output windows.
\section{The modes of an ensemble of moving atoms interacting with light}\label{sec:si:spinOscillators}
In this section, we describe $N$ moving atoms interacting with the probe light field, and derive input-output relations for the optical quadratures in terms of two types of collective spin oscillator modes: usual Larmor precession modes, and modes scrambled by the atomic motion.
Individual atoms interact with the light field with strengths $g_k(t)$ (where $k=1,\dots,N$ is the integer index that labels the atoms) that are proportional to the intensity of the light field at their instantaneous positions.
The interaction strengths randomly change in time as atoms move inside the cell. The motions of different atoms are assumed to have the same statistical properties and to be mutually uncorrelated.
The statistics of motion are characterized by decomposing the couplings into their mean value, $\bar{g}$, and deviations, $\Delta g_k(t)$,
\begin{equation}
g_k(t)=\bar{g}+\Delta g_k(t),
\end{equation}
and specifying the motional correlation function, $R(\tau)$,
\begin{equation}
\frac{\langle \Delta g_k(t_1)\,\Delta g_l(t_2)\rangle_{c}}{\langle \Delta g(t)^2\rangle_{c}}=\delta_{kl}R(t_1-t_2),
\end{equation}
where $\delta_{kl}$ is the Kronecker symbol and $\langle \cdot\rangle_{c}$ denotes motional averaging (following the notation of Ref.~\cite{shaham_quantum_2020}, to separate it from the quantum averaging $\langle \cdot\rangle$). The normalization factor, $\langle\Delta g(t)^2\rangle_{c}$, is the mean squared deviation among the individual atom-light couplings. According to the ergodic hypothesis, the result of the averaging is the same regardless of whether it is performed over time or over realizations of the ensemble.
The dispersive interaction between the light and the $k-$th atom in the ensemble is described by the Hamiltonian~\cite{sherson_deterministic_2007,thomas_entanglement_2020}
\begin{equation}
\hat{H}_\t{int}^{(k)}=\hbar\, g_k(t)\left[a_0 \hat{I}+a_1 \hat{S}_z \hat{j}_z^{(k)}+a_2\left(\hat{I}\, \hat{j}_z^{(k)}\hat{j}_z^{(k)}-2\hat{S}_x\left(\hat{j}_x^{(k)}\hat{j}_x^{(k)}-\hat{j}_y^{(k)}\hat{j}_y^{(k)}\right)-2\hat{S}_y\left(\hat{j}_x^{(k)}\hat{j}_y^{(k)}+\hat{j}_y^{(k)}\hat{j}_x^{(k)}\right)\right)\right],
\end{equation}
where $\hat{S}_{x,y,z}$ are the Stokes parameters of the input light~\cite{sherson_deterministic_2007}, $\hat{I}$ is the intensity of the input light, and the parameters $a_{0,1,2}$ are functions of the level structure and the laser detuning from the optical transition~\cite{vasilyev_quantum_2012}. After linearization assuming a strong coherent $y$-polarized light probe with the mean amplitude $\bar{a}$, the Hamiltonian is expressed as
\begin{equation}\label{eq:si:Hintj}
\hat{H}^{(k)}_\t{int}= \hat{H}^{(k)}_\t{Stark}-\hbar\frac{\bar{a}g_k(t)}{\sqrt{2}}\left[a_1 \hat{j}_z^{(k)}\hat{X}_\t{L}-2a_2\left(\hat{j}_x^{(k)}\hat{j}_y^{(k)}+\hat{j}_y^{(k)}\hat{j}_x^{(k)}\right)\hat{P}_\t{L}\right],
\end{equation}
where the Stark Hamiltonian $\hat{H}^{(k)}_\t{Stark}=\hbar g_k(t)\left[a_0 + a_2\left(\hat{j}_x^{(k)}\hat{j}_x^{(k)}-\hat{j}_y^{(k)}\hat{j}_y^{(k)}+\hat{j}_z^{(k)}\hat{j}_z^{(k)}\right)\right]\hat{I}$ describes the energy shifts due to the dynamic Stark effect,
and $\hat{X}_\t{L}$ and $\hat{P}_\t{L}$ are the polarization quadratures of the light field normalized such that they satisfy the commutation relation
\begin{equation}
[\hat{X}_\t{L}(t_1),\hat{P}_\t{L}(t_2)]=(i/2)\delta(t_1-t_2).
\end{equation}
The spin components of individual atoms $\hat{j}^{(k)}_{x,y,z}$ can be expressed in terms of the jump operators $\hat{\sigma}_{n,m}^{(k)}$ between the ground state sublevels,
\begin{equation}
\hat{\sigma}_{n,m}^{(k)}= \ket{n}_k\bra{m}_k,
\end{equation}
where $m,n=-F,...,F$ is the projection of the angular momentum on the $x$ axis (which coincides with the direction of the magnetic field), and $F$ is the total angular momentum quantum number of the ground state level.
In this notation,
\begin{equation}\label{eq:si:Hintsigma}
H^{(k)}_\t{int}=\hat{H}^{(k)}_\t{Stark}+\hbar\frac{\bar{a}g_k(t)}{2\sqrt{2}}\sum_{m=-F}^{F-1}C_m\left( ia_1\left(\hat{\sigma}_{m+1,m}^{(k)}-\hat{\sigma}_{m,m+1}^{(k)}\right)\hat{X}_\t{L}+2(2m+1)a_2\left(\hat{\sigma}_{m+1,m}^{(k)}+\hat{\sigma}_{m,m+1}^{(k)}\right)\hat{P}_\t{L}\right),
\end{equation}
where $\hat{H}^{(k)}_\t{Stark}=\hbar\sum_{m}g_k(t)\left(a_0+a_2m^2\right)\hat{I}\,\hat{\sigma}_{m,m}^{(k)}$ is the Stark energy, and $C_m=\sqrt{F(F+1)-m(m+1)}$ are Clebsch–Gordan coefficients. When passing from \eqref{eq:si:Hintj} to \eqref{eq:si:Hintsigma}, we neglected the terms involving second-order coherences that only couple to $\hat{I}$ and are negligibly small in our case.
The individual atomic spins are precessing in a homogeneous magnetic field directed along the $x$ axis.
Taking the zero of the energy scale to be the ground state energy of free atoms, the Hamiltonian of the precession is expressed as
\begin{equation}
\hat{H}^{(k)}_\t{S}=\sum_{m=-F}^{F} E_{\t{Zeem},m}\,\hat{\sigma}_{m,m}^{(k)},
\end{equation}
where $E_{\t{Zeem},m}$ are the Zeeman energies of the magnetic sublevels that include contributions linear and quadratic in $m$. The total Hamiltonian of all atoms,
\begin{equation}
\hat{H}=\sum_{k=1}^N \left(\hat{H}^{(k)}_\t{S}+\hat{H}^{(k)}_\t{int}\right),
\end{equation}
can be expressed using collective operators: the total numbers of atoms in the magnetic sublevels, denoted by $\hat{N}_m$, and two sets of coherences between neighboring $m$ levels, denoted by $\hat{\Sigma}_{m}$ and $\hat{\Sigma}_{m}'$. The operators are defined as
\begin{align}
\hat{N}_m=\sum_{k=1}^N \hat{\sigma}_{m,m}^{(k)},&
&\hat{\Sigma}_m=\sum_{k=1}^N \hat{\sigma}_{m+1,m}^{(k)},&
&\hat{\Sigma}'_m=\frac{1}{\sqrt{\langle\Delta g^2\rangle_{c}}}\sum_{k=1}^N \Delta g_k(t)\, \hat{\sigma}_{m+1,m}^{(k)},
\end{align}
where $m=-F,...,F-1$ for the $\Sigma$ operators and $m=-F,...,F$ for the $N$ operators.
The expression for the Hamiltonian, neglecting a small contribution due to the inhomogeneity of the Stark shift, is
\begin{multline}\label{eq:si:HtotSigma}
\hat{H}=\sum_{m=-F}^{F} E_{m}\,\hat{N}_m+\hbar\sum_{m=-F}^{F-1}\frac{\bar{g}\bar{a}a_1}{2\sqrt{2}}C_m\left( i\left(\hat{\Sigma}_{m}-\hat{\Sigma}_{m}^\dagger\right)\hat{X}_\t{L}+\zeta_m\left(\hat{\Sigma}_{m}+\hat{\Sigma}_{m}^\dagger\right)\hat{P}_\t{L}\right)\\
+\hbar\sum_{m=-F}^{F-1}\frac{\sqrt{\langle\Delta g^2\rangle_{c}}\,\bar{a}a_1}{2\sqrt{2}}C_m\left( i\left(\hat{\Sigma}'_{m}-\hat{\Sigma}_{m}^{\prime\,\dagger}\right)\hat{X}_\t{L}+\zeta_m\left(\hat{\Sigma}'_{m}+\hat{\Sigma}_{m}^{\prime\,\dagger}\right)\hat{P}_\t{L}\right),
\end{multline}
where $\zeta_m=2(2m+1)a_2/a_1$, and $E_{m}=E_{\t{Zeem},m}+E_{\t{Stark},m}$ is the sum of the Zeeman and the Stark energies. In the limit of a large number of atoms in the ensemble, the two sets of $\hat{\Sigma}_{m}$ operators are independent and have constant commutators,
\begin{align}
&\left[\hat{\Sigma}_n,\hat{\Sigma}_m^\dagger\right]=\delta_{nm}\left(\hat{N}_{m+1}-\hat{N}_m\right)&&\underset{N\gg 1}{\longrightarrow}&&\delta_{nm}\left(N_{m+1}-N_m\right),\\
&\left[\hat{\Sigma}_n,\hat{\Sigma}_{m}^{\prime\,\dagger}\right]=\delta_{nm}\sum_k\frac{\Delta g_k(t)}{\sqrt{\langle\Delta g^2\rangle_{c}}}\left(\hat{\sigma}_{m+1,m+1}^{(k)}- \hat{\sigma}_{m,m}^{(k)}\right)&&\underset{N\gg 1}{\longrightarrow}&& 0,\\
&\left[\hat{\Sigma}'_n,\hat{\Sigma}_{m}^{\prime\,\dagger}\right]=\delta_{nm}\sum_j\frac{\Delta g_j(t)^2}{\langle\Delta g^2\rangle_{c}}\left(\hat{\sigma}_{m+1,m+1}^{(j)}- \hat{\sigma}_{m,m}^{(j)}\right)&&\underset{N\gg 1}{\longrightarrow}&&\delta_{nm}\left(N_{m+1}-N_m\right),
\end{align}
where $m,n=-F,...,F-1$, and $N_m=\langle \hat{N}_m \rangle$ are the average macroscopic populations of the magnetic sublevels.
By normalizing the $\Sigma$ operators to satisfy the canonic commutation relations, we can introduce two sets of bosonic modes, $\hat{b}_m$ and $\hat{b}'_m$, that appear in the main text,
\begin{align}
&\hat{b}_m=\hat{\Sigma}_m/\sqrt{\Delta N_m},
&\hat{b}'_m=\hat{\Sigma}'_m/\sqrt{\Delta N_m},
\end{align}
where $\Delta N_m=N_{m+1}-N_m$.
The modes described by $\hat{b}_m$ are those usually identified with the Larmor precession of the spin ensemble as a whole. They experience coupling to the probe light that is averaged over the atomic trajectories~\cite{borregaard_scalable_2016}, and their coherence time is long, limited by the reorientation of individual spins due to collisions with the walls and with each other, and by the spontaneous scattering of probe photons. The modes described by $\hat{b}'_m$ experience additional damping and decoherence due to the atoms flying in and out of the probe beam. We refer to them as the fast-decaying modes. Introducing the quadratures of the spin oscillators,
\begin{align}\label{eq:atomicOscQuad}
&\hat{X}_m\equiv\frac{1}{i\sqrt{2}}\left(\hat{b}_{m}-\hat{b}_{m}^\dagger\right),&
&\hat{P}_m\equiv-\frac{1}{\sqrt{2}}\left(\hat{b}_{m}+\hat{b}_{m}^\dagger\right),&
&\hat{X}'_m\equiv\frac{1}{i\sqrt{2}}\left(\hat{b}'_{m}-\hat{b}_{m}^{\prime\,\dagger}\right),&
&\hat{P}'_m\equiv-\frac{1}{\sqrt{2}}\left(\hat{b}_{m}'+\hat{b}_{m}^{\prime\,\dagger}\right),
\end{align}
which satisfy $[\hat{X}_m,\hat{P}_m]=i$ and $[\hat{X}'_m,\hat{P}'_m]=i$, and using the fact that, in the Holstein--Primakoff approximation, the numbers of atoms in the $m$-th levels satisfy
\begin{equation}
\hat{N}_m\approx N_m+\frac{1}{2}\left(\hat{b}^\dagger_m\hat{b}_m+\hat{b}_m^{\prime\,\dagger}\hat{b}'_m-\hat{b}^\dagger_{m-1}\hat{b}_{m-1}-\hat{b}_{m-1}^{\prime\,\dagger}\hat{b}'_{m-1}+\t{h.c.}\right),
\end{equation}
the total Hamiltonian in \eqref{eq:si:HtotSigma} is expressed as
\begin{multline}\label{eq:si:hamOscFull}
\hat{H}=\hbar\sum_{m=-F}^{F-1}\left[\frac{\Omega_m}{2}\left(\hat{X}_m^2+\hat{P}_m^2\right)+\frac{\Omega_m}{2}\left(\hat{X}_m^{\prime\,2}
+\hat{P}_m^{\prime\,2}\right)\right.\\\left.-2\sqrt{\Gamma_m}\left(\hat{X}_m\hat{X}_\t{L}+\zeta_m\hat{P}_m\hat{P}_\t{L}\right)
-2\sqrt{\Gamma'_m}\left(\hat{X}'_m\hat{X}_\t{L}+\zeta_m\hat{P}'_m\hat{P}_\t{L}\right)\right],
\end{multline}
which is a Hamiltonian of $4F$ oscillators linearly coupled to a propagating field. The frequencies $\Omega_m$ are determined by the energy splittings between different magnetic sublevels due to the Zeeman and Stark effects,
\begin{equation}
\hbar\Omega_m=E_{\t{Zeem},m}-E_{\t{Zeem},m+1}-\hbar\bar{g}a_2I(2m+1),
\end{equation}
and
the measurement rates for the slow- and the fast-decaying modes are identified as
\begin{align}
&\Gamma_m=\bar{g}^2(\bar{a}a_1\,C_m)^2\Delta N_m/16,&
&\Gamma'_m=\langle\Delta g^2\rangle_{c}\,(\bar{a}a_1\,C_m)^2\Delta N_m/16.
\end{align}
The input-output relations for the quadratures of the light field are derived based on \eqref{eq:si:hamOscFull} as described in Ref.~\cite{hammerer_teleportation_2005}. They are given by
\begin{align}
&\hat{X}_\t{L}^\t{out}(t)=\hat{X}_\t{L}^\t{in}(t)-\sum_{m=-F}^{F-1}\zeta_m\left(\sqrt{\Gamma_m}\hat{P}_{m}(t)+\sqrt{\Gamma'_{m}}\hat{P}'_{m}(t)\right),\label{eq:si:XI-O}\\
&\hat{P}_\t{L}^\t{out}(t)=\hat{P}_\t{L}^\t{in}(t)+\sum_{m=-F}^{F-1}\left(\sqrt{\Gamma_m}\hat{X}_{m}(t)+\sqrt{\Gamma'_{m}}\hat{X}'_{m}(t)\right),\label{eq:si:PI-O}
\end{align}
and the Heisenberg equations of motion for the slow-decaying modes are
\begin{align}
&\frac{d}{dt}\hat{X}_m(t)=\Omega_m \hat{P}_m(t)-\sum_{n=-F}^{F-1}\zeta_m\sqrt{\Gamma_m}\left(\sqrt{\Gamma_n}\hat{X}_{n}(t)+\sqrt{\Gamma'_{n}}\hat{X}'_{n}(t)\right)-2\zeta_m\sqrt{\Gamma_m}\hat{P}_\t{L}^\t{in}(t),\label{eq:si:XHhc}\\
&\frac{d}{dt}\hat{P}_m(t)=-\Omega_m \hat{X}_m(t)-\sum_{n=-F}^{F-1}\zeta_n\sqrt{\Gamma_m}\left(\sqrt{\Gamma_n}\hat{P}_{n}(t)+\sqrt{\Gamma'_{n}}\hat{P}'_{n}(t)\right)+2\sqrt{\Gamma_m}\hat{X}_\t{L}^\t{in}(t).\label{eq:si:PHhc}
\end{align}
Eqs.~(\ref{eq:si:XHhc})--(\ref{eq:si:PHhc}) show that the oscillators experience damping or antidamping by dynamical backaction with the rates $\gamma_{\t{DBA},m}=2\zeta_m\Gamma_m$, and are coupled to each other at the rates $\sqrt{\gamma_{\t{DBA},m}\gamma_{\t{DBA},n}}$ due to the interaction with the common optical bath. For practical calculations, intrinsic dissipation due to the atomic collisions and spontaneous scattering is added to Eqs.~(\ref{eq:si:XHhc})--(\ref{eq:si:PHhc}) using the usual quantum Langevin approach~\cite{thomas_entanglement_2020}. The temperatures of the effective thermal baths can be determined from the equilibrium numbers of excitations in the modes in the absence of probing, $n_\t{th}\equiv\langle \hat{b}^\dagger_m\hat{b}_m\rangle=N_m/\Delta N_m$, which are calculated directly from the definitions of $\hat{b}_m$ under the assumption that the processes that determine the equilibrium populations $N_m$ affect all atoms independently.
The Heisenberg equations of motion describing the evolution of the modes from the fast-decaying family are identical to Eqs.~(\ref{eq:si:XHhc})--(\ref{eq:si:PHhc}), except that they include additional terms due to the explicit time dependence of their operators. These terms are more convenient to present for the annihilation operators than for the quadratures; they are given by
\begin{equation}
\frac{d}{dt}\hat{b}'_m(t)=-i\left[\hat{b}'_m,\hat{H}\right]+\frac{1}{\sqrt{\Delta N_m\,\langle\Delta g^2\rangle_{c}}}\sum_{k=1}^N \left(\frac{d}{dt}\Delta g_k(t)\right) \hat{\sigma}_{m+1,m}^{(k)},
\end{equation}
where $-i[\hat{b}'_m,\hat{H}]$ contributes the terms due to the coherent evolution and the coupling to the light field that are completely analogous to those present in Eqs.~(\ref{eq:si:XHhc})--(\ref{eq:si:PHhc}). The added terms give rise to both extra dissipation and fluctuations. If the motional correlation function is exponential, $\langle \Delta g_k(t_1)\,\Delta g_k(t_2) \rangle\propto e^{-\gamma_b|t_1-t_2|/2}$, as suggested in \cite{borregaard_scalable_2016}, the stochastic evolution of $\Delta g_k(t)$ can be modeled by the Ornstein–Uhlenbeck process,
\begin{equation}
\frac{d}{dt}\Delta g_k(t)=-\frac{\gamma_{b}}{2}\Delta g_k(t)+\sqrt{\gamma_{b}} f_k(t),
\end{equation}
where $\langle f_k(t_1) f_k(t_2)\rangle_{c}=\langle\Delta g^2\rangle_{c}\,\delta(t_1-t_2)$.
In this case, the extra terms in the Heisenberg-Langevin equations for $\hat{b}'$ can be re-expressed as
\begin{equation}
\frac{d}{dt}\hat{b}'_m(t)=-i\left[\hat{b}'_m,\hat{H}\right]-\frac{\gamma_{b}}{2}\hat{b}'_m(t)+\sqrt{\gamma_{b}}\hat{\mathcal{F}}'_b(t),
\end{equation}
where $\langle \hat{\mathcal{F}}^{\prime\dagger}_b(t_1) \hat{\mathcal{F}}'_b(t_2)\rangle=n_\t{th}\delta(t_1-t_2)$ and $n_\t{th}=N_m/\Delta N_m$ is the thermal occupancy of the bath. While the atomic motion increases the decoherence rate, the thermal bath occupancies for the fast- and slow-decaying modes are the same.
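As a numerical sanity check of this noise model, the following minimal Python sketch (all parameter values are illustrative assumptions, not the experimental ones) integrates the Ornstein–Uhlenbeck dynamics of $\Delta g_k(t)$ with the Euler--Maruyama scheme and compares the empirical autocorrelation with the assumed exponential decay $e^{-\gamma_b|\tau|/2}$:
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumptions, not the experimental values)
gamma_b = 2 * np.pi * 1e5    # motional decorrelation rate, rad/s
var_dg = 1.0                 # stationary variance <Delta g^2>_c
dt, n = 1e-8, 500_000        # time step, s, and number of samples

rng = np.random.default_rng(0)
dg = np.zeros(n)
# Euler-Maruyama step for d(dg) = -(gamma_b/2) dg dt + sqrt(gamma_b) dW,
# where dW has standard deviation sqrt(var_dg * dt)
for k in range(n - 1):
    dg[k + 1] = (1 - 0.5 * gamma_b * dt) * dg[k] \
        + np.sqrt(gamma_b * var_dg * dt) * rng.standard_normal()

# Empirical autocorrelation vs. the model var_dg * exp(-gamma_b*tau/2)
for lag in (0, 200, 1000):
    emp = np.mean(dg[:n - lag] * dg[lag:])
    print(lag * dt, emp, var_dg * np.exp(-0.5 * gamma_b * lag * dt))
\end{verbatim}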
\section{The backaction-imprecision product in homodyne detection}
The two conjugate quadratures of the probe light after its interaction with the atomic ensemble, $\hat{X}^\t{out}_\t{L}$ and $\hat{P}^\t{out}_\t{L}$, as well as any intermediate quadrature $\hat{Q}^\phi_\t{L}$,
\begin{equation}
\hat{Q}^\phi_\t{L}(t)=\sin(\phi)\hat{X}_\t{L}^\t{out}(t)+\cos(\phi)\hat{P}_\t{L}^\t{out}(t),
\end{equation}
can be detected by balanced polarization homodyning after passing the output light through a combination of a half- and a quarter-wave plate. The rotation angles of the waveplates allow setting the detection angle $\phi$. The two-sided power spectral density (PSD) of the photocurrent signal is given by
\begin{equation}
S_\phi[\Omega]=\frac{1}{4}\left(1-\eta\right) +\eta\int_{-\infty}^{\infty} e^{i\Omega \tau}\left\langle\hat{Q}^\phi_\t{L}(t+\tau)\,\hat{Q}^\phi_\t{L}(t) \right\rangle d\tau,
\end{equation}
where $\eta$ is the detection efficiency.
When the optical field is in the vacuum state, its correlation is given by $\langle\hat{Q}^\phi_\t{L}(t+\tau)\,\hat{Q}^\phi_\t{L}(t) \rangle=(1/4)\delta(\tau)$, and therefore $S_\phi[\Omega]=1/4$; this value is the shot noise level. The observation $S_\phi[\Omega]<1/4$ means that some of the Fourier-domain modes of light are in squeezed states.
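To make this normalization concrete, a minimal Python sketch (simulated shot-noise-limited data with an assumed sampling rate; not our actual photocurrent processing) estimates the two-sided PSD of a homodyne record with \texttt{scipy.signal.welch} and recovers $S_\phi[\Omega]=1/4$ for a vacuum input:
\begin{verbatim}
import numpy as np
from scipy.signal import welch

fs, n = 10e6, 1_000_000   # sampling rate (assumption) and record length
rng = np.random.default_rng(1)

# Vacuum quadrature: two-sided PSD 1/4 -> sample variance fs/4
q_vac = np.sqrt(fs / 4) * rng.standard_normal(n)

# Finite efficiency mixes in an independent vacuum; the PSD stays at 1/4
eta = 0.9
q_det = np.sqrt(eta) * q_vac \
    + np.sqrt(1 - eta) * np.sqrt(fs / 4) * rng.standard_normal(n)

f, S = welch(q_det, fs=fs, nperseg=4096, return_onesided=False)
print(S.mean() * 4)       # ~1.0: the shot noise level in SN units
\end{verbatim}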
The spectral density of the photocurrent when the homodyne is tuned to detect the $P$ quadrature is given by
\begin{equation}
S_\phi[\Omega] =\frac{1}{4} +\eta\Gamma\, S_{X_S X_S}[\Omega]+\eta\,S_{PP,\t{ext}}[\Omega],
\end{equation}
where $S_{X_S X_S}[\Omega]$ is the spectrum of the total spin motion, and $S_{PP,\t{ext}}[\Omega]$ is the extraneous noise. In the slow-measurement regime when $\Gamma\ll|\Omega_S|$, $S_{PP,\t{ext}}$ comes from the thermal noise of fast-decaying modes (see \secref{sec:si:fit}), and in the fast-measurement regime when $\Gamma\sim|\Omega_S|$, $S_{PP,\t{ext}}=0$. There is no detectable extraneous noise in the $X$ quadrature of light in our experiments.
The spectrum of the imprecision noise for measurements on the $P$ quadrature is given by
\begin{equation}
S_\t{imp}[\Omega]=\frac{1/4+S_{PP,\t{ext}}[\Omega]}{\eta\Gamma}.
\end{equation}
The spectrum of the backaction noise is given by
$S_\t{BA}[\Omega]=\hbar^2\left(\Gamma(1+\zeta^2)+\gamma_\t{sc}\right)$,
where $\gamma_\t{sc}$ is the decoherence rate of the oscillator due to spontaneous scattering, which is proportional to the probe power. We conservatively estimate $\gamma_\t{sc}/\Gamma$ as $1/\mathcal{C}_\t{q}$ (as if all the decoherence of spin oscillators comes from spontaneous scattering).
Overall, the backaction-imprecision product in terms of the two-sided spectral densities is found as
\begin{equation}
\sqrt{S_\t{imp}\,S_\t{BA}}=(\hbar/2)\sqrt{\frac{1}{\eta}\left(1+\frac{S_{PP,\t{ext}}}{\t{SN}}\right)\left(1+\zeta^2+\frac{1}{\mathcal{C}_\t{q}}\right)},
\end{equation}
where $\t{SN}=1/4$ is the shot noise level. This expression exposes how various imperfections of the measurements, including the finite detection efficiency, the extraneous noise, the ``heating'' due to spontaneous scattering, and the dynamical backaction, elevate the backaction-imprecision product above the quantum limit of $\hbar/2$ in our experiments.
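A minimal numerical reading of this expression is sketched below; the imperfection budget is illustrative (assumed values), except for $\mathcal{C}_\t{q}=4.6$, which corresponds to the highest-cooperativity dataset of \secref{sec:si:fit}:
\begin{verbatim}
import numpy as np

def ba_imp_product(eta, s_ext_over_sn, zeta, Cq):
    # backaction-imprecision product in units of hbar/2 (two-sided PSDs)
    return np.sqrt((1 / eta) * (1 + s_ext_over_sn)
                   * (1 + zeta**2 + 1 / Cq))

# Illustrative imperfection budget (assumed values except Cq):
print(ba_imp_product(eta=0.9, s_ext_over_sn=0.1, zeta=0.1, Cq=4.6))
# ~1.22, i.e. about 22% above the quantum limit of hbar/2
\end{verbatim}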
\section{The modeling of the experimental data}\label{sec:si:fit}
To process the experimental data, we model the homodyne spectrum as arising from the dynamics of several oscillator modes coupled to the probe field, using the input-output relations, which are expressed analogously to Eqs.~(\ref{eq:si:XI-O}) and (\ref{eq:si:PI-O}),
\begin{align}
&\hat{X}_\t{L}^\t{out}(t)=\hat{X}_\t{L}^\t{in}(t)-\sum_{i=1}^{n_\t{modes}}\zeta_i\sqrt{\Gamma_i}\hat{P}_{i}(t),
&\hat{P}_\t{L}^\t{out}(t)=\hat{P}_\t{L}^\t{in}(t)+\sum_{i=1}^{n_\t{modes}}\sqrt{\Gamma_i}\hat{X}_{i}(t),\label{eq:si:XPI-Omodel}
\end{align}
and the Heisenberg equations of motion analogous to Eqs.~(\ref{eq:si:XHhc}) and (\ref{eq:si:PHhc}),
\begin{align}
&\frac{d}{dt}\hat{X}_i(t)=\Omega_i \hat{P}_i(t)-\frac{\gamma_{0,i}}{2}\hat{X}_i(t)-\sum_{j=1}^{n_\t{modes}}\zeta_i\sqrt{\Gamma_i\Gamma_j}\hat{X}_{j}(t)-2\zeta_i\sqrt{\Gamma_i}\hat{P}_\t{L}^\t{in}(t)+\hat{F}_i^{X}(t),\label{eq:si:XHhcModel}\\
&\frac{d}{dt}\hat{P}_i(t)=-\Omega_i \hat{X}_i(t)-\frac{\gamma_{0,i}}{2}\hat{P}_i(t)-\sum_{j=1}^{n_\t{modes}}\zeta_j\sqrt{\Gamma_i\Gamma_j}\hat{P}_{j}(t)+2\sqrt{\Gamma_i}\hat{X}_\t{L}^\t{in}(t)+\hat{F}_i^{P}(t).\label{eq:si:PHhcModel}
\end{align}
The index $i$ counts the modes of the model, corresponding to the hybridized resonances we observe in the experimental spectra.
The model accounts for the intrinsic dissipation of the modes characterized by the damping rates $\gamma_{0,i}$, and thermal forces $\hat{F}_i^{X,P}(t)$ via the quantum Langevin approach. The correlators of the thermal forces are
\begin{align}
&\left\langle\hat{F}_i^{X}(t_1)\hat{F}_j^{X}(t_2)\right\rangle=\left\langle\hat{F}_i^{P}(t_1)\hat{F}_j^{P}(t_2)\right\rangle=\delta_{ij}\gamma_{0,i}(n_\t{th}+1/2)\delta(t_1-t_2),
&\left\langle\hat{F}_i^{X}(t_1)\hat{F}_j^{P}(t_2)+\hat{F}_j^{P}(t_2)\hat{F}_i^{X}(t_1)\right\rangle=0.
\end{align}
The intrinsic dissipation in our experiments is dominated by spin depolarization due to the atomic collisions and spontaneous scattering of probe photons, which is why we assume that it symmetrically affects $X$ and $P$, and that the thermal noises are delta-correlated \cite{vasilyev_quantum_2012}. The thermal occupancy of the intrinsic bath is $n_\t{th}=0.9\pm 0.1$, as extracted from the equilibrium macroscopic population distribution of atoms over the magnetic sublevels.
The fast-decaying modes are treated as one, because their frequency splitting is much smaller than their decoherence rates. This mode is accounted for differently at different detunings of the optical probe. At large detunings, the measurement rate of the fast-decaying mode is also much smaller than its decoherence rate, and the dynamical backaction is negligible. In this case, the mode contributes incoherent thermal noise to the measurement of the slow-decaying modes. The spectrum of this noise in the $\hat{P}_\t{L}$ quadrature of the output light is given by
\begin{equation}
S_{PP,\t{ext}}[\Omega] = \Gamma'\int_{-\infty}^{\infty} e^{i(\Omega-\Omega_S) \tau} \frac{\langle \Delta g(t+\tau) \Delta g(t) \rangle_c}{\langle \Delta g(t)^2\rangle_c}d\tau,
\end{equation}
where $\Gamma'$ is the measurement rate of the mode and $\langle \Delta g(t+\tau) \Delta g(t) \rangle_c$ is the correlation function of the atomic motion (introduced in \secref{sec:si:spinOscillators}).
Experimentally, we find that this spectrum at frequencies close to the resonance has a Gaussian shape (consistent with a non-Markovian thermal bath), and describe it using the expression
\begin{equation}
S_{PP,\t{ext}}[\Omega]/\mathrm{SN} = A_{b}\,e^{-(\Omega-\Omega_\mathrm{S})^2/(2\gamma^2_{b})},
\end{equation}
where $A_{b}$ is the magnitude of the added noise on resonance in shot noise (SN) units, and $\gamma_{b}$ is the characteristic decay rate.
The spectral width of the broadband noise is closely related to the transit time $\tau$ of atoms through the probe beam, $\gamma_{\text{b}}\sim 1/\tau = v_{\t{th}}/w$, where $w$ is the width of the beam, $v_{\t{th}}=\sqrt{2k_\t{B} T/M_\t{Cs}}\approx\SI{200}{m/s}$ is the thermal velocity of the atoms, $T=\SI{52}{\degreeCelsius}$ is the operating temperature, $k_\t{B}$ is the Boltzmann constant, and $M_\t{Cs}$ is the mass of one cesium atom.
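The quoted thermal velocity follows directly from these numbers; the short check below also evaluates the resulting $\gamma_b$ estimate for an assumed, illustrative beam width:
\begin{verbatim}
import numpy as np

kB = 1.380649e-23                # Boltzmann constant, J/K
M_Cs = 132.905 * 1.660539e-27    # mass of a cesium atom, kg
T = 52 + 273.15                  # operating temperature, K

v_th = np.sqrt(2 * kB * T / M_Cs)
print(v_th)                      # ~202 m/s

w = 1e-3                         # probe beam width, m (assumption)
print(v_th / w)                  # gamma_b ~ v_th / w ~ 2e5 1/s
\end{verbatim}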
At the optical probe detuning of \SI{0.7}{GHz}, at which the measurement rate of the spin reaches the oscillation frequency, the fast-decaying mode of the atomic ensemble is in the backaction-dominated regime. We therefore include it as an extra oscillator in Eqs.~(\ref{eq:si:XPI-Omodel})--(\ref{eq:si:PHhcModel}). This approach effectively approximates the correlation function of the thermal motion of the mode by an exponential, which in the spectral domain may introduce an error in a frequency window of several hundred kHz around the resonance, much smaller than the full bandwidth of the fit (several MHz).
The full comparison between the model and the experimental data at different optical detunings is shown in \figref{fig:si:fullFits}. The data obtained at \SI{7}{GHz} optical detuning is described by the response of a single oscillator mode to the measurement backaction. The data obtained at \SI{3}{GHz} detuning is described with $n_\t{modes}=2$. At \SI{0.7}{GHz}, we include the fast-decaying mode in the model and describe the experiment with $n_\t{modes}=3$.
The homodyne spectra at all quadratures are processed in one global fit, where the resonance frequencies $\Omega_i$, the measurement rates $\Gamma_i$, the dynamical backaction coefficients $\zeta_i$, the intrinsic damping rates $\gamma_{0,i}$, and the quadrature angles $\phi$ are free parameters, and the values of the thermal occupancy $n_\t{th}$ and the detection efficiency $\eta$ are taken from independent calibrations.
When processing the broadband measurements at \SI{0.7}{GHz}, we additionally correct for the frequency response of the measurement electronic chain. The total quantum cooperativity for the data in \figref{fig:si:fullFits}c is 4.6.
\begin{figure*}[t]
\centering
\includegraphics{full_fits.pdf}
\caption{\small{
a)-c) Power spectral densities (PSD) of homodyne signals recorded at different quadrature angles $\phi$ and laser detunings $\Delta$. The points of different colors show the experimental spectra for different quadrature angles as labeled in the legends. The black curves show the results of global fits at each detuning performed as described in \secref{sec:si:fit}. Gray points show the local oscillator shot noise. Panel a) displays only part of the 17 traces fitted in total.
d) The effect of changing the oscillator mass, $M$, on the homodyne spectrum measured at a quadrature intermediate between $\hat{X}_\t{L}$ and $\hat{P}_\t{L}$. The blue curve shows the spectrum recorded in a negative mass ($M$) configuration, the orange curve shows the spectrum recorded in a positive mass configuration, and the gray curve shows the local oscillator (LO) shot noise. The sign of the mass was changed by inverting the direction of the magnetic field with respect to the $x$ axis. The spectra were recorded using a \SI{12}{mW} probe detuned from the optical transition by \SI{3}{GHz}.}}
\label{fig:si:fullFits}
\end{figure*}
\section{The sign of the mass}\label{sec:si:massFlip}
Spin oscillators can have positive or negative effective masses depending on the orientation of the mean spin alignment $\langle\hat{J}_x\rangle$ with respect to the magnetic field. The sign of the mass determines the overall sign of the response $\chi[\Omega]$ of the oscillator to generalized forces, including the quantum backaction force when the oscillator is subjected to linear measurements. Negative-mass oscillators can cancel measurement backaction on regular material oscillators~\cite{moller_quantum_2017}, and become entangled with them~\cite{thomas_entanglement_2020}.
The sign of the oscillator mass, together with the detection angle and the Fourier frequency, determines the sign of the backaction-imprecision correlations observed in homodyne measurement records.
For multiple resonances, it also inverts the signs of the frequency splittings due to the Stark and quadratic Zeeman effects.
The total effect of inverting the mass sign on homodyne spectra is therefore the reflection of the spectra with respect to the Larmor frequency.
We observe this in \figref{fig:si:fullFits}d, where we invert the sign of the mass by changing the direction of the magnetic field.
\section{The spectrum of the homodyne signal in the presence of dynamical backaction}\label{sec:si:optDamping}
To illustrate the effect that the deviation of the interaction Hamiltonian from pure position measurement-type ($\zeta=0$) has on the detected spectra and the squeezing of light, we present an analytical solution for the optimum-quadrature homodyne spectrum in the single-oscillator model with arbitrary $\zeta\in[-1,1]$ under the rotating-wave approximation (RWA).
For a single mode, by solving Eqs.~(\ref{eq:si:XHhcModel}-\ref{eq:si:PHhcModel}) and using the input-output relations given by \eqref{eq:si:XPI-Omodel}, we find the spectrum of the output signal neglecting the detection losses as
\begin{equation}\label{eq:si:spectrumRWA}
S_\phi[\Omega]/\t{SN}= 1+2\t{Re}\left[\mathcal{A} \chi[\Omega]\right]+|\mathcal{A}\chi[\Omega]|^2\left(1+\frac{\gamma_\t{th}+\gamma_0}{\Gamma(1+\zeta)}\right),
\end{equation}
where $\t{SN}=1/4$ is the shot noise level, $\chi[\Omega]=-(1/2)/(\Delta\Omega+i\gamma/2)$ is the RWA force susceptibility, $\Delta\Omega=\Omega-\Omega_\t{S}$ is the Fourier detuning from the oscillator resonance, $\gamma=\gamma_0+2\zeta\Gamma$ is the total oscillator linewidth, and the transduction factor $\mathcal{A}$ is
\begin{equation}
\mathcal{A}=i \Gamma (1+\zeta) \left((1+\zeta)+(1-\zeta) e^{-2 i \phi }\right).
\end{equation}
By minimizing \eqref{eq:si:spectrumRWA} over the quadrature angle $\phi$, we find the frequency-dependent maximum-squeezing angle $\phi_\t{min}$ via
\begin{equation}
\tan\left(2\phi_\t{min}[\Omega]\right)=-\frac{2\Delta\Omega}{\gamma_\t{dec}},
\end{equation}
where the total decoherence rate $\gamma_\t{dec}=\gamma_\t{th}+\gamma_\t{QBA}$ is the sum of the decoherence rates due to the intrinsic thermal noise, $\gamma_\t{th}$, and the quantum backaction, $\gamma_\t{QBA}$, which are defined as
\begin{align}
&\gamma_\t{th}=(2n_\t{th}+1)\gamma_0,&
&\gamma_\t{QBA}=\Gamma(1+\zeta^2).
\end{align}
The shot-noise normalized signal spectrum at the optimum quadrature is
\begin{equation}
S_{\phi_\t{min}}[\Omega]/\t{SN}=1-\frac{2\gamma_\t{DBA}/\gamma}{1+(2\Delta\Omega/\gamma)^2}-\frac{2\gamma_\t{dec}\Gamma/\gamma^2}{1+(2\Delta\Omega/\gamma)^2}\left((1-\zeta^2)\sqrt{1+\left(\frac{2\Delta\Omega}{\gamma_\t{dec}}\right)^2}-(1+\zeta^2)\right),
\end{equation}
where $\gamma_\t{DBA}=2\zeta\Gamma$ is the contribution of the dynamical backaction to the total oscillator linewidth (the optical damping).
The absolute minimum of the spectrum is found by further minimizing $S_{\phi_\t{min}}[\Omega]$ over $\Delta\Omega$, which can be done analytically in the general case, but yields a cumbersome result.
Instead of presenting this result, we restrict attention to the case $\zeta\ll1$, which is relevant to our experiments, and estimate the minimum noise level by evaluating $S_{\phi_\t{min}}[\Omega]$ at $\Delta\Omega_{\t{min},\zeta=0}=\tfrac{1}{2}\sqrt{\gamma(2\gamma_\t{dec}+\gamma)}$, the optimum Fourier detuning for $\zeta=0$. The result is
\begin{equation}
S_\t{min}\approx 1-\frac{\Gamma}{\gamma_\t{dec}+\gamma_0}-\frac{(\gamma_{0}+\gamma_\t{th})\gamma_\t{DBA}}{(\gamma_{0}+\gamma_\t{dec})^2}.
\end{equation}
When the thermal occupancy of the intrinsic bath is close to zero, and the quantum cooperativity is in the intermediate regime, such that $\gamma_\t{dec}$ has the same order of magnitude as $\gamma_0$, there is an improvement in the minimum noise level from a small positive optical damping.
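This behavior is easy to verify numerically. The sketch below evaluates the approximate $S_\t{min}$ for $n_\t{th}=0$ and an intermediate cooperativity (the rates are illustrative assumptions) and shows that a small positive $\zeta$ lowers the minimum noise level:
\begin{verbatim}
def s_min(Gamma, gamma0, n_th, zeta):
    # approximate minimum of S_phi_min/SN, valid for |zeta| << 1
    gamma_th = (2 * n_th + 1) * gamma0
    gamma_dec = gamma_th + Gamma * (1 + zeta**2)
    gamma_dba = 2 * zeta * Gamma
    return (1 - Gamma / (gamma_dec + gamma0)
            - (gamma0 + gamma_th) * gamma_dba / (gamma0 + gamma_dec)**2)

for zeta in (-0.1, 0.0, 0.1):    # illustrative regime: Gamma = gamma0
    print(zeta, s_min(Gamma=1.0, gamma0=1.0, n_th=0.0, zeta=zeta))
# -> 0.712, 0.667, 0.624: positive optical damping improves squeezing
\end{verbatim}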
\section{The generation of the collimated tophat beam}\label{sec:si:tophat}
\begin{figure}[h]
\includegraphics[width=1\textwidth]{tophat_generation.pdf}
\caption{Optical setups for the generation of collimated tophat beams. $\mathbf{M}_{a,b}$ are ray transfer matrices. a) A simple setup. The dashed black line shows how the beam would propagate after passing the beam shaper and the lens $f_1$, but without passing the negative lens $f_2$. EFL: effective focal length. b) A realistic setup designed using the condition $\mathbf{M}_a=\mathbf{M}_b$. Beam shaper: Gaussian-to-tophat beam-shaping lens.}
\label{fig:si:tophat}
\end{figure}
Optical beams with tophat transverse profiles are commonly produced by passing a collimated Gaussian beam through an aspherical beam shaper, and focusing the beam after the shaper using a spherical lens. In this configuration, the optimum tophat profile (giving the sharpest roll-off of the intensity distribution in the transverse direction) is realized before the focal point, and the beam is tightly focused. In our experiment, it is essential to create a beam in which the tophat profile coincides with the position of the beam waist, and has a relatively large transverse size, enabling a long Rayleigh length extending over the entire cell channel.
Intuition for how to produce a tophat beam that fulfills our criteria can be obtained by examining the setup shown in \figref{fig:si:tophat}a, which is a straightforward extension of the usual beam shaper application scheme with the addition of a negative lens $f_2$.
The optimum tophat transverse profile is realized at a distance one effective focal length (EFL) away from the first lens. The transverse width is proportional to the focal length $f_1$.
The beam is converging at the optimum point because of the full fan angle of the tophat beam shaper (i.e., the divergence that the shaper introduces into the beam).
By placing an appropriate negative lens $f_2$ at the optimum point, the beam can be collimated and its waist made to coincide with the optimum location of the transverse profile.
The required focal length of the negative lens can be calculated from the size of the input Gaussian beam, $w_\text{in}$, and the full fan angle of the beam shaper, $\phi_\text{FA}$, as $f_2 = \frac{(\phi_\text{FA}/w_\text{in})\, f_1}{\phi_\text{FA}/w_\text{in}-1/f_1}$.
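A short numerical illustration of this relation (all input values are assumptions chosen for the example, not our actual beam parameters):
\begin{verbatim}
phi_FA = 8e-3   # full fan angle of the beam shaper, rad (assumption)
w_in = 2e-3     # input Gaussian beam radius, m (assumption)
f1 = 0.2        # focal length of the first lens, m (assumption)

f2 = (phi_FA / w_in) * f1 / (phi_FA / w_in - 1 / f1)
print(f2)       # -0.8 m: a negative (diverging) lens, as required
\end{verbatim}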
The setup in \figref{fig:si:tophat}a would be challenging to implement directly, because the waist position of the beam is located inside the cell, where placing a lens is hardly realistic.
However, one can find an optical setup with an identical ray transfer matrix to the one in \figref{fig:si:tophat}a, but realized using a different physical arrangement of lenses. Such a setup is shown in \figref{fig:si:tophat}b.
The transfer matrices for the two setups, $\mathbf{M}_a$ and $\mathbf{M}_b$, are given by
\begin{align}
\mathbf{M}_a &=
\mathbf{L}(f_2)\mathbf{S}(f_1)\mathbf{L}(f_1),&
\mathbf{M}_b &=
\mathbf{S}(L_3)\mathbf{L}(F_2)\mathbf{S}(L_2)\mathbf{L}(F_1)\mathbf{S}(L_1),
\end{align}
where the matrices for propagation in free space, $\mathbf{S}$, and passing through a lens, $\mathbf{L}$, respectively, are
\begin{align}
\mathbf{S}(L) &=
\begin{bmatrix}
1 & L \\
0 & 1
\end{bmatrix},&
\mathbf{L}(f) &=
\begin{bmatrix}
1 & 0 \\
-1/f & 1
\end{bmatrix}.
\end{align}
In our experiment, the setup in \figref{fig:si:tophat}b is implemented using lenses of pre-determined focal lengths $F_1$ and $F_2$, while the separating distances $L_1$, $L_2$ and $L_3$ are adjusted to meet the condition $\mathbf{M}_a = \mathbf{M}_b$.
Additionally, the matrix $\mathbf{M}_a$ is supplemented by an inversion in the transverse plane, which can be interpreted as passing the beam through an extra 4f optical system; this provides more flexibility in the choice of lenses and more control over the resulting distances.
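The separations follow from this matrix condition. The sketch below (with assumed, illustrative focal lengths) verifies one such solution, including the overall sign flip of $\mathbf{M}_a$ that represents the transverse-plane inversion:
\begin{verbatim}
import numpy as np

def S(L):      # free-space propagation over distance L
    return np.array([[1.0, L], [0.0, 1.0]])

def Lens(f):   # thin lens of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Assumed focal lengths in meters, chosen for the example:
f1, f2 = 0.20, -0.80    # simple setup of panel a)
F1, F2 = 0.15, 0.10     # realistic setup of panel b)

Ma = Lens(f2) @ S(f1) @ Lens(f1)

# Separations satisfying Mb = -Ma; the minus sign is the transverse
# inversion (an extra 4f system) discussed above:
L1, L2, L3 = 0.200, 0.325, 7 / 30
Mb = S(L3) @ Lens(F2) @ S(L2) @ Lens(F1) @ S(L1)
print(np.allclose(Mb, -Ma))   # True
\end{verbatim}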
\section{Introduction}
Adaptive observers are recurrent algorithms that simultaneously reconstruct unmeasured states and identify unknown parameters \cite{b1}. By now, various design methods for such observers have been proposed for linear SISO and MIMO systems \cite{b1, b2, b3, b4}, and important results have been obtained on state reconstruction for nonlinear systems \cite{b5, b6} and for systems with time-varying parameters \cite{b7, b8, b9}. In this study we revisit the deeply investigated problem of joint estimation of the state and parameters of linear SISO systems.
Retrospective analysis of existing solutions to the problem under consideration is given below. In \cite{b2, b3}, adaptive observers for simultaneous estimation of the states and unknown parameters of a linear SISO system were proposed for the first time. Kreisselmeier \cite{b10} proposed a parameterization that, unlike \cite{b2, b3}, allows one to completely separate the observer dynamics from the adaptive loop, making the design of suitable parameter adaptation schemes substantially simpler. Marino and Tomei \cite{b11} generalized the result of \cite{b10} to linearizable nonlinear systems, and in \cite{b12} these results were additionally subjected to robustness stress tests. In \cite{b4}, motivated by \cite{b10, b11, b12}, an alternative observer with a simpler structure was proposed. In \cite{b9}, an overview of existing observer design algorithms for linearizable nonlinear systems with time-varying known parameters and time-invariant unknown parameters is given (see also the review \cite{b1}). The main drawbacks of all the above-mentioned studies are twofold:
\begin{enumerate}
\item[\textbf{D1)}] a strong persistent excitation condition is required to guarantee convergence of the state observation and parameter errors;
\item[\textbf{D2)}] an appropriate state transformation is required to represent the system in a form free of products of unknown parameters and unmeasured states (for example, the observer canonical form).
\end{enumerate}
To overcome the first drawback (\textbf{D1}), two different approaches have been proposed recently. In \cite{b13}, a modified version of the observer from \cite{b10} is proposed, with exponential convergence of the state observation and parameter errors under the weaker requirement of regressor finite excitation. In \cite{b14}, based on the results of \cite{b6, b7}, a parameter estimation-based observer is developed, which, unlike existing ones, \emph{i}) ensures finite-time convergence of the parameter error when the regressor finite excitation condition is met, and \emph{ii}) reconstructs the states using an algebraic rather than a differential equation.
The second problem (\textbf{D2}) is more complex than the first one, since it prevents the application of existing adaptive observers to recover the physical states of dynamical systems. Instead, virtual states of an appropriately chosen state-space form, which includes only products between unknown parameters and measured functions (mappings of the output and input signals), are recovered. At the same time, to solve many real-world control problems (\emph{e.g.}, vibration suppression in two-mass systems \cite{b15}), the physical states rather than the virtual ones are required.
In recent studies \cite{b16, b17}, two new adaptive observers of the physical states of linear SISO systems with polynomial overparameterization have been proposed. The polynomial relation between the parameters of the observer canonical form and the original state space allows one to: (\emph{i}) parameterize and solve the problem of identification of parameters related to unmeasured states, (\emph{ii}) estimate the physical states of the system. The solution \cite{b16} is in line with the studies \cite{b2, b3, b9, b10, b13} and reconstructs the unmeasured state estimates using a differential equation, which is a copy of the system up to the Luenberger corrective feedback. The observer in \cite{b17} is based on the parameter estimation-based observer design procedure \cite{b6, b7, b14} and, unlike \cite{b16}, forms the state estimate using an algebraic equation and, unlike \cite{b14}, allows one to reconstruct the physical states of the system. Both proposed observers overcome \textbf{D1} and guarantee exponential convergence of the state observation error under the much weaker finite excitation condition. Comparing the observers from \cite{b16, b17} with each other, it should be noted that the solution in \cite{b17} does not use the Luenberger corrective feedback and, therefore, is not affected by the peaking phenomenon in the course of transients (see Comparative Simulation in \cite{b16} and \cite{b17}).
In \cite{b18} the results of \cite{b16} are extended to the class of systems with external unmeasured disturbances generated by known exosystems with unknown initial conditions. In this study we extend the results of \cite{b17} in a similar way. The main advantages and salient features of the observer proposed in this research are summarized as follows:
\begin{enumerate}
\item[\textbf{F1)}] in comparison with \cite{b2, b3, b4, b8, b9, b10, b11, b12, b13, b14}, the physical states of a system represented in an arbitrary state space form are reconstructed, rather than the virtual states of an appropriately chosen state space form;
\item[\textbf{F2)}] the convergence of the unmeasured state estimates to their true values is guaranteed if the regressor finite excitation condition holds;
\item[\textbf{F3)}] in comparison with \cite{b16, b18}, the Luenberger corrective feedback is not applied, and state estimates are formed using an algebraic rather than a differential equation;
\item[\textbf{F4)}] in addition to the state and unknown parameter estimates, an unmeasured external disturbance generated by a known exosystem with unknown initial conditions is also reconstructed.
\end{enumerate}
The organization of this paper is as follows. Section II provides a rigorous problem statement. The design procedure for the proposed observer and its properties analysis are elucidated in Section III. Section IV presents some simulation results to validate the advantages of the proposed method in comparison with \cite{b16, b18}. The paper is wrapped up with conclusion and further research scope in Section V.
The following definitions of a heterogeneous mapping and of the regressor finite excitation condition are used throughout the paper.
{\it \bf Definition 1.} \emph{A mapping ${\cal F}\left( x \right){\rm{:\;}}{\mathbb{R}^{{n_x}}} \to {\mathbb{R}^{{n_{\cal F}} \times {m_{\cal F}}}}$ is heterogeneous of degree ${\ell _{\cal F}} \ge 1$ if there exist ${\Xi _{\cal F}}\left( {\omega \left( t \right)} \right) = {\overline \Xi _{\cal F}}\left( {\omega \left( t \right)} \right)\omega \left( t \right) \in {\mathbb{R}^{{\Delta _{\cal F}} \times {n_x}}}{\rm{,\;}}{\Pi _{\cal F}}\left( {\omega \left( t \right)} \right) \in {\mathbb{R}^{{n_{\cal F}} \times {n_{\cal F}}}}$, and a mapping ${{\cal T}_{\cal F}}\left( {{\Xi _{\cal F}}\left( {\omega \left( t \right)} \right)x} \right){\rm{:\;}}{\mathbb{R}^{{\Delta _{\cal F}}}} \to {\mathbb{R}^{{n_{\cal F}} \times {m_{\cal F}}}}$ such that for all $ \omega \left( t \right) \in \mathbb{R}$ and $x \in {\mathbb{R}^{{n_x}}}$ the following conditions hold:}
\begin{equation}\label{eq1}
\begin{array}{c}
{\Pi _{\cal F}}\left( {\omega \left( t \right)} \right){\cal F}\left( x \right) = {{\cal T}_{\cal F}}\left( {{\Xi _{\cal F}}\left( {\omega \left( t \right)} \right)x} \right){\rm{, }}\\
{\rm{det}}\left\{ {{\Pi _{\cal F}}\left( {\omega \left( t \right)} \right)} \right\} \ge {\omega ^{{\ell _{_{\cal F}}}}}\left( t \right)\!{\rm{,}}{\Xi _{\cal F}}_{ij}\!\left( {\omega \left( t \right)} \right) = {c_{ij}}{\omega ^\ell }\left( t \right)\!{\rm{,}}\\
{c_{ij}} \in \left\{ {0,{\rm{ 1}}} \right\}{\rm{,\;}}\ell > 0.
\end{array}
\end{equation}
For instance, the mapping ${\cal F}\left( x \right) = {\rm{col}}\left\{ {{x_1}{x_2}{\rm{,\;}}{x_1}} \right\}$ with ${\Pi _{\cal F}}\left( \omega \right) = {\rm{diag}}\left\{ {{\omega ^2}{\rm{,\;}}\omega } \right\}{\rm{,\;}}{\Xi _{\cal F}}\left( \omega \right) = {\rm{diag}}\left\{ {\omega {\rm{,\;}}\omega } \right\}$ is heterogeneous of degree ${\ell _{\cal F}} = 3.$
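The decomposition of Definition 1 for this example can be verified symbolically. In the sketch below, the explicit form of ${\cal T}_{\cal F}$ is our assumption, consistent with \eqref{eq1}:
\begin{verbatim}
import sympy as sp

x1, x2, w = sp.symbols('x1 x2 omega')

F = sp.Matrix([x1 * x2, x1])          # the mapping F(x)
Pi = sp.diag(w**2, w)                 # Pi_F(omega)
z = sp.Matrix([w * x1, w * x2])       # z = Xi_F(omega) x
T_F = sp.Matrix([z[0] * z[1], z[0]])  # assumed T_F(z) = col{z1*z2, z1}

print(sp.simplify(Pi * F - T_F))      # zero vector: condition (1) holds
print(sp.det(Pi))                     # omega**3, so l_F = 3
\end{verbatim}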
{\it \bf Definition 2.} \emph{The regressor $\varphi \left( t \right) \in {\mathbb{R}^n}$ is finitely exciting $\varphi \left( t \right) \in {\rm{FE}}$ over the time range $\left[ {t_r^ + {\rm{;\;}}{t_e}} \right]$ if there exist $t_r^ + \ge 0$, ${t_e} > t_r^ +$ and $\alpha$ such that the following inequality holds:}
\begin{equation}\label{eq2}
\int\limits_{t_r^ + }^{{t_e}} {\varphi \left( \tau \right){\varphi ^{\rm{T}}}\left( \tau \right)d} \tau \ge \alpha {I_n}{\rm{,}}
\end{equation}
\emph{where $\alpha > 0$ is the excitation level, $I_{n}$ is an identity matrix.}
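For instance, the regressor $\varphi \left( t \right) = {\rm{col}}\left\{ {\sin t{\rm{,\;}}\cos t} \right\}$ is finitely exciting over $\left[ {0{\rm{;\;}}2\pi } \right]$ with the excitation level $\alpha = \pi$; the following sketch checks \eqref{eq2} numerically:
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 10_001)
phi = np.vstack([np.sin(t), np.cos(t)])   # regressor, shape (2, N)

# Gram matrix of condition (2): integral of phi(tau) phi(tau)^T dtau
gram = np.trapz(phi[:, None, :] * phi[None, :, :], t, axis=-1)
print(gram)                               # ~ pi * I_2
print(np.linalg.eigvalsh(gram).min())     # excitation level ~ pi
\end{verbatim}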
\section{Problem Statement}
A class of uncertain linear time-invariant overparameterized systems affected by bounded external disturbances is considered:\footnote{Dependencies on $\theta$ and $t$ are further suppressed where convenient for the sake of brevity.}
\begin{equation}\label{eq3}
\begin{array}{l}
\dot x\left( t \right) = A\left( \theta \right)x\left( t \right) + B\left( \theta \right)u\left( t \right) + D\left( \theta \right)\delta \left( t \right){\rm{,}}\\
y\left( t \right) = {C^{\rm{T}}}x\left( t \right){\rm{,\;}}x\left( {{t_0}} \right) = {x_0}{\rm{,}}
\end{array}
\end{equation}
where $x\left( t \right) \in {\mathbb{R}^n}$ is the vector of original (physical) system states with unknown initial conditions ${x_0}$, $\delta \left( t \right) \in \mathbb{R}$ is a bounded external disturbance, $A{\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^{n \times n}}{\rm{,}}$ $B{\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \!\to\! {\mathbb{R}^n}{\rm{,\;}}$ $D{\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^n}$ denote known mappings with an unknown parameter $\theta \in {\mathbb{R}^{{n_\theta }}}$, and $C{\text{\;:=\;}}C(\theta)\in {\mathbb{R}^n}$ stands for a known vector or a mapping that depends only on $\theta$. The pair $\left( {{C^{\rm{T}}}{\rm{,\;}}A\left( \theta \right)} \right)$ is completely observable, and only the control $u\left( t \right) \in \mathbb{R}$ and output $y\left( t \right) \in \mathbb{R}$ signals are measurable.
Considering the control signal and disturbances, the following assumptions are adopted.
{\it \bf Assumption 1.} \emph{For all $t \ge {t_0}$ the control signal $u\left( t \right)$ ensures existence and boundedness of trajectories of the system \eqref{eq3}.}
{\it \bf Assumption 2.} \emph{The disturbance $\delta \left( t \right)$ is bounded, continuous and generated by a time-invariant exosystem:
\begin{equation}\label{eq4}
\begin{array}{l}
{{\dot x}_\delta }\left( t \right) = {{\cal A}_\delta }{x_\delta }\left( t \right){\rm{,\;}}{x_\delta }\left( {{t_0}} \right) = {x_{\delta 0}}{\rm{,}}\\
\delta \left( t \right) = h_\delta ^{\rm{T}}{x_\delta }\left( t \right){\rm{,}}
\end{array}
\end{equation}
where ${x_\delta }\left( t \right) \in {\mathbb{R}^{{n_\delta }}}$ stands for an exosystem state with unknown initial conditions ${x_{\delta 0}}\left( {{t_0}} \right)$, ${h_\delta } \in {\mathbb{R}^{{n_\delta }}}{\rm{,\;}}{{\cal A}_\delta } \in {\mathbb{R}^{{n_\delta } \times {n_\delta }}}$ are known vector and matrix such that the pair $\left( {h_\delta ^{\rm{T}}{\rm{,\;}}{{\cal A}_\delta }} \right)$ is observable.}
The goal is to design an adaptive observer, which ensures that the following equalities hold:
\begin{equation}\label{eq5}
\mathop {{\rm{lim}}}\limits_{t \to \infty } \left\| {\tilde x\left( t \right)} \right\| = 0{\rm{\;}}\left( {\exp } \right){\rm{,\;}}\mathop {{\rm{lim}}}\limits_{t \to \infty } \left\| {\tilde \delta \left( t \right)} \right\| = 0{\rm{\;}}\left( {\exp } \right){\rm{,}}
\end{equation}
where $\tilde x\left( t \right) = \hat x\left( t \right) - x\left( t \right)$ is a state observation error of the system \eqref{eq3}, $\tilde \delta \left( t \right) = \hat \delta \left( t \right) - \delta \left( t \right)$ is a disturbance observation error.
\section{Main Result}
In accordance with the results from \cite{b2}, for each completely observable linear system \eqref{eq3} there exist nonsingular matrices:
\begin{gather*}
{\small{
\begin{array}{c}
{T_{I}}\left( \theta \right) \!=\! {\begin{bmatrix}
{{A^{n - 1}}\left( \theta \right){{\cal O}_n}\left( \theta \right)}&{{A^{n - 2}}\left( \theta \right){{\cal O}_n}\left( \theta \right)}& \cdots &{{{\cal O}_n}\left( \theta \right)}
\end{bmatrix}}{\rm{,}}\\
{{\cal O}_n}\left( \theta \right) = {\cal O}\left( \theta \right){{\begin{bmatrix}
{{0_{1 \times \left( {n - 1} \right)}}}&1
\end{bmatrix}}^{\rm{T}}}{\rm{,}}\\
{{\cal O}^{ - 1}}\!\left( \theta \right) \!=\! {{\begin{bmatrix}
{C\left( \theta \right)}&{{{\left( {A\left( \theta \right)} \right)}^{\rm{T}}}C\left( \theta \right)}& \!\cdots\! &{{{\left( {{A^{n - 1}}\left( \theta \right)} \right)}^{\rm{T}}}C\left( \theta \right)}
\end{bmatrix}}^{\rm{T}}}{\rm{,}}
\end{array}}}
\end{gather*}
which define the similarity transformation $\xi \left( t \right) = T\left( \theta \right)x\left( t \right)$ to rewrite the system \eqref{eq3} in the observer canonical form:
\begin{equation}\label{eq7}
\dot \xi \left( t \right)\! =\! {A_0}\xi \left( t \right) + {\psi _a}\left( \theta \right)y\left( t \right) + {\psi _b}\left( \theta \right)u\left( t \right)+{\psi _d}\left( \theta \right)\delta \left( t \right){\rm{,}}
\end{equation}
\begin{equation}\label{eq8}
y\left( t \right) = C_0^{\rm{T}}\xi \left( t \right){\rm{,\;}}\xi \left( {{t_0}} \right) = {\xi _0}\left( \theta \right) = T\left( \theta \right){x_0}{\rm{,}}
\end{equation}
where
\begin{gather*}
\begin{array}{c}
{\psi _a}\left( \theta \right) = T\left( \theta \right)A\left( \theta \right){T^{ - 1}}\left( \theta \right){C_0}{\rm{,\;}}{\psi _b}\left( \theta \right) = T\left( \theta \right)B\left( \theta \right){\rm{,}}\\
{\psi _d}\left( \theta \right) = T\left( \theta \right)D\left( \theta \right){\rm{,}}\\
{A_0} = {\begin{bmatrix}
{{0_n}}&{\begin{array}{c}
{{I_{n - 1}}}\\
{{0_{1 \times \left( {n - 1} \right)}}}
\end{array}}
\end{bmatrix}}{\rm{,\;}}C_0^{\rm{T}} = {C^{\rm{T}}}\left( \theta \right){T^{ - 1}}\left( \theta \right) = {\begin{bmatrix}
1&{0_{n - 1}^{\rm{T}}}
\end{bmatrix}}{\rm{,}}
\end{array}
\end{gather*}
${T_{I}}\left( \theta \right){\rm{:}}=T^{-1}\left( \theta \right)$, ${{\cal O}_n}$ is the $n$-th column of the matrix ${\cal O}\left( \theta \right)$ inverse to ${{\cal O}^{ - 1}}\left( \theta \right)$, $\xi \left( t \right) \in {\mathbb{R}^n}$ denotes the state vector of the observer canonical form with unknown initial conditions ${\xi _0}$, and the vector ${C_0} \in {\mathbb{R}^n}$ and the mappings ${\psi_a}{\rm{,\;}}{\psi _b}{\rm{,\;}}{\psi _d}{\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^n}$ are known.
The similarity transformation $\xi \left( t \right) = T\left( \theta \right)x\left( t \right)$ and the representation \eqref{eq7}, \eqref{eq8} motivate reconstructing the unmeasured states $x\left( t \right)$ in the following way:
\begin{equation}\label{eq9}
\hat x\left( t \right) = {\hat T_I}\left( t \right)\hat \xi \left( t \right){\rm{,}}
\end{equation}
where $\hat \xi \left( t \right)$ is the estimate of observer canonical form \eqref{eq7} states, ${\hat T_I}\left( t \right)$ stands for the estimate of the matrix ${T_I}\left( \theta \right)$.
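A minimal numerical sketch of this construction for an illustrative second-order system (the parameter values are assumptions) shows how ${T_I}\left( \theta \right)$ is built from the observability matrix and that the transformed pair indeed takes the observer canonical form:
\begin{verbatim}
import numpy as np

# Illustrative 2nd-order system with assumed parameter values:
th1, th2 = 2.0, 3.0
A = np.array([[0.0, 1.0], [-th1, -th2]])
C = np.array([1.0, 0.0])
n = A.shape[0]

# O^{-1}(theta): rows C^T, C^T A, ..., C^T A^{n-1}
O_inv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
O_n = np.linalg.inv(O_inv)[:, -1]             # n-th column of O(theta)
T_I = np.column_stack([np.linalg.matrix_power(A, n - 1 - i) @ O_n
                       for i in range(n)])    # T_I(theta) = T^{-1}(theta)
T = np.linalg.inv(T_I)

print(C @ T_I)      # C_0^T = [1, 0]
print(T @ A @ T_I)  # A_0 + psi_a C_0^T; here psi_a = [-th2, -th1]
\end{verbatim}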
At the same time, the solution of the set of equations \eqref{eq4} is written as:
\begin{equation}\label{eq10}
\begin{array}{l}
{{\dot \Phi }_\delta }\left( t \right) = {{\cal A}_\delta }{\Phi _\delta }\left( t \right){\rm{,\;}}{\Phi _\delta }\left( {{t_0}} \right) = {I_{{n_\delta }}}{\rm{,}}\\
{x_\delta }\left( t \right) = {\Phi _\delta }\left( t \right){x_{\delta 0}}{\rm{,}}\\
\delta \left( t \right) = h_\delta ^{\rm{T}}{\Phi _\delta }\left( t \right){x_{\delta 0}}{\rm{,}}
\end{array}
\end{equation}
which motivates estimating the external disturbance $\delta \left( t \right)$ as:
\begin{equation}\label{eq11}
\hat \delta \left( t \right) = h_\delta ^{\rm{T}}{\Phi _\delta }\left( t \right){\hat x_{\delta 0}}\left( t \right).
\end{equation}
According to equations \eqref{eq9} and \eqref{eq11}, the problem \eqref{eq5} of unmeasured signals estimation is transformed into the problem of identification of unknown parameters ${x_{\delta 0}}{\rm{,\;}}{T_I}\left( \theta \right)$ and estimation of unmeasured states $\xi \left( t \right)$ of the observer canonical form \eqref{eq7}. According to \cite{b6, b7, b14}, the problem of $\xi \left( t \right)$ estimation can be reduced to the one of parameter identification. Thus, using the results \cite{b14, b18, b19}, the following parameterizations are obtained for the unknown parameters $\eta \left( \theta \right) = {\rm{col}}\left\{ {{\psi _a}\left( \theta \right){\rm{,\;}}{\psi _b}\left( \theta \right)} \right\}$ and states $\xi \left( t \right)$.
{\bf{Lemma 1.}} \emph{Let ${t_\epsilon} > {t_0}$ be a sufficiently large predefined time instant, then for all $t \ge {t_\epsilon}$ the unknown parameters $\eta \left( \theta \right)$ and unmeasured states $\xi \left( t \right)$ satisfy the following regression models:}
\begin{equation}\label{eq12}
\begin{array}{c}
{\cal Y}\left( t \right) = \Delta \left( t \right)\eta \left( \theta \right),
\end{array}
\end{equation}
\vspace{-25pt}
\begin{gather*}
{\cal Y}\left( t \right) = k\left( t \right) \cdot {\rm{adj}}\left\{ {\varphi \left( t \right)} \right\}q\left( t \right){\rm{, }}\Delta \left( t \right) = k\left( t \right) \cdot {\rm{det}}\left\{ {\varphi \left( t \right)} \right\}
\end{gather*}
\vspace{-15pt}
\begin{equation}\label{eq13}
\begin{array}{c}
\xi \left( t \right) = z\left( t \right) + {R^{\rm{T}}}\left( t \right)\kappa \left( \theta \right){\rm{,}}\\
{\rm{ }}\kappa \left( \theta \right) = {\left[ {\begin{array}{*{20}{c}}
{\psi _a^{\rm{T}}\left( \theta \right)}&{\psi _b^{\rm{T}}\left( \theta \right)}&{\psi _d^{\rm{T}}\left( \theta \right)}
\end{array}} \right]^{\rm{T}}}{\rm{,}}\\
{R^{\rm{T}}}\left( t \right) = \left[ {\begin{array}{*{20}{c}}
{\Omega \left( t \right)}&{P\left( t \right)}&{U\left( t \right)}
\end{array}} \right]{\rm{,}}
\end{array}
\end{equation}
where
\begin{gather*}
\begin{array}{c}
q\left( t \right) = \int\limits_{{t_\epsilon}}^t {{e^{ - \sigma \tau }}{{\overline \varphi }_f}\left( \tau \right)\left( {\overline q\left( \tau \right) - {k_1}{{\overline q}_f}\left( \tau \right) - {\beta ^{\rm{T}}}\left( {{F_f}\left( \tau \right) + l{y_f}\left( \tau \right)} \right)} \right)d\tau } {\rm{,}}\\
q\left( {{t_\epsilon}} \right) = {0_{{\rm{2}}n}},
\end{array}
\end{gather*}
\vspace{-15pt}
\begin{equation}\label{eq14}
\begin{array}{c}
\varphi \left( t \right) = \int\limits_{{t_\epsilon}}^t {{e^{ - \sigma \tau }}{{\overline \varphi }_f}\left( \tau \right)\overline \varphi _f^{\rm{T}}\left( \tau \right)d\tau } {\rm{,\;}}\varphi \left( {{t_\epsilon}} \right) = {0_{{\rm{2}}n \times {\rm{2}}n}},
\end{array}
\end{equation}
\begin{equation}\label{eq15}
\begin{array}{l}
{{\dot {\overline q}}_f}\left( t \right) = - {k_1}{{\overline q}_f}\left( t \right) + \overline q\left( t \right){\rm{,\;}}{{\overline q}_f}\left( {{t_0}} \right) = 0,\\
{{\dot {\overline \varphi} }_f}\left( t \right) = - {k_1}{{\overline \varphi }_f}\left( t \right) + \overline \varphi \left( t \right){\rm{,\;}}{{\overline \varphi }_f}\left( {{t_0}} \right) = {0_{{\rm{2}}n}},\\
{{\dot F}_f}\left( t \right) = - {k_1}{F_f}\left( t \right) + F\left( t \right){\rm{,\;}}{F_f}\left( {{t_0}} \right) = {0_{{n_\delta }}},\\
{{\dot y}_f}\left( t \right) = - {k_1}{y_f}\left( t \right) + y\left( t \right){\rm{,\;}}{y_f}\left( {{t_0}} \right) = 0,
\end{array}
\end{equation}
\begin{equation}\label{eq16}
\begin{array}{c}
\overline q\left( t \right) = y\left( t \right) - C_0^{\rm{T}}z{\rm{,\;}}\overline \varphi \left( t \right) = {\begin{bmatrix}
{{{\dot \Omega }^{\rm{T}}}{C_0} + {N^{\rm{T}}}\beta }\\
{{{\dot P}^{\rm{T}}}{C_0} + {H^{\rm{T}}}\beta }
\end{bmatrix}}{\rm{,}}\\
\dot z\left( t \right) = {A_K}z\left( t \right) + Ky\left( t \right){\rm{,\;}}z\left( {{t_0}} \right) = {0_n}{\rm{,}}\\
\dot \Omega \left( t \right) = {A_K}\Omega \left( t \right) + {I_n}y\left( t \right){\rm{,\;}}\Omega \left( {{t_0}} \right) = {0_{n \times n}}{\rm{,}}\\
\dot P\left( t \right) = {A_K}P\left( t \right) + {I_n}u\left( t \right){\rm{,\;}}P\left( {{t_0}} \right) = {0_{n \times n}}{\rm{,}}\\
\end{array}
\end{equation}
\begin{gather*}
\begin{array}{c}
\dot U\left( t \right) = {A_K}U\left( t \right) + {I_n}\delta \left( t \right){\rm{,\;}}U\left( {{t_0}} \right) = {0_{n \times n}}{\rm{,}}\\
\dot F\left( t \right) = GF\left( t \right) + Gly\left( t \right) - lC_0^{\rm{T}}\dot z\left( t \right){\rm{,\;}}F\left( {{t_0}} \right) = {0_{{n_\delta }}}{\rm{,}}\\
\dot H\left( t \right) = GH\left( t \right) - lC_0^{\rm{T}}\dot P\left( t \right){\rm{,\;}}H\left( {{t_0}} \right) = {0_{{n_\delta } \times n}}{\rm{,}}\\
\dot N\left( t \right) = GN\left( t \right) - lC_0^{\rm{T}}\dot \Omega \left( t \right){\rm{,\;}}N\left( {{t_0}} \right) = {0_{{n_\delta } \times n}}{\rm{,}}
\end{array}
\end{gather*}
\emph{and, if $\overline \varphi \left( t \right) \in {\rm{FE}}$ over $\left[ {{t_\epsilon}{\rm{;\;}}{t_e}} \right]$, then for all $t \ge {t_e}$ it holds that ${\Delta _{{\rm{max}}}} \ge \Delta \left( t \right) \ge {\Delta _{{\rm{min}}}} > 0$.}
\emph{Here $k\left( t \right) > 0$ is a time-varying (or time-invariant) amplifier, ${k_1} > 0,{\rm{\;}}\sigma > 0$ are filter time constants, ${A_K} = {A_0} - KC_0^{\rm{T}}$ and $G$ stand for stable matrices of appropriate dimensions, $l \in {\mathbb{R}^{{n_\delta }}}$ denotes a vector such that the pair $\left( {G{\rm{,\;}}l} \right)$ is controllable, $G$ is chosen so as to satisfy the condition $\sigma \left\{ {{{\cal A}_\delta }} \right\} \cap \sigma \left\{ G \right\} = \varnothing$, and $\beta \in {\mathbb{R}^{{n_\delta }}}$ is a solution of the following set of equations:}
\begin{gather*}
\begin{array}{l}
{M_\delta }{{\cal A}_\delta } - G{M_\delta } = l\overline h_\delta ^{\rm{T}}{\rm{,\;}}\overline h_\delta ^{\rm{T}} = h_\delta ^{\rm{T}}{{\cal A}_\delta }{\rm{,}}\\
\beta = \overline h_\delta ^{\rm{T}}M_\delta ^{ - 1}.
\end{array}
\end{gather*}
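For reference, the pair $\left( {{M_\delta }{\rm{,\;}}\beta } \right)$ can be computed with a standard Sylvester solver; the sketch below uses an illustrative harmonic exosystem (all numeric values are assumptions):
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

Ad = np.array([[0.0, 1.0], [-1.0, 0.0]])  # harmonic exosystem (assumption)
hd = np.array([1.0, 0.0])
G = np.diag([-1.0, -2.0])    # stable, spectrum disjoint from that of Ad
l = np.array([1.0, 1.0])     # (G, l) controllable

hbar = hd @ Ad                             # \bar h_delta^T = h_delta^T Ad
# M_d Ad - G M_d = l hbar^T  <=>  (-G) M_d + M_d Ad = l hbar^T
Md = solve_sylvester(-G, Ad, np.outer(l, hbar))
beta = np.linalg.solve(Md.T, hbar)         # beta^T = hbar^T Md^{-1}
print(Md, beta)
\end{verbatim}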
\emph{The proof of the lemma is presented in the supplementary material \cite{b20}.}
Following parametrizations \eqref{eq9}, \eqref{eq11}, \eqref{eq13}, in order to estimate the states $x\left( t \right)$, it is sufficient to obtain the estimates of $\kappa \left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right)$ and ${x_{\delta 0}}.$ However, the regression equation \eqref{eq12} allows one to find only the parameters $\eta \left( \theta \right)$. Therefore, it is required to: \emph{a}) parametrize an equation with respect to (w.r.t.) ${x_{\delta 0}}$ and \emph{b}) transform \eqref{eq12} into the regression equations w.r.t. the parameters $\kappa \left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right)$.
If the existence condition (where $\mathcal{L}_{ab} \in \mathbb{R}^{n_{\theta} \times 2n}$ is a matrix that selects handpicked ``good'' elements from $\eta(\theta)$):
\begin{equation}\label{eq17}
\begin{array}{c}
{{\rm{det}} ^2}\left\{ {{\nabla _\theta }{\psi _{ab}}\left( \theta \right)} \right\} > 0,{\rm{\;}}\\{\psi _{ab}}\left( \theta \right) = {{\cal L}_{ab}}\eta \left( \theta \right){\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^{{n_\theta }}}
\end{array}
\end{equation}
of the inverse transform $\theta = {\cal F}\left( {{\psi _{ab}}} \right){\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^{{n_\theta }}}$ is met, then, according to \cite{b16, b17, b18}, the mappings $\kappa \left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right)$ can, in principle, be recovered from $\eta \left( \theta \right)$.
In subsection 3.1 a method is proposed to obtain the regression equations w.r.t. $\kappa \left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right)$ and ${x_{\delta 0}}$ from \eqref{eq12}. In subsection 3.2 an adaptive observer is proposed that uses the above-mentioned regression equations and allows one to achieve the stated goal \eqref{eq5}.
{\bf{Remark 1.}} \emph{It should be noted that in the general case $\kappa \left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right)$ do not satisfy the Lipschitz condition, so the problem \eqref{eq5} cannot be reduced to the identification of $\theta$ (the mappings $\kappa \left( {\hat \theta } \right){\rm{,\;}}{T_I}\left( {\hat \theta } \right)$ can become singular in the course of a transient process). The approach proposed below overcomes this problem by a linear transformation and allows one to identify $\kappa \left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right)$ without operations prone to singularities.}
\subsection{Parametrization of equations w.r.t. $\kappa \left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right){\rm{,\;}}{x_{\delta 0}}$}
First of all, we put forward several hypotheses: \emph{a}) the mapping ${\cal F}\left( {{\psi _{ab}}} \right)$ can be transformed into a linear regression equation w.r.t. $\theta$, and \emph{b}) using the parametrization w.r.t. $\theta$, linear regression equations w.r.t. ${\psi _d}\left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right)$ can be obtained.
{\bf{Hypothesis 1.}} \emph{There exist heterogeneous in the sense of \eqref{eq1} mappings ${\cal G}\left( {{\psi _{ab}}} \right){\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^{{n_\theta } \times {n_\theta }}}$, ${\cal S}\left( {{\psi _{ab}}} \right){\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^{{n_\theta }}}$ such that:}
\begin{equation}\label{eq18}
\begin{array}{c}
{\cal S}\left( {{\psi _{ab}}} \right) = {\cal G}\left( {{\psi _{ab}}} \right){\cal F}\left( {{\psi _{ab}}} \right) = {\cal G}\left( {{\psi _{ab}}} \right)\theta {\rm{,}}\\
\end{array}
\end{equation}
\vspace{-15pt}
\begin{gather*}
{\Pi _\theta }\left( {\Delta \left( t \right)} \right){\cal G}\left( {{\psi _{ab}}} \right) = {{\cal T}_{\cal G}}\left( {{\Xi _{\cal G}}\left( {\Delta \left( t \right)} \right){\psi _{ab}}} \right){\rm{:\;}}{\mathbb{R}^{{\Delta _{\cal G}}}} \to {\mathbb{R}^{{n_\theta } \times {n_\theta }}}{\rm{,}}\\
{\Pi _\theta }\left( {\Delta \left( t \right)} \right){\cal S}\left( {{\psi _{ab}}} \right) = {{\cal T}_{\cal S}}\left( {{\Xi _{\cal S}}\left( {\Delta \left( t \right)} \right){\psi _{ab}}} \right){\rm{:\;}}{\mathbb{R}^{{\Delta _{\cal S}}}} \to {\mathbb{R}^{{n_\theta }}}{\rm{,}}
\end{gather*}
\emph{where $\det \left\{ {{\Pi _\theta }\left( {\Delta \left( t \right)} \right)} \right\} \ge {\Delta ^{{\ell _\theta }}}\left( t \right){\rm{,\;}}rank\left\{ {{\cal G}\left( {{\psi _{ab}}} \right)} \right\} = {n_\theta }{\rm{,\;}}{\ell _\theta } \ge 1$, ${\Xi _{\cal G}}\left( {\Delta \left( t \right)} \right) \in {\mathbb{R}^{{\Delta _{\cal G}} \times {n_\theta }}}$, ${\Xi _{\cal S}}\left( {\Delta \left( t \right)} \right) \!\in\! {\mathbb{R}^{{\Delta _{\cal S}} \times {n_\theta }}}$, and all mappings are known.}
Having introduced the notation
\begin{gather*}{{\cal Y}_{ab}}\left( t \right) = {{\cal L}_{ab}}{\cal Y},\;{{\cal M}_\theta }\left( t \right){\rm{:}} = {\rm{det}}\left\{ {{{\cal T}_{\cal G}}\left( {{{\overline \Xi }_{\cal G}}\left( \Delta \right){{\cal Y}_{ab}}} \right)} \right\},
\end{gather*}
the hypothesis is put forward that \eqref{eq18} can be transformed into regression equations w.r.t. ${\psi _d}\left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right)$.
{\bf{Hypothesis 2.}} \emph{There exist heterogeneous in the sense of \eqref{eq1} mappings ${\cal Q}\left( \theta \right){\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^{n \times n}}{\rm{,\;}}{\cal P}\left( \theta \right){\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^{n \times n}}$ such that:}
\begin{equation}\label{eq19}
{\cal Q}\left( \theta \right) = {\cal P}\left( \theta \right){{T_{I}}}\left( \theta \right){\rm{,}}
\end{equation}
\vspace{-20pt}
\begin{gather*}
\begin{array}{c}
{\Pi _{{T_{I}}}}\left( {{{\cal M}_\theta }} \right){\cal P}\left( \theta \right) = {{\cal T}_{\cal P}}\left( {{\Xi _{\cal P}}\left( {{{\cal M}_\theta }} \right)\theta } \right){\rm{:\;}}{\mathbb{R}^{{\Delta _{\cal P}}}} \to {\mathbb{R}^{n \times n}}{\rm{,}}\\
{\Pi _{{T_{I}}}}\left( {{{\cal M}_\theta }} \right){\cal Q}\left( \theta \right) = {{\cal T}_{\cal Q}}\left( {{\Xi _{\cal Q}}\left( {{{\cal M}_\theta }} \right)\theta } \right){\rm{:\;}}{\mathbb{R}^{{\Delta _{\cal Q}}}} \to {\mathbb{R}^{n \times n}}{\rm{,}}
\end{array}
\end{gather*}
\emph{where ${\Xi _{\cal Q}}\left( {{{\cal M}_\theta }\left( t \right)} \right) \in {\mathbb{R}^{{\Delta _{\cal Q}} \times {n_\theta }}}$, ${\Xi _{\cal P}}\left( {{{\cal M}_\theta }\left( t \right)} \right) \in {\mathbb{R}^{{\Delta _{\cal P}} \times {n_\theta }}}$, $rank\left\{ {{\cal P}\left( \theta \right)} \right\} = n{\rm{,\;}}{\ell _{{T_{I}}}} \ge 1{\rm{,\;}}\det \left\{ {{\Pi _{{{T_{I}}}}}\left( {{{\cal M}_\theta }\left( t \right)} \right)} \right\} \ge {\cal M}_\theta ^{{\ell _{{T_{I}}}}}\left( t \right)$, and all mappings are known.}
{\bf{Hypothesis 3.}} \emph{There exist heterogeneous in the sense of \eqref{eq1} mappings ${\cal W}\left( \theta \right){\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^n}{\rm{,\;}}{\cal R}\left( \theta \right){\rm{:\;}}{\mathbb{R}^{{n_\theta }}} \to {\mathbb{R}^{n \times n}}$ such that:}
\begin{equation}\label{eq20}
{\cal W}\left( \theta \right) = {\cal R}\left( \theta \right){\psi _d}\left( \theta \right){\rm{,}}
\end{equation}
\vspace{-20pt}
\begin{gather*}
\begin{array}{c}
{\Pi _{{\psi _d}}}\left( {{{\cal M}_\theta }\left( t \right)} \right){\cal R}\left( \theta \right) = {{\cal T}_{\cal R}}\left( {{\Xi _{\cal R}}\left( {{{\cal M}_\theta }\left( t \right)} \right)\theta } \right){\rm{:\;}}{\mathbb{R}^{{\Delta _{\cal R}}}} \to {\mathbb{R}^{n \times n}}{\rm{,}}\\
{\Pi _{{\psi _d}}}\left( {{{\cal M}_\theta }\left( t \right)} \right){\cal W}\left( \theta \right) = {{\cal T}_{\cal W}}\left( {{\Xi _{\cal W}}\left( {{{\cal M}_\theta }\left( t \right)} \right)\theta } \right){\rm{:\;}}{\mathbb{R}^{{\Delta _{\cal W}}}} \to {\mathbb{R}^n}{\rm{,}}
\end{array}
\end{gather*}
\emph{where, ${\Xi _{\cal W}}\left( {{{\cal M}_\theta }\left( t \right)} \right) \in {\mathbb{R}^{{\Delta _{\cal W}} \times {n_\theta }}}$, ${\Xi _{\cal R}}\left( {{{\cal M}_\theta }\left( t \right)} \right) \in {\mathbb{R}^{{\Delta _{\cal R}} \times {n_\theta }}}$, ${\rm{det}}\left\{ {{\Pi _{{\psi _d}}}\left( {{{\cal M}_\theta }\left( t \right)} \right)} \right\} \!\ge\! {\cal M}_\theta ^{{\ell _{{\psi _d}}}}\left( t \right){\rm{, }}rank\left\{ {{\cal R}\left( \theta \right)} \right\} \!=\! n{\rm{,\;}}{\ell _{{\psi _d}}} \!\ge\! 1$, and all mappings are known.}
As thoroughly discussed in \cite{b16, b17, b18}, the hypotheses \eqref{eq18}-\eqref{eq20} hold if the mappings ${\cal F}\left( {{\psi _{ab}}} \right){\rm{,\;}}{\psi _d}\left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right)$ are polynomial in $\theta$, which is quite a common situation in practical scenarios. The fact that \eqref{eq18}-\eqref{eq20} are met, together with the property ${\Xi _{\left( . \right)}}\left( {\omega \left( t \right)} \right) = {\overline \Xi _{\left( . \right)}}\left( {\omega \left( t \right)} \right)\omega \left( t \right)$ of heterogeneous mappings \eqref{eq1}, allows one to transform the regression equation w.r.t. $\eta \left( \theta \right)$ into linear regression equations w.r.t. $\kappa \left( \theta \right){\rm{,\;}}{T_I}\left( \theta \right){\rm{,\;}}{x_{\delta 0}}$ without operations prone to singularities (in particular, without division).
\textbf{Lemma 2.} \emph{The unknown parameters $\kappa \left( \theta \right)$, ${T_I}\left( \theta \right)$ and ${x_{\delta 0}}$ for all $t \ge {t_\epsilon}$ satisfy measurable regression equations:}
\begin{equation}\label{eq21}
\begin{array}{c}
{{\cal Y}_\kappa }\left( t \right) = {{\cal M}_\kappa }\left( t \right)\kappa \left( \theta \right){\rm{,}}\\
\end{array}
\end{equation}
\vspace{-20pt}
\begin{gather*}
\begin{array}{c}
{{\cal Y}_\kappa }\left( t \right) = {\rm{adj}}\left\{ {{\rm{blkdiag}}\left\{ {\Delta \left( t \right){I_{2n}}{\rm{,\;}}{{\cal M}_{{\psi _d}}}\left( t \right){I_n}} \right\}} \right\}{\begin{bmatrix}
{{\cal Y}\left( t \right)}\\
{{{\cal Y}_{{\psi _d}}}\left( t \right)}
\end{bmatrix}}{\rm{,}}\\
{{\cal M}_\kappa }\left( t \right) = {\rm{det}}\left\{ {{\rm{blkdiag}}\left\{ {\Delta \left( t \right){I_{2n}}{\rm{,\;}}{{\cal M}_{{\psi _d}}}\left( t \right){I_n}} \right\}} \right\}{\rm{,}}
\end{array}
\end{gather*}
\begin{equation}\label{eq22}
\begin{array}{c}
{{\cal Y}_{{T_I}}}\left( t \right) = {{\cal M}_{{T_I}}}\left( t \right){T_I}\left( \theta \right),\\
\end{array}
\end{equation}
\vspace{-20pt}
\begin{gather*}
\begin{array}{c}
{{\cal Y}_{{T_I}}}\left( t \right) = {\rm{adj}}\left\{ {{{\cal T}_{\cal P}}\left( {{{\overline \Xi }_{\cal P}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right)} \right\}{{\cal T}_{\cal Q}}\left( {{{\overline \Xi }_{\cal Q}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right){\rm{,}}\\
{{\cal M}_{{T_I}}}\left( t \right) = {\rm{det}}\left\{ {{{\cal T}_{\cal P}}\left( {{{\overline \Xi }_{\cal P}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right)} \right\}{\rm{,}}
\end{array}
\end{gather*}
\begin{equation}\label{eq23}
\begin{array}{c}
{{\cal Y}_{{x_{\delta 0}}}}\left( t \right) = {{\cal M}_{{x_{\delta 0}}}}\left( t \right){x_{\delta 0}}{\rm{,}}\\
\end{array}
\end{equation}
\vspace{-20pt}
\begin{gather*}
{{\cal Y}_{{x_{\delta 0}}}}\left( t \right) = {\rm{adj}}\left\{ {{V_f}\left( t \right)} \right\}{p_f}\left( t \right){\rm{,\;}}{{\cal M}_{{x_{\delta 0}}}}\left( t \right) = {\rm{det}}\left\{ {{V_f}\left( t \right)} \right\}{\rm{,}}
\end{gather*}
\emph{where}
1) \emph{the regression ${{\cal Y}_\theta }\left( t \right) = {{\cal M}_\theta }\left( t \right)\theta$ is formed using the following equations:}
\begin{gather*}
\begin{array}{c}
{{\cal Y}_\theta }\left( t \right) = {\rm{adj}}\left\{ {{{\cal T}_{\cal G}}\left( {{{\overline \Xi }_{\cal G}}\left( \Delta \right){{\cal Y}_{ab}}} \right)} \right\}{{\cal T}_{\cal S}}\left( {{{\overline \Xi }_{\cal S}}\left( \Delta \right){{\cal Y}_{ab}}} \right){\rm{,}}\\
{{\cal M}_\theta }\left( t \right) = {\rm{det}}\left\{ {{{\cal T}_{\cal G}}\left( {{{\overline \Xi }_{\cal G}}\left( \Delta \right){{\cal Y}_{ab}}} \right)} \right\}{\rm{,\;}}{{\cal Y}_{ab}}\left( t \right) = {{\cal L}_{ab}}{\cal Y}\left( t \right){\rm{,}}
\end{array}
\end{gather*}
2) \emph{the regression ${{\cal Y}_{{\psi _d}}}\left( t \right) = {{\cal M}_{{\psi _d}}}\left( t \right){\psi _d}\left( \theta \right)$ is formed using the following equations:}
\begin{gather*}
\begin{array}{c}
{{\cal Y}_{{\psi _d}}}\left( t \right) = {\rm{adj}}\left\{ {{{\cal T}_{\cal R}}\left( {{{\overline \Xi }_{\cal R}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right)} \right\}{{\cal T}_{\cal W}}\left( {{{\overline \Xi }_{\cal W}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right){\rm{,}}\\
{{\cal M}_{{\psi _d}}}\left( t \right) = {\rm{det}}\left\{ {{{\cal T}_{\cal R}}\left( {{{\overline \Xi }_{\cal R}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right)} \right\}.
\end{array}
\end{gather*}
3) \emph{the signals ${p_f}\left( t \right)$ and ${V_f}\left( t \right)$ are obtained as follows:}
\begin{equation}\label{eq24}
\begin{array}{l}
{p_f}\left( t \right) = \int\limits_{{t_\epsilon}}^t {{e^{ - \sigma \tau }}\left( {\Delta \left( \tau \right){{\left( {{I_{{n_\delta }}} \otimes {{\cal Y}_{{\psi _d}}}\left( \tau \right)} \right)}^{\rm{T}}}{V^{\rm{T}}}\left( \tau \right) \times } \right.} \\
\left. { \times {C_0}{{\cal M}_{{\psi _d}}}\left( \tau \right)p\left( \tau \right)} \right)d\tau {\rm{,\;}}{p_f}\left( {{t_\epsilon}} \right) = {0_{{n_\delta }}}{\rm{,}}
\end{array}
\end{equation}
\vspace{-15pt}
\begin{gather*}
\begin{array}{c}
p\left( t \right) = \Delta \left( t \right)\overline q\left( t \right) - C_0^{\rm{T}}\Omega \left( t \right){{\cal L}_a}{\cal Y}\left( t \right) - C_0^{\rm{T}}P\left( t \right){{\cal L}_b}{\cal Y}\left( t \right){\rm{,}}\\
{{\cal L}_a}{\psi _{ab}}\left( \theta \right) = {\psi _a}\left( \theta \right){\rm{, }}{{\cal L}_b}{\psi _{ab}}\left( \theta \right) = {\psi _b}\left( \theta \right),\\
{V_f}\left( t \right) = \int\limits_{{t_\epsilon}}^t {{e^{ - \sigma \tau }}\left( {{\Delta ^2}\left( \tau \right){{\left( {{I_{{n_\delta }}} \otimes {{\cal Y}_{{\psi _d}}}\left( \tau \right)} \right)}^{\rm{T}}}{V^{\rm{T}}}\left( \tau \right){C_0} \times } \right.} \\
\times C_0^{\rm{T}}V\left( \tau \right)\left( {{I_{{n_\delta }}} \otimes {{\cal Y}_{{\psi _d}}}\left( \tau \right)} \right)d\tau {\rm{,\;}}{V_f}\left( {{t_\epsilon}} \right) = {0_{{n_\delta } \times {n_\delta }}},\\
\dot V\left( t \right) = {A_K}V\left( t \right) + \left( {h_\delta ^{\rm{T}}{\Phi _\delta }\left( t \right) \otimes {I_n}} \right){\rm{, }}V\left( {{t_0}} \right) = {0_{n \times n{n_\delta }}}{\rm{,}}
\end{array}
\end{gather*}
\emph{and, if $\overline \varphi \left( t \right) \in {\rm{FE}}$ and $\left( {h_\delta ^{\rm{T}}{\Phi _\delta }\left( t \right) \otimes {I_n}} \right) \in {\rm{FE}}$ over $\left[ {{t_\epsilon}{\rm{;\;}}{t_e}} \right]$, then for all $t \ge {t_e}$ it holds that:}
\begin{gather*}
\left| {{{\cal M}_\kappa }\left( t \right)} \right| \ge \underline {{{\cal M}_\kappa }} > 0{\rm{,\;}}\left| {{{\cal M}_{{T_I}}}\left( t \right)} \right| \ge \underline {{{\cal M}_{{T_I}}}} > 0{\rm{,\;}}\\\left| {{{\cal M}_{{x_{\delta 0}}}}\left( t \right)} \right| \ge \underline {{{\cal M}_{{x_{\delta 0}}}}} > 0.
\end{gather*}
\emph{The proof of Lemma 2 is given in the Supplementary material \cite{b20}.}
~
\textbf{Remark 2.} \emph{The elements of the matrices ${\cal G}\left( {{\psi _{ab}}} \right){\rm{,\;}}{\cal P}\left( \theta \right){\rm{,\;}}{\cal R}\left( \theta \right)$ are chosen as the denominators of the mappings ${\cal F}\left( {{\psi _{ab}}} \right){\rm{,\;}}{T_I}\left( \theta \right){\rm{,\;}}{\psi _d}\left( \theta \right)$, respectively. In turn, ${\cal S}\left( {{\psi _{ab}}} \right){\rm{,\;}}{\cal Q}\left( \theta \right){\rm{,\;}}{\cal W}\left( \theta \right)$ are obtained as the result of direct multiplication -- see their definitions in \eqref{eq18}-\eqref{eq20}.}
\subsection{Adaptive Observer Design}
Based on the regression equations \eqref{eq21}-\eqref{eq23} obtained in section 3.1 and using parameterizations \eqref{eq9}-\eqref{eq13}, the estimates of states and external disturbances are obtained as:
\begin{equation}\label{eq25}
\begin{array}{l}
\hat x\left( t \right) = {{\hat T}_I}\left( t \right)\hat \xi \left( t \right) = {{\hat T}_I}\left( t \right)\left( {z\left( t \right) + {{\hat R}^{\rm{T}}}\left( t \right)\hat \kappa \left( t \right)} \right){\rm{,}}\\
\hat \delta \left( t \right) = h_\delta ^{\rm{T}}{\Phi _\delta }\left( t \right){{\hat x}_{\delta 0}}\left( t \right){\rm{,}}\\
\dot {\hat U}\left( t \right) = {A_K}\hat U\left( t \right) + {I_n}\hat \delta \left( t \right){\rm{,\;}}\hat{U}\left( {{t_0}} \right) = {0_{n \times n}}{\rm{,}}
\end{array}
\end{equation}
where ${\hat R^{\rm{T}}}\left( t \right) = {\begin{bmatrix}
{\Omega \left( t \right)}&{P\left( t \right)}&{\hat U\left( t \right)}
\end{bmatrix}}{\rm{,}}$ and estimates $\hat \kappa \left( t \right){\rm{,\;}}{\hat T_I}\left( t \right){\rm{,\;}}{\hat x_{\delta 0}}\left( t \right)$ are obtained with the help of the following differential equations:
\begin{gather*}
\begin{array}{l}
\dot {\hat \kappa} \left( t \right) = \dot {\tilde \kappa} \left( t \right) = - {\gamma _\kappa }{{\cal M}_\kappa }\left( t \right)\left( {{{\cal M}_\kappa }\left( t \right)\hat \kappa \left( t \right) - {{\cal Y}_\kappa }\left( t \right)} \right),\\
\end{array}
\end{gather*}
\vspace{-15pt}
\begin{equation}\label{eq26}
\begin{array}{l}
{{\dot {\hat x}}_{\delta 0}}\left( t \right) = {{\dot {\tilde x}}_{\delta 0}}\left( t \right) = - {\gamma _{{x_{\delta 0}}}}{{\cal M}_{{x_{\delta 0}}}}\left( t \right)\left( {{{\cal M}_{{x_{\delta 0}}}}\left( t \right){{\hat x}_{\delta 0}}\left( t \right) - {{\cal Y}_{{x_{\delta 0}}}}\left( t \right)} \right),
\end{array}
\end{equation}
\vspace{-15pt}
\begin{gather*}
\begin{array}{l}
{{\dot {\hat T}_I}}\left( t \right) = {{\dot {\tilde T}_I}}\left( t \right) = - {\gamma _{{T_I}}}{{\cal M}_{{T_I}}}\left( t \right)\left( {{{\cal M}_{{T_I}}}\left( t \right){{\hat T}_I} - {{\cal Y}_{{T_I}}}\left( t \right)} \right), \\
{\gamma _\kappa } > 0,{\rm{\;}}{\gamma _{{x_{\delta 0}}}} > 0,{\rm{\;}}{\gamma _{{T_I}}} > 0,
\end{array}
\end{gather*}
where $\tilde \kappa \left( t \right) = \hat \kappa \left( t \right) - \kappa \left( \theta \right){\rm{,\;}}{\tilde x_{\delta 0}}\left( t \right) = {\hat x_{\delta 0}}\left( t \right) - {x_{\delta 0}}{\rm{,\;}}{\tilde T_I}\left( t \right) = {\hat T_I}\left( t \right) - {T_I}\left( \theta \right)$ are parametric errors.
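A minimal sketch of how the laws \eqref{eq26} can be implemented in discrete time is given below (our illustration, assuming \texttt{numpy}; the callables \texttt{M} and \texttt{Y} stand for a measurable scalar regressor and the corresponding regression output, e.g. ${{\cal M}_\kappa }\left( t \right)$ and ${{\cal Y}_\kappa }\left( t \right)$).
\begin{verbatim}
import numpy as np

def identify(est0, M, Y, gamma, dt, steps):
    # Forward-Euler integration of the gradient law
    #   d/dt est = -gamma * M(t) * (M(t) * est - Y(t)),
    # with scalar regressor M(t) and measurable output Y(t).
    est = np.array(est0, dtype=float)
    for k in range(steps):
        t = k * dt
        est += dt * (-gamma * M(t) * (M(t) * est - Y(t)))
    return est
\end{verbatim}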
With the stated goal \eqref{eq5} in mind, the properties of the adaptive observer \eqref{eq25}, \eqref{eq26} are analyzed in the following theorem.
\textbf{Theorem.} \emph{Let Assumptions 1 and 2, condition \eqref{eq17} and Hypotheses \eqref{eq18}-\eqref{eq20} be met, then, if $\overline \varphi \left( t \right) \in {\rm{FE}}$ and $\left( {h_\delta ^{\rm{T}}{\Phi _\delta }\left( t \right) \otimes {I_n}} \right) \in {\rm{FE}}$, the observer \eqref{eq25}, \eqref{eq26} ensures that the goal \eqref{eq5} is achieved and additionally guarantees that:}
\begin{equation}\label{eq27}
\begin{array}{c}
\mathop {{\rm{lim}}}\limits_{t \to \infty } \left\| {{{\tilde x}_{\delta 0}}\left( t \right)} \right\| = {\rm{0 }}\left( {\exp} \right){\rm{,\;}}\mathop {{\rm{lim}}}\limits_{t \to \infty } \left\| {\tilde \kappa \left( t \right)} \right\| = {\rm{0}}\left( {\exp} \right){\rm{,}}\\
\mathop {{\rm{lim}}}\limits_{t \to \infty } \left\| {{{\tilde T}_I}\left( t \right)} \right\| = {\rm{0\;}}\left( {\exp} \right){\rm{,\;}}\\\mathop {{\rm{lim}}}\limits_{t \to \infty } \left\| {\hat U\left( t \right) - U\left( t \right)} \right\| = \mathop {{\rm{lim}}}\limits_{t \to \infty } \left\| {\tilde U\left( t \right)} \right\| = {\rm{0\;}}\left( {\exp} \right).
\end{array}
\end{equation}
\emph{The proof of the Theorem is given in the Supplementary material \cite{b20}.}
~
Thus, to solve the problem \eqref{eq5}, first, using the signals measured from the system \eqref{eq3} and the transformations \eqref{eq14}-\eqref{eq16}, the regression equations \eqref{eq12}, \eqref{eq13} are formed w.r.t. the unknown parameters $\eta \left( \theta \right)$ and states $\xi \left( t \right)$ of the observer canonical form \eqref{eq7}. After that, if the conditions \eqref{eq17}-\eqref{eq20} of polynomial overparametrization of the system \eqref{eq3} are met, the obtained regression equation is transformed via \eqref{eq21}-\eqref{eq23} into linear regression equations w.r.t. the similarity matrix ${T_I}\left( \theta \right)$, the parameters $\kappa \left( \theta \right)$ and the initial conditions ${x_{\delta 0}}$, respectively. Then, based on the obtained regression equations, the identification laws \eqref{eq26} are derived, and the algebraic state observer \eqref{eq25} is introduced. In accordance with the theorem, the obtained observer guarantees that the goal \eqref{eq5} is achieved under the extremely weak conditions $\overline \varphi \left( t \right) \in {\rm{FE}}$ and $\left( {h_\delta ^{\rm{T}}{\Phi _\delta }\left( t \right) \otimes {I_n}} \right) \in {\rm{FE}}$. Moreover, the condition $\overline \varphi \left( t \right) \in {\rm{FE}}$ can be:
\emph{i}) verified online by checking that the eigenvalues of $\varphi \left( t \right)$ are non-zero, or \emph{ii}) met by dither signal injection; the condition $\left( {h_\delta ^{\rm{T}}{\Phi _\delta }\left( t \right) \otimes {I_n}} \right) \in {\rm{FE}}$ can be verified offline because the signal $h_\delta ^{\rm{T}}{\Phi _\delta }\left( t \right)$ is known \emph{a priori} in accordance with Assumption 2.
\section{Numerical Experiments}
The system from the experimental section of \cite{b18} was considered:
\begin{equation}\label{eq28}
\begin{gathered}
\dot x = {\begin{bmatrix}
0&{{\theta _1} + {\theta _2}}&0 \\
{ - {\theta _2}}&0&{{\theta _2}} \\
0&{ - {\theta _3}}&0
\end{bmatrix}} x + {\begin{bmatrix}
0 \\
0 \\
{{\theta _3}}
\end{bmatrix}} u+ {\begin{bmatrix}
{{\theta _1}{\theta _2}}\\
0\\
0
\end{bmatrix}}\delta, \hfill \\
y = {\begin{bmatrix}
0&0&1
\end{bmatrix}} x. \hfill \\
\end{gathered}
\end{equation}
After transformation into the observer canonical form \eqref{eq7}, the system \eqref{eq28} is described by the following vectors:
\begin{displaymath}
\begin{array}{c}
{\psi _a} = {\begin{bmatrix}
0 \\
{ - \left( {{\theta _1} + {\theta _2} + {\theta _3}} \right){\theta _2}} \\
0
\end{bmatrix}} {\text{,\;}}{\psi _b} = {\begin{bmatrix}
{{\theta _3}} \\
0 \\
{{\theta _3}{\theta _2}\left( {{\theta _2} + {\theta _1}} \right)}
\end{bmatrix}}{\text{,}}\\{\psi _d} = {\begin{bmatrix}
0&0&{{\theta _1}\theta _2^2{\theta _3}}
\end{bmatrix}}^{\rm{T}}
\end{array}
\end{displaymath}
where
\begin{displaymath}
{\psi _{ab}}\left( \theta \right) = col\left\{ { - \left( {{\theta _1} + {\theta _2} + {\theta _3}} \right){\theta _2}{\text{, }}{\theta _3}{\text{, }}{\theta _3}{\theta _2}\left( {{\theta _2} + {\theta _1}} \right)} \right\}.
\end{displaymath}
The exosystem \eqref{eq4} parameters were set as:
\begin{gather*}
{{\cal A}_\delta } = \begin{bmatrix}
0&1\\
{ - 10}&{ - 0.{\rm{01}}}
\end{bmatrix}{\rm{,\;}}h_\delta ^{\rm{T}} = {\begin{bmatrix}
1&0
\end{bmatrix}}.
\end{gather*}
The transformations ${{\cal T}_i}\left( . \right){\rm{,\;}}i \in \left\{ {{\cal S}{\rm{,\;}}{\cal G}{\rm{,\;}}{\cal P}{\rm{,\;}}{\cal Q}{\rm{,\;}}{\cal R}{\rm{,\;}}{\cal W}} \right\}$ were implemented as (for more details see \cite{b16, b17, b18}):
\begin{displaymath}
\begin{gathered}
{\mathcal{T}_\mathcal{S}}\!\left( {{{\overline \Xi }_\mathcal{S}}\left( \Delta \right){\mathcal{Y}_{ab}}} \right) \!\!=\!\! {\begin{bmatrix}
{{\mathcal{Y}_{2ab}}{{\left( {{\mathcal{Y}_{1ab}}{\mathcal{Y}_{2ab}} \!+\! \Delta {\mathcal{Y}_{3ab}}} \right)}^2} \!-\! \mathcal{Y}_{2ab}^4{\mathcal{Y}_{3ab}}} \\
{ - {\mathcal{Y}_{1ab}}{\mathcal{Y}_{2ab}} - \Delta {\mathcal{Y}_{3ab}}} \\
{{\mathcal{Y}_{2ab}}{\mathcal{Y}_{1ab}}}
\end{bmatrix}}{\text{,}}\\
{\mathcal{T}_\mathcal{G}}\left( {{{\overline \Xi }_\mathcal{G}}\left( \Delta \right){\mathcal{Y}_{ab}}} \right) = diag{\begin{Bmatrix}
{\mathcal{Y}_{2ab}^3\left( {{\mathcal{Y}_{1ab}}{\mathcal{Y}_{2ab}} + \Delta {\mathcal{Y}_{3ab}}} \right)}, \\
{\mathcal{Y}_{2ab}^2}, \\
{\Delta {\mathcal{Y}_{1ab}}}
\end{Bmatrix}}. \\
{{\cal T}_{\cal Q}}\left( {{{\overline \Xi }_{\cal Q}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right) = {\begin{bmatrix}
{ - {\mathcal{Y}_{2\theta }}\left( {{\mathcal{Y}_{1\theta }} + {\mathcal{Y}_{2\theta }}} \right)}&0&{\mathcal{M}_\theta ^2} \\
0&{ - {\mathcal{M}_\theta }}&0 \\
{{\mathcal{M}_\theta }}&0&0
\end{bmatrix}}{\text{, }}\\
{{{\cal T}_{\cal P}}\left( {{{\overline \Xi }_{\cal P}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right)} = diag\left\{ {{\mathcal{Y}_{2\theta }}{\mathcal{Y}_{3\theta }}{\text{, }}{\mathcal{Y}_{3\theta }}{\text{, }}{\mathcal{M}_\theta }} \right\},\\
{{\cal T}_{\cal W}}\left( {{{\overline \Xi }_{\cal W}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right) = {\begin{bmatrix}
0\\
0\\
{{{\cal Y}_{1\theta }}{\cal Y}_{2\theta }^2{{\cal Y}_{3\theta }}}
\end{bmatrix}} {\rm{,\;}}\\{{\cal T}_{\cal R}}\left( {{{\overline \Xi }_{\cal R}}\left( {{{\cal M}_\theta }} \right){{\cal Y}_\theta }} \right) = diag\left\{ {1,{\rm{ 1}}{\rm{, }}{\cal M}_\theta ^4} \right\}.
\end{gathered}
\end{displaymath}
The control signal was generated by the P-controller $u = - 75\left( {r - y} \right)$. The reference signal $r$ and the parameters of the system \eqref{eq28} were chosen as:
\begin{equation}\label{eq29}
r = 100 + 2{\text{.5}}{e^{ - t}}{\text{sin}}\left( {10t} \right){\text{, }}{\theta _1} = {\theta _2} = 1{\text{, }}{\theta _3} = - 1.
\end{equation}
The initial conditions of the system \eqref{eq28}, \eqref{eq4}, parameters of filters \eqref{eq14}-\eqref{eq16}, \eqref{eq24} and identification laws \eqref{eq26} were set as:
\begin{gather*}
\begin{array}{c}
K = { {\begin{bmatrix}
3&3&1
\end{bmatrix}}^{\rm{T}}}{\rm{,\;}}G = {\begin{bmatrix}
{ - 4}&1\\
{ - 2}&0
\end{bmatrix}}{\rm{,\;}}l = {\begin{bmatrix}
1\\
2
\end{bmatrix}}{\rm{,\;}}\beta = {\begin{bmatrix}
{20}\\
{ - 8}
\end{bmatrix}}{\rm{,}}\\
{x_0} = {{\begin{bmatrix}
{ - 1}&0&2
\end{bmatrix}}^{\rm{T}}}{\rm{,\;}}{x_{\delta 0}} = {{\begin{bmatrix}
{500}&{100}
\end{bmatrix}}^{\rm{T}}}{\rm{,\;}}\\\hat \kappa \left( 0 \right) = {{\rm{0}}_9}{\rm{,\;}}{{\hat T}_I}\left( 0 \right) = {0_{3 \times 3}}{\rm{,}}\\
{{\hat x}_{\delta 0}}\left( 0 \right) = {0_2}{\rm{,\;}}{k_1} = 25,{\rm{\;}}k = {10^{19}}{\rm{,\;}}\sigma = 1,{\rm{\;}}{t_\epsilon} = 25,{\rm{ }}\\
{\gamma _{{x_{\delta 0}}}} = 5 \cdot {10^{ - 82}}{\rm{,\;}}{\gamma _\kappa } = {10^{ - 74}}{\rm{,\;}}{\gamma _{{T_I}}} = {10^{ - 23}}.
\end{array}
\end{gather*}
The parameters of the adaptive laws \eqref{eq26} were chosen so as to ensure a rate of convergence that coincided with the one obtained for the adaptive laws in \cite{b18}.
Figure 1 depicts, for all $t \ge 25$, the transients of $\tilde x\left( t \right)$ and $\tilde \delta \left( t \right)$ for the experiments in which the observer \eqref{eq25} and the one from \cite{b18} were used.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.55]{Fig1.pdf}}
\caption{Behavior of state $\tilde x\left( t \right)$ and disturbance $\tilde \delta \left( t \right)$ observation errors}
\label{fig1}
\end{figure}
Figure 2 presents, for all $t \ge 25$, the transients of the parametric errors $\tilde \kappa \left( t \right){\rm{,\;}}{\tilde x_{\delta 0}}\left( t \right){\rm{,\;}}{\tilde T_I}\left( t \right)$ and the norm $\left\| {\tilde U\left( t \right)} \right\|$.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.55]{Fig2.pdf}}
\caption{Behavior of $\left\| {\tilde U\left( t \right)} \right\|$ and parametric errors $\tilde \kappa \left( t \right){\rm{,\;}}{\tilde x_{\delta 0}}\left( t \right){\rm{,\;}}{\tilde T_I}\left( t \right)$}
\label{fig2}
\end{figure}
Therefore, once the parameter identification process had been completed, the unmeasured state and disturbance observation errors converged to zero exponentially. Since the estimates were obtained using the algebraic equations \eqref{eq25}, there was, in contrast to the observer from \cite{b18}, no significant overshoot in the course of the transients.
\section{Conclusion}
For a class of uncertain linear time-invariant overparametrized systems, an extended adaptive observer was proposed that, provided the regressor finite excitation condition held, allowed one to reconstruct the unmeasured states and a bounded external disturbance generated by a known linear exosystem with unknown initial conditions.
In contrast to the solutions \cite{b2, b3, b8, b10, b13, b14}, the proposed observer: \emph{i}) allowed one to reconstruct the physical states $x\left( t \right)$ of the system \eqref{eq3} rather than the virtual ones $\xi \left( t \right)$ of the observer canonical form \eqref{eq7}, and \emph{ii}) provided exponential convergence of the unmeasured state and disturbance observation errors if the regressor finite excitation condition was met. In contrast to the earlier result \cite{b18}, the proposed adaptive observer: (\emph{a}) formed the estimates of the unmeasured states and the external perturbation using algebraic rather than differential equations, and (\emph{b}) was not affected by the peaking phenomenon (significant overshoot in the course of the transients of $\tilde x\left( t \right)$ and $\tilde \delta \left( t \right)$).
\section{Introduction}
\begin{figure}[H]
\centering
\includegraphics[page=1]{Images/3color.pdf}
\hspace{1cm}%
\includegraphics[page=2]{Images/3color.pdf}
\caption{Left: a 3-colorable graph. Right: a non-3-colorable graph.}
\label{fig:excolor}
\end{figure}
The \textsc{3-coloring} problem is one of the most fundamental problems in graph theory.
It poses the following question:
given a graph $G$ with $n$ vertices, can we assign one of three colors (e.g. red, green, or blue) to every vertex, such that no two adjacent vertices are assigned the same color?
Notably, \textsc{3-coloring} is a special case of the \textsc{graph coloring} problem.
In \textsc{graph coloring}, we aim to minimize the number of colors we need to color a graph, such that no two adjacent vertices receive the same color.
\textsc{Graph coloring} is a classic problem in complexity theory; it is one of Karp's original 21 \textsc{NP}-complete problems \cite{Karp1972}.
A year after Karp, Lovász showed that \textsc{3-coloring} is \textsc{NP}-complete \cite{Lovasz1973}.
Specifically, three is the lowest number of colors for which deciding whether a graph can be colored using this number of colors is \textsc{NP}-complete:
\textsc{0-coloring} and \textsc{1-coloring} are both trivially decided in polynomial time, while a \textsc{2-coloring} of a graph can be found, if one exists, by greedily coloring vertices.
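For concreteness, the following Python sketch (our illustration; \texttt{adj} is an adjacency dictionary) finds a 2-coloring greedily via breadth-first search, or reports that none exists:
\begin{verbatim}
from collections import deque

def two_color(adj):
    # adj: dict mapping each vertex to an iterable of its neighbors.
    # Returns a proper 2-coloring as a dict, or None if none exists.
    color = {}
    for s in adj:
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle found
    return color
\end{verbatim}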
\subsection{History of \textsc{3-coloring}}
\begin{table}[]
\centering
\begin{tabular}{lll}
\textbf{Year} & \textbf{Author} & \textbf{Time} \\ \hline
1976 & Lawler & $\mathcal{O}^*(1.44225^n)$ \\
1994 & Schiermeyer & $\mathcal{O}^*(1.415^n)$ \\
2000 & Beigel and Eppstein & $\mathcal{O}^*(1.3289^n)$
\end{tabular}
\caption{\label{tab:overview}Overview of previous algorithms for \textsc{3-coloring}.}
\end{table}
Trivially, \textsc{3-coloring} can be solved in time $\mathcal{O}^*(3^n)$ by attempting every assignment of colors and validating in polynomial time whether it is a proper coloring.
The first non-trivial algorithm was created in 1976 by Lawler \cite{Lawler1976}.
He observed that \textsc{3-coloring} can be solved by iterating over all maximal independent sets with polynomial delay.
For each maximal independent set $I$ in a graph $G$, we assign one color to all vertices in the maximal independent set.
Then, we verify whether $G \backslash I$ is 2-colorable.
If there does not exist an independent set $I$, such that $G \backslash I$ is 2-colorable, $G$ is not 3-colorable.
Moon and Moser showed there exist at most $\mathcal{O}(3^{n/3}) = \mathcal{O}(1.44225^n)$ maximal independent sets in any graph \cite{Moon1965}, which we can iterate over in at most time $\mathcal{O}(3^{n/3}) = \mathcal{O}(1.44225^n)$ \cite{Johnson1988}.
Verifying whether $G \backslash I$ is 2-colorable takes polynomial time, so Lawler's algorithm runs in time $\mathcal{O}^*(3^{n/3})$.
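As an illustration of Lawler's approach, the following Python sketch (our own, assuming the \texttt{networkx} library) exploits the fact that the maximal independent sets of $G$ are exactly the maximal cliques of the complement of $G$, and that a graph is 2-colorable if and only if it is bipartite:
\begin{verbatim}
import networkx as nx

def three_colorable_lawler(G):
    # Maximal independent sets of G = maximal cliques of complement(G).
    for I in nx.find_cliques(nx.complement(G)):
        rest = G.subgraph(set(G) - set(I))
        if nx.is_bipartite(rest):  # the remainder is 2-colorable
            return True
    return False
\end{verbatim}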
In 1994, Schiermeyer found an improved algorithm to solve \textsc{3-coloring} in time $\mathcal{O}^*(1.415^n)$ \cite{Schiermeyer1994}.
Critically, Schiermeyer's algorithm was based on the idea that for any vertex, its set of neighbors must be 2-colorable if the graph is 3-colorable.
Before the time $\mathcal{O}^*(1.3217^n)$ algorithm presented in this paper, the best known algorithm for \textsc{3-coloring} was Beigel and Eppstein's time $\mathcal{O}^*(1.3289^n)$ algorithm \cite{Beigel2005}.
Beigel and Eppstein published this paper in 2000.
They created an algorithm that solves the \textsc{(3,2)-Constraint Satisfaction Problem} (\textsc{(3,2)-CSP}) in time $\mathcal{O}^*(1.36443^n)$.
The \textsc{3-coloring} problem can be reduced to \textsc{(3,2)-CSP} without increasing the instance size, implying that \textsc{3-coloring} can be solved in time $\mathcal{O}^*(1.36443^n)$ as well.
Furthermore, Beigel and Eppstein found that there must exist vertices that are easier to color than by using the \textsc{(3,2)-CSP} algorithm.
By first coloring a subset of the vertices, they reach the time $\mathcal{O}^*(1.3289^n)$ algorithm.
Our algorithm will expand upon Beigel and Eppstein's algorithm and further improve it.
\subsection{State of Graph Coloring}
In $k$-\textsc{coloring}, we determine whether a given graph $G$ can be colored using at most $k$ colors.
Notably, in \textsc{graph coloring}, we want to find the minimum value of $k$ such that we can color $G$ using $k$ colors.
The \textsc{graph coloring} problem can be solved in time $\mathcal{O}^*(2^n)$ by using the technique of inclusion-exclusion \cite{Bjrklund2009}.
Surprisingly, for any $k \geq 7$, the time $\mathcal{O}^*(2^n)$ algorithm is the fastest known.
The fastest known algorithm for \textsc{4-coloring} runs in time $\mathcal{O}^*(1.7272^n)$ and was published by Fomin et al. in 2007 \cite{Fomin2007}.
Fomin et al. showed that a graph will either have a low number of maximal independent sets or a low pathwidth.
They created an algorithm that iterates over all maximal independent sets and one that is fixed-parameter tractable in the pathwidth of the graph;
depending on the graph, they choose to run either of the two.
The algorithm that iterates over all independent sets uses the best known algorithm for \textsc{3-coloring} as a subroutine.
As such, our improvement to \textsc{3-coloring} also improves \textsc{4-coloring}.
We can now solve \textsc{4-coloring} in time $\mathcal{O}^*(1.7247^n)$.
Finally, recent developments by Zamir showed that both \textsc{5-coloring} and \textsc{6-coloring} can be solved in time $\mathcal{O}^*((2-\epsilon)^n)$ for some small value $\epsilon$ (each with a different $\epsilon$) \cite{https://doi.org/10.48550/arxiv.2007.10790}.
For any $k \geq 7$, it is still an open question whether there exists a time $\mathcal{O}^*((2-\epsilon)^n)$ algorithm to solve $k$-\textsc{coloring}.
\subsection{Our Contribution}
In this paper, we prove the following theorem:
\begin{theorem}
\label{the:1}
There is an algorithm for \textsc{3-coloring} running in time $\mathcal{O}^*(1.3217^n)$ on $n$-vertex graphs.
\end{theorem}
We improve upon Beigel and Eppstein's time $\mathcal{O}^*(1.3289^n)$ algorithm.
Their algorithm has been the best known result for over twenty years.
As such, we believe our improvement in the runtime is an important result for the \textsc{3-coloring} problem.
\subsection{Organization}
We expand upon many of Beigel and Eppstein's ideas; we discuss their algorithm in Section \ref{sec:2}.
We introduce a new graph structure, the maximal low-magnitude bushy forest, in Section \ref{sec:forest}, which helps us find vertices that can be colored relatively easily.
We determine how the maximal low-magnitude bushy forest allows us to color vertices more quickly in Section \ref{sec:analysis}.
We combine our findings in Section \ref{sec:lp}, where we analyze the runtime of our algorithm by creating a linear program.
We summarize our algorithm in Section \ref{sec:conclusion}.
\section{Summary of Beigel and Eppstein's algorithm}
\label{sec:2}
In this section, we discuss the time $\mathcal{O}^*(1.3289^n)$ algorithm to solve \textsc{3-coloring} by Beigel and Eppstein \cite{Beigel2005}.
Before the improvements presented in this paper, the algorithm from Beigel and Eppstein was the fastest known algorithm to solve \textsc{3-coloring}.
Notably, we created our algorithm by improving upon their algorithm.
We will present the concepts from Beigel and Eppstein that are important to our improvements.
Some lemmas are slightly modified from Beigel and Eppstein's algorithm, to optimize them for our algorithm.
Beigel and Eppstein showed that the \textsc{(3,2)-Constraint Satisfaction Problem} (\textsc{(3,2)-CSP}) can be used as a black box to solve \textsc{3-coloring} efficiently.
The \textsc{(3,2)-Constraint Satisfaction Problem} consists of a set of variables, each of which must be assigned one of at most three colors.
The combination of a variable and one of its colors is called a variable-color pair.
The \textsc{(3,2)-Constraint Satisfaction Problem} contains constraints between pairs of variable-color pairs:
the two variables cannot both be assigned the colors in their respective variable-color pairs.
We transform \textsc{3-coloring} into \textsc{(3,2)-Constraint Satisfaction Problem} as follows:
\begin{enumerate}
\item Every variable represents a vertex.
\item All variables have the same domain of colors as the vertex they represent.
\item For any two variables representing adjacent vertices, we add a constraint that they cannot both be the same color.
\end{enumerate}
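A minimal Python sketch of this reduction is given below (our illustration; $G$ is assumed to be a \texttt{networkx}-style graph, and the representation of constraints as pairs of variable-color pairs is hypothetical):
\begin{verbatim}
def coloring_to_csp(G):
    # One variable per vertex, domain {0, 1, 2}; for every edge
    # (u, v) and color c, forbid assigning c to both u and v.
    domains = {v: {0, 1, 2} for v in G}
    constraints = {((u, c), (v, c))
                   for (u, v) in G.edges for c in range(3)}
    return domains, constraints
\end{verbatim}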
\begin{theorem}[{\cite[Theorem~1]{Beigel2005}}]
There is an algorithm for the \textsc{(3,2)-Constraint Satisfaction Problem} running in time $\mathcal{O}^*(1.36443^n)$, where $n$ is the number of variables.
\end{theorem}
Beigel and Eppstein showed that the \textsc{(3,2)-Constraint Satisfaction Problem} can be solved in time $\mathcal{O}^*(1.36443^n)$.
The reduction does not increase the instance size, so \textsc{3-coloring} can be solved in time $\mathcal{O}^*(1.36443^n)$ as well.
Notably, given a partially colored graph, we can also use an algorithm for \textsc{(3,2)-CSP} to determine whether there exists an assignment of colors to the uncolored vertices, such that it becomes a valid 3-coloring.
In Beigel and Eppstein's algorithm for the \textsc{(3,2)-Constraint Satisfaction Problem}, there exists a reduction rule that can remove any variable with two or fewer colors from the instance in polynomial time.
Neighbors of colored vertices have at most two possible colors.
So, these vertices can be removed from the instance.
For instance, consider a small set of vertices $S$ that have a large set of neighbors $T$.
There exist at most $3^{|S|}$ valid color assignments for the vertices in $S$. By iterating over these color assignments, we could solve \textsc{3-coloring} in time $\mathcal{O}^*(1.36443^{|V|-|S|-|T|}\cdot3^{|S|})$.
Clearly, when $|T|$ is sufficiently large compared to $|S|$, this will improve the runtime of the algorithm.
In this section, we explain how Beigel and Eppstein found a small set of vertices to color in order to improve the algorithm for \textsc{3-coloring}, as well as our adjustments to their lemmas within the context of the new algorithm.
\subsection{Work Factor}
Beigel and Eppstein's algorithm uses many branching rules to eliminate certain cases from the graph.
A branching rule is a strategy to solve an instance by solving several instances recursively.
To analyze these branches, they use the work factor:
\begin{definition}[Work Factor]
The \textit{work factor}, $\lambda(r_1, r_2, \dots)$, denotes the complexity of a branching rule.
We explore some number of branches in a branching rule.
Each branch explores an instance of reduced size.
In the work factor, each value $r_1, r_2, \dots$ denotes the reduction in instance size in the corresponding branch.
Each work factor $\lambda(r_1, r_2, \dots) = c$, where $c$ is the largest zero of the function $f(x)=1-\sum x^{-r_i}$.
The work factor assists us in calculating the runtime of an algorithm.
At every step, some branching rule is applied.
Each branching rule has some work factor $\lambda$: the algorithm runs within time $\mathcal{O}^*((\max \lambda)^n)$.
So, if all work factors satisfy $\lambda \leq 1.3217$, the algorithm runs in time $\mathcal{O}^*(1.3217^n)$.
\end{definition}
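Since $f(x) = 1 - \sum x^{-r_i}$ is strictly increasing for $x > 0$, its largest zero can be found by bisection. The following Python snippet (an illustration we add, not part of \cite{Beigel2005}) reproduces, for instance, the work factor $\lambda(2, 6, 6) = 1.3022$ used in Lemma \ref{lem:3subgraph}:
\begin{verbatim}
def work_factor(*r, tol=1e-9):
    # Largest zero of f(x) = 1 - sum(x**(-ri)); f is increasing
    # for x > 0, so bisection suffices.
    f = lambda x: 1 - sum(x ** (-ri) for ri in r)
    lo, hi = 1.0, 2.0
    while f(hi) < 0:  # grow the bracket until f(hi) >= 0
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return hi

print(round(work_factor(2, 6, 6), 4))  # 1.3022
\end{verbatim}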
\subsection{Removing Low-Degree Vertices}
By definition, low-degree vertices have few neighbors.
So, we want to avoid trying all of their color assignments.
Indeed, assigning a color to a vertex of low degree reduces the possible colors of few other vertices.
Furthermore, it is relatively unlikely that they will have a colored neighbor either.
Luckily, we can remove many low-degree vertices.
Vertices with two or fewer neighbors can be removed from the instance trivially:
their set of neighbors can never contain all three possible colors.
Thus, there will always be a color available for these vertices.
Furthermore, we can also limit the number of vertices of degree three:
we use a branching rule to ensure no connected subgraph of degree-three vertices exists that contains nine or more vertices or any cycle.
\begin{lemma}[{\cite[Lemma~20]{Beigel2005}}]
\label{lem:3cycle}
Let $G$ be a \textsc{3-coloring} instance in which some cycle consists only of degree-three vertices.
Then we can replace $G$ with smaller instances with work factor at most $1.2433$.
\end{lemma}
\begin{proof}
See Beigel and Eppstein\cite[Lemma~20]{Beigel2005}.
\end{proof}
The next lemma has been adapted from Beigel and Eppstein \cite[Lemma~21]{Beigel2005}.
Originally, it removed connected subgraphs of eight or more degree-three vertices, but we edit this to nine or more degree-three vertices.
\begin{lemma}
\label{lem:3subgraph}
Let $G$ be a \textsc{3-coloring} instance containing a connected subgraph of nine or more
degree-three vertices. Then we can replace $G$ with smaller instances with work factor at most $1.3022$.
\end{lemma}
\begin{proof}
Notice that the neighbors of any vertex must be 2-colorable.
So, if a vertex has three neighbors, at least two of its neighbors must receive the same color.
There are three possible pairs of neighbors to select, we will explore each of these in a branch.
When we select two vertices to have the same color, we can merge the two vertices:
the new vertex will neighbor all vertices that either of the two original vertices neighbored.
As the merged vertices neighbored the same vertex of degree three, this vertex will have degree two after the merge.
At this point, this vertex can be removed from the instance.
Furthermore, after removing it, any of its neighbors that had degree three drops to degree two and can be removed as well.
Suppose that a connected subgraph of degree-three vertices has $n \geq 9$ vertices.
We will now choose a vertex $v$, whose neighbors we will merge in the three branches.
In case there exists a vertex in the subgraph with three neighbors in the subgraph, let any such vertex be $v$.
Otherwise, the subgraph must be a path.
In this case, select a vertex to be $v$, such that removing this vertex from the subgraph causes the subgraph to be split into connected components of size $\lceil \frac{n-1}{2} \rceil$ and $\lfloor \frac{n-1}{2} \rfloor$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[main/.style = {draw, circle}, minimum size=0.6cm]
\node[main] (1) {v};
\node[main] (2) [below left of=1] {};
\node[main] (3) [left of=2] {};
\node[main] (4) [below of=2] {};
\node[main] (5) [above of=1] {};
\node[main] (6) [above of=5] {};
\node[main] (7) [below right of=1] {};
\node[main] (8) [below of=7] {};
\node[main] (9) [right of=7] {};
\draw[] (1) -- (2);
\draw[] (2) -- (3);
\draw[] (2) -- (4);
\draw[] (1) -- (5);
\draw[] (5) -- (6);
\draw[] (1) -- (7);
\draw[] (7) -- (8);
\draw[] (7) -- (9);
\end{tikzpicture}
\hspace{2cm}%
\begin{tikzpicture}[main/.style = {draw, circle}, minimum size=0.6cm]
\node[main,fill=lightgray] (1) {v};
\node[main] (27) [below of=1] {};
\node[main] (3) [left of=27] {};
\node[main] (4) [below left of=27] {};
\node[main,fill=lightgray] (5) [above of=1] {};
\node[main,fill=lightgray] (6) [above of=5] {};
\node[main] (8) [below right of=27] {};
\node[main] (9) [right of=27] {};
\draw[] (1) -- (27);
\draw[] (27) -- (3);
\draw[] (27) -- (4);
\draw[] (1) -- (5);
\draw[] (5) -- (6);
\draw[] (27) -- (8);
\draw[] (27) -- (9);
\end{tikzpicture}
\caption{A connected subgraph of nine degree-three vertices (left), and one of the three branches which merged two neighbors (right). The bottom neighbors of $v$ are merged. \textcolor{gray}{Gray}: vertices can be removed from the instance.}
\label{fig:ex31}
\end{figure}
In each branch, $v$ will be removed from the instance and two of its neighbors are merged into one:
each branch reduces the instance size by at least two.
Then, the neighbor of $v$ that was not merged will lose a neighbor and possibly have only two remaining neighbors.
In this case, it is removed as well.
We continue this process until no vertices of degree two exist within the subgraph.
There exists exactly one path from each vertex in the subgraph to $v$, so every vertex (besides $v$) can be removed from the instance in exactly one branch.
So, we get a work factor $\lambda(2+\alpha_1, 2+\alpha_2, 2+\alpha_3)$, where $\alpha_1+\alpha_2+\alpha_3=n-1$.
Furthermore, if any of $\alpha_1$, $\alpha_2$, or $\alpha_3$ is zero, then the other two must be $\lceil \frac{n-1}{2} \rceil$ and $\lfloor \frac{n-1}{2} \rfloor$.
The worst-case values for $\alpha_1$, $\alpha_2$, and $\alpha_3$ are $0$, $4$, and $4$.
We get a work factor of $\lambda(2, 6, 6)=1.3022$.
\end{proof}
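The merge-and-remove branching used in this proof can be sketched as follows (our illustration, assuming \texttt{networkx}; the full cascade of degree-two removals is omitted):
\begin{verbatim}
import networkx as nx

def branch_on(G, v):
    # v is a degree-three vertex: in any 3-coloring, two of its
    # neighbors share a color. Try each pair by contracting it,
    # after which v has degree two and can be removed.
    a, b, c = list(G[v])
    for x, y in [(a, b), (a, c), (b, c)]:
        if G.has_edge(x, y):
            continue  # adjacent vertices cannot share a color
        H = nx.contracted_nodes(G, x, y, self_loops=False)
        H.remove_node(v)
        yield H
\end{verbatim}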
\subsection{Coloring Vertices Faster}
After limiting the number of low-degree vertices in the graph, Beigel and Eppstein used a graph structure called a bushy forest to find a small set of vertices with a large set of neighboring vertices.
\begin{definition}[Bushy Forest]
A \textit{bushy forest} is a forest where every tree has at least one internal vertex and each internal vertex must be adjacent to at least four other vertices in the tree.
A bushy forest is \textit{maximal} when no vertex outside the bushy forest has four neighbors outside the bushy forest, no leaf of the bushy forest has three neighbors outside the bushy forest, and no vertex outside the bushy forest is adjacent to an internal vertex of the bushy forest.
\end{definition}
We find a maximal bushy forest and color the internal vertices of the bushy forest.
All leaves are adjacent to some internal vertex, so all leaves will have a colored neighbor.
Each internal vertex must have at least four neighbors within the bushy forest, so we expect many leaves to exist within the bushy forest.
To analyze the number of possible color assignments, we partition the vertices of the graph based on their relation to the maximal bushy forest:
\begin{itemize}
\item $R$: the root vertices of the trees in the bushy forest, where for each tree one arbitrary internal vertex is chosen to be the root vertex.
\item $I$: all internal vertices of the bushy forest that are not root vertices.
\item $L$: the leaves of the bushy forest.
\item $N$: vertices outside the bushy forest that neighbor the bushy forest.
\item $U$: vertices outside the bushy forest that do not neighbor the bushy forest.
\end{itemize}
\begin{figure}[H]
\centering
\begin{tikzpicture}[main/.style = {draw, circle}, node distance=1cm, minimum size=0.6cm]
\node[main,fill=red,line width=1.5pt] (i1) {$R$};
\node[main,fill=orange,line width=1.5pt] (i2) [below of=i1] {$I$};
\node[main,fill=Goldenrod,line width=1.5pt] (l1) [left of=i1] {$L$};
\node[main,fill=Goldenrod,line width=1.5pt] (l2) [above of=i1] {$L$};
\node[main,fill=Goldenrod,line width=1.5pt] (l3) [right of=i1] {$L$};
\node[main,fill=Goldenrod,line width=1.5pt] (l4) [right of=i2] {$L$};
\node[main,fill=Goldenrod,line width=1.5pt] (l5) [below of=i2] {$L$};
\node[main,fill=Goldenrod,line width=1.5pt] (l6) [left of=i2] {$L$};
\node[main,fill=green] (u1) [left of=l1] {$N$};
\node[main,fill=green] (u2) [left of=l2] {$N$};
\node[main,fill=green] (u3) [left of=l5] {$N$};
\node[main,fill=green] (u4) [left of=l6] {$N$};
\node[main,fill=green] (u5) [right of=l3] {$N$};
\node[main,fill=green] (u6) [right of=l2] {$N$};
\node[main,fill=green] (u7) [right of=l5] {$N$};
\node[main,fill=green] (u8) [right of=l4] {$N$};
\node[main,fill=cyan] (u9) [left of=u2] {$U$};
\node[main,fill=cyan] (u10) [left of=u3] {$U$};
\node[main,fill=cyan] (u11) [right of=u6] {$U$};
\node[main,fill=cyan] (u12) [right of=u7] {$U$};
\draw[line width=2.0pt] (i1) -- (i2);
\draw[line width=2.0pt] (i1) -- (l1);
\draw[line width=2.0pt] (i1) -- (l2);
\draw[line width=2.0pt] (i1) -- (l3);
\draw[line width=2.0pt] (i2) -- (l4);
\draw[line width=2.0pt] (i2) -- (l5);
\draw[line width=2.0pt] (i2) -- (l6);
\draw[] (l3) -- (l4);
\draw[] (l1) -- (l6);
\draw[] (l1) -- (u1);
\draw[] (l1) -- (u2);
\draw[] (l2) -- (u2);
\draw[] (l2) -- (u6);
\draw[] (l3) -- (u6);
\draw[] (l3) -- (u5);
\draw[] (l4) -- (u8);
\draw[] (l4) -- (u7);
\draw[] (l5) -- (u7);
\draw[] (l5) -- (u3);
\draw[] (l6) -- (u3);
\draw[] (l6) -- (u4);
\draw[] (u1) -- (u4);
\draw[] (u5) -- (u8);
\draw[] (u2) -- (u9);
\draw[] (u1) -- (u9);
\draw[] (u3) -- (u10);
\draw[] (u4) -- (u10);
\draw[] (u6) -- (u11);
\draw[] (u5) -- (u11);
\draw[] (u7) -- (u12);
\draw[] (u8) -- (u12);
\draw (u9) to [out=180, in=180] (u10);
\draw (u11) to [out=0, in=0] (u12);
\end{tikzpicture}
\caption{A maximal bushy forest (in bold) consisting of a single tree in a graph. \textcolor{red}{Red}: root vertices ($R$). \textcolor{orange}{Orange}: internal vertices ($I$). \textcolor{Goldenrod}{Yellow}: leaves ($L$). \textcolor{green}{Green}: neighbors to the bushy forest ($N$). \textcolor{cyan}{Cyan}: other vertices ($U$).}
\label{fig:exbf}
\end{figure}
Given a graph with any maximal bushy forest, we use these partitions to analyze the runtime of the algorithm.
For $R$ and $I$, the internal vertices of the bushy forest, we try every possible color assignment.
Every root vertex has three possible colors, so there are $3^{|R|}$ possible color assignments for vertices in $R$.
Afterward, every tree will have its root vertex colored.
Vertices in $I$ adjacent to $R$ have at most two available colors.
We color every vertex in $I$ only after one of its neighbors has been colored: there are only $2^{|I|}$ possible color assignments for vertices in $I$.
Then, every vertex in $L$ will have at least one colored neighbor, meaning that these can be removed from the instance in polynomial time.
This leaves vertices in $N$ and $U$ as the vertices that need to be solved by the \textsc{(3,2)-Constraint Satisfaction Problem}: this takes time $\mathcal{O}^*(1.36443^{|N|+|U|})$.
Overall, it takes time $\mathcal{O}^*(3^{|R|}\cdot2^{|I|}\cdot1.36443^{|N|+|U|})$ to solve \textsc{3-coloring} given a bushy forest $F$.
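As a sanity check of this bound, the following illustrative helper evaluates the base count for given partition sizes; for instance, for the forest of Figure \ref{fig:exbf} it gives roughly $250$ weighted assignments.
\begin{verbatim}
def bushy_bound(R, I, N, U):
    # Partition sizes |R|, |I|, |N|, |U|; base count
    # 3^|R| * 2^|I| * 1.36443^(|N|+|U|) from the text.
    return 3 ** R * 2 ** I * 1.36443 ** (N + U)

print(round(bushy_bound(1, 1, 8, 4)))  # about 250
\end{verbatim}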
Luckily, we can also color some vertices in $N$ and $U$ to improve the running time of the algorithm.
To do this, we find another forest over the vertices not covered by the bushy forest $F$.
We denote this graph as $G[V-F]$: the subgraph of $G$ induced by $V-F$.
\begin{definition}[Chromatic Forest]
A forest of rooted trees, where the root of each tree has exactly three children and at most five grandchildren. Each child of the root has at most two children itself.
\end{definition}
Beigel and Eppstein showed that there must exist a chromatic forest that covers all vertices in $U$, a \textit{maximal chromatic forest}.
Beigel and Eppstein proved that the following algorithm will always find a valid maximal chromatic forest.
Here, a $K_{1,3}$ tree is a tree with a root vertex and three children.
\begin{enumerate}
\item Create a maximal forest of $K_{1,3}$ trees.
\item While possible, remove a $K_{1,3}$ tree and add two new ones.
\item Assign all remaining vertices in $U$ as grandchildren to some $K_{1,3}$ tree, such that no tree has six or more grandchildren.
\end{enumerate}
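As an illustration of step 1 (our sketch only, assuming a \texttt{networkx}-style graph; steps 2 and 3 are analogous bookkeeping), a maximal forest of $K_{1,3}$ trees can be grown greedily:
\begin{verbatim}
def greedy_k13_forest(G, U):
    # Greedily pick vertex-disjoint K_{1,3} stars inside G[U].
    used, stars = set(), []
    for v in U:
        if v in used:
            continue
        free = [w for w in G[v] if w in U and w not in used]
        if len(free) >= 3:
            stars.append((v, free[:3]))  # root v with 3 children
            used.update([v] + free[:3])
    return stars
\end{verbatim}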
Beigel and Eppstein's analysis of the trees in the chromatic forest was slightly different from our analysis.
They expressed the number of color assignments in the number of degree-three vertices.
Instead, we express the number of color assignments in the number of vertices in $G[V-F]$.
We do this so that we can further improve the running time by proving that some vertices with more than three neighbors must be included in the maximal chromatic forest.
For the next lemma, see Beigel and Eppstein for a similar proof \cite[Lemma~24]{Beigel2005}.
There, they express the runtime to color a tree in a chromatic forest in the number of degree-three vertices in the tree.
Furthermore, they also assume that there can be at most eight vertices in a connected subgraph of degree-three vertices.
\begin{lemma}
\label{lem:chromaticforest}
Let $T'$ be a tree in a chromatic forest. Then $T'$ can be colored with a work factor of $1.34004$.
\end{lemma}
\begin{proof}
First, presume $T'$ has at most four grandchildren.
In this case, we give the root one of the three possible colors.
This removes one color as a possibility for its three children, allowing those to be removed from the instance by the \textsc{(3,2)-CSP} algorithm.
The grandchildren remain and will have to be solved by the time $\mathcal{O}^*(1.36443^n)$ algorithm.
As every grandchild is colored by the $\mathcal{O}^*(1.36443^n)$ algorithm, they are relatively slow to color.
The worst-case scenario happens when there are as many grandchildren as possible: four.
We iterate over all three possible color assignments of the root vertex, while four vertices have to be colored by the \textsc{(3,2)-CSP} algorithm. On average, the runtime required per vertex in the tree to color all vertices in the tree is $(3\cdot 1.36443^4)^\frac{1}{8} < 1.34004$.
Now, let $T'$ be a tree with exactly five grandchildren.
Vertices in the chromatic forest have at most three neighbors, so every child has at most two grandchildren.
There are five grandchildren, so two children have two grandchildren, while the other child has one grandchild.
We select the two children with two grandchildren and iterate over all color assignments for these vertices.
If they receive the same color, the root and their combined four grandchildren will all have a colored neighbor.
This means that only the remaining other child and grandchild are colored through the \textsc{(3,2)-Constraint Satisfaction Problem}. If they receive different colors, the color of the root must be the third color.
Then, only the last grandchild is colored by the \textsc{(3,2)-CSP} algorithm.
This results in a runtime per vertex of $(3\cdot 1.36443^2 + 6\cdot 1.36443)^\frac{1}{9} < 1.338302 < 1.34004$.
Thus, the worst case is a tree with four grandchildren, which leads to the claim of the lemma that in the worst case, we can color a tree $T'$ with work factor $1.34004$.
\end{proof}
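Both per-vertex quantities computed in this proof are easy to verify numerically (an illustrative snippet we add):
\begin{verbatim}
# Tree with 4 grandchildren: 8 vertices, 3 root colors,
# 4 vertices left to the (3,2)-CSP algorithm.
print((3 * 1.36443 ** 4) ** (1 / 8))                 # about 1.34003
# Tree with 5 grandchildren: 9 vertices, per the two cases above.
print((3 * 1.36443 ** 2 + 6 * 1.36443) ** (1 / 9))   # about 1.33830
\end{verbatim}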
We have summarized all of Beigel and Eppstein's concepts that are important for our algorithm.
We remove vertices with fewer than three neighbors from the graph, as well as connected subgraphs of degree-three vertices that contain a cycle or nine or more vertices.
Afterward, we find a maximal bushy forest and a maximal chromatic forest to find a set of vertices with many neighbors.
We color specific vertices in the two forests, which allows us to remove their neighbors from the instance.
The remaining vertices will be colored using the \textsc{(3,2)-CSP} algorithm.
\section{Limiting Difficult-To-Color Vertices}
\label{sec:forest}
We will now discuss our main improvement:
a modification to the bushy forest that further restricts the number of vertices that can exist outside the bushy forest.
We will limit the existence of high-magnitude vertices in the set $N$.
Such vertices enable many vertices to exist outside the bushy forest, which we want to avoid.
\begin{definition}[High-Magnitude Vertex]
A vertex adjacent to a maximal bushy forest is called a \textit{high-magnitude vertex} if it has three neighbors outside the maximal bushy forest.
\end{definition}
\begin{definition}[Maximal Low-Magnitude Bushy Forest]
A maximal bushy forest is a \textit{maximal low-magnitude bushy forest} if every adjacent high-magnitude vertex is adjacent to a tree with a single internal vertex and four leaves.
Furthermore, every two high-magnitude vertices adjacent to the same tree must share a common neighbor.
This neighbor must either be a leaf of this tree, or a vertex outside the bushy forest.
\end{definition}
We can transform any given maximal bushy forest into a maximal low-magnitude bushy forest in polynomial time.
If there exists a high-magnitude vertex that causes the bushy forest to not be a low-magnitude bushy forest, we can modify the tree adjacent to the high-magnitude vertex.
Every such modification either adds more internal vertices or adds a new tree to the bushy forest.
Furthermore, no modification will decrease the number of trees in the bushy forest.
Since the bushy forest can never include more trees or internal vertices than the number of vertices in the graph, this process terminates, and we find a maximal low-magnitude bushy forest.
In case a modification causes the bushy forest to not be maximal anymore, we add a new internal vertex to the bushy forest.
This will also increase the number of internal vertices.
\begin{lemma}
Let $G$ be a graph in which all vertex degrees are three or more, in which there is no cycle of degree-three vertices nor any connected subgraph of nine or more degree-three vertices. Then there exists a maximal low-magnitude bushy forest, which we can find in polynomial time.
\end{lemma}
\begin{proof}
First, we find a maximal bushy forest in polynomial time: greedily add new trees to the bushy forest, and add new internal vertices to trees, until this is no longer possible.
We will now modify this maximal bushy forest to find a maximal low-magnitude bushy forest.
Specifically, we will present cases where there exists a high-magnitude vertex adjacent to a tree with multiple internal vertices, more than four leaves, or a tree with a pair of high-magnitude vertices that do not share a common neighbor.
\begin{figure}[H]
\centering
\begin{tikzpicture}[main/.style = {draw, circle}, node distance=1cm, minimum size=0.6cm]
\node[main,line width=1.5pt,fill=Goldenrod] (1) {l};
\node[main,line width=1.5pt,fill=orange] (2) [below of=1] {};
\node[main,line width=1.5pt,fill=red] (3) [left of=2] {r};
\node[main,line width=1.5pt,fill=Goldenrod] (4) [below of=3] {};
\node[main,line width=1.5pt,fill=Goldenrod] (5) [below of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (6) [left of=3] {};
\node[main,line width=1.5pt,fill=Goldenrod] (7) [right of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (8) [above of=3] {};
\node[main] (9) [above of=1,fill=lime] {v};
\node[main] (10) [above of=9,fill=cyan] {};
\node[main] (11) [left of=9,fill=cyan] {};
\node[main] (12) [right of=9,fill=cyan] {};
\draw[line width=2.0pt] (1) -- (2);
\draw[line width=2.0pt] (2) -- (3);
\draw[line width=2.0pt] (2) -- (5);
\draw[line width=2.0pt] (2) -- (7);
\draw[line width=2.0pt] (3) -- (4);
\draw[line width=2.0pt] (3) -- (6);
\draw[line width=2.0pt] (3) -- (8);
\draw (1) -- (9);
\draw (9) -- (10);
\draw (9) -- (11);
\draw (9) -- (12);
\end{tikzpicture}
\hspace{1cm}%
\begin{tikzpicture}[main/.style = {draw, circle}, node distance=1cm, minimum size=0.6cm]
\node[main,line width=1.5pt,fill=Goldenrod] (1) {l};
\node[main,line width=1.5pt,fill=Goldenrod] (2) [below of=1] {};
\node[main,line width=1.5pt,fill=red] (3) [left of=2] {r};
\node[main,line width=1.5pt,fill=Goldenrod] (4) [below of=3] {};
\node[main,fill=green] (5) [below of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (6) [left of=3] {};
\node[main,fill=green] (7) [right of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (8) [above of=3] {};
\node[main,line width=1.5pt] (9) [above of=1,fill=red] {v};
\node[main,line width=1.5pt] (10) [above of=9,fill=Goldenrod] {};
\node[main,line width=1.5pt] (11) [left of=9,fill=Goldenrod] {};
\node[main,line width=1.5pt] (12) [right of=9,fill=Goldenrod] {};
\draw (1) -- (2);
\draw[line width=2.0pt] (2) -- (3);
\draw (2) -- (5);
\draw (2) -- (7);
\draw[line width=2.0pt] (3) -- (4);
\draw[line width=2.0pt] (3) -- (6);
\draw[line width=2.0pt] (3) -- (8);
\draw[line width=2.0pt] (1) -- (9);
\draw[line width=2.0pt] (9) -- (10);
\draw[line width=2.0pt] (9) -- (11);
\draw[line width=2.0pt] (9) -- (12);
\end{tikzpicture}
\caption{A graph with a bushy forest (in bold) displaying the situation before (left) and after (right) removing high-magnitude vertices adjacent to large trees. \textcolor{red}{Red}: root vertices ($R$).
\textcolor{orange}{Orange}: internal vertices ($I$). \textcolor{Goldenrod}{Yellow}: leaves ($L$). \textcolor{lime}{Lime}: high-magnitude vertices. \textcolor{green}{Green}: other neighbors to the bushy forest ($N$). \textcolor{cyan}{Cyan}: other vertices ($U$).}
\label{fig:ex41}
\end{figure}
Consider a tree with multiple internal vertices and an adjacent high-magnitude vertex $v$, as shown in Figure \ref{fig:ex41}.
Let the leaf adjacent to $v$ be $l$ and let the root vertex $r$ of the tree be any internal vertex that is not adjacent to $l$ in the bushy forest.
Then, we can remove the original tree, and add two trees rooted at $r$ and $v$ respectively.
The tree rooted at $r$ will include four vertices from the original tree, while the tree rooted at $v$ will include its three neighbors outside the bushy forest and its neighboring leaf $l$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[main/.style = {draw, circle}, node distance=1cm, minimum size=0.6cm]
\node[main,line width=1.5pt,fill=Goldenrod] (1) {l};
\node[main,line width=1.5pt,fill=red] (2) [below of=1] {};
\node[main,line width=1.5pt,fill=Goldenrod] (3) [below right of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (4) [below left of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (5) [left of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (7) [right of=2] {};
\node[main] (9) [above of=1,fill=lime] {v};
\node[main] (10) [above of=9,fill=cyan] {};
\node[main] (11) [left of=9,fill=cyan] {};
\node[main] (12) [right of=9,fill=cyan] {};
\draw[line width=2.0pt] (1) -- (2);
\draw[line width=2.0pt] (2) -- (3);
\draw[line width=2.0pt] (2) -- (5);
\draw[line width=2.0pt] (2) -- (7);
\draw[line width=2.0pt] (2) -- (4);
\draw (1) -- (9);
\draw (9) -- (10);
\draw (9) -- (11);
\draw (9) -- (12);
\end{tikzpicture}
\hspace{1cm}%
\begin{tikzpicture}[main/.style = {draw, circle}, node distance=1cm, minimum size=0.6cm]
\node[main,line width=1.5pt,fill=Goldenrod] (1) {l};
\node[main,line width=1.5pt,fill=red] (2) [below of=1] {};
\node[main,line width=1.5pt,fill=Goldenrod] (3) [below right of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (4) [below left of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (5) [left of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (7) [right of=2] {};
\node[main,line width=1.5pt,fill=red] (9) [above of=1] {v};
\node[main,line width=1.5pt,fill=Goldenrod] (10) [above of=9] {};
\node[main,line width=1.5pt,fill=Goldenrod] (11) [left of=9] {};
\node[main,line width=1.5pt,fill=Goldenrod] (12) [right of=9] {};
\draw (1) -- (2);
\draw[line width=2.0pt] (2) -- (3);
\draw[line width=2.0pt] (2) -- (5);
\draw[line width=2.0pt] (2) -- (7);
\draw[line width=2.0pt] (2) -- (4);
\draw[line width=2.0pt] (1) -- (9);
\draw[line width=2.0pt] (9) -- (10);
\draw[line width=2.0pt] (9) -- (11);
\draw[line width=2.0pt] (9) -- (12);
\end{tikzpicture}
\caption{A graph with a bushy forest (in bold) displaying the situation before (left) and after (right) removing high-magnitude vertices adjacent to trees with many leaves. \textcolor{red}{Red}: root vertices ($R$). \textcolor{Goldenrod}{Yellow}: leaves ($L$). \textcolor{lime}{Lime}: high-magnitude vertices. \textcolor{cyan}{Cyan}: other vertices ($U$).}
\label{fig:ex42}
\end{figure}
Secondly, consider a high-magnitude vertex $v$ adjacent to a tree with a single internal vertex and at least five leaves, as shown in Figure \ref{fig:ex42}.
Let the leaf adjacent to $v$ be called $l$.
Remove $l$ from the existing tree, which will still be a valid tree with at least four leaves.
Then, add a new tree rooted at $v$ with four leaves:
the three neighbors originally outside the bushy forest and $l$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[main/.style = {draw, circle}, node distance=1cm, minimum size=0.6cm]
\node[main,line width=1.5pt,fill=Goldenrod] (1) {};
\node[main,line width=1.5pt,fill=red] (2) [below of=1] {};
\node[main,line width=1.5pt,fill=Goldenrod] (3) [below of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (4) [left of=2] {};
\node[main,line width=1.5pt,fill=Goldenrod] (5) [right of=2] {};
\node[main,fill=lime] (6) [below left of=3] {w};
\node[main,fill=cyan] (7) [below left of=6] {};
\node[main,fill=cyan] (8) [left of=6] {};
\node[main,fill=cyan] (9) [above left of=6] {};
\node[main,fill=lime] (10) [above left of=1] {v};
\node[main,fill=cyan] (11) [above left of=10] {};
\node[main,fill=cyan] (12) [left of=10] {};
\node[main,fill=cyan] (13) [below left of=10] {};
\draw[line width=2.0pt] (1) -- (2);
\draw[line width=2.0pt] (2) -- (3);
\draw[line width=2.0pt] (2) -- (4);
\draw[line width=2.0pt] (2) -- (5);
\draw (3) -- (6);
\draw (6) -- (7);
\draw (6) -- (8);
\draw (6) -- (9);
\draw (1) -- (10);
\draw (10) -- (11);
\draw (10) -- (12);
\draw (10) -- (13);
\end{tikzpicture}
\hspace{1cm}%
\begin{tikzpicture}[main/.style = {draw, circle}, node distance=1cm, minimum size=0.6cm]
\node[main,line width=1.5pt,fill=Goldenrod] (1) {};
\node[main,fill=green] (2) [below of=1] {};
\node[main,line width=1.5pt,fill=Goldenrod] (3) [below of=2] {};
\node[main,fill=cyan] (4) [left of=2] {};
\node[main,fill=cyan] (5) [right of=2] {};
\node[main,line width=1.5pt,fill=red] (6) [below left of=3] {w};
\node[main,line width=1.5pt,fill=Goldenrod] (7) [below left of=6] {};
\node[main,line width=1.5pt,fill=Goldenrod] (8) [left of=6] {};
\node[main,line width=1.5pt,fill=Goldenrod] (9) [above left of=6] {};
\node[main,line width=1.5pt,fill=red] (10) [above left of=1] {v};
\node[main,line width=1.5pt,fill=Goldenrod] (11) [above left of=10] {};
\node[main,line width=1.5pt,fill=Goldenrod] (12) [left of=10] {};
\node[main,line width=1.5pt,fill=Goldenrod] (13) [below left of=10] {};
\draw (1) -- (2);
\draw (2) -- (3);
\draw (2) -- (4);
\draw (2) -- (5);
\draw[line width=2.0pt] (3) -- (6);
\draw[line width=2.0pt] (6) -- (7);
\draw[line width=2.0pt] (6) -- (8);
\draw[line width=2.0pt] (6) -- (9);
\draw[line width=2.0pt] (1) -- (10);
\draw[line width=2.0pt] (10) -- (11);
\draw[line width=2.0pt] (10) -- (12);
\draw[line width=2.0pt] (10) -- (13);
\end{tikzpicture}
\caption{A graph with a bushy forest (in bold) displaying the situation before (left) and after (right) removing high-magnitude vertices adjacent to the same tree. \textcolor{red}{Red}: root vertices ($R$). \textcolor{Goldenrod}{Yellow}: leaves ($L$). \textcolor{lime}{Lime}: high-magnitude vertices. \textcolor{green}{Green}: other neighbors to the bushy forest ($N$). \textcolor{cyan}{Cyan}: other vertices ($U$).}
\label{fig:ex43}
\end{figure}
Next, consider a tree with two distinct leaves adjacent to high-magnitude vertices $v$ and $w$, where $v$ and $w$ are neither adjacent nor share a neighbor outside the bushy forest, as shown in Figure \ref{fig:ex43}.
Then, we can remove this tree, and instead root new trees at $v$ and $w$.
Both new trees will have four leaves:
the root's three neighbors originally outside the bushy forest and the leaf to which it was adjacent.
As $v$ and $w$ are adjacent to distinct leaves and do not share a neighbor outside the bushy forest, both will form valid trees.
\begin{figure}[H]
\centering
\begin{tikzpicture}[main/.style = {draw, circle}, node distance=1cm, minimum size=0.6cm]
\node[main,line width=1.5pt,fill=red] (1) {};
\node[main,line width=1.5pt,fill=Goldenrod] (2) [below of=1] {};
\node[main,line width=1.5pt,fill=Goldenrod] (3) [above of=1] {};
\node[main,line width=1.5pt,fill=Goldenrod] (4) [left of=1] {};
\node[main,line width=1.5pt,fill=Goldenrod] (5) [right of=1] {};
\node[main,fill=lime] (6) [above right of=3] {v};
\node[main,fill=lime] (7) [above right of=5] {w};
\node[main,fill=cyan] (8) [above right of=6] {};
\node[main,fill=cyan] (9) [above right of=7] {};
\node[main,fill=cyan] (10) [above left of=6] {};
\node[main,fill=cyan] (11) [below right of=7] {};
\draw[line width=2.0pt] (1) -- (2);
\draw[line width=2.0pt] (1) -- (3);
\draw[line width=2.0pt] (1) -- (4);
\draw[line width=2.0pt] (1) -- (5);
\draw (3) -- (6);
\draw (5) -- (7);
\draw (6) -- (7);
\draw (6) -- (8);
\draw (6) -- (10);
\draw (7) -- (9);
\draw (7) -- (11);
\end{tikzpicture}
\hspace{1cm}%
\begin{tikzpicture}[main/.style = {draw, circle}, node distance=1cm, minimum size=0.6cm]
\node[main,fill=green] (1) {};
\node[main,fill=cyan] (2) [below of=1] {};
\node[main,line width=1.5pt,fill=Goldenrod] (3) [above of=1] {};
\node[main,fill=cyan] (4) [left of=1] {};
\node[main,line width=1.5pt,fill=Goldenrod] (5) [right of=1] {};
\node[main,line width=1.5pt,fill=red] (6) [above right of=3] {v};
\node[main,line width=1.5pt,fill=red] (7) [above right of=5] {w};
\node[main,line width=1.5pt,fill=Goldenrod] (8) [above right of=6] {};
\node[main,line width=1.5pt,fill=Goldenrod] (9) [above right of=7] {};
\node[main,line width=1.5pt,fill=Goldenrod] (10) [above left of=6] {};
\node[main,line width=1.5pt,fill=Goldenrod] (11) [below right of=7] {};
\draw (1) -- (2);
\draw (1) -- (3);
\draw (1) -- (4);
\draw (1) -- (5);
\draw[line width=2.0pt] (3) -- (6);
\draw[line width=2.0pt] (5) -- (7);
\draw[line width=2.0pt] (6) -- (7);
\draw[line width=2.0pt] (6) -- (8);
\draw[line width=2.0pt] (6) -- (10);
\draw[line width=2.0pt] (7) -- (9);
\draw[line width=2.0pt] (7) -- (11);
\end{tikzpicture}
\caption{A graph with a bushy forest (in bold) displaying the situation before (left) and after (right) removing high-magnitude vertices adjacent to large trees. \textcolor{red}{Red}: root vertices ($R$). \textcolor{Goldenrod}{Yellow}: leaves ($L$). \textcolor{lime}{Lime}: high-magnitude vertices. \textcolor{green}{Green}: other neighbors to the bushy forest ($N$). \textcolor{cyan}{Cyan}: other vertices ($U$).}
\label{fig:ex44}
\end{figure}
Finally, consider a tree with two distinct leaves adjacent to high-magnitude vertices $v$ and $w$, where $v$ and $w$ do not share a common neighbor outside the bushy forest, but are adjacent.
Then, we can remove this tree (which has a single internal vertex) and instead create a new tree with both high-magnitude vertices $v$ and $w$ as internal vertices, as shown in Figure \ref{fig:ex44}.
Thus, a high-magnitude vertex can only be adjacent to a tree in the maximal low-magnitude bushy forest with exactly one internal vertex and four leaves.
If there are multiple high-magnitude vertices adjacent to the same tree, they must either be adjacent to the same leaf of the tree or share a common neighbor outside the bushy forest.
\end{proof}
Now, any remaining high-magnitude vertices must be adjacent to a low-magnitude bushy forest, which constrains their appearance.
Furthermore, these high-magnitude vertices must share a common neighbor with any other high-magnitude vertices adjacent to the same tree.
As they have to share a common neighbor, this creates predictable structures within the graph when one tree is adjacent to many high-magnitude vertices.
In the next section, we explain how these structures also limit the number of vertices outside the bushy forest.
\section{A Bound on Difficult-To-Color Vertices}
\label{sec:analysis}
High-magnitude vertices only exist adjacent to certain trees, as described in the previous section.
High-magnitude vertices must share a common neighbor if they are adjacent to the same tree.
Using these constraints, we find a set of equations expressing the relationship between the different types of vertices.
We will use these constraints to formulate a linear program that finds the worst-case graph for our algorithm.
We further partition the vertices outside the bushy forest, such that we can analyze the effects of high-magnitude vertices:
\begin{itemize}
\item $N_1$: vertices in $N$ of degree three, with one neighbor in $L$.
\item $N_2$: vertices in $N$ with multiple neighbors in $L$.
\item $N_{3,i}$: high-magnitude vertices (vertices in $N$ with three neighbors outside the maximal low-magnitude bushy forest), adjacent to a tree to which in total $i$ high-magnitude vertices are adjacent.
If a high-magnitude vertex is adjacent to multiple trees in the bushy forest, we assign it to the highest value of $i$ among its adjacent trees.
We refer to these vertices collectively as $N_3$.
\end{itemize}
Secondly, we will partition the vertices in $U$ to express their relation to the new partition of $N$:
\begin{itemize}
\item $U'$: vertices in $U$ of which all three neighbors must be in $N_3$.
\item $U_j$: vertices in $U$ (and not in $U'$) that appear in a connected subgraph of degree-three vertices containing $j$ vertices in $N_1$.
\end{itemize}
All vertices in $U$ must have degree three, so they must all appear in exactly one subgraph of degree-three vertices.
Furthermore, the connected subgraphs of degree-three vertices can contain at most eight vertices.
As such, all vertices in $U$ will either be in $U'$ or in some set $U_j$ for $0 \leq j < 8$.
Finally, we notice that the number of vertices in $N_2$ and $N_3$ determines the maximum number of degree-three vertices outside the bushy forest ($N_1$ and all vertices in $U$).
We count the edges that can exist between vertices in $N_2$ or $N_3$ and degree-three vertices outside the bushy forest. We split this into two categories:
\begin{enumerate}
\item Three edges per vertex in $U'$ to connect $N_3$ with $U'$.
\item The set of edges $S$ between vertices in $N_2$ or $N_3$ and vertices in $N_1$ or in a set $U_j$ where $0 \leq j \leq 7$.
\end{enumerate}
Now, we will show various constraints based on the relations between the partitions of vertices. These relations limit how many vertices one partition can contain, based on the number of vertices in another partition:
\begin{lemma}
Let $G$ be a graph in which all vertex degrees are three or more, in which there is no cycle of degree-three vertices nor any connected subgraph of nine or more degree-three vertices. Let $F$ be a maximal low-magnitude bushy forest in $G$. Then,
\begin{align}
4\cdot|R|+2\cdot|I|\leq|L| \label{align:1}\\
|N_1|+2\cdot|N_2|+|N_3|\leq 2\cdot|L| \label{align:2}\\
|U'|\geq\frac{1}{5}\cdot|N_{3,5}|+\frac{2}{6}\cdot|N_{3,6}|+\frac{5}{7}\cdot|N_{3,7}|+|N_{3,8}|\label{align:3}\\
\sum_{j=0}^7 \left(\frac{10-j}{8-j}\cdot |U_j|\right)\leq 2\cdot |N_2|+ 3\cdot\sum_{i=1}^8|N_{3,i}|-3\cdot|U'|\label{align:4}
\end{align}
\end{lemma}
\begin{proof}
The equations describe the relationship between the partitions of the vertices.
We will prove the equations one by one:
\[ 4\cdot|R|+2\cdot|I|\leq|L| \]
This follows the definition of the bushy forest:
If a tree has a single internal vertex, it must have at least four leaves.
Every additional internal vertex causes the tree to have at least two more leaves:
we turn a leaf into an internal vertex, which removes one leaf.
However, the new internal vertex must now neighbor at least three leaves of its own, as it bordered only one vertex of the tree while it was a leaf.
Thus, each additional internal vertex adds at least two leaves to the bushy forest on balance.
\[|N_1|+2\cdot|N_2|+|N_3|\leq 2\cdot|L|\]
This follows from the definitions $L$, $N_1$, $N_2$, and $N_3$:
vertices in $L$ have at most two neighbors outside the bushy forest.
Otherwise, the bushy forest would not be maximal.
Vertices in $N_1$ and $N_3$ have at least one neighbor in $L$, while vertices in $N_2$ must have at least two neighbors in $L$.
\[|U'|\geq\frac{1}{5}\cdot|N_{3,5}|+\frac{2}{6}\cdot|N_{3,6}|+\frac{5}{7}\cdot|N_{3,7}|+|N_{3,8}|\]
Recall the definition of $U'$: vertices in $U$ of which all neighbors must be high-magnitude vertices ($N_3$).
Vertices appear in $U'$ when there are five or more high-magnitude vertices adjacent to one tree of the bushy forest.
All high-magnitude vertices must share a common neighbor.
Then, some vertices in $U$ must be adjacent to three of the high-magnitude vertices if all must share a common neighbor.
We analyze how many vertices in $U'$ must exist for each tree in the bushy forest adjacent to five, six, seven, or eight high-magnitude vertices.
Consider a tree with five adjacent high-magnitude vertices (five vertices in $N_{3,5}$).
At most, two pairs of high-magnitude vertices have a common neighbor through a leaf of the tree.
So, at least one vertex does not share a common neighbor with any of the other four vertices through a leaf.
This vertex must share a common neighbor outside the bushy forest with all four other high-magnitude vertices.
However, it can only have three neighbors outside the bushy forest.
So, at least one of its neighbors must border two of the other four high-magnitude vertices:
it has three neighbors in $N_3$ and must be in $U'$.
For every five vertices in $N_{3,5}$, there must exist at least one vertex in $U'$.
Vertices in $N_1$ and $N_2$ can never have three neighbors in $N_3$, so they cannot ever take the role of a vertex in $U'$.
Vertices in $N_3$ adjacent to a different tree can also have three neighbors in $N_3$.
However, this yields a strictly better runtime than a vertex in $U$ taking the same role:
it removes a possible vertex in $U$ from the instance, and vertices in $U$ are colored through the chromatic forest, which accounts for the largest share of the algorithm's runtime.
As such, we can assume that it is always a vertex in $U$ that neighbors three vertices in $N_3$.
Using this technique, we determine that there must be at least two vertices in $U'$ for every six vertices in $N_{3,6}$, five vertices in $U'$ for every seven vertices in $N_{3,7}$, and eight vertices in $U'$ for every eight vertices in $N_{3,8}$.
\[\sum_{j=0}^7 \left(\frac{10-j}{8-j}\cdot |U_j|\right)\leq 2\cdot |N_2|+ 3\cdot|N_3|-3\cdot|U'|\]
Recall $S$: the set of edges between vertices in $N_2$ or $N_3$ and vertices in $N_1$ or $U-U'$.
We will count the number of edges in $S$ that may exist per vertex in $N_2$ and $N_3$, and the number of vertices in $U-U'$ that may exist per edge in $S$, to prove this equation.
By definition, a vertex in $N_2$ can have at most two neighbors outside the bushy forest, while a vertex in $N_3$ has exactly three.
However, we subtract all edges to vertices in $U'$.
Each vertex in $U'$ has three neighbors in $N_3$, so we subtract three edges that cannot go from $N_3$ to a different vertex in $U$ for each vertex in $U'$.
We get $|S| \leq 2\cdot |N_2|+ 3\cdot|N_3|-3\cdot|U'|$.
Then, we count the number of vertices that may exist in $U-U'$ (in $U$, but not in $U'$) per edge in $S$.
First, we will consider the connected components in $G[(U-U') \cup N_1]$:
these vertices all have degree three, and cannot contain cycles as per Lemma \ref{lem:3cycle}.
Thus, they form a tree, which means that a connected component of $n$ vertices includes exactly $n-1$ edges.
Each edge has two endpoints within the component.
The sum of degrees of these vertices is $3\cdot n$, so the number of edges with one endpoint in this component and one outside this component must be $3\cdot n - 2\cdot (n-1)=n+2$.
The ratio of vertices to outgoing edges is $n/(n+2)$, which is maximized when $n$ is maximal: $n=8$.
All outgoing edges having an endpoint in $U-U'$ must have the other endpoint in $N_2$ or $N_3$ by definition.
However, every vertex in $N_1$ will have exactly one neighbor in $L$.
So, for every vertex in $N_1$ in the component, we subtract one neighbor of the component that must be in $N_2$ or $N_3$.
Furthermore, when calculating how many vertices in $U_j$ can exist per edge in $S$,
we must also subtract the number of vertices in $N_1$ from the component size.
Then, we get $\sum_{j=0}^7 (\frac{10-j}{8-j}\cdot |U_j|) \leq |S|$.
If we combine these two results, we get $\sum_{j=0}^7 (\frac{10-j}{8-j}\cdot |U_j|)\leq 2\cdot |N_2|+ 3\cdot|N_3|-3\cdot|U'|$.
\end{proof}
Next, we show that we can cover all high-magnitude vertices adjacent to a vertex in $U'$ in a maximal chromatic forest.
As we can color vertices in the chromatic forest more quickly than only using the \textsc{(3,2)-CSP} algorithm, this will improve the runtime of the algorithm.
\begin{definition}[Maximal High-Magnitude Chromatic Forest]
A \textit{maximal high-magnitude chromatic forest} is a maximal chromatic forest that covers all vertices in $U$, and all vertices in $N_3$ adjacent to a vertex in $U'$.
\end{definition}
\begin{lemma}
\label{lem:hmchromaticforest}
Let $G$ be a graph in which all vertex degrees are three or more, in which there is no cycle of degree-three vertices nor any connected subgraph of nine or more degree-three vertices. Let $F$ be a maximal low-magnitude bushy forest in $G$. Then, we can find a maximal high-magnitude chromatic forest in $G[V-F]$.
\end{lemma}
\begin{proof}
Beigel and Eppstein showed that we can find a chromatic forest in $G[V-F]$.
To find a chromatic forest, they first found a maximal forest of $K_{1,3}$ trees in $G[V-F]$:
trees with a root vertex and three children.
While it remains possible, they modify the forest of $K_{1,3}$ trees by removing one $K_{1,3}$ tree and adding at least two.
Using this forest of $K_{1,3}$ trees, they showed that every vertex in $U$ can be assigned to one of the $K_{1,3}$ trees, such that no tree is assigned more than five grandchildren \cite[Lemma~25]{Beigel2005}.
We will now show that we can also include all high-magnitude vertices that are adjacent to a vertex in $U'$ in the chromatic forest.
This will turn it from a maximal chromatic forest to a maximal high-magnitude chromatic forest.
All high-magnitude vertices or vertices in $U$ will have three neighbors outside the bushy forest.
As such, if they are not included in the forest of $K_{1,3}$ trees, they are adjacent to at least one and at most three of these trees.
They cannot be adjacent to no trees, as then we could add a new $K_{1,3}$ tree rooted at this vertex.
If a vertex $v$ is adjacent to $i$ $K_{1,3}$ trees, we give vertex $v$ weight $1/i$.
Beigel and Eppstein showed that we can find a maximal chromatic forest if the weight of all potential grandchildren does not exceed five for any tree.
Consider an arbitrary $K_{1,3}$ tree $T$ whose potential grandchildren have a total weight over five.
For each of the three children, there can exist at most two grandchildren: at most six in total.
As each grandchild has weight at most one, there must be six potential grandchildren. Furthermore, at least five must have unit weight and thus be adjacent to only one $K_{1,3}$ tree.
Suppose there exists a possible grandchild $v$ that neighbors a vertex that is not a possible grandchild of $T$.
Depending on the weight of $v$, we can always replace $T$ by two $K_{1,3}$ trees:
\begin{itemize}
\item $v$ has a weight of one.
We remove $T$ and create a new $K_{1,3}$ rooted at $v$.
As $v$ neighbors at most one other grandchild of $T$, there must be a child vertex of $T$ whose grandchildren do not border $v$.
We also create a new tree rooted at this child vertex.
\item $v$ has a weight greater than one.
Let $v'$ be the grandchild of $T$ that neighbors the same child of $T$ as $v$.
We remove $T$ and create a new $K_{1,3}$ tree rooted at the parent of $v$.
$v$ neighbors at most one other grandchild of $T$, while $v'$ neighbors at most two.
$T$ had six potential grandchildren, so there must be one that does not border $v$ or $v'$:
we can root a new $K_{1,3}$ tree at this vertex.
\end{itemize}
Let this neighbor of $v$ not be part of a $K_{1,3}$ tree. Then, we could remove $T$ and create two new trees: one rooted at $v$, and one at the child vertex of $T$ that does not neighbor $v$.
Let $v$ instead border two $K_{1,3}$ trees.
In this case, we can also create two new trees.
The other grandchildren cannot be adjacent to another $K_{1,3}$ tree. So, we remove the original tree, root one new tree at the parent of $v$, and another at one of the grandchildren that are not adjacent to the parent of $v$. The only possible remaining configuration of vertices is shown in Figure \ref{fig:6ex2}.
\begin{figure}[H]
\centering
\begin{tikzpicture}[main/.style = {draw, circle}, minimum size=0.6cm]
\node[main, line width=1.5pt] (1) {};
\node[main, line width=1.5pt] (2) [above of=1] {};
\node[main, line width=1.5pt] (3) [below left of=1] {};
\node[main, line width=1.5pt] (4) [below right of=1] {};
\node[main] (5) [above left of=2] {};
\node[main] (6) [above right of=2] {};
\node[main] (7) [left of=3] {};
\node[main] (8) [below of=3] {};
\node[main] (9) [right of=4] {};
\node[main] (10) [below of=4] {};
\draw[line width=2.0pt] (1) -- (2);
\draw[line width=2.0pt] (1) -- (3);
\draw[line width=2.0pt] (1) -- (4);
\draw[] (2) -- (7);
\draw[] (2) -- (9);
\draw[] (3) -- (5);
\draw[] (3) -- (10);
\draw[] (4) -- (6);
\draw[] (4) -- (8);
\draw[] (5) -- (6);
\draw[] (6) -- (9);
\draw[] (9) -- (10);
\draw[] (10) -- (8);
\draw[] (8) -- (7);
\draw[] (5) -- (7);
\end{tikzpicture}
\caption{The only possible configuration where a $K_{1,3}$ tree (bold) has grandchildren of weight larger than five within a maximal high-magnitude chromatic forest.}
\label{fig:6ex2}
\end{figure}
For a $K_{1,3}$ tree whose potential grandchildren have weight over five to remain, all six potential grandchildren must have no neighbors outside the $K_{1,3}$ tree.
All grandchildren must have three neighbors, one of which being a child vertex of $T$.
Each grandchild of $T$ must border two other grandchildren of $T$.
There must either be two cycles of length three through the grandchildren or one cycle of length six.
With two cycles, we could root two new trees, each rooted at one of the grandchildren.
So, there must exist a cycle of length six through the six grandchildren.
If all potential grandchildren are vertices in $U$, this would create a cycle of degree-three vertices. As such, in this situation, there must exist a high-magnitude vertex as a possible grandchild.
We only consider high-magnitude vertices adjacent to vertices in $U'$ as potential grandchildren, so there must also exist at least one vertex in $U'$.
As a vertex in $U'$ is adjacent to three high-magnitude vertices, and neither the vertices in the $K_{1,3}$ tree nor the potential grandchildren can have neighbors outside this structure, there must be at least three high-magnitude vertices in the $K_{1,3}$ tree or its potential grandchildren.
Presume no further high-magnitude vertices or vertices in $U'$ are included.
Then, we can remove the current $K_{1,3}$ tree and create a new one rooted at the vertex in $U'$ instead, with the three high-magnitude vertices as its children.
Then, the tree will have fewer than six potential grandchildren, as only vertices in $U$ and high-magnitude vertices adjacent to vertices in $U'$ count as potential grandchildren.
Otherwise, all grandchildren would be vertices in $U$.
In this case, the vertices in $U$ would form a cycle, which is not possible.
Consider the situation where the tree and its potential grandchildren contain more high-magnitude vertices adjacent to a vertex in $U'$.
Then, there must exist another vertex in $U'$. The high-magnitude vertices adjacent to this vertex in $U'$ cannot be adjacent to another tree in the bushy forest:
there cannot exist two disjoint $K_{1,3}$ trees in the $K_{1,3}$ tree and its potential grandchildren.
Otherwise, we could replace the $K_{1,3}$ tree with two such trees.
As such, the two vertices in $U'$ must share a common neighbor in $N_3$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[main/.style = {draw, circle}, minimum size=0.6cm]
\node[main,fill=red] (1) {U};
\node[main,fill=green] (2) [above of=1] {N};
\node[main,fill=green] (3) [below left of=1] {N};
\node[main,fill=green] (4) [below right of=1] {N};
\node[main,fill=red] (5) [above left of=2] {U};
\node[main,fill=cyan] (6) [above right of=2] {N};
\node[main,fill=cyan] (7) [left of=3] {N};
\node[main,fill=red] (8) [below of=3] {U};
\node[main,fill=red] (9) [right of=4] {U};
\node[main,fill=cyan] (10) [below of=4] {N};
\node[main,fill=green] (11) [below left of=10] {L};
\node[main,fill=green] (12) [below right of=9] {L};
\node[main,fill=green] (13) [ left of=7] {L};
\node[main,fill=red] (14) [below of=11] {R};
\draw[] (1) -- (2);
\draw[] (1) -- (3);
\draw[] (1) -- (4);
\draw[] (2) -- (7);
\draw[] (2) -- (9);
\draw[] (3) -- (5);
\draw[] (3) -- (10);
\draw[] (4) -- (6);
\draw[] (4) -- (8);
\draw[] (5) -- (6);
\draw[] (6) -- (9);
\draw[] (9) -- (10);
\draw[] (10) -- (8);
\draw[] (8) -- (7);
\draw[] (5) -- (7);
\draw[] (11) -- (10);
\draw[] (11) -- (3);
\draw[] (12) -- (6);
\draw[] (12) -- (4);
\draw[] (13) -- (7);
\draw[] (13) -- (2);
\draw[] (14) -- (11);
\draw[] (14) -- (12);
\draw[] (14) -- (13);
\end{tikzpicture}
\hspace{1cm}%
\begin{tikzpicture}[main/.style = {draw, circle}, minimum size=0.6cm]
\node[main,fill=red] (1) {U};
\node[main,fill=green] (2) [above of=1] {N};
\node[main,fill=cyan] (3) [below left of=1] {N};
\node[main,fill=green] (4) [below right of=1] {N};
\node[main,fill=red] (5) [above left of=2] {U};
\node[main,fill=cyan] (6) [above right of=2] {N};
\node[main,fill=cyan] (7) [left of=3] {N};
\node[main,fill=red] (8) [below of=3] {U};
\node[main,fill=red] (9) [right of=4] {U};
\node[main,fill=green] (10) [below of=4] {N};
\node[main,fill=cyan] (11) [below left of=10] {L};
\node[main,fill=green] (12) [below right of=9] {L};
\node[main,fill=green] (13) [ left of=7] {L};
\node[main,fill=red] (14) [below of=11] {R};
\draw[] (1) -- (2);
\draw[] (1) -- (3);
\draw[] (1) -- (4);
\draw[] (2) -- (7);
\draw[] (2) -- (9);
\draw[] (3) -- (5);
\draw[] (3) -- (10);
\draw[] (4) -- (6);
\draw[] (4) -- (8);
\draw[] (5) -- (6);
\draw[] (6) -- (9);
\draw[] (9) -- (10);
\draw[] (10) -- (8);
\draw[] (8) -- (7);
\draw[] (5) -- (7);
\draw[] (11) -- (10);
\draw[] (11) -- (3);
\draw[] (12) -- (6);
\draw[] (12) -- (4);
\draw[] (13) -- (7);
\draw[] (13) -- (2);
\draw[] (14) -- (11);
\draw[] (14) -- (12);
\draw[] (14) -- (13);
\end{tikzpicture}
\caption{A valid 3-coloring for the only configuration possible where a tree in a maximal high-magnitude chromatic forest has a weight greater than five.}
\label{fig:6ex}
\end{figure}
Now, the $K_{1,3}$ tree and its potential grandchildren must include multiple vertices in $U'$.
Then, the tree in the bushy forest they are adjacent to must neighbor at least six high-magnitude vertices.
The only possible configuration is shown in Figure \ref{fig:6ex}.
In this configuration, there must be three pairs of high-magnitude vertices that neighbor the same leaf of the tree in the bushy forest.
Then, the $K_{1,3}$ tree and its six grandchildren are only attached to the rest of the graph through these three leaves.
These three leaves share a neighbor in a single root vertex, so they must be 2-colorable.
Regardless of their coloration, it is always possible to color this configuration of vertices.
The three leaves either have the same color, or two have the same color and the third has a different color.
Figure \ref{fig:6ex} shows an exemplary coloring for both of these cases.
We now know that if the weight of the grandchildren of a $K_{1,3}$ tree is larger than five, we can either replace the tree and reduce the weight of the grandchildren, or the tree is trivial to color.
As such, we can conclude that we can find a maximal high-magnitude chromatic forest.
The maximal high-magnitude chromatic forest allows us to color vertices in $N_3$ more easily for each vertex in $U'$.
Specifically, for every five vertices in $N_{3,5}$, there must be one vertex in $U'$ adjacent to three of the vertices in $N_{3,5}$.
For the vertices in $N_{3,j}$ for $6 \leq j \leq 8$, they will all be adjacent to a vertex in $U'$.
\end{proof}
We found constraints between the partitions of the vertices: these limit the number of vertices in $N$ and $U$, based on their properties and the number of vertices in $L$.
Furthermore, some high-magnitude vertices can always be colored by using the chromatic forest.
We now create a linear program to decide the worst-case graph for our algorithm.
\section{Analyzing our Algorithm}
\label{sec:lp}
We improve Beigel and Eppstein's algorithm by finding a maximal low-magnitude bushy forest and a maximal high-magnitude chromatic forest.
We enumerate all possible color assignments for selected vertices in these forests.
\lstset{
breaklines=true,
postbreak=\mbox{\textcolor{red}{$\hookrightarrow$}\space},
}
\begin{lstlisting}
Remove vertices of degree one and degree two.
Remove cycles of degree-three vertices and connected subgraphs of at least nine degree-three vertices.
Find a maximal low-magnitude bushy forest.
Find a maximal high-magnitude chromatic forest.
For each color assignment for the internal vertices of the bushy forest and selected vertices of the chromatic forest:
    Color all vertices according to the color assignment.
    Use the (3,2)-CSP algorithm on the remaining vertices.
\end{lstlisting}
We will now prove Theorem \ref{the:1}.
We know constraints on the partitions with different relations to the bushy forest from Section \ref{sec:analysis}.
We use these constraints, along with the time it takes to color each vertex of the graph, to formulate a linear program.
This linear program will find the worst-case graph for our algorithm and determine that it runs in time $\mathcal{O}^*(1.3217^n)$.
\subsection{Linear Program}
We found a set of constraints (Constraints \ref{align:1}, \ref{align:2}, \ref{align:3}, and \ref{align:4}) between the partitions of the vertices.
We use these constraints to create a linear program, which maximizes the runtime of the algorithm.
First, we will discuss the maximization function of the linear program and how it helps us discover the worst-case runtime of our algorithm.
Then, we discuss further constraints that are required for the linear program to function.
Finally, we will discuss how this linear program leads to our final result:
solving \textsc{3-coloring} in time $\mathcal{O}^*(1.3217^n)$.
\subsection{Creating the Linear Program}
\begin{align}
\max \quad
& \log(3)\cdot |R|+\log(2)\cdot |I|+\log(1.36443)\cdot |N^*|+\log(1.34004)\cdot|U^*| \label{ilp:obj}
\end{align}
\begin{align}
|N|-\frac{3}{5}\cdot|N_{3,5}|-|N_{3,6}|-|N_{3,7}|-|N_{3,8}| = |N^*| \label{ilp:7}\\ \quad
|U|+\frac{3}{5}\cdot|N_{3,5}|+|N_{3,6}| = |U^*| \label{ilp:8}
\end{align}
Recall that we iterate over all $3^{|R|}\cdot2^{|I|}$ color assignments for the internal vertices in the maximal low-magnitude bushy forest.
This allows us to remove the leaves ($|L|$) in polynomial time, so they do not increase the exponential runtime.
For the remaining vertices, all vertices in $U$ will be covered by the maximal high-magnitude chromatic forest, along with every high-magnitude vertex adjacent to a vertex in $U'$.
We define the set $U^*$ as all vertices that will be covered by the maximal high-magnitude chromatic forest and thus be colored in time $\mathcal{O}^*(1.34004^{|U^*|})$:
all vertices in $U$, along with all vertices in $N_3$ adjacent to vertices in $U'$ (Lemma \ref{lem:hmchromaticforest}).
The remaining vertices will be colored by the \textsc{(3,2)-CSP} algorithm.
Let $N^*$ be the set of vertices that are neither covered by the maximal low-magnitude bushy forest nor the maximal high-magnitude chromatic forest and will simply be colored using the \textsc{(3,2)-CSP} algorithm.
Indeed, $N^*$ equals all vertices in $N$, excluding high-magnitude vertices adjacent to vertices in $U'$ (Lemma \ref{lem:hmchromaticforest}).
Notice that $3^{|R|}\cdot2^{|I|}\cdot1.36443^{|N^*|}\cdot1.34004^{|U^*|}$ calculates the runtime of the \textsc{3-coloring} algorithm, based on the partitions of the graph.
However, this is not a linear function.
Luckily, taking the logarithm of this function leads to a linear function: $\log(3^{|R|}\cdot2^{|I|}\cdot1.36443^{|N^*|}\cdot1.34004^{|U^*|})$.
Hence, maximizing this function determines the parameters $|R|$, $|I|$, $|N^*|$, and $|U^*|$ that maximize the runtime.
\begin{align}
|R|+|I|+|L|+|N|+|U|=n \label{ilp:1} \\ \quad
\sum_{k=1}^3|N_k|=|N| \label{ilp:9}\\ \quad
\sum_{i=1}^8|N_{3,i}|=|N_3| \label{ilp:10}\\ \quad
\sum_{j=0}^7|U_j|+|U'|=|U| \label{ilp:11}\\ \quad
\sum_{i=1}^8(\frac{8}{i}\cdot|N_{3,i}|) \leq 8\cdot |R| \label{ilp:4}
\end{align}
Constraints \ref{align:1}, \ref{align:2}, \ref{align:3}, and \ref{align:4} function as constraints in the linear program. Constraint \ref{ilp:1} ensures that the partitions of the vertex set sum up to an arbitrary number of vertices $n$.
We add further constraints (Constraints \ref{ilp:9}, \ref{ilp:10}, \ref{ilp:11}) that ensure that other partitions sum up to their parent set.
Finally, we add Constraint \ref{ilp:4}:
by definition, each vertex in $N_{3,i}$ is adjacent to a tree in the maximal bushy forest that is adjacent to $i$ high-magnitude vertices in total.
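To make the formulation concrete, the following is a minimal sketch (not the code used for our analysis) of how this linear program could be implemented with \texttt{scipy.optimize.linprog}. All vertex counts are expressed as fractions of $n$, and any constraint not stated explicitly above is omitted, so the optimum may differ marginally from the results reported below.
\begin{lstlisting}
import numpy as np
from math import log
from scipy.optimize import linprog

# Variables, all as fractions of n: |R|, |I|, |L|, |N_1|, |N_2|,
# |N_{3,1}|..|N_{3,8}|, |U_0|..|U_7|, |U'|.
names = (["R", "I", "L", "N1", "N2"]
         + [f"N3_{i}" for i in range(1, 9)]
         + [f"U{j}" for j in range(8)] + ["Up"])
idx = {name: k for k, name in enumerate(names)}

def row(**coeffs):
    r = np.zeros(len(names))
    for name, v in coeffs.items():
        r[idx[name]] = v
    return r

N3 = {f"N3_{i}": 1.0 for i in range(1, 9)}  # coefficients summing to |N_3|
U = {f"U{j}": 1.0 for j in range(8)}

# Objective: log(3)|R| + log(2)|I| + log(1.36443)|N*| + log(1.34004)|U*|,
# substituting |N*| and |U*| per Constraints 7 and 8.
c = row(R=log(3), I=log(2))
c += log(1.36443) * (row(N1=1, N2=1, **N3)
                     - row(N3_5=3/5, N3_6=1, N3_7=1, N3_8=1))
c += log(1.34004) * (row(Up=1, **U) + row(N3_5=3/5, N3_6=1))

A_ub = [row(R=4, I=2, L=-1),                               # Constraint 1
        row(N1=1, N2=2, L=-2, **N3),                       # Constraint 2
        row(N3_5=1/5, N3_6=2/6, N3_7=5/7, N3_8=1, Up=-1),  # Constraint 3
        row(N2=-2, Up=3, **{k: -3.0 for k in N3},          # Constraint 4
            **{f"U{j}": (10 - j) / (8 - j) for j in range(8)}),
        row(R=-8, **{f"N3_{i}": 8 / i for i in range(1, 9)})]
b_ub = [0, 0, 0, 0, 0]

# Equality: all partitions sum to n = 1. linprog minimizes, so negate c.
res = linprog(-c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.ones(len(names))], b_eq=[1.0])
print("worst-case base per vertex:", np.exp(-res.fun))  # ~1.3217
\end{lstlisting}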
\subsection{Results of the Linear Program}
Table \ref{tab:results} shows the results of the linear program.
In the results, sets of vertices are calculated as their fraction of the total number of vertices.
Notice that most sets of vertices are zero: they do not appear in the worst-case scenario.
The vertices in $N$ (the neighbors of the maximal low-magnitude bushy forest) that do exist are in either $N_{3,6}$ or $N_{2}$.
None of the vertices in $N$ are in $N_1$, so all vertices in $U$ are either in $U_0$ or in $U'$.
The vertices in $U_0$ appear in connected subgraphs of degree-three vertices containing eight vertices in $U_0$: the largest connected subgraph of degree-three vertices that can exist.
\begin{table}[H]
\begin{tabular}{|ll|ll|ll|ll|}
\hline
$\mathbf{|R|}$ & 0.0396825 & $\mathbf{|E|}$ & 0.5555556 & $\mathbf{|N_{3,4}|}$ & 0 & $\mathbf{|U_2|}$ & 0 \\ \hline
$\mathbf{|I|}$ & 0 & $\mathbf{|N_1|}$ & 0 & $\mathbf{|N_{3,5}|}$ & 0 & $\mathbf{|U_3|}$ & 0 \\ \hline
$\mathbf{|L|}$ & 0.1587302 & $\mathbf{|N_2|}$ & 0.0396825 & $\mathbf{|N_{3,6}|}$ & 0.2380952 & $\mathbf{|U_4|}$ & 0 \\ \hline
$\mathbf{|N|}$ & 0.2777778 & $\mathbf{|N_3|}$ & 0.2380952 & $\mathbf{|N_{3,7}|}$ & 0 & $\mathbf{|U_5|}$ & 0 \\ \hline
$\mathbf{|U|}$ & 0.5238095 & $\mathbf{|N_{3,1}|}$ & 0 & $\mathbf{|N_{3,8}|}$ & 0 & $\mathbf{|U_6|}$ & 0 \\ \hline
$\mathbf{|N^*|}$ & 0.0396825 & $\mathbf{|N_{3,2}|}$ & 0 & $\mathbf{|U_0|}$ & 0.4444444 & $\mathbf{|U_7|}$ & 0 \\ \hline
$\mathbf{|U^*|}$ & 0.7619048 & $\mathbf{|N_{3,3}|}$ & 0 & $\mathbf{|U_1|}$ & 0 & $\mathbf{|U'|}$ & 0.0793651 \\ \hline
\end{tabular}
\caption{Results of the linear program.}
\label{tab:results}
\end{table}
The worst-case graph consists of many identical trees.
Each tree is surrounded by six high-magnitude vertices.
The remaining neighbors of the tree are vertices in $N_2$.
Every tree will cause two vertices in $U'$ to exist, with all other vertices not adjacent to a tree being in $U_0$.
One such tree is displayed in Figure \ref{fig:worstcase}.
Specifically, for every vertex in $R$, we have the following number of vertices in other sets:
\begin{enumerate}
\item $L$: 4
\item $N_2$: 1
\item $N_{3,6}$: 6
\item $U_0$: 11.2
\item $U'$: 2
\end{enumerate}
By using the frequency of these sets of vertices, we determine that the worst-case runtime of our algorithm is $\mathcal{O}^*((3^1\cdot1.36443^1\cdot1.34004^{19.2})^{n/25.2})=\mathcal{O}^*(1.3217^n)$.
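As a quick sanity check, this arithmetic can be reproduced directly; the following one-liner (ours, for illustration only) evaluates the expression above.
\begin{lstlisting}
base = (3**1 * 1.36443**1 * 1.34004**19.2) ** (1 / 25.2)
print(base)  # ~1.3217
\end{lstlisting}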
\section{Conclusion}
\label{sec:conclusion}
In this paper, we presented an improved algorithm for \textsc{3-coloring}.
The algorithm finds a maximal low-magnitude bushy forest and a maximal high-magnitude chromatic forest, iterates over the color assignments for selected vertices of these forests, and colors the remaining vertices with the \textsc{(3,2)-CSP} algorithm.
Notably, we introduced the concept of the high-magnitude vertex.
In Beigel and Eppstein's algorithm, the existence of high-magnitude vertices allowed for many vertices to be outside the maximal bushy forest.
Instead, we created the maximal low-magnitude bushy forest: a maximal bushy forest with few high-magnitude vertices.
Additionally, we also created the maximal high-magnitude chromatic forest.
Using these forests, we determine a set of vertices for which we iterate over all possible color assignments.
All vertices not included in this set, nor neighboring this set, will be solved by the \textsc{(3,2)-CSP} algorithm.
We performed a sophisticated analysis of the different properties of the vertices in the graph, defined by their relation to the bushy forest.
This analysis, formulated as a linear program, allowed us to generate the graph for which the runtime of the algorithm would be the largest: time $\mathcal{O}^*(1.3217^n)$.
This analysis helped us find the following structure, a single tree in the family of worst-case graphs:
\begin{figure}[H]
\centering
\begin{tikzpicture}[main/.style = {draw, circle},node distance=1.5cm, minimum size=0.9cm]
\node[main] (1) [fill=red, line width=1.5pt] {$R$};
\node[main] (2) [above of=1,fill=Goldenrod, line width=1.5pt] {$L$};
\node[main] (3) [right of=1,fill=Goldenrod, line width=1.5pt] {$L$};
\node[main] (4) [below of=1,fill=Goldenrod, line width=1.5pt] {$L$};
\node[main] (5) [left of=1,fill=Goldenrod, line width=1.5pt] {$L$};
\node[main] (6) [above left of=2,fill=lime] {$N_3$};
\node[main] (7) [above right of=2,fill=lime] {$N_3$};
\node[main] (8) [above right of=3,fill=lime] {$N_3$};
\node[main] (9) [below right of=3,fill=lime] {$N_3$};
\node[main] (10) [below right of=4,fill=lime] {$N_3$};
\node[main] (11) [below left of=4,fill=lime] {$N_3$};
\node[main] (12) [below left of=5,fill=green] {$N_2$};
\node[main] (13) [above left of=5,fill=green] {$N_2$};
\node[main] (14) [right of=7,fill=cyan] {$U_0$};
\node[main] (15) [above right of=14,fill=cyan] {$U_0$};
\node[main] (16) [right of=10,fill=cyan] {$U_0$};
\node[main] (17) [below right of=16,fill=cyan] {$U_0$};
\node[main] (18) [right of=12,fill=cyan] {$U_0$};
\node[main] (19) [left of=9,fill=cyan] {$U_0$};
\node[main] (20) [below left of=12,fill=cyan] {$U_0$};
\node[main] (21) [left of=12,fill=cyan] {$U_0$};
\node[main] (22) [left of=13,fill=cyan] {$U_0$};
\node[main] (23) [above left of=13,fill=cyan] {$U_0$};
\node[main] (24) [right of=8,fill=teal] {$U'$};
\node[main] (25) [right of=9,fill=teal] {$U'$};
\node[main] (26) [below of=12,fill=Goldenrod] {$L$};
\node[main] (27) [above of=13,fill=Goldenrod] {$L$};
\draw[line width=2.0pt] (1) -- (2);
\draw[line width=2.0pt] (1) -- (3);
\draw[line width=2.0pt] (1) -- (4);
\draw[line width=2.0pt] (1) -- (5);
\draw (2) -- (6);
\draw (2) -- (7);
\draw (3) -- (8);
\draw (3) -- (9);
\draw (4) -- (10);
\draw (4) -- (11);
\draw (5) -- (12);
\draw (5) -- (13);
\draw (6) -- (15);
\draw (6) to [out=45,in=90,looseness=1] (24);
\draw (6) -- (18);
\draw (7) -- (14);
\draw (7) to [out=-20, in=105] (25);
\draw (7) -- (19);
\draw (8) -- (14);
\draw (8) -- (24);
\draw (8) -- (17);
\draw (9) -- (15);
\draw (9) -- (25);
\draw (9) -- (16);
\draw (10) -- (16);
\draw (10) to [out=20, in=-105] (24);
\draw (10) -- (19);
\draw (11) -- (17);
\draw (11) to [out=-45,in=-90,looseness=1] (25);
\draw (11) -- (18);
\draw (12) -- (20);
\draw (12) -- (21);
\draw (12) -- (26);
\draw (13) -- (22);
\draw (13) -- (23);
\draw (13) -- (27);
\end{tikzpicture}
\caption{Worst-case scenario for \textsc{3-coloring} (single tree). \textcolor{red}{Red}: root vertices ($R$). \textcolor{orange}{Orange}: internal vertices ($I$). \textcolor{Goldenrod}{Yellow}: leaves ($L$). \textcolor{lime}{Lime}: high-magnitude vertices. \textcolor{green}{Green}: other neighbors to the bushy forest ($N$). \textcolor{teal}{Teal}: vertices outside the bushy forest with only high-magnitude vertices as neighbors ($U'$). \textcolor{cyan}{Cyan}: other vertices ($U$).}
\label{fig:worstcase}
\end{figure}
We used the \textsc{(3,2)-Constraint Satisfaction Problem} algorithm as a black box algorithm, so any improvement to the \textsc{(3,2)-Constraint Satisfaction Problem} will automatically improve our algorithm for \textsc{3-coloring}.
Our algorithm will color many vertices through the bushy and chromatic forests, so the input of the \textsc{(3,2)-Constraint Satisfaction Problem} will be relatively predictable.
One might be able to improve the algorithm for \textsc{3-coloring} by investigating whether the input to the \textsc{(3,2)-Constraint Satisfaction Problem} contains structures that are easy to reduce.
It is unclear whether the concepts presented in this paper could be applied to improve the runtime of $k$-\textsc{coloring} for $k \geq 4$.
Especially for \textsc{4-coloring}, the best known algorithm for the \textsc{(4,2)-Constraint Satisfaction Problem} is exactly the same as that for the \textsc{(3,2)-Constraint Satisfaction Problem}.
However, the \textsc{(4,2)-Constraint Satisfaction Problem} adds a complex new case:
both variables with three and four possible colors require exponential time, but the variables with four colors are more difficult to color than those with three.
If a vertex has a colored neighbor, it will still have three possible colors.
So, we cannot reduce a vertex in polynomial time, unless it has two neighbors which received different colors.
As such, it is significantly more complex to use a partially colored graph and the \textsc{(4,2)-Constraint Satisfaction Problem} to solve \textsc{4-coloring}.
\section{Acknowledgements}
\label{sec:8}
We thank Carla Groenland and Jesper Nederlof for their detailed feedback on earlier drafts of this paper and helpful discussions about graph coloring.
We thank Till Miltzow and Ivan Bliznets for giving recommendations about the writing style of this paper.
\printbibliography
\end{document} |
{
"arxiv_id": "2302.13638",
"language": "en",
"timestamp": "2023-02-28T02:26:09",
"url": "https://arxiv.org/abs/2302.13638",
"yymm": "2302"
} | \section{Introduction}
Performance benchmarks are commonly used as a tool to better understand systems. This includes informing procurement decisions and, through the operation of systems, informing deployment and scaling decisions. These benchmarks seek to understand the likely performance of a user's workload, but also its energy consumption. While benchmarks show good potential for gaining an understanding of performance, it is typically infeasible to benchmark all possible combinations of workload and hardware. This problem is exacerbated in environments which exhibit hardware heterogeneity.
Benchmarks~\cite{anand2008benchmarking, camp1989benchmarking, maire2005typology} -- which produce metrics~\cite{cam_bench, umd} on different hardware under specific workloads, help to identify the `best' hardware. The metrics can then be compared for different hardware options, supporting judgements as to how a specific user's workload would be expected to perform.
We seek to resolve the challenge of evaluating the performance for previously unseen hardware-workload combinations, using the SPEC CPU 2017 dataset.
Previous efforts using linear regression (e.g.,~\cite{benjamin_spec}) have demonstrated the potential to predict performance metrics, but perform poorly for non-linear aspects of hardware evolution. In our work we present a data cleaning pipeline to ensure the data is amenable to modelling.
We explore the potential of three Deep Networks to better model non-linear relationships in the benchmark data. We evaluate a number of fully-connected networks (often referred to as multilayer perceptrons (MLP)) due to the tabular format of the dataset, as well as Convolutional Neural Networks (CNN). Originally developed for learning from image-based data (2-dimensional, greyscale, or 3-dimensional, colour), CNNs have recently gained traction in the case of 1-dimensional datasets such as tables~\cite{tab_conv, Butur, kaggle_1d}. For the first CNN approach, we evaluate a number of networks which contain convolution and pooling operations, whilst for the second CNN approach, we evaluate adding residual blocks as proposed in ResNet~\cite{resnet}. We perform a hyperparameter tuning process within each of these networks. This allows us to demonstrate that our approach can accurately predict unseen benchmark results. From this we are able to achieve $R^2$ scores of 0.96, 0.98 and 0.94 respectively, compared to 0.53 for linear regression.
\begin{table*}[t!h!]
\caption{An overview of the prediction studies that used SPEC datasets.}
\label{tab:rw}
\begin{tabular}{cp{4cm}p{5cm}p{6cm}}
\toprule
Work&Dataset(s)&Technique(s)&Prediction \\
\midrule
\cite{benjamin_spec}&
SPEC CPU / SPEC Java Server&
Custom linear regression model&
Server benchmark performances \\
\cite{berkin2008}&
SPEC 2006&
Custom linear regression model&
Performance of future systems \\
\cite{eyerman_mech}&
SPEC CPU2000 / CPU2006&
Hybrid mechanistic-empirical model&
Commercial processor performance \\
\cite{Jiang}&
SPEC OpenMP&
Classic fractal-based sampling&
Accelerating multithreaded app simulation \\
\cite{Zheng_spec}&
SPEC 2006&
Fine-grained phase-based approach&
Performance and power \\
\cite{lopez2018}&
SPEC 2017&
Multiple Neural Networks&
Computer hardware configuration \\
\cite{tousi}&
SPEC 2017&
Multi-layer perceptron&
Computer performance \\
\textbf{Ours}&
SPEC 2017&
MLP, CNN&
Computer performance \\
\bottomrule
\end{tabular}
\end{table*}
The remainder of this paper is organised as follows. In Section \ref{related}, we discuss prior work focusing on performance prediction. We outline our methodology in Section \ref{method}. We present our results in Section \ref{results} and explore Threats to Validity in Section~\ref{ttv}. We conclude and outline areas of future work in Section \ref{conc}.
\section{Related Work}
\label{related}
Here we present prior work on ML-based performance prediction of computer systems.
Performance prediction is the process of predicting some performance metric for a system based on known characteristics of that system, which is sometimes referred to as empirical performance modeling~\cite{emp_per}. However, we will reduce the scope of our study here down to the prediction of performance metrics for computer systems. In general, performance prediction is for values which can take any value within a given range (e.g., time to complete some task or a numeric value used to compare different systems). As such, the work here focuses on regression techniques.
One of the earliest studies was performed by Ein-Dor and Feldmesser~\cite{ein-dor}. They claimed that, by using readily available data on CPU characteristics, it is possible to predict a given CPU's performance. However, their work is based around simple statistical approaches and cannot be used for the SPEC performance predictions we wish to perform here. {I}pek \textit{et al.}~\cite{engin} used artificial neural networks to predict Instructions per Cycle (IPC) of a given system. Their dataset contains L1 and L2 cache sizes -- the first and second levels in the cache hierarchy -- and front-side bus bandwidth. Their experiments showed that their model predicts IPC with only a 1-2\% error.
Li \textit{et al.}~\cite{li_cloud} extended empirical performance prediction to the cloud environment by developing a tool named CloudProphet. This is effectively a trace-and-replay tool to predict a legacy application's performance if migrated to a cloud infrastructure. As our work here focuses on prediction of benchmark scores, this would not be easily translatable to our work, though it could form a good starting point for predicting the performance of a specific workload on another (non-cloud) computer.
Upadhyay \textit{et al.}~\cite{Upadhyay} discuss performance prediction issues from a different point of view. Their motivation is to consider the other components of a system's hardware while designing a CPU. To select the best CPU combination, they used data mining techniques. Although this could be applied to the SPEC datasets, we would argue that the non-linear nature of new hardware would make this a less than accurate approach.
A number of prediction approaches have been proposed for prediction of performance metrics for GPUs. Ardalani \textit{et al.}~\cite{ardalani} focused on GPU performance and designed an ensemble of regression learners named Cross-Architecture Performance Prediction (XAPP). However, they intended to predict GPU performance using single-threaded CPU implementations. They achieved a 26.9\% average error on a set of 24 real-world kernels. As they mentioned in their paper, their study cannot capture the impact of texture memory and constant memory. Moreover, as they imply, this is a consequence of having a small dataset that contains only 122 data points. Since our dataset contains more than 20K data points, we are able to use more sophisticated models.
The work by Justus \textit{et al.}~\cite{justus} forms inspiration for our work as they used Multi-Layer Perceptrons for the prediction of execution time for training Deep Learning networks. However, we take this work further by using Convolutional Neural Networks for our predictions and apply it to the SPEC dataset.
\subsection{Predictions from the SPEC datasets}
A number of works have addressed the problem of predicting metrics for the SPEC datasets. As there have only been two prior works which address the SPEC 2017 dataset, we expand our discussion here to cover all of the SPEC datasets. A summary of these works can be found in Table~\ref{tab:rw}.
Lee~\cite{benjamin_spec} and Ozisikyilmaz \textit{et al.}~\cite{berkin2008} used linear regression models for predicting benchmark performance. Our work seeks to overcome potential limitations by modelling non-linear responses.
Eyerman \textit{et al.}~\cite{eyerman_mech} developed a mechanistic model built on interval analysis which breaks the total execution time into intervals based on missed events, for out-of-order superscalar processors.
Jiang \textit{et al.}.~\cite{Jiang} presented a study to evaluate design alternatives for computer architectures. They designed a fractal-based sampling to speed up parallel microarchitecture simulation with multithreaded applications. Due to the fact that they mainly intend to obtain samples from parallel programming datasets, the only similarity with our study is the use of SPEC-based datasets.
Zheng \textit{et al.}~\cite{Zheng_spec} proposed a unified learning-based framework named LACross to estimate time-varying software performance and power consumption on a target hardware platform.
Lopez \textit{et al.}~\cite{lopez2018} used multiple neural networks for a classification task for predicting the best computer hardware configuration options. Although their work demonstrates the validity of using Deep Learning on SPEC datasets, their underlying problem is quite different to ours. The closest work to ours is that of Tousi and Lujan~\cite{tousi}, which uses MLPs for the prediction of computer performance. We go further by demonstrating how the use of Convolutional Neural Networks can be used to provide better results.
\section{Methodology}
\label{method}
All experiments are run on a Tesla T4 GPU, two Intel Xeon(R) CPUs @ 2.30GHz, and 12 GB of memory. As the SPEC 2017 dataset is not directly in a format which can be used for machine learning, we first discuss the process used for dataset cleansing in order to provide data which can be fed directly to our Deep Learning networks. We then go on to cover the search space of Deep Learning networks which we have evaluated as part of this work.
\begin{table}[t]
\caption{Columns of SPEC2017}
\label{tab:spec_columns}
\begin{tabular}{cl}
\toprule
Data Type&Column \\
\midrule
String & Benchmark, Hardware Vendor, System, \\
& Processor, CPU(s) Orderable, 1st Level Cache, \\
&
2nd Level Cache, 3rd Level Cache, Other Cache, \\
& Storage, Operating System, File System, \\
& Compiler, License, Tested By, Test Sponsor \\ \hline
Numerical & Peak Result, Base Result, Energy Peak Result, \\
& Energy Base Result, \# Cores, \# Chips, Memory, \\
& \# Enabled Threads Per Core, Processor MHz \\ \hline
Binary & Parallel \\ \hline
Ternary & Base Pointer Size \\ \hline
Quaternary & Peak Pointer Size \\ \hline
Date & HW Avail, SW Avail, Test Date, Published, \\
(mon-yyyy) & Updated \\ \hline
Text & Disclosures \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Dataset cleansing}
\label{subse:data-clensing}
Within this work we consider how to prepare the SPEC 2017 benchmark dataset for machine learning. The dataset includes 34 attributes, as illustrated in Table~\ref{tab:spec_columns}. The numeric columns \emph{Peak Result} and \emph{Base Result} represent the response time of systems under load or no load respectively and are the values we seek to predict in this work. We perform the following pre-processing on the data, making it amenable for model training. Our approach to mitigating inconsistencies and data quality issues includes the following:
\begin{description}
\item[Alphanumeric cleaning:] Non-alphanumeric characters such as tabs and escape characters are removed from the dataset. We also remove spaces from column names to make downstream processing easier. All characters are converted to lower case to remove inconsistencies.
\item[Removal of outliers:] Some of the \emph{Base Result} values were zero, which is clearly incorrect. As there were only a small number of these, they are removed.
\item[Making units consistent:] Units varied across the data (e.g., memory in KB, MB, GB). All units are standardised to MB.
\item[Make columns categorical:] Many of the columns, although appearing to allow arbitrary data, are actually highly constrained (e.g., Memory can only take a small range of values). As such, the set of these values was determined and the data was replaced with categorical labels.
\item[Removal of highly correlated columns:] We used Kendall's rank correlation~\cite{kendall} to identify those columns which are highly correlated. It was determined, in our case, that the columns `CPU(s) Orderable', `Energy Base Result', `License', `Parallel', `System', `Test Sponsor', and `Tested By' were more than 70\% correlated with other columns. As strongly correlated variables may have almost the same ability to predict the result value for an observation, due to their linear dependence, they were eliminated. It should be noted that we also evaluated Pearson and Spearman correlation and obtained similar results.
\end{description}
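A minimal sketch of this pipeline, written in pandas, is shown below. This is not our exact implementation: the file name, the raw column formats, and the columns chosen for categorical encoding are illustrative assumptions.
\begin{lstlisting}
import re
import numpy as np
import pandas as pd

df = pd.read_csv("spec2017.csv")  # hypothetical file name

# Alphanumeric cleaning: strip spaces from column names, lower-case text
# fields and drop non-alphanumeric characters.
df.columns = [re.sub(r"\s+", "", c) for c in df.columns]
text_cols = df.select_dtypes("object").columns
df[text_cols] = df[text_cols].apply(
    lambda s: s.str.lower().str.replace(r"[^0-9a-z .,/-]", "", regex=True))

# Removal of outliers: zero Base Result values are clearly incorrect.
df = df[df["BaseResult"] > 0]

# Making units consistent: standardise memory strings to MB.
factor = {"kb": 1 / 1024, "mb": 1, "gb": 1024, "tb": 1024 ** 2}
def to_mb(text):
    m = re.match(r"([\d.]+)\s*(kb|mb|gb|tb)", str(text))
    return float(m.group(1)) * factor[m.group(2)] if m else None
df["Memory"] = df["Memory"].map(to_mb)

# Make columns categorical: replace constrained strings with integer codes.
for col in ["Processor", "OperatingSystem", "Compiler"]:
    df[col] = df[col].astype("category").cat.codes

# Removal of highly correlated columns (Kendall's tau above 0.7).
corr = df.select_dtypes("number").corr(method="kendall").abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
df = df.drop(columns=[c for c in upper.columns if (upper[c] > 0.7).any()])
\end{lstlisting}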
\subsection{Searching for the `best' Neural Network}
The shape (layers and neurons per layer) of Deep Learning networks significantly impacts performance. We perform a space search for the `best' network for the SPEC data. We identify three network structures, two trapezium and one rectangular, and populate these with either single neurons, convolutional nodes, or residual blocks.\label{archSearch} We evaluate three core network designs within this work: fully-connected networks, convolutional neural networks, and networks which use Residual blocks as proposed by the ResNet architecture~\cite{resnet}. We detail the design of each of these architectures:
\subsubsection{Fully-Connected Networks:}
We evaluate three network structures: a network with a strictly decreasing number of neurons per layer -- which we will refer to as a trapezium network hereafter, see Figure \ref{fig:rMLP} -- the reverse of this -- referred to as a reverse trapezium -- and a rectangular network with the same number of neurons in each layer. For the trapezium network the first layer has $2^n$ neurons. Each subsequent layer has half the number of neurons of the previous layer. The penultimate layer has $2^{n-m}$ neurons where $n-m > 1$. We vary the values of $n$ in the range $[4, ..., 11]$ and $m$ in the range $[1, ..., 10]$. The final layer of the network contains just a single neuron to provide the regression result. Reverse trapezium networks flip the order of the layers (apart from the last), having the narrowest layer first and the widest layer last.
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{images/Rough_MLP.drawio.png}
\caption{Sketch view of trapezium-shaped MLPs}
\label{fig:rMLP}
\Description{Sketch view of trapezium-shaped MLPs}
\end{figure}
The rectangular networks contain $m$ layers and have $2^n$ neurons in each layer, with a final layer containing only one neuron to provide the regression result. Although this network does not vary in shape between layers, the network learns weights which cause some neurons in a level to become redundant, effectively learning itself the number of neurons to place in each layer.
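The following is a hedged sketch of how such a trapezium network could be constructed. We use PyTorch here purely for illustration (the framework choice and the number of input features are assumptions, not details from our experiments), and the reverse trapezium and rectangular variants follow by reversing or fixing the width list.
\begin{lstlisting}
import torch.nn as nn

def trapezium_mlp(in_features, n=8, m=4):
    # Widths 2^n, 2^(n-1), ..., 2^(n-m), then one regression neuron.
    widths = [2 ** k for k in range(n, n - m - 1, -1)]
    layers, prev = [], in_features
    for w in widths:
        layers += [nn.Linear(prev, w), nn.ReLU()]
        prev = w
    layers.append(nn.Linear(prev, 1))  # single-neuron regression output
    return nn.Sequential(*layers)

# Reverse trapezium: use widths[::-1]; rectangular: [2 ** n] * m.
model = trapezium_mlp(in_features=27, n=8, m=4)  # 27 features is illustrative
\end{lstlisting}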
\subsubsection{CNN design:}
The CNN network consists of a number of convolutional layers followed by a fully-connected set of layers. Figure \ref{fig:tri_cnn} illustrates the shape of these networks. It should be noted that in these cases the fully-connected layers are smaller than those where we only use fully-connected layers. As our data is tabular, we use 1D convolutional layers -- i.e. our kernels/filters are 1D and of size $k \in [2,...,5]$. Again, we adopt the trapezium format of the first convolutional layer having a width (number of filters) of $2^n$ and each subsequent layer having half the width of the previous layer. With the last convolutional layer having a width of $2^{n-m}$ ($n-m > 1$). The fully-connected layers are trapezium in shape and range in nodes per layer between $2^p$ and $2^{p-q}$. We allow $n$, $m$, $p$ and $q$ to vary in the ranges $[7, ..., 11], [4, ..., 7], [7, ..., 11],$ and $[5, ..., 7]$, respectively. Initial experiments indicated that searches within these ranges yielded the best results.
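A sketch of this structure is given below, again assuming PyTorch and an illustrative input width; the pooling operations are omitted here for brevity.
\begin{lstlisting}
import torch
import torch.nn as nn

def trapezium_cnn(in_len, n=9, m=5, p=8, q=5, k=3):
    # 1D convolutional widths 2^n down to 2^(n-m) with kernel size k.
    conv, ch = [], 1
    for w in (2 ** e for e in range(n, n - m - 1, -1)):
        conv += [nn.Conv1d(ch, w, kernel_size=k, padding="same"), nn.ReLU()]
        ch = w
    # Trapezium fully-connected head, 2^p down to 2^(p-q), then one neuron.
    head, prev = [nn.Flatten()], ch * in_len
    for w in (2 ** e for e in range(p, p - q - 1, -1)):
        head += [nn.Linear(prev, w), nn.ReLU()]
        prev = w
    head.append(nn.Linear(prev, 1))
    return nn.Sequential(*conv, *head)

x = torch.randn(32, 1, 27)       # a batch of 32 rows with 27 features
y = trapezium_cnn(in_len=27)(x)  # output shape: (32, 1)
\end{lstlisting}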
\begin{figure}
\centering
\includegraphics[width=\linewidth]{images/tri_cnn.pdf}
\caption{The CNN network structure}
\label{fig:tri_cnn}
\end{figure}
\subsubsection{Residual design:}
We adopt Residual blocks~\cite{resnet}, where a `bypass' link around a set of convolutional units is merged with the output from the convolutional units. Figure \ref{fig:IdentBlock} illustrates this network topology and we refer to this hereafter as the identity block. The width of the input and output to the identity block must be the same ($2^p$). By convention, the width of the first two convolutions is $2^{p-2}$, with the width of the last convolution being $2^p$ to restore the original size. We allow $p \in [6, ..., 11]$.
One restriction of the original identity block is that the shape of the data entering the block must be the same as the shape of the output -- otherwise the merging of the data from the `bypass' will not be possible. In order to overcome this, we add a convolution unit to the `bypass' path which has the same output width as the final convolution in the main path -- see Figure \ref{fig:ConvBlock}. In this case, the first two convolutions on the main path have a width of $2^{p-2}$, while the last convolution on the main path and the `bypass' path have widths of $2^{p}$. We refer to this as a convolutional block.
The two block templates are then combined to produce a superblock (Figure \ref{fig:superblock}). Each superblock starts with a convolutional block followed by $r$ identity blocks. The output width of each block (both identity and convolutional) within a superblock is $2^{p}$, which is also the output width of the whole superblock.
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{images/ResNet_IdentBlock.pdf}
\caption{The identity block}
\label{fig:IdentBlock}
\Description{A block with three convolutional layers with different specifications.}
\end{figure}
Superblocks can then be concatenated together as in Figure \ref{fig:ResNet}. Here, the original vector data is fed into a set of $w$ superblocks. Following the convention of ResNet, the output width of each superblock is double that of the previous superblock. Finally, the output from the last superblock is flattened before being fed into a single neuron to predict the regression value.
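The following minimal functional-API sketch (assuming Keras) puts the identity block, convolutional block and superblock together; the ReLU activations, the kernel size of 3, and the particular widths and block counts are illustrative assumptions rather than our exact searched configuration.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def identity_block(x, p, k=3):
    # Main-path widths 2^(p-2), 2^(p-2), 2^p; input width must be 2^p.
    y = layers.Conv1D(2**(p - 2), k, padding='same', activation='relu')(x)
    y = layers.Conv1D(2**(p - 2), k, padding='same', activation='relu')(y)
    y = layers.Conv1D(2**p, k, padding='same')(y)
    return layers.Activation('relu')(layers.Add()([x, y]))

def conv_block(x, p, k=3):
    y = layers.Conv1D(2**(p - 2), k, padding='same', activation='relu')(x)
    y = layers.Conv1D(2**(p - 2), k, padding='same', activation='relu')(y)
    y = layers.Conv1D(2**p, k, padding='same')(y)
    # The bypass convolution reshapes the input to the main-path width.
    shortcut = layers.Conv1D(2**p, 1, padding='same')(x)
    return layers.Activation('relu')(layers.Add()([shortcut, y]))

def superblock(x, p, r):
    x = conv_block(x, p)
    for _ in range(r):
        x = identity_block(x, p)
    return x

inputs = tf.keras.Input(shape=(24, 1))
x = inputs
for i, r in enumerate([2, 5, 5, 2]):  # w = 4 superblocks, widths doubling
    x = superblock(x, p=8 + i, r=r)   # output widths 2^8 .. 2^11
outputs = layers.Dense(1)(layers.Flatten()(x))
model = tf.keras.Model(inputs, outputs)
\end{verbatim}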
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/ResNet_ConvBlock.pdf}
\caption{The convolutional block}
\label{fig:ConvBlock}
\Description{A block with three convolutional layers with different specifications and a skipping block with the same size as the third layer.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{images/super_block.pdf}
\caption{A superblock constructed from a convolution block and $r$ identity blocks}
\label{fig:superblock}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{images/resnet.pdf}
\caption{The overall design of our ResNet model}
\label{fig:ResNet}
\Description{1D ResNet.}
\end{figure}
\subsection{Hyperparameter search}
In addition to performing a neural architecture search over the architecture range specified in Section~\ref{archSearch}, we also conducted a thorough search across the hyperparameters of the networks. This included the optimiser, the number of training epochs, the loss function and the activation function; a sketch of the resulting search loop is given at the end of this subsection.
\subsubsection{Optimiser:} The optimiser determines how the weights of the network are updated after each training step. This work focuses on three of the most commonly used optimisers:
{\bf SGD:} Stochastic Gradient Descent is the original optimiser used for Deep Learning. Although, strictly speaking, stochastic gradient descent performs an update after each training sample is processed, we adopt the normal convention of performing the optimisation step after each batch of data is processed -- more correctly referred to as Batched Stochastic Gradient Descent.
{\bf RMSprop:} Root Mean Squared Propagation extends SGD by scaling the step size of each parameter using a decaying average of recent squared gradients, so the optimiser focuses more on recent gradients.
{\bf Adam:} ADAptive Moment estimation~\cite{adam} is an extension of SGD. Like RMSprop, Adam adopts a separate learning rate for each parameter. While RMSprop adapts using only a decaying average of the second moment (the squared gradients), Adam also uses a decaying average of the first moment (the gradients themselves) when choosing how to adapt the learning rates.
\subsubsection{Loss function:} The loss function is used to determine the difference between the predicted values and the true values. We evaluate two loss functions, Mean Squared Error (MSE, Equation~\ref{eqn:mse}) and Mean Absolute Error (MAE, Equation~\ref{eqn:mae}),
\hspace{-0.6cm}\begin{minipage}{.5\linewidth}
\begin{equation}\label{eqn:mse}
MSE = \frac{1}{N}\sum_{i=1}^N (y_i' - y_i)^2 ,
\end{equation}
\end{minipage}
\begin{minipage}{.5\linewidth}
\begin{equation}\label{eqn:mae}
MAE = \frac{1}{N}\sum_{i=1}^N |y_i' - y_i| ,
\end{equation}
\end{minipage}
\noindent where $N$ is the number of samples, and $y_i'$ and $y_i$ are the predicted and true values, respectively. Larger errors have a disproportionately large impact on the MSE loss, which we would expect to lead to fewer outlier predictions.
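Written out directly, the two losses are straightforward; the NumPy sketch below mirrors Equations~\ref{eqn:mse} and~\ref{eqn:mae}, with \texttt{y\_pred} and \texttt{y\_true} standing for arrays of length $N$.
\begin{verbatim}
import numpy as np

def mse(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)

def mae(y_pred, y_true):
    return np.mean(np.abs(y_pred - y_true))
\end{verbatim}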
\subsubsection{Activation function:}
The activation function provides the non-linear element within the networks. We evaluated three commonly used activation functions within our work: sigmoid, tanh and ReLU. For many problems ReLU has been shown to be the most effective activation function. However, there are a number of cases where the other activation functions are more suited to the problem at hand. As is the convention, no activation function was used on the final output layer to allow for arbitrary output values.
\subsubsection{Stride:}
Stride is the number of cells by which the kernel shifts over the input matrix. While the stride size is varied as part of the search in the CNN experiments, we kept it fixed in the residual-inspired models, except for the starting layer of the convolutional blocks. Our dataset has 24 columns of independent variables, making the shape of the input data (1, 24); we defined the stride size in the range $[1, ..., 4]$.
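Putting these dimensions together, the sketch below illustrates the search loop we iterate over; \texttt{build\_model} stands for any of the architecture constructors sketched earlier (a hypothetical name), and the grid shown is a simplified subset of the full search space.
\begin{verbatim}
import itertools

optimisers = ['sgd', 'rmsprop', 'adam']
losses = ['mse', 'mae']
activations = ['sigmoid', 'tanh', 'relu']
epoch_counts = [100, 150, 200, 250, 300]

for opt, loss, act, epochs in itertools.product(
        optimisers, losses, activations, epoch_counts):
    model = build_model(activation=act)  # hypothetical constructor
    model.compile(optimizer=opt, loss=loss)
    model.fit(X_train, y_train, batch_size=10, epochs=epochs,
              validation_data=(X_val, y_val), verbose=0)
\end{verbatim}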
\subsection{Implementation Details}
We use an 80-20 training-test split. The training data is further split into training and validation sets (80\% and 20\%, respectively). The batch size is set to 10. Each model is trained five times with different random seeds and we report its average performance. The data splitting process was purposely designed to mimic real-world operation on future unseen data; adjusting class distributions via a controlled split approach such as cross-validation would not reflect an uncontrolled future configuration of data~\cite{cv}. By using random seeds to generate random splits of the data for each experiment, we can be confident that our models are capable of responding to random distributions of data in the real world.
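A minimal sketch of this splitting scheme, assuming scikit-learn and with \texttt{X} and \texttt{y} standing for the cleaned features and targets, is:
\begin{verbatim}
from sklearn.model_selection import train_test_split

def make_splits(X, y, seed):
    X_tr, X_test, y_tr, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    X_tr, X_val, y_tr, y_val = train_test_split(
        X_tr, y_tr, test_size=0.2, random_state=seed)
    return (X_tr, y_tr), (X_val, y_val), (X_test, y_test)

# One split per random seed, five in total.
splits = [make_splits(X, y, seed) for seed in range(5)]
\end{verbatim}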
We use the Glorot uniform initialiser~\cite{glorot} for initialising the parameters within our networks, which scales the initial weights so that the variance of the activations is approximately equal across all layers. This constant variance helps keep gradients from exploding or vanishing. In addition, the initial bias values were set to zero~\cite{kerasInit}.
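Both of these choices correspond to the Keras defaults; written out explicitly, a layer definition would look like the following sketch (illustrative, not our exact layer code).
\begin{verbatim}
from tensorflow.keras import layers

dense = layers.Dense(512, activation='relu',
                     kernel_initializer='glorot_uniform',
                     bias_initializer='zeros')
\end{verbatim}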
We allowed the number of epochs to vary between 100 and 300 in steps of 50. We stopped training at 300 epochs, by which point models had reached their best performance on the test set.
\subsubsection{Baseline Models}
We consider three baseline models:
{\bf Linear Regression:} We would expect that this model would perform well for similar hardware, but perform poorly when there is a non-linear change in hardware performance.
{\bf Support Vector Regression:} SVR often performs better than linear regression as it can fit the data more closely. However, it still suffers from the fact that it is a linear model and hence is not expected to adapt well to step-changes in the hardware.
{\bf Random Forest Regression:} This is an ensemble technique which does not suffer from the linearity problems of the other two approaches. It does, however, require prior examples of hardware types to be able to predict new hardware accurately. We would therefore expect it to outperform the other baseline models, but to be less adaptable than the Deep Learning models.
\subsubsection{Evaluation Metrics:} We evaluate model performance using MSE (Equation \ref{eqn:mse}), MAE (Equation \ref{eqn:mae}) and $R^2$:
$$
R^2 = 1 - \frac{\sum_{i=1}^N (y_i - \hat{y_i})^2}{\sum_{i=1}^N (y_i - \bar{y})^2},
$$
where $y_i$ is the true value, $\hat{y_i}$ is the predicted value and $\bar{y}$ is the mean of all true values. For both $R^2$ and MSE, predictions further from the true value have a disproportionately large impact on the result. To measure a model's vulnerability to outliers, focusing on MSE is therefore preferable; a focus on $R^2$ would reduce outliers at the expense of overall accuracy.
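Assuming scikit-learn, the evaluation step reduces to the following sketch, where \texttt{y\_true} and \texttt{y\_pred} are the arrays of true and predicted values.
\begin{verbatim}
from sklearn.metrics import (mean_absolute_error,
                             mean_squared_error, r2_score)

r2 = r2_score(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
\end{verbatim}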
\section{Results}
\label{results}
\begin{table*}[ht]
\caption{The results of the best deep networks and machine learning models, ordered by $R^2$}
\label{tab:best20}
\resizebox{2.1\columnwidth}{!}{
\begin{tabular}{cccccccccccc}
\toprule
# & Architecture & Loss Fn & Kernel Sizes & Stride Sizes & Number of Filters (m, n) & Neurons in Layers (p, q) & Optimiser & Epochs & $R^2$ & MAE & MSE \\
\midrule
1 & TriCNN & MAE & 3 & 1 & (9, 7) & {[}9, …, 5{]} & Adam & 250 & 0.98638701 & 5.67389728 & 465.3285655 \\
2 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 250 & 0.98590661 & 5.83946465 & 476.0394343 \\
3 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & Adam & 300 & 0.98579341 & 5.76197731 & 494.124225 \\
4 & TriCNN & MAE & 3 & 1 & (9, 7) & {[}9, …, 5{]} & Adam & 150 & 0.98529142 & 6.25318407 & 513.9629513 \\
5 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & RmsProp & 150 & 0.98282719 & 7.14056732 & 620.2982421 \\
6 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 200 & 0.98280914 & 6.03564805 & 582.3068145 \\
7 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 300 & 0.98278342 & 5.61076184 & 582.0247239 \\
8 & TriCNN & MAE & 3 & 1 & (9, 7) & {[}9, …, 5{]} & Adam & 300 & 0.98107176 & 5.78137347 & \multicolumn{1}{l}{645.4129883} \\
9 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & RmsProp & 250 & 0.98095925 & 6.72097815 & 669.8856237 \\
10 & TriCNN & MAE & 3 & 1 & (9, 7) & {[}9, …, 5{]} & Adam & 200 & 0.98089907 & 6.32291809 & 665.1641919 \\
11 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & Adam & 150 & 0.98047251 & 6.71537772 & 663.7030719 \\
12 & TriCNN & MAE & 3 & 1 & (7, 6, 5, 4) & {[}9, …, 5{]} & RmsProp & 300 & 0.98038864 & 6.9974749 & \multicolumn{1}{l}{653.5821786} \\
$\sim$ & RF & & & & & & & & 0.9803076 & 4.76701531 & 688.0001262 \\
13 & TriCNN & MAE & 3 & 1 & (7, 6, 5, 4) & {[}9, …, 5{]} & RmsProp & 200 & 0.98002879 & 7.62788323 & 684.7595471 \\
14 & TriCNN & MAE & 2 & 1 & (9, 7) & {[}11, …, 6{]} & Adam & 150 & 0.9793459 & 6.519971 & 703.0615545 \\
15 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & Adam & 100 & \multicolumn{1}{l}{0.97782539} & \multicolumn{1}{l}{8.23651529} & 754.5381605 \\
16 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 100 & 0.97748578 & 7.30871799 & 757.4994833 \\
17 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 150 & 0.97726148 & 6.65855022 & 772.0747562 \\
18 & TriCNN & MAE & 3 & 1 & (7, 6, 5, 4) & {[}9, …, 5{]} & RmsProp & 250 & 0.97665471 & 7.86703389 & \multicolumn{1}{l}{775.8960386} \\
19 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & RmsProp & 250 & 0.97650919 & 7.97325412 & 852.3545636 \\
20 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & RmsProp & 300 & 0.97636563 & 6.91501173 & 816.7881606 \\
45 & TriMLP & MAE & & & & {[}11, …, 6{]} & Adam & 250 & \multicolumn{1}{l}{0.97347275} & \multicolumn{1}{l}{9.12443258} & \multicolumn{1}{l}{906.1439402} \\
159 & Residual& MAE & \multicolumn{2}{l}{Number of Superblocks = (2, 5, 5, 2)} & \multicolumn{1}{l}{((6, 6, 8), (7, 7, 9), (8, 8, 10), (9, 9, 11))} & 1 & RmsProp & 250 & 0.95007233 & 10.595069 & 1006.134564 \\
$\sim$ & LR & & & & & & & & 0.52639158 & 82.4596122 & 15761.16107 \\
$\sim$ & SVR & & & & & & & & -0.0045634 & 113.749207 & 33448.30886 \\
\bottomrule
\end{tabular}
}
\resizebox{1.5\columnwidth}{!}{* TriCNN = Trapezium-shaped CNN, RF = Random Forest Regression, TriMLP = Trapezium-shaped MLP, LR = Linear Regression, SVR = Support Vector Regression}
\end{table*}
\begin{table*}[ht]
\caption{The results of the best deep networks and machine learning models, ordered by MSE}
\label{tab:best20mse}
\resizebox{2.1\columnwidth}{!}{
\begin{tabular}{cccccccccccc}
\toprule
# & Architecture & Loss Fn & Kernel Sizes & Stride Sizes & Number of Filters (m, n) & Neurons in Layers (p, q) & Optimiser & Epochs & $R^2$ & MAE & MSE \\
\midrule
1 & TriCNN & MAE & 3 & 1 & (9, 7) & {[}9, …, 5{]} & Adam & 250 & 0.98638701 & 5.67389728 & 465.3285655 \\
2 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 250 & 0.98590661 & 5.83946465 & 476.0394343 \\
3 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & Adam & 300 & 0.98579341 & 5.76197731 & 494.124225 \\
4 & TriCNN & MAE & 3 & 1 & (9, 7) & {[}9, …, 5{]} & Adam & 150 & 0.98529142 & 6.25318407 & 513.9629513 \\
5 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 300 & 0.98278342 & 5.61076184 & 582.0247239 \\
6 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 200 & 0.98280914 & 6.03564805 & 582.3068145 \\
7 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & RmsProp & 150 & 0.98282719 & 7.14056732 & 620.2982421 \\
8 & TriCNN & MAE & 3 & 1 & (9, 7) & {[}9, …, 5{]} & Adam & 300 & 0.98107176 & 5.78137347 & \multicolumn{1}{l}{645.4129883} \\
9 & TriCNN & MAE & 3 & 1 & (7, 6, 5, 4) & {[}9, …, 5{]} & RmsProp & 300 & 0.98038864 & 6.9974749 & \multicolumn{1}{l}{653.5821786} \\
10 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & Adam & 150 & 0.98047251 & 6.71537772 & 663.7030719 \\
11 & TriCNN & MAE & 3 & 1 & (9, 7) & {[}9, …, 5{]} & Adam & 200 & 0.98089907 & 6.32291809 & 665.1641919 \\
12 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & RmsProp & 250 & 0.98095925 & 6.72097815 & 669.8856237 \\
13 & TriCNN & MAE & 3 & 1 & (7, 6, 5, 4) & {[}9, …, 5{]} & RmsProp & 200 & 0.98002879 & 7.62788323 & 684.7595471 \\
$\sim$ & RF & & & & & & & & 0.9803076 & 4.76701531 & 688.0001262 \\
14 & TriCNN & MAE & 2 & 1 & (9, 7) & {[}11, …, 6{]} & Adam & 150 & 0.9793459 & 6.519971 & 703.0615545 \\
15 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & Adam & 100 & \multicolumn{1}{l}{0.97782539} & \multicolumn{1}{l}{8.23651529} & 754.5381605 \\
16 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 100 & 0.97748578 & 7.30871799 & 757.4994833 \\
17 & TriCNN & MAE & 3 & 2 & (9, 7, 6, 5, 4) & {[}9, …, 4{]} & Adam & 150 & 0.97726148 & 6.65855022 & 772.0747562 \\
18 & TriCNN & MAE & 3 & 1 & (7, 6, 5, 4) & {[}9, …, 5{]} & RmsProp & 250 & 0.97665471 & 7.86703389 & \multicolumn{1}{l}{775.8960386} \\
19 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & RmsProp & 200 & 0.97613855 & 7.72461632 & 807.1294185 \\
20 & TriCNN & MAE & 3 & 2 & (9, 7) & {[}9, …, 5{]} & RmsProp & 300 & 0.97636563 & 6.91501173 & 816.7881606 \\
48 & TriMLP & MAE & & & & {[}11, …, 6{]} & Adam & 250 & \multicolumn{1}{l}{0.97347275} & \multicolumn{1}{l}{9.12443258} & \multicolumn{1}{l}{906.1439402} \\
135 & Residual & MAE & \multicolumn{2}{l}{Number of Superblocks = (2, 5, 5, 2)} & \multicolumn{1}{l}{((6, 6, 8), (7, 7, 9), (8, 8, 10), (9, 9, 11))} & 1 & RmsProp & 250 & 0.95007233 & 10.595069 & 1006.134564 \\
39 & LR & & & & & & & & 0.52639158 & 82.4596122 & 15761.16107 \\
40 & SVR & & & & & & & & -0.0045634 & 113.749207 & 33448.30886 \\
\bottomrule
\end{tabular}
}
\resizebox{1.5\columnwidth}{!}{* TriCNN = Trapezium-shaped CNN, RF = Random Forest Regression, TriMLP = Trapezium-shaped MLP, LR = Linear Regression, SVR = Support Vector Regression}
\end{table*}
We present the results of our model training. All results represent the average over the five different splits of the dataset. Tables \ref{tab:best20} and \ref{tab:best20mse} present the top-performing models when sorted by $R^2$ and MSE, respectively.
\subsection{Baseline Models}
We first evaluate our baseline cases. Both Linear Regression ($R^2 = 0.526$, $MSE=15761.2$) and Support Vector Regression ($R^2 = -0.004$, $MSE=33448.31$) performed poorly. Figure~\ref{fig:qq} shows Quantile-Quantile plots for the residuals of the top-performing model of each type: CNN, Linear Regression, Random Forest and SVR. We observe that the CNN models exhibit preferable behaviour at both extremes. Meanwhile, the Linear Regression and Random Forest models exhibit larger residuals for lower quantiles. Finally, Linear Regression, Random Forest and SVR exhibit large variances for the high-performance machines in the dataset. Figure~\ref{fig:boxbymethod} shows the magnitude of residuals for each method. We observe that our CNN-based approach exhibits preferable behaviour to the baseline approaches.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{images/FigResiduals.pdf}
\caption{Q-Q plots for residuals of the best performing CNN, Linear Regression, Random Forest and SVR models}
\label{fig:qq}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/FigBox.pdf}
\caption{Magnitude of residuals, by method.}
\label{fig:boxbymethod}
\end{figure}
\subsection{Deep Learning Models}
For MLP networks, the trapezium networks offered the highest performance, achieving 45th position by $R^2$ and 48th position by MSE. The performance of MLP networks was typically not competitive with CNN-based approaches, so we do not discuss them further. Meanwhile, CNN networks dominate the top 12 and 13 positions for $R^2$ and MSE, respectively.
We hypothesised that residual-inspired approaches would perform favourably in our case, due to their strong performance in other domains; however, this is not borne out by our findings. Residual-inspired approaches only achieved positions of 159th by $R^2$ and 135th by MSE. These models are more complex to engineer and require more time to train, yet provided little benefit in our case.
We now summarise other design choices:
\begin{description}
\item[Optimizer: ] Consistent with prior research, Adam generally performed best, though RMSprop is a strong contender.
\item[Loss function: ] In all cases MAE produced the best results, which is somewhat surprising given that the overall ranking metric is MSE.
\item[Activation Function: ] Sigmoid produced results for the smaller architectures; however, these results were either NaN or negative. The tanh activation function, meanwhile, could not exceed 0.01 in terms of $R^2$.
\item[Stride Size: ] In most cases (by both $R^2$ and MSE), stride sizes of one or two perform best.
\item[Kernel Size: ] The top result uses a kernel size of three, though this is not consistent across all results.
\item[Training Epochs: ] Figure~\ref{fig:epoch} shows the impact of training epochs on performance for our top model, measured by MAE, MSE and $R^2$. The epoch count for stopping our training was determined empirically.
\end{description}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/FigMeasures.pdf}
\caption{Performance for top CNN model, by training epoch.}
\label{fig:epoch}
\end{figure}
\subsection{Evaluation of top cases}
Here, we evaluate the performance of the top four networks by $R^2$ and MSE. As seen in Tables~\ref{tab:best20} and~\ref{tab:best20mse}, four trapezium-shaped CNNs optimised by Adam dominate both tables. All configurations of the first-ranked model are in the top ten -- 1st, 4th, 8th, and 10th in Table~\ref{tab:best20} and 1st, 4th, 8th, and 11th in Table~\ref{tab:best20mse}. However, examining the tables closely, we see that increasing the number of epochs does not consistently benefit this model; instead, its $R^2$ performance fluctuates. At this point, it is important to compare other metrics. If we sort all epoch setups of the first-ranked model by MAE, we see similar fluctuations in performance. Considering the MSE values, although there is no significant difference between the models, the first-ranked configuration appears more resistant to outliers. One would expect a change in the number of epochs to produce a corresponding change in performance, but counter-intuitively no such pattern emerges here. The same argument holds for the second-ranked model in both tables. Furthermore, an increase in the number of convolutional layers does not always translate into an improvement in performance.
\section{Threats to Validity}
\label{ttv}
Here, we introduce the limitations of this work, and highlight threats to validity arising from these. We structure our approach based on similar initiatives in the systems performance literature (e.g.,~\cite{eismann2022case}) and the approach of Wohlin \textit{et al.}~\cite{wohlin2012experimentation}.
\begin{description}
\item[L1 Single benchmark dataset] This study uses only data from SPEC CPU 2017 retrieved on 10 September 2022.
\item[L2 Single expert for data cleaning] Data cleaning processes were developed by a single expert researcher.
\end{description}
We now consider the implication of these limitations in terms of~\emph{construct},~\emph{internal} and~\emph{external} validity.
\textbf{Construct Validity} This work concerns the prediction of performance results. Further work could also evaluate whether predictive performance holds for the \emph{Energy Peak Result} and \emph{Energy Base Result} columns of the dataset.
\textbf{Internal Validity} As highlighted in Section \ref{subse:data-clensing}, our work involved cleaning data for it to be amenable to analysis and machine learning. The development of the cleaning processes was undertaken by a single expert researcher (\textbf{Limitation L2}), leaving the opportunity for misinterpretation of the datasets. To mitigate this impact, the processes undertaken were well documented, and the process was audited by two further researchers. Code to automate data cleaning is made available to the community.
\textbf{External Validity} Our experiment considers data from a single benchmark, SPEC CPU 2017 (\textbf{Limitation L1}), which may limit the generalisability of our findings. While our experiments were conducted for just one benchmark, our methodology is applicable to performance benchmarks more broadly. Further research is required to understand the extent to which our methods are effective for other workloads; we make this possible by providing our data and models for reproduction by other researchers.
\textbf{Reproducibility} We have made all our code and data, including the results of the training of all the networks available\footnote{\url{https://github.com/cengizmehmet/BenchmarkNets}}.
\section{Conclusion}
\label{conc}
This work has considered the extent to which it is possible to predict benchmark results for previously untested hardware configurations. We have specifically focused on the potential of using Deep Network approaches to capture the non-linear relationships present in the data. Our study has centred around the SPEC CPU 2017 dataset.
We investigated three deep network types: MLPs, CNNs, and a residual (ResNet-style) CNN architecture. After comprehensive studies, the models we offer excel at predicting the performance of a given system, with $R^2$ values between approx. 0.945 and 0.985 and MAEs between approx. 3.2 and 13. Secondly, we find that convolutional layers predict our tabular data more effectively, as seen from the performance gain observed when adding convolutional layers to MLPs. Another finding of our paper concerns residual blocks as opposed to simple convolutional layers: contrary to their strong record in other domains, our results indicate that stacks of simple convolutional layers outperform the residual-inspired designs in this setting.
This study is an indication, and a starting point, that deep neural networks can be trained on existing benchmark datasets to predict performance. However, we believe there are many areas of future work. One avenue would be to extend the application by taking advantage of more powerful neural network architectures with innovative feature-aggregating modules, or perhaps a higher parameter and layer count. Another direction would be to explore the effects of transfer learning, whereby the performance prediction system is pre-trained on a larger proxy dataset to boost performance after a subsequent, carefully designed fine-tuning process on the benchmark dataset. The use of synthetic data along with domain adaptation techniques could also lead to better performance and possibly steer the abilities of the model towards the desired outcome, considering real-world data distributions. Procedurally generating such synthetic data in a meaningful manner that benefits the training of the neural network, perhaps in an end-to-end fashion, is another interesting area for future work.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what’s in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\texttt{acmsmall}}: The default journal template style.
\item {\texttt{acmlarge}}: Used by JOCCH and TAP.
\item {\texttt{acmtog}}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\texttt{acmconf}}: The default proceedings template style.
\item{\texttt{sigchi}}: Used for SIGCHI conference articles.
\item{\texttt{sigchi-a}}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\texttt{sigplan}}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\texttt{anonymous,review}}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \texttt{\acmSubmissionID} command to print the
submission's unique ID on each page of the work.
\item{\texttt{authorversion}}: Produces a version of the work suitable
for posting by the author.
\item{\texttt{screen}}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf, language=french,
language=german, language=spanish, language=english]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. As an exception, multiple authors may share one
affiliation. Authors' names should not be abbreviated; use full first
names wherever possible. Include authors' e-mail addresses whenever
possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what’s in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.13647",
"language": "en",
"timestamp": "2023-02-28T02:26:24",
"url": "https://arxiv.org/abs/2302.13647",
"yymm": "2302"
} | \section{Introduction}\label{sec: intro}
Introduced in the data compression field by Kempa and Prezza~\cite{Kampa-Prezza}, the concept of \emph{string attractor} can be described as follows: it is a set of positions within a finite word that captures all of its distinct factors.
String attractors also have applications in combinatorial pattern matching~\cite{ChristiansenEKN21}, but the problem of finding a smallest string attractor is NP-hard~\cite{Kampa-Prezza}.
However, it appears that combinatorial properties of words yield new strategies to find a string attractor of minimum size.
Consequently, string attractors have made their entry in combinatorics on words as a systematic topic of research.
Indeed, they have been studied for prefixes of the ubiquitous Thue--Morse word~\cite{TM-sa,Schaeffer-Shallit-2020} and of the period-doubling word~\cite{Schaeffer-Shallit-2020}, while Sturmian words can be characterized through the structure of their smallest string attractors~\cite{saandcow,SA-LATIN22}.
The story of the current work began during the international conference DLT 2022, where the three authors had the chance to meet for the first time and where they talked about the concept of string attractors.
Romana's expertise led us to consider them for prefixes of generalizations of Fibonacci words to larger alphabets (on $k$ letters, the corresponding word is called the \emph{$k$-bonacci word}), a natural extension of Sturmian words.
It turned out that the string attractors that we obtained rely on the well-known $k$-bonacci numbers~\cite{GRRSS23}.
Simultaneously, Dvo\v{r}áková studied string attractors of factors of episturmian words~\cite{Dvorakova2022}, which covers the case of all $k$-bonacci words. However, her description is less explicit.
Moreover, the fact that minimal string attractors of prefixes of the $k$-bonacci word can be described using the $k$-bonacci numbers tipped us off about the probable link between string attractors and numeration systems, and led us to believe that this bond can be adapted to other morphic sequences. More specifically, we have the following general question:
\smallskip
\noindent\textbf{Question.} Given a morphic sequence $\infw{z}$, does there exist a numeration system $\mathcal{S}$ such that $\infw{z}$ is $\mathcal{S}$-automatic and (minimal) string attractors of the prefixes of $\infw{z}$ are easily described using $\mathcal{S}$?
\smallskip
In this paper, as a first step towards answering this question, we study a particular family of morphic words. More precisely, given parameters in the shape of a length-$k$ word $c=c_0 \cdots c_{k-1} \in \mathbb{N}^k$, we define the morphism $\mu_c$ such that $\mu_c(i) = 0^{c_i} \cdot (i+1)$ for all $0 \leq i \leq k-2$ and $\mu_c(k-1) = 0^{c_{k-1}}$. When it exists, we then look at the fixed point of this morphism.
This family was not randomly chosen. First, it generalizes the $k$-bonacci morphisms but the fixed points are not necessarily episturmian.
In addition, some of these morphisms have already been studied in relation to numeration systems, in~\cite{Fabre} for example. Indeed, if $c$ is some $\beta$-representation of $1$ for a simple Parry number $\beta$, using the terminology of~\cite{CCSDLT22}, we can canonically associate a numeration system that is greedy and, in this case, corresponds to the sequence $(|\mu^n_c(0)|)_{n \in \mathbb{N}}$ of lengths of iterations of $\mu_c$ on $0$~\cite{Bertrand89}.
Under some conditions on the parameters, we show that the prefixes of the fixed point admit string attractors of size at most $k+1$ described using the associated numeration system.
This paper is organized as follows.
In Section~\ref{sec: prelim}, we recall some background on combinatorics on words. We also introduce the infinite words that we will study and give some of their basic properties.
Section~\ref{sec: ANS} introduces numeration systems and explains how to associate one with a morphic word. After that, we give conditions on the parameters $c_0, \ldots, c_{k-1}$ for the numeration system to have ``desirable'' properties.
Finally, in Section~\ref{sec: SA}, we look at string attractors and prove the main result of this paper, namely the description of string attractors of each prefix of the studied infinite words using the associated numeration system.
We present concluding remarks and future work in Section~\ref{sec: conclusion}.
\section{Preliminaries}\label{sec: prelim}
\subsection{Words}
We start with the bare minimum on words and introduce some notations.
Let $A$ be an alphabet either finite or infinite (for instance, we will consider words over the set of non-negative integers $\mathbb{N}$).
The length of a word is its number of letters and will be denoted with vertical bars $|\cdot|$.
We let $\varepsilon$ denote the empty word.
We let $A^*$ denote the set of finite words over $A$.
For any integer $n\ge 0$, we let $A^n$ be the set of length-$n$ words over $A$.
If $w = xyz$ for some $x,y,z \in A^*$, then $x$ is a \emph{prefix}, $y$ is a \emph{factor}, and $z$ is a \emph{suffix} of $w$.
A factor of a word is \emph{proper} if it is not equal to the word itself.
A word $v$ is a \emph{fractional power} of a non-empty word $w$ if there exist $\ell \in \mathbb{N}$ and a prefix $x$ of $w$ such that $v = w^\ell x$. We will then write $v = w^{ |v|/|w|}$.
Infinite words are written in bold and we start indexing them at $0$.
We use classical notations of intervals to denote portions of words.
For a non-empty word $u\in A^*$, we let $u^\omega$ denote the concatenation of infinitely many copies of $u$, that is, $u^\omega= u u u \cdots$.
Let $\leq$ be a total order on $A$. The \emph{lexicographic order} on $A^*$ induced by $\leq$ is defined as follows: for $x,y\in A^*$, we say that $x$ is \emph{lexicographically smaller than} $y$, and we write $x < y$, if either $x$ is a proper prefix of $y$, or $x=zax'$ and $y=zby'$ for some letters $a,b$ with $a < b$.
We write $x\le y$ if $x$ is lexicographically smaller than or equal to $y$.
The \emph{genealogical order}, also known as \emph{radix order}, on $A^*$ induced by $\leq$ is defined as follows: for $x,y\in A^*$, we say that $x$ is \emph{genealogically smaller than} $y$, and we write $x<_{\text{gen}} y$, if either $|x|<|y|$, or $|x|=|y|$ and $x=zax'$ and $y=zby'$ for some letters $a,b$ with $a< b$.
We write again $x\le_{\text{gen}} y$ if $x$ is genealogically smaller than or equal to $y$.
A non-empty word $w\in A^*$ is \emph{primitive} if $w=u^n$ for $u\in A^*\setminus\{\varepsilon\}$ implies $n=1$.
Two words are \emph{conjugates} if they are cyclic permutations of each other.
A word is \emph{Lyndon} if it is primitive and lexicographically minimal among its conjugates for some given order.
Defined in the 50's, Lyndon words are not only classical in combinatorics on words but also of utmost importance.
See~\cite{Lothaire97} for a presentation.
A celebrated result in combinatorics on words is that Lyndon words form a so-called \emph{complete factorization of the free monoid}.
\begin{theorem}[Chen-Fox-Lyndon~\cite{CFL58}]\label{thm:CFL-fact}
For every non-empty word $w\in A^*$, there exists a unique factorization $(\ell_1,\ldots, \ell_n)$ of $w$ into Lyndon words over $A$ such that $\ell_1 \ge \ell_2 \ge \cdots \ge \ell_n$.
\end{theorem}
Several variations of Lyndon words have been considered lately: generalized Lyndon~\cite{Reut}, anti-Lyndon~\cite{Gewurza-Merola-2012}, inverse Lyndon~\cite{BdFZZ2018}, and Nyldon~\cite{Nyldon}.
In this text, we will use the second.
\begin{definition}
Let $(A,\leq)$ be a totally ordered alphabet.
We let $\leq_-$ denote the \emph{inverse order} on $A$, i.e., $b <_- a$ if and only if $a < b$ for all $a,b\in A$.
We also let $\invlex{\leq}$ denote the \emph{inverse lexicographic order} which is the lexicographic order induced by $\leq_-$.
A word is \emph{anti-Lyndon} if it is Lyndon with respect to the inverse lexicographic order.
\end{definition}
Otherwise stated, a word is anti-Lyndon if it is primitive and lexicographically maximal among its conjugates.
\begin{example}
\label{Ex: anti-Lyndon}
Let $A=\{0,1\}$ with $0<1$, so $1\invlex{<}0$. The first few anti-Lyndon words, ordered by length, are $1$, $0$, $10$, $110$, $100$, $1110$, $1100$, and $1000$.
\end{example}
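The anti-Lyndon property is easy to test by brute force. The following Python sketch is purely illustrative (the function name is ad hoc and it plays no role in the formal development); primitivity is tested via the pairwise distinctness of the cyclic rotations:
\begin{verbatim}
from itertools import product

def is_anti_lyndon(w):
    # anti-Lyndon <=> primitive and lexicographically maximal
    # among its conjugates (standard order on the letters)
    conjugates = [w[i:] + w[:i] for i in range(len(w))]
    return len(set(conjugates)) == len(w) and w == max(conjugates)

binary = ["".join(p) for n in range(1, 5)
          for p in product("01", repeat=n)]
print([w for w in binary if is_anti_lyndon(w)])
# same set as in the example: 0, 1, 10, 100, 110, 1000, 1100, 1110
\end{verbatim}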
\subsection{Morphisms and fixed points of interest}\label{sec: morphism and fp}
A \emph{morphism} is a map $f \colon A^* \to B^*$, where $A,B$ are alphabets, such that $f(xy)=f(x)f(y)$ for all $x,y\in A^*$.
The morphism $f$ is \emph{prolongable} on the letter $a\in A$ if $f(a)=ax$ for some $x\in A^*$ and $f^n(x)\neq \varepsilon$ for all $n\ge 0$.
In this section, we consider a specific family of morphisms defined as follows.
Note that they appear under the name \emph{generic $k$-bonacci} morphisms in~\cite[Example 2.11]{Rigo2014-vol2}.
\begin{definition}\label{def:mu-c}
Let $k \geq 2$ be an integer and let $c_0, \ldots, c_{k-1} \in \mathbb{N}$ be $k$ parameters often summarized in the shape of a word $c = c_0 \cdots c_{k-1} \in \mathbb{N}^k$.
The morphism $\mu_c : \{0, \ldots, k-1\}^* \to \{0, \ldots, k-1\}^*$ is given by $ \mu_c(i) = 0^{c_i} \cdot (i+1)$ for all $i\in\{0,\ldots,k-2\}$ and $\mu_c(k-1) = 0^{c_{k-1}}$.
For all $n\ge 0$, we then define $\word{c, n} = \mu_c^n(0)$ and $\length{c, n} = |\word{c, n}|$.
\end{definition}
When the context is clear, we will usually omit the subscript $c$ in Definition~\ref{def:mu-c}.
\begin{example}
\label{ex:c102 - words and lengths}
When $c=1^k$, we recover the $k$-bonacci morphism and words.
For $k=3$ and $c = 102$, the first few iterations of the corresponding morphism $\mu_c \colon 0 \mapsto 01, 1\mapsto 2, 2 \mapsto 00$ are given in Table~\ref{tab:c102 - words and lengths}.
A specific factorization of the words $(\word{c, n})_{n \geq 0}$ is also highlighted in Table~\ref{tab:c102 - words and lengths}.
\begin{table}
\caption{Construction of the sequences $(\word{n})_{n \geq 0}$ and $(\length{n})_{n \geq 0}$ for $c = 102$.}
\label{tab:c102 - words and lengths}
\[
\arraycolsep=4pt
\begin{array}{c||c|c|c|c|c|c}
n & 0 & 1 & 2 & 3 & 4 & 5\\[1pt]
\hline
\rule{0pt}{.8\normalbaselineskip}
\word{n} & 0 & 01 & 012 & 01200 & 012000101 & 012000101012012 \\
\text{fact. of } \word{n} & 0 & \word{0}^1 \cdot 1 & \word{1}^1 \word{0}^0 \cdot 2 & \word{2}^1 \word{1}^0 \word{0}^2 & \word{3}^1 \word{2}^0 \word{1}^2 & \word{4}^1 \word{3}^0 \word{2}^2 \\
\length{n} & 1 & 2 & 3 & 5 & 9 & 15
\end{array}
\]
\end{table}
\end{example}
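Both the words and their lengths are easy to compute in practice. The following Python sketch (purely illustrative; the function names are ours) iterates $\mu_c$ and reproduces Table~\ref{tab:c102 - words and lengths}:
\begin{verbatim}
def mu_c(c, word):
    # mu_c(i) = 0^{c_i} (i+1) for i < k-1 and mu_c(k-1) = 0^{c_{k-1}};
    # letters are the integers 0, ..., k-1
    k = len(c)
    image = []
    for letter in word:
        image.extend([0] * c[letter])
        if letter < k - 1:
            image.append(letter + 1)
    return image

def u_n(c, n):
    # u_{c,n} = mu_c^n(0)
    word = [0]
    for _ in range(n):
        word = mu_c(c, word)
    return word

for n in range(6):                         # c = 102 as in the example
    w = u_n([1, 0, 2], n)
    print(n, "".join(map(str, w)), len(w))
\end{verbatim}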
The factorization presented in the previous example can be stated in general.
It gives a recursive definition of the words $(\word{c, n})_{n\ge 0}$ and can be proven using a simple induction.
\begin{proposition}
\label{P:alternative definition of the words}
For all $c = c_0 \cdots c_{k-1} \in \mathbb{N}^k$ and all $n\ge 0$, we have
\[
\word{n} =
\begin{cases}
\left(\displaystyle\prod_{i = 0}^{n-1} \word{n-i-1}^{c_i}\right) \cdot n, & \text{if } n \leq k-1;\\
\displaystyle\prod_{i = 0}^{k-1} \word{n-i-1}^{c_i}, & \text{if } n \geq k.
\end{cases}
\]
\end{proposition}
As a consequence of Proposition~\ref{P:alternative definition of the words}, the sequence $(\length{n})_{n \in \mathbb{N}}$ satisfies the following recurrence relation: if $0\le n \leq k-1$, then
$\length{n} = 1 + \sum_{i = 0}^{n-1} c_i \length{n-i-1}$, and if $n \geq k$, then $\length{n} = \sum_{i = 0}^{k-1} c_i \length{n-i-1}$.
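This recurrence can be checked experimentally against the lengths of the words themselves; a small Python sketch (illustrative only), reusing the function \texttt{u\_n} from the previous listing:
\begin{verbatim}
def lengths(c, n_max):
    # U_{c,0}, ..., U_{c,n_max} via the recurrence above
    k = len(c)
    U = []
    for n in range(n_max + 1):
        s = sum(c[i] * U[n - 1 - i] for i in range(min(n, k)))
        U.append(s + 1 if n <= k - 1 else s)
    return U

assert lengths([1, 0, 2], 5) == [1, 2, 3, 5, 9, 15]   # Table 1
assert all(len(u_n([1, 0, 2], n)) == L
           for n, L in enumerate(lengths([1, 0, 2], 5)))
\end{verbatim}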
In the rest of the paper, we will assume the following working hypothesis \eqref{eq:WH} on $c$:
\begin{equation}\tag{WH}
\label{eq:WH}
c = c_0 \cdots c_{k-1} \in \mathbb{N}^k \text{ with } c_0, c_{k-1} \geq 1.
\end{equation}
The condition $c_{k-1} \geq 1$ ensures both that the recurrence relation is of order $k$ and that the morphism $\mu_c$ is non-erasing, which is a classical assumption in combinatorics on words.
Moreover, the condition $c_0 \geq 1$ guarantees that $\mu_c$ is prolongable. Under~\eqref{eq:WH}, the morphism $\mu_c$ has an infinite fixed point starting with 0 denoted $\infw{u} := \lim_{n \to \infty} \word{n}$.
We make the following combinatorial observation.
\begin{remark}
\label{R:extensions of letters}
Under~\eqref{eq:WH}, using Proposition~\ref{P:alternative definition of the words}, a simple induction shows that the letter $1 \leq i \leq k-1$ can only be followed by $0$ and/or $i+1$ (and only $0$ in the case $i=k-1$) in $\infw{u}$.
\end{remark}
\section{Fun with numeration systems}\label{sec: ANS}
In this section, specific definitions will be recalled.
For the reader unfamiliar with the theory of numeration systems, we refer to~\cite[Chapter 2]{CANT2010} for an introduction and some advanced concepts.
A \emph{numeration system} (for natural numbers) can be defined as a triple $\mathcal{S} = (A, \rep_\mathcal{S}, L)$, where $A$ is an alphabet and $\rep_\mathcal{S} : \mathbb{N} \to A^*$ is an injective function such that $L = \rep_\mathcal{S}(\mathbb{N})$.
The map $\rep_{\mathcal{S}}$ is called the \emph{representation function} and $L$ is the \emph{numeration language}.
If $\rep_\mathcal{S}(n) = w$ for some integer $n\in\mathbb{N}$ and some word $w\in A^*$, we say that $w$ is the \emph{representation (in $\mathcal{S}$)} of $n$ and we define the \emph{valuation (in $\mathcal{S}$)} of $w$ by $\val_\mathcal{S}(w) = n$.
Note that, when the context is clear, we omit the subscript $\mathcal{S}$ in $\rep$ and $\val$.
Any given prolongable morphism naturally gives rise to a numeration system that we will call the \emph{associated Dumont-Thomas numeration system}~\cite{Dumont-Thomas-1989}. Such systems are based on particular factorizations of the prefixes of the fixed point.
We only give here the definition in the particular case of the morphisms studied in this paper but the interested reader can find the general case in the original paper~\cite{Dumont-Thomas-1989}.
\begin{proposition}[Dumont-Thomas~\cite{Dumont-Thomas-1989}]
\label{P:Dumont-Thomas}
Let $c$ satisfy~\eqref{eq:WH}.
For all integers $n \geq 1$, there exist unique integers $N, \ell_0, \ldots, \ell_N\in \mathbb{N}$ such that $\ell_0 \geq 1$,
$\infw{u}[0,n) = \word{N}^{\ell_0} \cdots \word{0}^{\ell_N}$, and this factorization verifies the following: $\word{N+1}$ is not a prefix of $\infw{u}[0,n)$ and, for all $0 \leq i \leq N$, $\word{N}^{\ell_0} \cdots \word{N-i+1}^{\ell_{i-1}} \word{N-i}^{\ell_i + 1}$ is not a prefix of $\infw{u}[0,n)$.
\end{proposition}
Recall that a numeration system based on a suitable sequence of integers $(U_n)_{n\ge 0}$ is called \emph{greedy} when, at each step of the decomposition of any integer, the largest possible term of the sequence $(U_n)_{n\ge 0}$ is chosen; formally, we use the Euclidean algorithm in a greedy way.
As the conditions on the factorization in the previous proposition resemble those of greedy representations in numeration systems, we will refer to it as being \emph{word-greedy}.
For a given $c$ satisfying~\eqref{eq:WH}, we then let $\mathcal{S}_c$ denote the numeration system associated with the representation function $\rep_{\mathcal{S}_c} : \mathbb{N} \to \mathbb{N}^*$ mapping $n$ to $\rep_{\mathcal{S}_c}(n) = \ell_0 \cdots \ell_N$,
where the integers $\ell_0, \ldots, \ell_N$ verify the conditions of Proposition~\ref{P:Dumont-Thomas} for $n$. By convention, we set $\rep_{\mathcal{S}_c}(0) = \varepsilon$.
\begin{example}
\label{Ex:c102 - numeration system}
Using Example~\ref{ex:c102 - words and lengths} for $c = 102$, the representations of the first few integers are given in Table~\ref{tab:c102 - representations}.
The word-greedy factorization of each prefix is highlighted in the second row, leading to the representation of the corresponding integer in the third row.
\begin{table}
\caption{Illustration of the numeration system $\mathcal{S}_c$ for $c = 102$.}
\label{tab:c102 - representations}
\[
\arraycolsep=4pt
\begin{array}{c||c|c|c|c|c|c|c|c|c}
n & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\[1pt]
\hline
\rule{0pt}{.8\normalbaselineskip}
\infw{u}[0,n) & \varepsilon & 0 & 01 & 012 & 012 \cdot 0 & 01200 & 01200 \cdot 0 & 01200 \cdot 01 & 01200 \cdot 01 \cdot 0\\
\rep_{\mathcal{S}_c}(n) & \varepsilon & 1 & 10 & 100 & 101 & 1000 & 1001 & 1010 & 1011
\end{array}
\]
\end{table}
\end{example}
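Computationally, $\rep_{\mathcal{S}_c}(n)$ can be obtained by stripping maximal powers of $\word{N}, \word{N-1}, \ldots, \word{0}$ from the front of $\infw{u}[0,n)$. The following Python sketch does so (purely illustrative; it reuses \texttt{u\_n} from above and assumes the parameter \texttt{depth} is large enough):
\begin{verbatim}
def rep(c, n, depth=20):
    # word-greedy Dumont-Thomas representation of n in S_c
    if n == 0:
        return ""
    words = [u_n(c, m) for m in range(depth)]
    prefix = u_n(c, depth)[:n]                 # u[0, n)
    N = max(m for m in range(depth) if len(words[m]) <= n)
    digits, pos = [], 0
    for m in range(N, -1, -1):                 # strip u_N, ..., u_0
        count, w = 0, words[m]
        while prefix[pos:pos + len(w)] == w:
            pos += len(w)
            count += 1
        digits.append(count)
    assert pos == n                            # the factorization is exact
    # digits are concatenated as characters (fine for small c_i)
    return "".join(map(str, digits))

assert [rep([1, 0, 2], n) for n in range(9)] == \
       ["", "1", "10", "100", "101", "1000", "1001", "1010", "1011"]
\end{verbatim}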
\begin{remark}
If $\rep_{\mathcal{S}_c}(n)= \ell_0 \cdots \ell_N$, then $n
= |\word{c,N}^{\ell_0} \cdots \word{c,0}^{\ell_N}|
= \sum_{i = 0}^N \ell_i \length{c,N-i}. $
In other words, $\val_{\mathcal{S}_c}$ is given by the usual valuation function associated with the sequence $(\length{c,n})_{n \in \mathbb{N}}$. Such a system is sometimes called a \emph{positional} numeration system. Note that this is not necessarily the case for the Dumont-Thomas numeration system associated with some other morphism.
\end{remark}
The Dumont-Thomas numeration systems are a particular case of abstract numeration systems introduced in~\cite{Lecomte-Rigo-2001}. A numeration system $\mathcal{S} = (A, \rep, L)$ is said to be \emph{abstract} if $L$ is regular and $\rep(n)$ is the $(n+1)$st word of $L$ in the genealogical order.
We have the following result.
\begin{theorem}[{Rigo~\cite[Section 2.2]{Rigo2014-vol2}}]
\label{T:automata construction of numeration systems}
Let $\sigma : \{\alpha_0, \ldots, \alpha_d\}^* \to \{\alpha_0, \ldots, \alpha_d\}^*$ be a morphism prolongable on the letter $\alpha_0$.
We define the automaton $\mathcal{A}_\sigma$ for which $\{\alpha_0, \ldots, \alpha_d\}$ is the set of states, $\alpha_0$ is the initial state, every state is final, and the (partial) transition function $\delta$ is such that, for each $\alpha \in \{\alpha_0, \ldots, \alpha_d\}$ and $0 \leq i \leq |\sigma(\alpha)| - 1$, $\delta(\alpha, i)$ is the $(i+1)$st letter of $\sigma(\alpha)$. If $\mathcal{S} = (A, \rep, L)$ is the Dumont-Thomas numeration system associated with $\sigma$, then $L = L(\mathcal{A}_\sigma) \setminus 0\mathbb{N}^*$ and $\rep(n)$ is the $(n+1)$st word of $L$ in the genealogical order.
\end{theorem}
\begin{example}
\label{Ex:c102 - automaton}
For $c = 102$, the automaton $\mathcal{A}_{\mu_c}$ of Theorem~\ref{T:automata construction of numeration systems} is depicted in Figure~\ref{fig:c102 - automaton} (details are left to the reader). The first few accepted words (not starting with $0$) are, in genealogical order, $\varepsilon$, $1$, $10$, $100$, $101$, $1000$, $1001$, $1010$, and $1011$, which indeed agree with the representations of the first few integers in Example~\ref{Ex:c102 - numeration system}.
\end{example}
\begin{figure}[h!tbp]
\begin{center}
\begin{tikzpicture}[scale=.8,every node/.style={circle,minimum width=.8cm,inner sep=1.5pt}]
\node (0) at (0,0) [draw,circle] {$0$};
\node (1) at (3,0) [draw,circle] {$1$};
\node (2) at (6,0) [draw,circle] {$2$};
\draw [->] (0) edge[out=110,in=70,looseness=8] node[above=-.2] {$0$} (0);
\draw [->] (0) edge node[above=-.2,midway] {$1$} (1);
\draw [->] (1) edge node[above=-.2,midway] {0} (2);
\draw [->] (2) edge[out=225,in=-45] node[below=-.1] {0, 1} (0);
\draw [->] (-1,0) -- (0);
\end{tikzpicture}
\caption{The automaton $\mathcal{A}_{\mu_c}$ for $c = 102$.}
\label{fig:c102 - automaton}
\end{center}
\end{figure}
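The automaton and the genealogical enumeration of its language are straightforward to implement; a Python sketch (purely illustrative, with ad hoc names), where \texttt{delta} encodes the transitions of Theorem~\ref{T:automata construction of numeration systems}:
\begin{verbatim}
from itertools import product

def delta(c, state, d):
    # the (d+1)-st letter of mu_c(state), if it exists
    if d < c[state]:
        return 0                       # inside the 0^{c_state} block
    if state < len(c) - 1 and d == c[state]:
        return state + 1               # the final letter of the image
    return None                        # no such transition

def numeration_language(c, max_len):
    # L(A_{mu_c}) \ 0 N*, in genealogical order;
    # digits never exceed max(c), so range(max(c) + 1) suffices
    out = [""]                         # epsilon, i.e., rep(0)
    for n in range(1, max_len + 1):
        for w in product(range(max(c) + 1), repeat=n):
            state = 0 if w[0] != 0 else None
            for d in w:
                if state is None:
                    break
                state = delta(c, state, d)
            if state is not None:
                out.append("".join(map(str, w)))
    return out

# numeration_language([1, 0, 2], 4)[:9] recovers
# '', '1', '10', '100', '101', '1000', '1001', '1010', '1011'
\end{verbatim}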
As the automaton in Theorem~\ref{T:automata construction of numeration systems} can be used to produce, for all $n\ge 0$, the letter $\infw{u}_n$ when reading $\rep_{\mathcal{S}_c}(n)$ by~\cite[Theorem 2.24]{Rigo2014-vol2}, we have the following.
\begin{corollary}
Let $c$ satisfy~\eqref{eq:WH}.
Then the sequence $\infw{u}$ is $\mathcal{S}_c$-automatic.
\end{corollary}
Similarly to what is usually done in real base numeration systems, we will let $\infw{d}^\star$ denote the periodization of $c$, that is, $\infw{d}^\star=(c_0 \cdots c_{k-2} (c_{k-1}-1))^\omega$.
Using Theorem~\ref{T:automata construction of numeration systems}, we deduce the next result.
\begin{lemma}
\label{L:decomposition of T_n+1 - 1}
Under~\eqref{eq:WH}, for all $n \geq 0$, we have $\rep_{\mathcal{S}_c}(\length{n}) = 10^n$, the numbers having a representation of length $n+1$ are those in $[\length{n}, \length{n+1})$, and
$\rep_{\mathcal{S}_c}(\length{n+1}-1) = \infw{d}^\star[0,n]$.
In particular, $\length{n+1}-1 = \sum_{i = 0}^n \infw{d}^\star_i \length{n-i}$.
\end{lemma}
\begin{proof}
The first claim follows directly from the definition of $\mathcal{S}_c$, and the second one from the genealogical order.
The number $\length{n+1}-1$ is then represented by the maximal length-$(n+1)$ word accepted by the automaton $\mathcal{A}_{\mu_c}$, which is the length-$(n+1)$ prefix of $\infw{d}^\star$.
\end{proof}
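Both identities are easy to check experimentally. For instance, for $c = 102$, where $\infw{d}^\star = (101)^\omega$, and reusing the sketches \texttt{lengths} and \texttt{rep} from above:
\begin{verbatim}
c = [1, 0, 2]
U = lengths(c, 8)
assert all(rep(c, U[n]) == "1" + "0" * n for n in range(6))
d_star = "101" * 3                     # (c_0 c_1 (c_2 - 1))^omega
assert all(rep(c, U[n + 1] - 1) == d_star[:n + 1] for n in range(5))
\end{verbatim}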
Note that, if the numeration system $\mathcal{S}_c$ satisfies the greedy condition, this result follows from the characterization of numeration systems in terms of dynamical systems given by Bertrand-Mathis~\cite{Bertrand89,CCSDLT22}.
However, even though the function $\rep_{\mathcal{S}_c}$ is obtained using the word-greedy factorization of prefixes of $\infw{u}$, the numeration system $\mathcal{S}_c$ is not necessarily greedy as the following example shows.
\begin{example}
In Example~\ref{ex:c102 - words and lengths} for $c = 102$, we see that $\infw{u}[0,14) = 012000101 \cdot 012 \cdot 01$, so $\rep_{\mathcal{S}_c}(14) = 10110$, while the greedy representation of $14$ associated with the sequence $(\length{n})_{n \in \mathbb{N}}$ is $11000$.
\end{example}
In fact, we have the following two characterizations.
\begin{lemma}
\label{L:greedyIFFlanguage}
Let $c$ satisfy~\eqref{eq:WH}. The numeration system $\mathcal{S}_c = (A, \rep_{\mathcal{S}_c}, L)$ is greedy if and only if,
for all $v \in L$ and for all $i \leq |v|$, the suffix of length $i$ of $v$ is smaller than or equal to $\infw{d}^\star[0,i)$.
Moreover, we then have
\[
L = \{v=v_1 \cdots v_n \in \mathbb{N}^* \setminus 0 \mathbb{N}^* \mid \forall \, 1 \leq i \leq n, v_{n-i+1} \cdots v_n \leq \infw{d}^\star[0,i)\}.
\]
\end{lemma}
\begin{proof}
Let $\mathcal{S} = (A', \rep_\mathcal{S}, L')$ denote the canonical greedy numeration system associated with the sequence $(\length{n})_{n \in \mathbb{N}}$. In particular, by uniqueness, $\mathcal{S}_c$ is greedy if and only if $\mathcal{S}_c = \mathcal{S}$.
As $\mathcal{S}_c$ is an abstract numeration system, $\rep_{\mathcal{S}_c}$ respects the genealogical order, i.e., $n \leq m$ if and only if $\rep_{\mathcal{S}_c}(n) \leq_{\text{gen}} \rep_{\mathcal{S}_c}(m)$. So does $\rep_{\mathcal{S}}$ by~\cite[Proposition 2.3.45]{CANT2010}. Hence, $\mathcal{S}_c = \mathcal{S}$ if and only if $L = L'$.
Moreover, for all $n \ge 0$, $\rep_{\mathcal{S}}(\length{n}) = 10^n$, so $L$ and $L'$ contain the same number of length-$n$ words by Lemma~\ref{L:decomposition of T_n+1 - 1}. Thus $L = L'$ if and only if $L \subseteq L'$.
The statement holds since, by~\cite[Lemma 5.3]{Hollander} and by Lemma~\ref{L:decomposition of T_n+1 - 1}, we have
\[
L' = \{v = v_1 \cdots v_n \in \mathbb{N}^* \setminus 0 \mathbb{N}^* \mid \forall \, 1 \leq i \leq n, v_{n-i+1} \cdots v_n \leq \infw{d}^\star[0,i)\}.
\]
\end{proof}
\begin{theorem}
\label{T:greedy condition}
Let $c = c_0\cdots c_{k-1} \in \mathbb{N}^k$ with $c_0, c_{k-1} \geq 1$.
The numeration system $\mathcal{S}_c$ is greedy if and only if $c_0 \cdots c_{k-2}(c_{k-1} - 1)$ is lexicographically maximal among its conjugates.
\end{theorem}
\begin{proof}
Using Lemma~\ref{L:greedyIFFlanguage} and Theorem~\ref{T:automata construction of numeration systems}, $\mathcal{S}_c$ is greedy if and only if, for all $n \in \mathbb{N}$ and for all $0 \leq i \leq k-1$, any path $\ell_0 \cdots \ell_n$ starting in state $i$ in the automaton $\mathcal{A}_{\mu_c}$ is such that $\ell_0 \cdots \ell_n \leq \infw{d}^\star[0,n]$.
However, by definition of $\mathcal{A}_{\mu_c}$, the lexicographically largest path of length $n$ starting in state $i$ is given by the prefix of length $n$ of $\left(c_i \cdots c_{k-2} (c_{k-1} - 1) c_0 \cdots c_{i-1}\right)^\omega$.
We can therefore conclude that $\mathcal{S}_c$ is greedy if and only if $c_i \cdots c_{k-2} (c_{k-1} - 1) c_0 \cdots c_{i-1} \leq c_0 \cdots c_{k-2}(c_{k-1} - 1)$ for all $0 \leq i \leq k-1$, i.e., $c_0 \cdots c_{k-2}(c_{k-1} - 1)$ is maximal among its conjugates.
\end{proof}
Observe that the condition of the previous result is equivalent to the fact that $c_0 \cdots c_{k-2}(c_{k-1} - 1) = v^\ell$ for some anti-Lyndon word $v$ (in fact, $v$ is the primitive root).
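This criterion is immediate to test; a Python sketch (purely illustrative), checked against the non-greedy case $c = 102$ seen above and the case $c = 1011$ of the next example:
\begin{verbatim}
def is_greedy_system(c):
    # S_c is greedy iff c_0 ... c_{k-2} (c_{k-1} - 1) is
    # lexicographically maximal among its conjugates
    d = list(c[:-1]) + [c[-1] - 1]
    return all(d[i:] + d[:i] <= d for i in range(len(d)))

assert not is_greedy_system([1, 0, 2])   # 110 > 101
assert is_greedy_system([1, 0, 1, 1])    # 1010 = (10)^2, 10 anti-Lyndon
\end{verbatim}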
\begin{example}
\label{Ex: two NS for Fibonacci}
Let $k=4$ and $c=1011$.
In this case, $c_0 c_1 c_2(c_3 - 1) = 1010 = v^2$ with $v=10$, which is anti-Lyndon (see Example~\ref{Ex: anti-Lyndon}). The sequence $(\length{n})_{n \in \mathbb{N}}$ satisfies the recurrence relation $\length{n+4} = \length{n+3} + \length{n+1} + \length{n}$ with initial conditions $\length{0}=1$, $\length{1}=2$, $\length{2}=3$, and $\length{3}=5$.
A simple induction shows that $(\length{n})_{n \in \mathbb{N}}$ is in fact the sequence of Fibonacci numbers.
Therefore the numeration system $\mathcal{S}_c$ corresponds to the classical Fibonacci numeration system, which can also be obtained with the parameter $c=11$.
\end{example}
The observation made in the previous example is more general.
\begin{remark}
Let $c$ satisfy~\eqref{eq:WH}.
If $c_0 \cdots c_{k-2}(c_{k-1} - 1) = v^\ell$ with $v$ anti-Lyndon, we define the word $v' := v_1 \cdots v_{|v| - 1} (v_{|v|} + 1)$ (simply put, we add $1$ to the last letter of $v$).
Then $c = v^{\ell-1}v'$ is a ``partial'' cyclization of $v'$.
In particular, since $\infw{d}^\star_c = \infw{d}^\star_{v'}$ (where the dependence of $\infw{d}^\star$ on the chosen parameters is emphasized via a subscript), the numeration systems $\mathcal{S}_c$ and $\mathcal{S}_{v'}$ coincide by Lemma~\ref{L:decomposition of T_n+1 - 1}.
For the reader familiar with the general theory of numerations, $v'$ satisfies $v'_i \cdots v'_{|v|} < v'$ for all indices $i\in\{2,\ldots,|v|\}$. This implies that $v'$ is the $\beta$-expansion $d_\beta(1)$ of $1$ for a simple Parry number $\beta$~\cite{Parry}. Therefore, $c$ is also a representation of $1$ in base $\beta$.
\end{remark}
\begin{example}
We illustrate the previous remark by resuming Example~\ref{Ex: two NS for Fibonacci}.
We have $v=10$ and $v'=11$.
The corresponding simple Parry number is the Golden ratio $\varphi$.
Observe that indeed $c=vv'=1011$ is a representation of $1$ in base $\varphi$.
\end{example}
\section{Link to string attractors}\label{sec: SA}
Using the results and concepts of the previous sections, we now turn to the concept of string attractors in relation to the fixed points of the morphisms $\mu_c$, $c \in \mathbb{N}^k$.
A \emph{string attractor} of a finite word $y = y_1 \cdots y_n$ is a set $\Gamma \subseteq \{1, \ldots, n\}$ such that every factor of $y$ has an occurrence crossing a position in $\Gamma$, i.e., for every factor $x \in A^m$ of $y$, there exist $i \in \Gamma$ and $j$ such that $i \in \{j, \ldots, j + m - 1\}$ and $x = y_j \cdots y_i \cdots y_{j + m - 1}$.
\begin{example}
The set $\{2,3,4\}$ is a string attractor of the word $0\,\underline{1}\,\underline{2}\,\underline{0}\,0\,1$. Indeed, it suffices to check that the factors $0$, $1$ and $01$ have an occurrence crossing one of the underlined positions. No smaller string attractor exists since at least one position in the set is needed per different letter in the word.
\end{example}
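This verification can be automated for small words. The following Python sketch (ours, not from the literature) checks the string attractor property by brute force, with positions $1$-indexed as in the definition above:
\begin{verbatim}
def is_string_attractor(y, gamma):
    # Brute-force check: every factor of y must have an occurrence
    # crossing a position of gamma (positions are 1-indexed, so the
    # occurrence y[j:j+m] covers positions j+1, ..., j+m).
    n = len(y)
    for m in range(1, n + 1):
        for x in {y[i:i + m] for i in range(n - m + 1)}:
            if not any(y[j:j + m] == x
                       and any(j + 1 <= p <= j + m for p in gamma)
                       for j in range(n - m + 1)):
                return False
    return True

# The example above: {2, 3, 4} is a string attractor of 012001.
assert is_string_attractor("012001", {2, 3, 4})
\end{verbatim}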
\noindent\textbf{Warning.}
We would like to stress the following crucial point: in this paper, the letters of infinite words are indexed starting from $0$ while the positions in a string attractor are counted starting at $1$.
This could be seen as confusing, but we use the same notation as the original paper on string attractors \cite{Kampa-Prezza}.
Where ambiguity may occur, we explicitly declare how finite words are indexed.
As we will look at prefixes of infinite words, it is natural to wonder if there is a link between the string attractors of the finite words $w$ and $wa$, where $a$ is a letter. In general, there is no trivial link, although we have the following result, which can be derived from the proofs of \cite[Propositions 12 and 15]{saandcow}.
\begin{proposition}
\label{P:sa of fractional powers}
Let $z$ be a non-empty word and let $x = z^r$, $y = z^s$ be fractional powers of $z$ with $1 \leq r \leq s$. If $\Gamma$ is a string attractor of $x$, then $\Gamma \cup \{|z|\}$ is a string attractor of $y$.
\end{proposition}
Since the considered infinite words are the limits of the sequence $(\word{n})_{n \in \mathbb{N}}$, we are interested in the prefixes which are fractional powers of some $\word{n}$.
\begin{definition}
\label{D: longest fractional power prefix}
Let $c$ satisfy~\eqref{eq:WH}. For all $n \geq 0$, we let $\upperw{n}$ denote the longest prefix of $\infw{u}$ that is a fractional power of $\word{n}$, i.e., the longest common prefix between $\infw{u}$ and $(\word{n})^\omega$.
For all $n \geq 0$, we also let $\upperb{n} = |\upperw{n}|$.
\end{definition}
\subsection{Fractional power prefixes and anti-Lyndon words}
In this subsection, we will prove that the words defined above have a particular structure related to (anti-)Lyndon words in Proposition~\ref{P:longest fractional power}.
To do so, we introduce some notation.
For all $n \geq 0$, the pair $\{i_n, j_n\}$ will designate the two (distinct) letters following $\upperw{n}$ in $\infw{u}$ and in $(\word{n})^\omega$.
Without loss of generality, we will always assume that $i_n < j_n$.
\begin{example}
Set $c=102$.
Recall from Example~\ref{ex:c102 - words and lengths} that the first few words in $(\word{n})_{n\ge 0}$ are
$0$, $01$, $012$, $01200$, $012000101$, $012000101012012$. It is then easy to see that the first few words in $(\upperw{n})_{n\ge 0}$ are $0$, $01$, $012 0$, $01200 01$, $012000101 0120$.
So we conclude that the first few pairs in $(\{i_n, j_n\})_{n\ge 0}$ are $\{0,1\}$, $\{0,2\}$, $\{0,1\}$, $\{0,2\}$, $\{0,1\}$.
\end{example}
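These words can be computed experimentally. Below is a small Python sketch (ours) of the morphism $\mu_c$ as used in this paper (letter $i$ maps to $0^{c_i}(i+1)$ for $i < k-1$, and $k-1$ maps to $0^{c_{k-1}}$) and of $\upperw{n}$, computed on a finite horizon; letters are represented by integers and $c$ by a list:
\begin{verbatim}
def mu(word, c):
    # mu_c maps i to 0^{c_i} (i+1) for i < k-1, and k-1 to 0^{c_{k-1}}.
    k = len(c)
    out = []
    for a in word:
        out += [0] * c[a]
        if a < k - 1:
            out.append(a + 1)
    return out

def frac_power_prefix(c, n, extra=6):
    # Longest common prefix of the fixed point u and (word_n)^omega,
    # computed on a finite horizon (a verification sketch only).
    wn = [0]
    for _ in range(n):
        wn = mu(wn, c)          # wn = word_n
    u = wn[:]
    for _ in range(extra):
        u = mu(u, c)            # a much longer prefix of the fixed point
    m = 0
    while m < len(u) and u[m] == wn[m % len(wn)]:
        m += 1
    return u[:m]

# For c = 102 and n = 2 this returns [0, 1, 2, 0], matching the example.
\end{verbatim}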
The following lemma gives a recursive construction for the sequences $(i_n)_{n \in \mathbb{N}}$ and $(j_n)_{n \in \mathbb{N}}$, as well as a first structure for the words $\upperw{n}$.
\begin{lemma}
\label{L:construction of i, j, l}
Let $c$ satisfy~\eqref{eq:WH}.
For all $n \geq 0$, we have $\upperw{n} = \word{n}^{\ell_0} \word{n-1}^{\ell_1} \cdots \word{0}^{\ell_n}$ where the sequences $(\ell_n)_{n \geq 0}$, $(i_n)_{n \geq 0}$, $(j_n)_{n \geq 0}$ are recursively constructed as follows:
$\ell_0 = c_0$, $i_0 = 0$, $j_0 = 1$, and for all $n \geq 0$, if $j_n \leq k-2$, we have
\[
(\ell_{n+1}, i_{n+1}, j_{n+1}) =
\begin{cases}
(c_{j_n}, 0, j_n + 1), & \text{if } c_{i_n} > c_{j_n};\\
(c_{j_n}, i_n + 1, j_n + 1), & \text{if } c_{i_n} = c_{j_n};\\
(c_{i_n}, 0, i_n + 1), & \text{if } c_{i_n} < c_{j_n};
\end{cases}
\]
and if $j_n = k-1$, we have $(\ell_{n+1}, i_{n+1}, j_{n+1}) = (c_{i_n}, 0, i_n + 1)$.
\end{lemma}
\begin{proof}
We prove the claimed structure for the sequences $(\ell_n)_{n \geq 0}$, $(i_n)_{n \geq 0}$, $(j_n)_{n \geq 0}$ and also that $c_0 = \max\{c_0, \ldots, c_{j_n - 1}\}$ for all $n \geq 0$ by induction.
For the base case $n = 0$, as $\word{0} = 0$ and $\word{1} = 0^{c_0} 1$ is a prefix of $\infw{u}$, we directly have $\ell_0 = c_0$, $i_0 = 0$, $j_0 = 1$ and $c_0 = \max\{c_0\}$.
Let us now move to the induction step: assume that both claims are satisfied for $n$ and let us prove them for $n + 1$.
For the first claim, by definition, $\mu(\upperw{n})$ is a prefix of both $\mu(\infw{u})=\infw{u}$ and $\mu(\word{n})^\omega = (\word{n+1})^\omega$. Moreover, it is followed in one of them by $\mu(i_n) = 0^{c_{i_n}} \cdot (i_n + 1)$ and in the other by $\mu(j_n)$.
The image of $j_n$ under $\mu$ takes two forms.
If $j_n \leq k - 2$, then $\mu(j_n) = 0^{c_{j_n}} \cdot (j_n + 1)$. Thus, as $i_n + 1 \ne j_n + 1$, we have $\upperw{n+1} = \mu(\upperw{n}) 0^{\ell_{n+1}}$ where $0^{\ell_{n+1}}$ is the longest common prefix between $\mu(i_n)$ and $\mu(j_n)$. We then have
\[
(\ell_{n+1}, i_{n+1}, j_{n+1}) =
\begin{cases}
(c_{j_n}, 0, j_n + 1), & \text{if } c_{i_n} > c_{j_n};\\
(c_{j_n}, i_n + 1, j_n + 1), & \text{if } c_{i_n} = c_{j_n};\\
(c_{i_n}, 0, i_n + 1), & \text{if } c_{i_n} < c_{j_n}.
\end{cases}
\]
The conclusion of the first claim follows from the fact that $\mu(\upperw{n}) = \word{n}^{\ell_0} \cdots \word{1}^{\ell_n}$ by the induction hypothesis.
If $j_n = k - 1$, then by Remark~\ref{R:extensions of letters}, $\mu(\upperw{n})$ is followed not only by $\mu(k-1)$ but by $\mu(k-1) \mu(0) = 0^{c_{k-1} + c_0} \cdot 1$. By the second claim, we have
\[
c_{i_n} \leq \max\{c_0, \ldots, c_{k-2}\} = c_0 < c_{k-1} + c_0
\]
as $c_{k-1} \geq 1$ by assumption. We conclude that $(\ell_{n+1}, i_{n+1}, j_{n+1}) = (c_{i_n}, 0, i_n + 1)$.
The second claim is also satisfied as $\max\{c_0, \ldots, c_{j_{n+1} - 1}\} \leq \max\{c_0, \ldots, c_{j_n - 1}\}$. Indeed, in all cases, either $j_{n+1} \leq j_n$, or $j_{n+1} = j_n + 1$ and $c_{j_n} \leq \max\{c_0, \ldots, c_{j_n - 1}\}$.
\end{proof}
\begin{example}
\label{Ex:construction of i, j, l}
Let us take $c = 210221$ for which $k = 6$. The first few elements of the sequences $(\ell_n)_{n \geq 0}$, $(i_n)_{n \geq 0}$, $(j_n)_{n \geq 0}$ are given in Table~\ref{tab:Duval-inverse-order}.
We already observe that they are (eventually) periodic. Indeed, $\{i_1, j_1\} = \{0, 2\} = \{i_4, j_4\}$ and, as $\{i_n, j_n\}$ entirely determines the rest of the sequences, $(\ell_n)_{n \geq 0}$, $(i_n)_{n \geq 0}$, $(j_n)_{n \geq 0}$ are eventually periodic of period length 3 starting from index 1 (and even from index 0 for $(\ell_n)_{n \geq 0}$).
\begin{table}[ht]
\caption{Illustration of the construction of the sequences $(\ell_n)_{n \geq 0}$, $(i_n)_{n \geq 0}$, $(j_n)_{n \geq 0}$ in the case where $c = 210221$.}
\label{tab:Duval-inverse-order}
\[
\begin{array}{c|c|c|c}
n & \text{Comparison} & \ell_n & \{i_n,j_n\} \\
\hline
0 & / & c_0 = 2 & \{0,1\} \\
1 & c_0 > c_1 & c_1 = 1 & \{0,2\} \\
2 & c_0 > c_2 & c_2 = 0 & \{0,3\} \\
3 & c_0 = c_3 & c_3 = 2 & \{1,4\} \\
4 & c_1 < c_4 & c_1 = 1 & \{0,2\} \\
5 & c_0 > c_2 & c_2 = 0 & \{0,3\} \\
6 & c_0 = c_3 & c_3 = 2 & \{1,4\}
\end{array}
\]
\end{table}
\end{example}
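The construction of Lemma~\ref{L:construction of i, j, l} translates directly into code. Here is a minimal Python sketch (ours), which reproduces Table~\ref{tab:Duval-inverse-order} for $c = 210221$:
\begin{verbatim}
def ell_i_j(c, steps):
    # Returns the first `steps` triples (ell_n, i_n, j_n) following
    # the recursive construction of the lemma above.
    k = len(c)
    ell, i, j = c[0], 0, 1
    out = [(ell, i, j)]
    while len(out) < steps:
        if j == k - 1 or c[i] < c[j]:
            ell, i, j = c[i], 0, i + 1
        elif c[i] > c[j]:
            ell, i, j = c[j], 0, j + 1
        else:                       # c[i] == c[j]
            ell, i, j = c[j], i + 1, j + 1
        out.append((ell, i, j))
    return out

# ell_i_j([2, 1, 0, 2, 2, 1], 7) yields
# (2,0,1), (1,0,2), (0,0,3), (2,1,4), (1,0,2), (0,0,3), (2,1,4),
# matching the table.
\end{verbatim}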
From the recursive definition given in Lemma~\ref{L:construction of i, j, l}, we derive the following result.
\begin{lemma}
\label{L:i, j and borders}
Let $c$ satisfy~\eqref{eq:WH}.
For all $n \geq 0$, the word $c_0 \cdots c_{i_n - 1}$ is a border of the word $c_0 \cdots c_{j_n - 1}$, i.e., $c_0 \cdots c_{i_n - 1} = c_{j_n - i_n} \cdots c_{j_n - 1}$.
\end{lemma}
\begin{proof}
Once again, we prove the result by induction on $n \geq 0$. Notice that, if $i_n = 0$, then the word $c_{j_n - i_n} \cdots c_{j_n - 1}$ is empty, hence the conclusion. This is in particular the case for $n = 0$. Assume now that the claim holds for $n$ and let us prove it for $n+1$. By Lemma~\ref{L:construction of i, j, l}, we have $i_{n+1} = 0$ unless $c_{i_n} = c_{j_n}$. In this case, $i_{n+1} = i_n + 1$ and $j_{n+1} = j_n + 1$ so, as $c_0 \cdots c_{i_n - 1} = c_{j_n - i_n} \cdots c_{j_n - 1}$ by the induction hypothesis, we directly have $c_0 \cdots c_{i_{n+1} - 1} = c_{j_{n+1} - i_{n+1}} \cdots c_{j_{n+1} - 1}$.
\end{proof}
We now show the link with (anti-)Lyndon words. Before doing so, we recall some well-known properties of Lyndon words that will be useful.
The first result is folklore, but a proof can be found, for instance, in~\cite{DUVAL20082261}.
\begin{proposition}\label{P:lyndon are unbordered}
Lyndon words are unbordered, i.e., if $w$ is both a prefix and a suffix of a Lyndon word $v$, then $w = \varepsilon$ or $w = v$.
\end{proposition}
The next result is shown within the proof of Theorem~\ref{thm:CFL-fact}. See, for instance, \cite[Theorem 5.1.5]{Lothaire97}.
\begin{proposition}
\label{P:longest Lyndon prefix}
Let $w\in A^*$ be a non-empty word and let $(\ell_1, \ldots, \ell_n)$ be its Lyndon factorization as in Theorem~\ref{thm:CFL-fact}.
Then $\ell_1$ is the longest Lyndon prefix of $w$.
\end{proposition}
Duval provided an algorithm computing the Lyndon factorization of a word in linear time~\cite{Duval83}. It is based on a decomposition of the word into three parts $xyz$: we already computed the Lyndon factorization of $x$ and we are now looking at $w=yz$, where $y$ is a fractional power of a Lyndon word $v$ and $z$ is the part that we still need to explore. We keep track of the position of the first letter of $z$ with an index $j$, and of the period of $y$ (i.e. the length of $v$) using an index $i$ such that $j - i = |v|$.
\begin{algo}[Duval~\cite{Duval83}]
\label{A: Duval}
Let $(A, \leq)$ be an ordered set and let $w = w_0 \cdots w_n$ be a word of length $n+1$ over $A$. We denote by $w_{n+1}$ a new symbol smaller than all the letters of $w$.
Set $i = 0$ and $j = 1$.
While $i \leq n$, compare $w_i$ and $w_j$ and do the following:
\begin{itemize}
\item
if $w_i < w_j$, then set $j = j+1$ and $i = 0$;
\item
if $w_i = w_j$, then set $j = j+1$ and $i = i+1$;
\item
if $w_i > w_j$, then output $w_0 \cdots w_{j-i-1}$ as the next element in the Lyndon factorization and restart the algorithm with the word $w_{j-i} \cdots w_n$.
\end{itemize}
\end{algo}
Using the notation of the paragraph preceding Algorithm~\ref{A: Duval}, we explain the three cases present in the algorithm.
We want to compute the next Lyndon word in the Lyndon factorization of a word, knowing the factorization of one of its prefixes.
By definition of $i$ and $j$, we compare the letter $w_j$ in $z$ with the letter $w_i$, spaced by $|v|$ letters.
\begin{itemize}
\item
If $w_i < w_j$,
then $y w_j$ is a Lyndon word by~\cite[Lemme 2]{Duval80}, so we update $y$ to $y w_j$ and $v$ to $y$.
\item
If $w_i = w_j$,
then $y w_j$ is still a fractional power of $v$, so we simply update $y$ to $y w_j$ without changing the length of $v$ (that is, we do not modify $j-i$).
\item
If $w_i > w_j$, then $y w_j$ cannot be a prefix of a Lyndon word, so the longest Lyndon prefix of $w$ is $v$.
\end{itemize}
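For reference, here is a standard Python rendering of Duval's algorithm (a sketch, not the authors' code); applied to a word over integers with all letters negated, it yields the anti-Lyndon factorization used in this paper, since negation reverses the order on the alphabet.
\begin{verbatim}
def duval(w):
    # Duval's algorithm: Lyndon factorization of w in linear time,
    # following the three comparison cases described above.
    factors, i, n = [], 0, len(w)
    while i < n:
        k, j = i, i + 1        # j - k is the current period |v|
        while j < n and w[k] <= w[j]:
            if w[k] < w[j]:
                k = i          # y w_j is Lyndon: the period becomes j+1-i
            else:
                k += 1         # y w_j extends the fractional power of v
            j += 1
        while i <= k:          # output the copies of v contained in y
            factors.append(w[i:i + j - k])
            i += j - k
    return factors

# duval("banana") == ["b", "an", "an", "a"]
# duval([-x for x in [2, 1, 0, 2, 2]]) factors 21022 for the reversed
# order into 210, 2, 2, so its longest anti-Lyndon prefix is 210.
\end{verbatim}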
We are now ready to prove the structure of the words $\upperw{n}$ and its link with anti-Lyndon words.
\begin{proposition}
\label{P:longest fractional power}
Let $c$ satisfy~\eqref{eq:WH}.
Define $\infw{a}$ as the infinite concatenation of the longest anti-Lyndon prefix of the word $c_0 \cdots c_{k-2}$. Then for all $n\ge 0$, $\upperw{n} = \word{n}^{\infw{a}_0} \word{n-1}^{\infw{a}_1} \cdots \word{0}^{\infw{a}_n}$.
In particular, $\upperb{n} = \sum_{i=0}^n \infw{a}_i \length{n-i}$.
\end{proposition}
\begin{proof}
By Lemma~\ref{L:construction of i, j, l}, the beginning of the construction of the sequences $(\ell_n)_{n \geq 0}$, $(i_n)_{n \geq 0}$, $(j_n)_{n \geq 0}$ corresponds exactly to the first application of Duval's algorithm to the word $c_0 \cdots c_{k-2}$ with the order $\invlex{\leq}$.
More specifically, letting $N$ denote the first index $n$ for which $c_{i_n} < c_{j_n}$ or $j_n = k-1$ and setting $p = j_N - i_N$, then Duval's algorithm for $\invlex{\leq}$ implies that the word $\ell_0 \cdots \ell_{p-1}$ is the first element in the Lyndon factorization of $c_0 \cdots c_{k-2}$ for the order $\invlex{\leq}$.
Therefore $\ell_0 \cdots \ell_{p-1} = c_0 \cdots c_{p-1}$ is the longest anti-Lyndon prefix of $c_0 \cdots c_{k-2}$ by Proposition~\ref{P:longest Lyndon prefix}. Let us denote it $v$.
As in the statement, let $\infw{a}=vvv\cdots$.
Observe that, by definition of $N$ and by Lemma~\ref{L:construction of i, j, l}, for all $1 \leq n \leq N$, we have $j_n = n + 1$ as it is incremented at each step, and $\ell_{n} = c_{j_{n-1}} = c_n$.
In particular, $p = j_N - i_N = N + 1 - i_N$.
We now prove that $\ell_n = \infw{a}_n$ for all $n \geq 0$.
By definition of $\infw{a}$, the equality holds for $0\le n<p$, so it is enough to look at all $n \geq p$.
We show by induction on $n \geq p$ that $\ell_n = c_{n \bmod p}$, $j_n \equiv (n + 1) \mod p$, and $j_n \leq N + 1$.
For $p \leq n \leq N$, we already have $\ell_n = c_n$, $j_n = n + 1$, and $j_n \leq N + 1$ by the observation made above.
Moreover, Duval's algorithm implies that $c_0 \cdots c_N$ is periodic of period length $p$, so $\ell_n = c_n = c_{n \bmod p}$.
This is also true for $n = N + 1$ as $N + 1 = p + i_N \equiv i_N \mod p$. Indeed, by Lemma~\ref{L:construction of i, j, l} and by definition of $N$, we have $\ell_{N + 1} = c_{i_N} = c_{N + 1 \bmod p}$ and
\begin{equation}
\label{Eq: case N}
j_{N + 1} = i_N + 1 \equiv N + 2 \mod p.
\end{equation}
Assume now that the claim is true for indices up to $n \geq N + 1$ and let us prove it for $n+1$. By the induction hypothesis, we have $j_n \leq N + 1$, so we distinguish two cases.
\textbf{Case 1.}
If $j_n \leq N$, then $j_n\leq k - 2$ (as $j_N = N+1 \le k-1$). By Lemma~\ref{L:i, j and borders}, comparing $c_{i_n}$ and $c_{j_n}$ is equivalent to comparing $c_0 \cdots c_{i_n}$ and $c_{j_n - i_n} \cdots c_{j_n}$. As mentioned earlier in the proof, $c_0 \cdots c_N$ is a fractional power of $v$, so $c_0 \cdots c_{i_n}$ is a prefix of a power of $v$ while $c_{j_n - i_n} \cdots c_{j_n}$ is a prefix of a power of a conjugate of $v$. As $v$ is Lyndon for $\invlex{\leq}$, its powers are smaller than the powers of its conjugates for $\invlex{\leq}$, thus $c_0 \cdots c_{i_n} \invlex{\leq} c_{j_n - i_n} \cdots c_{j_n}$ and $c_{i_n} \invlex{\leq} c_{j_n}$, i.e., $c_{i_n} \geq c_{j_n}$. Using Lemma~\ref{L:construction of i, j, l}, we conclude that $\ell_{n+1} = c_{j_n} = c_{(n+1) \bmod p}$ since, by the induction hypothesis, $j_n \leq N$ and $j_n \equiv n+1 \mod p$, and $c_0 \cdots c_N$ has period length $p$. We also have $j_{n+1} = j_n + 1$, thus $j_{n+1} \leq N + 1$ and $j_{n+1} \equiv n + 2 \mod p$.
\textbf{Case 2.}
If $j_n = N + 1$, then using Lemma~\ref{L:i, j and borders}, we know that $c_0 \cdots c_N = c_0 \cdots c_{j_n - 1}$ has a border of length $i_n$ so $c_0 \cdots c_N$ has period length $N + 1 - i_n$. Since it also has period length $p$ and $c_0 \cdots c_{p-1}$ is anti-Lyndon thus unbordered by Proposition~\ref{P:lyndon are unbordered}, we must have that $N + 1 - i_n$ is a multiple of $p = N + 1 - i_N$. In other words,
\begin{equation}
\label{Eq: congruence iN and in}
i_n \equiv i_N \mod p.
\end{equation}
In particular, by periodicity, $c_{i_n} = c_{i_N}$.
Moreover, $j_n = N+1 = j_N$ so $\{c_{i_n}, c_{j_n}\} = \{c_{i_N}, c_{j_N}\}$. Therefore, by Lemma~\ref{L:construction of i, j, l} and by definition of $N$, we have
\begin{equation}
\label{Eq: comparison l and j between n et N}
\ell_{n+1} = \ell_{N+1} \quad \text{and} \quad
j_{n+1} = i_n + 1 \leq N + 1.
\end{equation}
By the induction hypothesis for $n$, we have
\begin{equation}
\label{Eq: congruence N+1 and n+1}
N + 1 = j_n \equiv n + 1 \mod p.
\end{equation}
We conclude that
\[
\ell_{n + 1} = \ell_{N + 1} = c_{(N + 1) \bmod p} = c_{(n + 1) \bmod p},
\]
where the first equality follows by~\eqref{Eq: comparison l and j between n et N}, the second by the induction hypothesis for $N+1$, and the last by Congruence~\eqref{Eq: congruence N+1 and n+1}, and
\[
j_{n+1} = i_n + 1 \equiv i_N + 1 \equiv j_{N + 1} \equiv (N + 2) \equiv (n + 2) \mod p,
\]
where the first equality follows from~\eqref{Eq: comparison l and j between n et N}, the second congruence from~\eqref{Eq: congruence iN and in}, the third by~\eqref{Eq: case N}, the fourth by the induction hypothesis for $N+1$, and the last by Congruence~\eqref{Eq: congruence N+1 and n+1}.
This ends the proof.
\end{proof}
\begin{example}
\label{ex:c102 - frac pow prefixes}
Let us continue with the case $c=102$ from Example~\ref{ex:c102 - words and lengths}.
The first few words in $(\upperw{n})_{n\ge 0}$ are $0, 01, 012 0, 01200 01, 012000101 0120$.
The longest anti-Lyndon prefix of $c_0c_1=10$ is $10$ itself so $\infw{a} = (10)^\omega$. We can easily check that the first few $\upperw{n}$'s indeed satisfy Proposition~\ref{P:longest fractional power}.
\end{example}
\subsection{String attractors of the prefixes}
Motivated by Proposition~\ref{P:sa of fractional powers}, to describe string attractors of each prefix, it is now sufficient to describe, for all $n \geq 1$, a string attractor of a prefix of length $m_n$ for some $m_n \in [\length{n} - 1, \upperb{n-1}]$. This argument is key to the proof of the main theorem. However, we first have to ensure that this interval is well defined. For that, we will need the following lemma.
\begin{lemma}
\label{L:inequality of c_0...c_k-2}
Let $c$ satisfy~\eqref{eq:WH}. Then $c_0 \cdots c_{k-2} \geq \infw{a}[0, k-2]$.
\end{lemma}
\begin{proof}
Assume the contrary and let $w$ be the longest anti-Lyndon prefix of $c_0 \cdots c_{k-2}$. If $|w| \leq i \leq k-2$ is the smallest index such that $c_0 \cdots c_i < \infw{a}[0,i]$, then $c_0 \cdots c_i = w^\ell va$ with $v$ a proper prefix of $w$, $a$ a letter, and $va < w$. So~\cite[Lemme 2]{Duval80} implies that $c_0 \cdots c_i$ is an anti-Lyndon prefix of $c_0 \cdots c_{k-2}$. As $i \geq |w|$, this contradicts the definition (maximality) of $w$.
\end{proof}
In fact, the condition obtained for the greediness of the numeration system is connected to how $\length{n} - 1$ compares with $\upperb{n-1}$. This is detailed in the next two results.
\begin{proposition}
\label{P: c' lex max}
Let $c$ satisfy~\eqref{eq:WH}. If $c_0 \cdots c_{k-2} (c_{k-1} - 1)$ is lexicographically maximal among its conjugates, then $\infw{d}^\star[0,n] \leq \infw{a}[0,n]$ for all $n \geq 0$.
\end{proposition}
\begin{proof}
Let $w$ denote the longest anti-Lyndon prefix of $c_0 \cdots c_{k-2}$.
We first show that $c_0 \cdots c_{k-2} (c_{k-1} - 1) \leq \infw{a}[0,k-1]$. If it is not the case, there exist $\ell \geq 1$, a proper prefix $u$ of $w$, a letter $a$ and a word $v$ such that $c_0 \cdots c_{k-2} (c_{k-1} - 1) = w^\ell u a v$ and $ua > w$. Then $u a v w^\ell > c_0 \cdots c_{k-2} (c_{k-1} - 1)$, so $c_0 \cdots c_{k-2} (c_{k-1} - 1)$ is not maximal among its conjugates. This is a contradiction.
Therefore we have $c_0 \cdots c_{k-2} (c_{k-1} - 1) \leq \infw{a}[0,k-1]$.
By Lemma~\ref{L:inequality of c_0...c_k-2}, we get $c_0 \cdots c_{k-2} = \infw{a}[0,k-2]$ and $c_{k-1} - 1 \leq \infw{a}_{k-1}$.
We now prove that $\infw{d}^\star[0,n] \leq \infw{a}[0,n]$ for all $n \geq 0$. If $c_{k-1} - 1 < \infw{a}_{k-1}$, then the conclusion is direct. If $c_{k-1} - 1 = \infw{a}_{k-1}$, then $c_0 \cdots c_{k-2}(c_{k-1} - 1)$ is a fractional power of $w$ so there exist $\ell \geq 1$ and $u$ a proper prefix of $w$ such that $c_0 \cdots c_{k-2}(c_{k-1} - 1) = w^\ell u$. Let us write $w = uv$. If $u \ne \varepsilon$, we then have
\[
c_0 \cdots c_{k-2}(c_{k-1} - 1) = w^\ell u = u (vu)^\ell < u w^\ell
\]
as $w$ is anti-Lyndon and thus strictly greater than its conjugates. This contradicts the assumption that $c_0 \cdots c_{k-2}(c_{k-1} - 1)$ is maximal among its conjugates. Therefore, $u = \varepsilon$ and $c_0 \cdots c_{k-2}(c_{k-1} - 1)$ is a (natural) power of $w$. We conclude that $\infw{a} = \infw{d}^\star$, which ends the proof.
\end{proof}
\begin{proposition}
\label{P:equivalence of conditions for inequalities}
Let $c$ satisfy~\eqref{eq:WH}. If $c_0 \cdots c_{k-2} (c_{k-1} - 1)$ is lexicographically maximal among its conjugates, then $\length{n+1} - 1 \leq \upperb{n}$ for all $n \geq 0$.
\end{proposition}
\begin{proof}
Let us show the claim by contraposition. So assume that there exists an integer $n$ such that $\length{n+1} - 1 > \upperb{n}$. Thus $\upperw{n} = \word{n}^{\infw{a}_0} \cdots \word{0}^{\infw{a}_n}$ is a proper prefix of $\infw{u}[0,\length{n+1}-1)$. By Lemma~\ref{L:decomposition of T_n+1 - 1}, $\rep_{\mathcal{S}_c}(\length{n+1}-1) = \infw{d}^\star[0,n]$, so $\infw{d}^\star_0$ is the largest exponent $e$ such that $\word{n}^e$ is a prefix of $\infw{u}[0,\length{n+1}-1)$. This implies that $\infw{d}^\star_0 \geq \infw{a}_0$. Moreover, if $\infw{a}_0 = \infw{d}^\star_0$, the same argument implies that $\infw{d}^\star_1$ is the largest exponent $e$ such that $\word{n}^{\infw{d}^\star_0}\word{n-1}^e$ is a prefix of $\infw{u}[0,\length{n+1}-1)$.
In both cases, we have $\infw{d}^\star_0\infw{d}^\star_1 \geq \infw{a}_0\infw{a}_1$. We may iterate the reasoning to obtain $\infw{d}^\star[0,n] \geq \infw{a}[0,n]$.
As $\upperw{n}$ is a proper prefix of $\infw{u}[0,\length{n+1}-1)$, the inequality cannot be an equality.
This contradicts Proposition~\ref{P: c' lex max}, which ends the proof.
\end{proof}
We will now prove that, under the conditions of the previous result, we can describe string attractors of every prefix of $\infw{u}$ using the elements of $(\length{n})_{n \in \mathbb{N}}$.
For $n \in \mathbb{N}$, we denote
\[
\Gamma_n =
\begin{cases}
\{\length{0}, \ldots, \length{n}\}, & \text{if } 0\le n \leq k-1;\\
\{\length{n-k+1}, \ldots, \length{n}\}, & \text{if } n \geq k.
\end{cases}
\]
We also define
\[
\lowerb{n} =
\begin{cases}
\length{n}, & \text{if } 0\le n \leq k-1;\\
\length{n} + \length{n-k+1} - \length{n-k} - 1, & \text{if } n \geq k.
\end{cases}
\]
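These quantities are easily computed. Here is a Python sketch (ours), reusing the function \texttt{mu} from the earlier sketch:
\begin{verbatim}
def gamma_and_p(c, n):
    # Gamma_n and p_n as defined above; T[m] = |word_m| is obtained
    # by iterating mu (reuses mu from the earlier sketch).
    k = len(c)
    T, w = [], [0]
    for _ in range(n + 1):
        T.append(len(w))
        w = mu(w, c)
    if n <= k - 1:
        return set(T), T[n]
    return set(T[n - k + 1:]), T[n] + T[n - k + 1] - T[n - k] - 1

# gamma_and_p([2, 3], 2) == ({3, 9}, 10), matching the c = 23 example
# discussed at the end of this section.
\end{verbatim}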
The next lemma directly follows from Proposition~\ref{P:equivalence of conditions for inequalities} and the definition of $\lowerb{n}$.
\begin{lemma}
\label{L:inequality with lower and upper}
Let $c$ satisfy~\eqref{eq:WH}.
If $c_0 \cdots c_{k-2}(c_{k-1} - 1)$ is maximal among its conjugates, then $\lowerb{n} \leq \length{n+1} - 1 \leq \upperb{n}$ for all $n \in \mathbb{N}$.
\end{lemma}
To simplify the statement of the following theorem, we set $\Gamma_{-1} = \emptyset$.
\begin{theorem}
\label{T:sa of prefixes}
Let $c = c_0 \cdots c_{k-1} \in \mathbb{N}^k$ with $c_0, c_{k-1} \geq 1$ and $c_0 \cdots c_{k-2} (c_{k-1} - 1)$ maximal among its conjugates.
Fix an integer $n \ge 0$. If $m \in [\length{n}, \upperb{n}]$, then $\Gamma_{n-1} \cup \{\length{n}\}$ is a string attractor of $\infw{u}[0,m)$. Furthermore, if $m \in [\lowerb{n},\upperb{n}]$, then $\Gamma_n$ is a string attractor of $\infw{u}[0,m)$.
\end{theorem}
\begin{proof}
Let us simultaneously prove the two claims by induction on $n$.
If $n = 0$, then $1 \leq m \leq c_0$, so $\infw{u}[0,m) = 0^m$ and the conclusion directly follows for both claims.
Assume now that the claims are satisfied for $n-1$ and let us prove them for $n$. By Lemma~\ref{L:inequality with lower and upper} and the induction hypothesis, $\Gamma_{n-1}$ is a string attractor of $\infw{u}[0,\length{n} - 1)$. This implies that $\Gamma_{n-1} \cup \{\length{n}\}$ is a string attractor of $\word{n}$ so, by Proposition~\ref{P:sa of fractional powers} and by definition of $\upperb{n}$ (Definition~\ref{D: longest fractional power prefix}), of $\infw{u}[0,m)$ for all $m \in [\length{n}, \upperb{n}]$. This ends the proof of the first claim.
Let us now prove the second claim. Observe that, using Proposition~\ref{P:sa of fractional powers}, it suffices to prove that $\Gamma_n$ is a string attractor of $\infw{u}[0,\lowerb{n})$.
If $0\le n \leq {k-1}$, then $\Gamma_n = \Gamma_{n-1} \cup \{\length{n}\}$ so we can directly conclude using the first claim.
Thus assume that $n \geq k$. Then by the first claim, $\Gamma_n \cup \{\length{n-k}\} = \Gamma_{n-1} \cup \{\length{n}\}$ is a string attractor of $\infw{u}[0,\lowerb{n})$. Therefore, it remains to show that the position $\length{n-k}$ is not needed in the string attractor. In other words, we prove that the factors of $\infw{u}[0,\lowerb{n})$ that have an occurrence crossing position $\length{n-k}$ (and no other position of $\Gamma_n \cup \{\length{n-k}\}$) have another occurrence crossing a position in $\Gamma_n$. More precisely, we show that they have an occurrence crossing position $\length{n}$.
To help the reader with the proof, we illustrate the situation in Figure~\ref{fig:proof of sa}.
As the smallest position in $\Gamma_n$ is $\length{n-k+1}$, we need to consider the factor occurrences crossing position $\length{n-k}$ in $\infw{u}[0,\length{n-k+1} - 1)$. So, if we write $\infw{u}[0,\lowerb{n}) = \word{n} w$, it is sufficient to show that $\word{n-k}$ is a suffix of $\word{n}$ and that $w' := \infw{u}[\length{n-k},\length{n-k+1} - 1)$ is a prefix of $w$.
Observe that
\begin{equation}
\label{eq:length-suffix-w}
|w| = \lowerb{n} - \length{n} = \length{n-k+1} - \length{n-k} - 1
\end{equation}
by definition of $\lowerb{n}$, so $|w'| = |w|$. We will actually show that $w' = w$.
\begin{figure}[h!t]
\centering
\begin{tikzpicture}[cross/.style={path picture={
\draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east);
}}]
\node (\lowerb{n}) at (-1,0.4) {$\infw{u}[0,\lowerb{n})=$};
\node [fit={(0,0) (7,0.8)}, inner sep=0pt, draw=black, thick] (u_n) {};
\node [fit={(7,0) (8.5,0.8)}, inner sep=0pt, draw=black, thick, fill=gray!30] (u_n-k-1) {};
\node [thick] (label_u_n) at (3.5,1) {$\word{n}$};
\node [thick] (label_w) at (7.75,1) {$w$};
\draw [thick, decoration={brace, mirror, raise=0.05cm, amplitude=5pt}, decorate] (0,0) -- node[below=.2cm] {$\word{n-k}$} (2,0) {};
\node [circle, fill=red,inner sep=0pt,minimum size=4pt] (T_n-k) at (1.8,0.5) {};
\node [cross] (T_n-k bis) at (1.8,0.5) {};
\node [red] (label_T_n-k) at (1.8,0.2) {$\length{n-k}$};
\draw [thick, decoration={brace, mirror, raise=0.05cm, amplitude=5pt}, decorate] (2,0) -- node[below=.2cm] {$w'$} (3.5,0) {};
\node [circle,fill=red,inner sep=0pt,minimum size=4pt] (T_n-k+1) at (3.7,0.5) {};
\node [red] (label_T_n-k) at (3.7,0.2) {$\length{n-k+1}$};
\draw [thick, decoration={brace, mirror, raise=0.05cm, amplitude=5pt}, decorate] (5,0) -- node[below=.2cm] {$\word{n-k}$} (7,0) {};
\draw [thick, decoration={brace, mirror, raise=0.05cm, amplitude=5pt}, decorate] (7,0) -- node[below=.2cm] {$w'$} (8.5,0) {};
\node [circle,fill=red,inner sep=0pt,minimum size=4pt] (T_n) at (6.7,0.5) {};
\node [red] (label_T_n) at (6.7,0.2) {$\length{n}$};
\end{tikzpicture}
\caption{Representation of the proof of the second claim of Theorem~\ref{T:sa of prefixes}. As we warned the reader before, elements in a string attractor are indexed starting at $1$ (in red), while indices of letters in $\infw{u}$ start at $0$.}
\label{fig:proof of sa}
\end{figure}
The fact that $\word{n-k}$ is a suffix of $\word{n}$ is a direct consequence of Proposition~\ref{P:alternative definition of the words} as $c_{k-1} \geq 1$ by assumption.
To prove that $w' = w$, we first make the following observation: Proposition~\ref{P:alternative definition of the words} again implies that $\word{n}$ is followed by $\word{n}^{c_0 - 1} \word{n-1}^{c_1} \cdots \word{n-k+1}^{c_{k-1}}$ in $\infw{u}$.
Since $\word{n-k+1}$ is a prefix of all the words $\word{n-k+1}, \ldots, \word{n-1}$, the word $\word{n}$ is in particular followed by $\word{n-k+1}$ in $\infw{u}$.
As $|w| \leq \length{n-k+1}$ by Equation~\eqref{eq:length-suffix-w}, this implies that $w$ is a prefix of $\word{n-k+1}$, so also of $\infw{u}$.
To conclude with the claim, it is then enough to show that $w'$ is also a prefix of $\infw{u}$.
To prove this, we will use the numeration system $\mathcal{S}_c$ and consider two cases.
First, assume that $n-2k+1 \geq 0$.
By definition of $w'$ and by Proposition~\ref{P:alternative definition of the words}, $w'$ is a prefix of $v := \word{n-k}^{c_0 - 1} \cdots \word{n-2k+1}^{c_{k-1}}$.
Define the word $x = (c_0 - 1) c_1 \cdots c_{k-1} 0^{n-2k+1}$.
If it begins with $0$'s, we consider instead the word obtained by removing the leading $0$'s.
Note that $x$ corresponds to a factorization of $v$ into the words $\word{n-k}, \ldots, \word{0}$.
As $c_0 \cdots c_{k-2}(c_{k-1} - 1)$ is maximal among its conjugates by assumption, $x$ is in the numeration language by Lemma~\ref{L:greedyIFFlanguage}. By definition of $\mathcal{S}_c$, $x$ is the Dumont-Thomas factorization of $v$, implying that $v$ is a prefix of $\infw{u}$.
Second, if $n-2k+1 < 0$, then we conclude in a similar way by considering $v = \word{n-k}^{c_0 - 1} \cdots \word{0}^{c_{n-k}}$ and $x = (c_0 - 1) c_1 \cdots c_{n-k}$ instead.
\end{proof}
As a consequence, for some values of $c$ ($c=211$ for example), we can say that every prefix of $\infw{u}$ has a string attractor of size at most $k$. Moreover, these string attractors are optimal as every position in $\Gamma_n$ covers a different letter (this can be proved using a simple induction).
Observe that the bounds given in the previous theorem are not necessarily tight. For example, if $c=23$, then $\Gamma_2 = \{3,9\}$ is a string attractor of the length-$9$ prefix $\infw{u}[0,9) = 001001000$, while $\lowerb{2} = 10$.
This is also the case for the $k$-bonacci morphisms ($c = 1^k$) where better bounds are provided in~\cite{GRRSS23}.
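Combining the earlier sketches (\texttt{mu} and \texttt{is\_string\_attractor}, both ours), the $c=23$ example can be checked mechanically:
\begin{verbatim}
c = [2, 3]
w2 = mu(mu([0], c), c)                 # word_2 = 001001000
prefix = "".join(map(str, w2))
# Gamma_2 = {3, 9} already works for the length-9 prefix,
# although p_2 = 10, so the theorem's lower bound is not tight here.
assert is_string_attractor(prefix, {3, 9})
\end{verbatim}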
\section{Final comments}\label{sec: conclusion}
We end this paper by discussing the scope of use of our main result.
For a given $c\in\mathbb{N}^k$ satisfying specific properties, Theorem~\ref{T:sa of prefixes} states that we can easily describe a string attractor of size at most $k+1$ for any prefix of the fixed point $\infw{u}$ of $\mu_c$ defined in Section~\ref{sec: morphism and fp} and that, in some cases, we can even lower its size to $k$.
On the one hand, this result is not necessarily optimal. For example, if $c=12$, the corresponding fixed point is referred to as the \emph{period-doubling} word~\cite{AS}.
Our result in particular implies that $\{2,4,8\}$ is a string attractor of its length-$8$ prefix $01000101$ and we can check that every position is needed. However, in~\cite{Schaeffer-Shallit-2020}, the authors proved that we can find a string attractor of size 2, namely, $\{3,6\}$.
On the other hand, for some $c\in\mathbb{N}^k$, the corresponding numeration system is not \emph{addable}, meaning that the addition within the numeration system is not recognizable by a finite automaton.
For example, this is the case of $c=3203$~\cite{Frougny1997}.
As a consequence, the approach from~\cite{Schaeffer-Shallit-2020} does not apply; in particular, we study words outside the framework needed to use the software \emph{Walnut}~\cite{Shallit2022}.
Finally, we wish to point out that this paper is a first exploration into the possible link between string attractors of prefixes of morphic words and general numeration systems.
As stated in the Question presented in Section~\ref{sec: intro}, we believe that this connection can be extended to other morphisms, which is a path that we will continue exploring in the future.
\subsubsection*{Acknowledgements}
We warmly thank M. Rigo and S. Kreczman for useful discussions on numeration systems, especially for indicating~\cite{Dumont-Thomas-1989} and~\cite{Hollander} respectively.
\bibliographystyle{plainurl}
|
{
"arxiv_id": "2302.13585",
"language": "en",
"timestamp": "2023-03-03T02:13:52",
"url": "https://arxiv.org/abs/2302.13585",
"yymm": "2302"
} | \section{Introduction}
\noindent Tracking, the large-scale collection of data about user behaviour, is ubiquitous across mobile apps.
It is often used to make many apps available for free by showing users personalised advertising or selling their data to third parties~\cite{anirudhchi2021,mhaidli_we_2019}, and
can have disproportionate, negative effects on individuals~\cite{van_kleek_better_2017,shklovski_leakiness_2014}.
To improve the balance between protecting individuals' data and leveraging personal information in businesses and other organisations,
China has introduced various laws and regulations over recent years which govern the processing of personal data.
This includes the \emph{Personal Information Protection Law} (PIPL) from August 2021, which is the first comprehensive Chinese law on data protection~\cite{pipl_notes}.
In addition, China introduced its Data Security Law in 2021 and its first Civil Code in 2020, which put forward further regulation of data flows.
Due to the novelty of these laws, few studies have assessed their material impacts on apps' data practices.
The enactment of new data laws in China is part of wider efforts in the country to regulate the digital space.
As such, this is similar to legal initiatives in the EU, US, and UK, all of which are trying to rein in
monopolistic behaviour of tech companies and harmful impacts of digital technologies on society.
China has a unique app ecosystem compared to other countries.
It is the only major economy in which the Google Play Store is not available, since Google does not operate many of its services in mainland China.
Despite this, Android (which is mainly developed by Google) has a market share of about 70\%, while iOS has about 29\%~\cite{os_china}.
On Android, there exist a range of different app stores, including those by Tencent, Oppo, Huawei, Qihoo 360, Xiaomi, and Baidu.
On iOS, the Apple App Store is the only app store.
While there exists some limited previous research on privacy in Android in China~\cite{china_2018}, no similar studies exist for iOS, despite the Apple App Store being one of the largest app ecosystems in China.
The study of iOS is especially interesting because Apple has steadily increased its market share in China over recent years~\cite{os_china}.
Motivated by the breadth of recent changes to Chinese data regulation, along with the relative absence of recent investigations into Chinese app privacy (the only previous large-scale study was conducted on Android and published about five years ago, in 2018~\cite{china_2018}), this paper studies how app privacy~--~particularly the implementation of consent flows~--~might have changed following the introduction of data regulations like the PIPL in China.
For our analysis, we draw on a body of 634 Chinese iOS apps, with two versions of each app: one from early 2020 and one from late 2021 (i.e. after the introduction of the PIPL).
Crucially, our present analysis is not sufficient to establish whether the new laws are causally responsible for any changes in apps' privacy practices.
However, if they have indeed tackled excesses of personal data processing, we should expect at least some changes in apps' privacy practices.
Even where we do not observe any changes, it is important to characterise the status quo due to lack of previous similar studies on iOS in a Chinese context.
\section{Regulation of Data in China}
\noindent We first briefly review the history of Chinese data law, which has seen significant evolution over the past decade.
The Decision of the National People’s Congress on Strengthening the Protection of Online Information issued in 2012 is widely regarded as the starting point for Chinese data law. This has subsequently motivated Articles 1034–1039 of the China Civil Code (2020), setting forth basic rules for the protection of personal information in mainland China. The rules around data were further clarified with the Personal Information Protection Law (PIPL) from August 2021, which is the first comprehensive law to regulate the protection of personal information in mainland China. The PIPL is complemented by the Cybersecurity Law (2017) and the Data Security Law (2021), which also regulate the governance of data in the digital era. As of 9 April 2022, a search of the Peking Law System for the keyword ‘personal information’ returned a total of 31 laws and regulations with ‘personal information’ in the title.
Like in other countries and regions, the main purpose of Chinese data law is to balance the protection of personal information rights and the promotion of the use of personal information. The PIPL has a similar chapter structure and regulation content as the GDPR in Europe, including general provisions, rules for personal information processing, rules for cross-border personal information processing, the rights of individuals, the obligations of personal information processors, and further legal responsibilities.
Chinese data law provides seven potential legal grounds for personal information processing (PIPL Article 13). The most common legal ground for data processing, in the context of mobile apps, is ‘informed consent’. For consent to be valid, it must be voluntary, clear and fully informed (PIPL Article 14); depending on the context, further consent rules might apply.
Interestingly, there is no ‘legitimate interest’ legal ground in the PIPL, which allows data collection without consent under certain conditions under the GDPR in Europe.
As a result, much more emphasis is placed on consent in China than in Europe.
Two important further pieces of regulation are the Information Security Technology — Personal Information Security Specification (2020) (GB/T 35273-2020) (IST) and the Information Security Technology — Basic Requirements for Collecting Personal Information in Mobile Internet Applications (2022) (GB/T 41391-2022) (IST APP). These are national standards that provide detailed guidelines on personal information protection in China.
The IST provides targeted rules for various software products, while the IST APP focuses on apps only. Article 5.3 IST provides that consent should be given freely, and not against a data subject’s independent will.
The new personal data protection regulation pays much attention to the distinction between necessary and non-essential personal information, and between basic business functions and extended business functions. Article 5.3a) IST clarifies that the bundling of consent for different business functions that require data processing is not permitted; consent must be given to one business function at a time instead. Article 6.4.1d) IST APP and Article 5.3e) IST both hold that refusing consent to one business function must not affect the use of other business functions. The specification even provides a few sample implementations of consent, including mock-ups. As such, these requirements are similar to those under Article 5(3) of the amended EU ePrivacy Directive from 2009.
Motivated by the breadth of recent changes to data regulation in China, the rest of this paper will analyse how app tracking has changed in a Chinese context since the introduction of PIPL and other notable data laws since 2020. Given the focus on consent in PIPL, we will explicitly analyse the provision and nature of consent in apps.
\section{Previous Work}
\noindent Previous research studied privacy in mobile apps extensively. Two main methods emerged: dynamic and static analysis.
\emph{Dynamic analysis} observes the run-time behaviour of an app, to gather evidence of sensitive data leaving the device.
Early research focused on OS instrumentation, i.e. modifying Android~\cite{enck_taintdroid_2010} or iOS~\cite{agarwal_protectmyprivacy_2013}.
With the growing complexity of mobile operating systems, recent work has shifted to analysing network traffic~\cite{privacyguard_vpn_2015,nomoads_2018,free_v_paid_2019,reyes_wont_2018,van_kleek_better_2017,ren_recon_2016,nomoads_2018,shuba_nomoats_2020}.
\emph{Static analysis} dissects apps without execution. Usually, apps are decompiled, and the obtained program code is analysed~\cite{han_comparing_2013,pios_2011}.
The key benefit of static analysis is that it can analyse apps quickly, allowing it to scale to millions of apps~\cite{china_2018,playdrone_2014,binns_third_2018,chen_following_2016}.
In a Chinese context, Wang et al. previously used static analysis to analyse 6 million Android apps from 16 Chinese app stores in 2018~\cite{china_2018}.
The main focus of this study was to characterise these different ecosystems and understand app security in those app stores.
These authors did not consider the Apple App Store, which is among those Chinese app stores with the largest market share~\cite{ios_marketshare_china}.
Given the increased interest in privacy by the public, regulators and lawmakers, an increasing body of literature is investigating regulatory questions~\cite{binns_measuring_2018,reyes_wont_2018,okoyomon_ridiculousness_2019,kollnig_2021,kollnig_before_2021,maps_2019,kollnig_goodbye_2022}, but hardly any work has yet covered China.
\section{Methodology}
\noindent\textbf{App download.}
For the selection of apps, we revisited the same dataset of 285,680 iOS apps as in our previous work on comparing Android and iOS privacy~\cite{kollnig2021iphones}.
These apps were selected by first generating a large list of apps available on the UK Apple App Store between December 2019 and February 2020.
Due to the global reach of the Apple App Store (including China), we noticed that the original app dataset contained a sizeable number of Chinese apps.
We selected a subset of apps that were available on the Chinese App Store.
Furthermore, we only included apps from the larger app dataset that: 1) contained Chinese (but not Japanese or Korean) characters in both their App Store title and description (which made up 7.2\% of all 285k apps) and 2) had a bundle identifier starting with \enquote{cn.} (which made up 0.4\% of all 285k apps); we made this choice to focus on apps that were developed by Chinese developers for the Chinese market.
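For illustration, a filter of this kind could be implemented along the following lines (a Python sketch with hypothetical field names, not our actual pipeline); since Han characters are shared between Chinese and Japanese, Japanese and Korean text is ruled out via the kana and Hangul Unicode ranges:
\begin{verbatim}
import re

HAN = re.compile(r'[\u4e00-\u9fff]')       # CJK Unified Ideographs
KANA = re.compile(r'[\u3040-\u30ff]')      # Hiragana + Katakana
HANGUL = re.compile(r'[\uac00-\ud7af\u1100-\u11ff]')

def looks_chinese(title, description, bundle_id):
    # Keep apps whose title and description both contain Han
    # characters, contain no Japanese kana or Korean Hangul, and
    # whose bundle identifier starts with "cn." (argument names
    # are hypothetical, for illustration only).
    has_han = HAN.search(title) and HAN.search(description)
    text = title + " " + description
    return (bool(has_han)
            and not KANA.search(text)
            and not HANGUL.search(text)
            and bundle_id.startswith("cn."))
\end{verbatim}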
By re-downloading those apps in October 2021, we then obtained a dataset of 634 \textit{pairs} of apps, one from before the new Chinese privacy laws and one from after.
We only included those apps that were still available on the Apple App Store in both 2020 and 2021.
We intentionally did not exclude apps that had not been updated because all apps need to comply with the new rules.
\noindent\textbf{App analysis.}
For the analysis of apps, we applied the same tools as in our previous work~\cite{kollnig2021iphones}; these tools are available online at \url{https://platformcontrol.org/}.
From this analysis, we obtained the tracking libraries integrated within apps, the tracking domains contacted upon the first app start, and the dominant companies behind this tracking and their jurisdictions; this work combines static and dynamic analysis while avoiding legal problems related to analysing iOS apps.
In contrast to our previous work, we did not study apps' sharing of PII, since Apple has put tight limits on apps' access to identifiers since the introduction of the App Tracking Transparency framework with iOS 14.5 in April 2021.
Since the new Chinese data protection law puts much emphasis on consent, we additionally analysed apps' consent flows.
To this end, we ran each app on a real iPhone and checked for consent popups.
30 seconds after installing each app, we took a screenshot for further analysis and then uninstalled the app.
We inspected the screenshots for any form of
consent, following the methodology of previous research~\cite{kollnig_2021}.
Specifically, we classified any \emph{affirmative} user agreement to data practices as consent.
While this definition of consent is arguably less strict than what is usually required under many data protection and privacy laws, this was a deliberate choice to increase the objectivity of the classification, and provide an upper bound on compliance with Chinese consent requirements.
For those apps that showed an onboarding screen at the first app start (i.e. walking users through the essentials of the app), we manually re-ran the app, tried to skip onboarding to reach the main screen, and took another screenshot for consent analysis.
We used a Chinese IP address during this analysis.
\section{Results}
\noindent In this section, we present our findings from analysing two versions~--~one from 2020 and one from 2021~--~of 634 Chinese iOS apps.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/topHosts.pdf}
\footnotesize
\begin{tabular}{lrrrrrr} \toprule
& Median
& Mean
& Q1
& Q3
& Count $>10$
& None \\
\midrule
Before & 3 & 3.7 & 1 & 6 & 4.73\% & 21.14\% \\
After & 2 & 3.4 & 0 & 6 & 3.94\% & 25.71\% \\ \bottomrule
\end{tabular}
\caption{Top 15 tracking hosts contacted at the first app start and without consent, as well as the companies owning them.}
\label{fig:tracker_hosts}
\end{figure}
\subsection{Contacted Trackers without Consent}
\label{sec:data_sharing_consent}
\noindent This section analyses how many tracking domains are contacted by the studied apps before any user interaction takes place.
Since tracking libraries usually start sending data right at the first app start~\cite{kollnig_2021,reyes_wont_2018,nguyen_share_first_consent_2021,kollnig2021iphones}, this approach provides additional evidence as to the nature of tracking in apps~--~and without consent.
Our results are shown in Figure~\ref{fig:tracker_hosts}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/screenshots/app_login.jpg}
\caption{A typical login screen for a Chinese iOS app. Login is possible through a range of messaging and social media services. For example, the penguin belongs to Tencent's QQ instant messaging software service.
The red icon belongs to Weibo, a social media service, which was the second most commonly contacted service upon the first app initiation (see Section~\ref{sec:data_sharing_consent}).
Some apps also allow the sign-in with an Apple account.
Before logging in, users must first check the checkbox and thereby agree to the Terms and Conditions and the Privacy Policy (i.e. an `unticked checkbox' type of consent in Section~\ref{sec:consent}).}
\label{fig:app_login}
\end{figure}
The average number of tracking domains contacted decreased somewhat (3.7 before, 3.4 after).
The number of different contacted tracking companies also declined (from 2.2 before to 1.9 after).
About one quarter of apps, from both before and after the new laws, did not contact any tracking domains at the first app start.
The most popular domain is related to Alibaba's analytics services at \texttt{ulogs.umeng.com} (25.9\% of apps before, 26.0\% after).
This is followed by Tencent's Bugly at \texttt{ios.bugly.qq.com} (19.6\% of apps before, 17.5\% after), Alibaba's \texttt{log.umsns.com} (19.4\% of apps before, 16.4\% after), and Weibo at \texttt{api.weibo.com} (18.6\% of apps before, 16.4\% after).
Weibo is one of the biggest social media platforms in China, and is often used as a login for mobile apps (see Figure~\ref{fig:app_login}).
The fact that social media and messaging services are commonly used for authentication with apps is likely responsible for some of the observed data collection without consent.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/consent.pdf}
\caption{Types of consent in apps. In our analysis, we applied a broad definition of consent, including all types of affirmative user choice over data in apps; this is to improve the objectivity of our analysis.
The Appendix provides a screenshot for each consent type.}
\label{fig:consent}
\end{figure}
\subsection{Provision of Consent}
\label{sec:consent}
\noindent Some of the analysed apps had problems showing a UI (e.g. crashed or just showed a black screen), so we excluded 34 from the following consent analysis, leading to 604 remaining apps.
98 apps (16.2\%) showed an onboarding screen, so we re-ran these apps and skipped the onboarding. 169 (28.0\%) asked for login credentials (a common example is shown in Figure~\ref{fig:app_login}) and might obtain consent from users elsewhere.
A total of 274 apps from the 2021 dataset (45.4\%) asked for consent. 145 apps (24.0\% of all apps, 52.9\% of apps with consent in 2021) added consent notices compared to their 2020 version.
3 apps did not show consent in 2021 but did so in 2020.
Among apps that did not show a login screen, 215 apps (49.4\% of apps without a login screen) asked for consent; 59 apps with a login screen (34.9\% of apps with a login screen) asked for consent. The discrepancy between the percentages indicates that we miss some consent flows for apps with a login screen, but still detect a sizeable share.
We also analysed in what way apps asked users for consent, see Figure~\ref{fig:consent}. 212 apps (77.4\% of apps that asked for consent) showed a binary choice. These apps usually ask in a popup screen whether the user agrees to the privacy policy or the terms of use, and exit the app on refusal. 12 apps (4.4\%) only allowed users to accept and did not display a refusal option. 2 apps (0.7\%) showed more options than just a binary choice.
23 apps (8.4\%) showed an unticked checkbox for consent, 3 (1.1\%) a pre-ticked checkbox.
22 apps (8.0\%) indicated that continuing to use the app (e.g. logging in) would mean acceptance of the terms of use or privacy policy (\enquote{continuing means T\&C} in Figure~\ref{fig:consent}). The Appendix provides a screenshot for each consent type.
Our analysis of consent in apps points to a relative absence of granularity in consent implementations.
As such, some of these apps might conflict with Article 5.3 of the Information Security Technology — Personal Information Security Specification. This specification requires separate consent for different business functions, that such consent is freely given, and that refusing consent does not have negative effects on the use of other business functions.
In the case of apps, this could mean that apps need to provide more granular consent options and may not be allowed to exit the app if consent is refused.
The Specification distinguishes between basic business functions and extended business functions. The former refers to the basic expectations and most important demands of users of the products or services provided, while the latter refers to functions other than the basic business functions.
If the data subject does not consent to the collection of the personal information necessary for extended business functions, the provider shall not refuse to provide the basic business functions or reduce their service quality.
However, the line between different business functions as defined by the specification depends on each specific app.
The Specification even explicitly states in Article 5.3f) that solely for reasons such as improving service quality, enhancing user experience, developing new products and enhancing security, individuals may not be compelled to consent to data collection activities.
Article 11 of the Data Security Law further points out that network operators shall not force or mislead personal information subjects to consent in the form of default authorization or function bundling under the pretext of improving service quality, enhancing user experience, pushing targeted information or developing new products, and similar purposes.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/topLibraries.pdf}
\centering
\footnotesize
\begin{tabular}{lrrrrrr} \toprule
& Median
& Mean
& Q1
& Q3
& Count $>10$
& None \\
\midrule
Before & 4 & 4.7 & 2 & 7 & 3.47\% & 8.20\% \\
After & 5 & 4.9 & 2 & 7 & 3.94\% & 7.89\% \\ \bottomrule
\end{tabular}
\caption{Top 15 third-party libraries, as well as the companies owning them (in brackets).}
\label{fig:tracker_libraries}
\end{figure}
For comparison, in an EU context, our previous work on consent in a representative sample of 1,297 Google Play apps from 2020 found that only 9.9\% of apps asked for any form of consent~\cite{kollnig_2021}.
The observed share of Chinese apps with consent flows is much higher.
It is, however, difficult to compare the two studies because of differences between the app stores and sampling techniques.
\subsection{Tracking Libraries in Apps}
\label{sec:static_tracking}
\noindent Apps from both before the Chinese data protection laws and after widely integrated tracking libraries
(see Figure~\ref{fig:tracker_libraries}). The median number of tracking libraries included in an app was 4 before and 5 after.
The mean before was 4.7, and the mean after was 4.9.
3.47\% of apps from before contained more than 10 tracking libraries, compared to 3.94\% after. 91.80\% contained at least one before the new laws, and 92.11\% after.
The most prominent libraries have not changed since the introduction of the new laws.
The top one remains WeChat (in 65.8\% of apps before, and 67.8\% after). This is followed by Apple's SKAdNetwork library (54.9\% before, 58.0\% after). While part of Apple's privacy-preserving advertising attribution system, this library discloses information about what ads a user clicked on to Apple, from which Apple could (theoretically) build user profiles for its own advertising system~\cite{kollnig_goodbye_2022}.
Tencent Login ranks third (43.2\% before, 44.5\% after), closely followed by Umeng Analytics (39.9\% before, 42.6\% after).
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/alluvial_rawgraphs.pdf}
\vspace{0.2cm}
\footnotesize
\begin{tabular}{l r r r r r r} \toprule
& Median
& Mean
& Q1
& Q3
& None \\
\midrule
Before & 4 & 3.6 & 2 & 5 & 5.52\% \\
After & 4 & 3.6 & 2 & 5 & 5.21\% \\ \bottomrule
\end{tabular}
\caption{Visualisation of third-party tracking across root companies and their jurisdictions, in 2020 \& 2021.
This information was derived both from contacted domains and included libraries.
Most of the data collection from Chinese apps goes to Chinese companies. The only exceptions are Apple and, to a much lesser extent, Alphabet/Google, both based in the US. However, Google does not operate in China and does not usually collect data from mainland China (though it does in Hong Kong).}
\label{fig:alluvial}
\end{figure}
\subsection{Companies behind Tracking Technology}
\label{sec:tracking_companies}
\noindent Since some tracker companies belong to a larger consortium of companies,
we now consider what parent companies ultimately own the tracking technology, i.e. the \textit{root companies} behind tracker companies.
We report these root companies by combining the observations from our static and traffic analysis.
This is visualised in Figure~\ref{fig:alluvial}.
Apple stands out as the most dominant foreign company with a share in iOS data collection in China.
The median number of companies was 4 both before and after the new laws; the mean was 3.6 in both cases.
The maximum number of companies was 10 before and after the new laws.
About 75\% of apps could share data with Tencent (the parent company of WeChat and QQ), closely followed by Apple.
The next most common is Alibaba, which could be contacted by about two-thirds of the analysed apps.
Beyond the top three, a range of further companies
collect data from apps, including Sina (a Chinese tech company and developer of the microblogging service Weibo), Baidu (the company behind the leading search engine), and Aurora Mobile (a company -- also known as Jiguang -- that offers similar services to Google Firebase, including ads, push notifications and analytics).
\section{Conclusions}
\noindent This paper analysed data collection in pairs of 634 Chinese iOS apps. These apps stemmed from before and after the introduction of key Chinese data privacy laws, particularly the PIPL from 2021. Our research aimed to track how apps' data practices have changed over time, particularly since the new laws.
In our analysis, we noticed a discrepancy between current legal norms in China and data practices within apps.
Analysing consent in apps, we noticed that it was often restricted to yes/no selections and that choosing not to agree to data practices commonly exits the app. Thus, many apps do not offer the level of granularity required under Chinese law.
In practice, bundling of consent was common and there was often no distinction between necessary personal information and non-essential personal information, nor a distinction between basic business functions and extended business functions.
Furthermore, it is often hard for users to withdraw their consent after consenting to data collection within an app. The three types of consent in law (general consent, individual consent and written consent) were also not reflected in practice.
However, we observed that many apps have now added consent flows at the first app start: 45.4\% of the 2021 apps asked for some form of consent in our analysis, compared to 21.8\% in 2020.
We further found that Chinese apps that have been available on the Apple App Store since 2020 or longer still integrate a similar number of tracking libraries.
Yet, the number of tracking companies contacted upon the first app start and without consent declined.
What we see from our results is arguably the \textit{first iteration} of data regulation in Chinese apps. It is not perfect and there remains room for improvement. This particularly concerns the quality of consent in many of the studied apps.
As regulators around the world ramp up enforcement and shape norms around apps' data practices, those practices will change and evolve.
This will happen particularly in the Chinese context with active regulatory organisations that are highly interested in mobile apps.
We should expect a second wave of app regulation over the coming years, not necessarily through new legislation, but rather through evolving norms, practices, and consumer expectations around apps.
\section*{Acknowledgment}
\noindent We thank Michael Cerny for his helpful comments.
Konrad Kollnig was funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/R513295/1.
Konrad Kollnig, Jun Zhao and Nigel Shadbolt have been supported by the Oxford Martin School EWADA Programme.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13610",
"language": "en",
"timestamp": "2023-02-28T02:25:19",
"url": "https://arxiv.org/abs/2302.13610",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Named entity recognition (NER) aims at identifying spans of text and categorizing them into a pre-defined set of classes, and is widely adopted in question answering~\cite{molla2006named}, information retrieval~\cite{guo2009named}, and other language understanding applications~\cite{nadeau2007survey}. Recent deep neural architectures have achieved impressive performance in fully supervised NER~\cite{lample2016neural,chiu2016named,peters2017semi}. However, collecting annotated data is costly and inflexible in practical applications. Therefore, few-shot NER, which aims at recognizing entities based on a few labeled instances, has attracted considerable attention from the research community in recent years.
\begin{figure}[t]
\centering
\resizebox{.48\textwidth}{!}{\includegraphics{intro.pdf}}
\caption{The illustration of the baseline model suffering from the class confusion (upper) and entity surface mention memorizing (lower) problems in the few-shot NER task.}
\label{fig:intro}
\vspace{-0.4cm}
\end{figure}
Previous few-shot NER methods~\cite{yang2020simple,hou2020few} generally formulate the task as sequence labeling based on prototypical networks~\cite{snell2017prototypical}, which represent each class by a prototype built from the labeled examples and use the nearest neighbor method for NER. However, these models only learn the surface mapping between entity and class, which is easily disturbed by non-entity tokens (i.e., the ``O'' class). To alleviate this issue, a branch of two-stage methods arises to decompose NER into two separate processes, span detection and entity classification. Despite the above achievements, two problems remain. (1) \textbf{Class Confusion}: as shown in Figure \ref{fig:intro}, since PLMs often face a collapse of the representation space~\cite{hou2020few,gao2021simcse}, the hidden representations of different classes of entities in the same sentence are close, which results in close or even overlapping prototype distributions in the semantic space (e.g., the animal prototype overlapping with the person prototype). Therefore, the model suffers from performance degradation due to confusion between prototypes of different classes.
(2) \textbf{Entity Surface Mention Memorizing}: the traditional prototype-based method directly uses the mean of all the entity span representations to compute the prototype, which causes the NER model to focus excessively on the entity surface mention instead of inferring the entity type from contextual semantic information. As shown in Figure \ref{fig:intro}, the baseline model memorizes the entity surface mention ``Billie jones'' in the support set and ignores the contextual information in the query set, which leads to misclassification.
\begin{figure*}[ht]
\centering
\resizebox{0.88\textwidth}{!}{\includegraphics{main.pdf}}
\vspace{-0.8cm}
\caption{The overall architecture of the PSDC framework. }
\label{fig:method}
\vspace{-0.4cm}
\end{figure*}
To tackle the above issues, we propose PSDC, a novel prototypical semantic decoupling method with joint contrastive learning objectives. Specifically, given the input utterances, we calculate two prototype vectors using different masking strategies to separately decouple class-specific information and contextual semantic information. Our aim is to guide the model to combine the different semantic information contained in the two decoupled prototypes for classification, which effectively alleviates the class confusion and entity surface mention memorizing problems.
To better integrate the two kinds of decoupled information and optimize the collapsed semantic space, inspired by PCL~\cite{li2020prototypical}, we further introduce instance-level and prototype-level contrastive learning objectives to learn an effective representation space.
Our contributions are three-fold: (1) To the best of our knowledge, we are the first to introduce a semantic decoupling method into the few-shot NER task, via well-designed contextual and class-specific prototypes. (2) We propose instance-level and prototype-level contrastive learning objectives to refine the collapsed semantic space and integrate the two kinds of decoupled semantic information. (3) Experiments on two few-shot NER benchmarks demonstrate that our framework achieves superior performance over previous state-of-the-art methods. Extensive analysis further validates the effectiveness and generalization of PSDC.
\vspace{-0.2cm}
\section{Methodology}
\subsection{Overall Architecture}
We adopt the two-stage framework as the backbone of our model. As shown in Figure \ref{fig:method}, in the first stage, we utilize BERT as a span detector to encode the original input utterance and identify the entity spans (as a BIO-label sequence tagger).
In the classification stage, we carefully design two different masking strategies to separately decouple sentences containing class-specific and contextual semantic information from the original input (Section \ref{sec:2.2}). Then, according to the results of the span detector, we obtain different forms of entity representations by passing these inputs through the encoder layer. Unlike traditional prototype-based works~\cite{snell2017prototypical}, we calculate two decoupled prototype vectors based on the different forms of entity representations (Section \ref{sec:proto}) and introduce two-level contrastive learning during training to optimize the collapsed semantic space (Section \ref{sec:cl}). Training and inference are briefly introduced in Section \ref{sec:train}.
\subsection{Semantic Decoupling Masking Strategies}
\label{sec:2.2}
In this section, we introduce two semantic decoupling masking strategies to preprocess the original input utterance before the classification stage:
\textbf{Class-specific Masking}:
Given an input utterance $\boldsymbol x=\{x_1, x_2, \dots, x_L\}$, $\boldsymbol e=\{x_{m}, x_{m+1}, \dots, x_n\}$ is an entity span belonging to class $y_i$. We replace all the entity spans in $\boldsymbol x$ whose labels are not $y_i$ with [MASK] tokens to obtain $y_i$ class-specific input $\boldsymbol x_{\rm cs}^{y_i}$. As shown in Figure \ref{fig:method}, we replace the ``school" entity of the ``Loc" class and the ``Sunday" entity of the ``Day" class with [MASK] tokens to obtain ``Per" class-specific input $\boldsymbol x_{\rm cs}^{\rm Per} = $ Mike\textit{ wants to go to }[MASK] \textit{on} [MASK]. This strategy aims to decouple the utterance by blocking the interference of other class entity spans.
\textbf{All Entities Masking}:
We replace all entity spans with [MASK] tokens to decouple the utterance $\boldsymbol x_{\rm ctx}$ containing contextual semantic information. As the example sentence in Figure \ref{fig:method}, we mask all entities to obtain $\boldsymbol x_{\rm ctx}=$ [MASK]\textit{ wants to go to }[MASK] \textit{on} [MASK].
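For concreteness, the two masking strategies can be sketched in a few lines of Python, assuming the utterance is given as token--label pairs with a label of \texttt{None} for non-entity tokens (the function names below are illustrative, not part of our released code):
\begin{verbatim}
MASK = "[MASK]"

def class_specific_masking(utterance, target_class):
    # Keep spans of target_class; mask entity spans of other classes.
    return [tok if lab is None or lab == target_class else MASK
            for tok, lab in utterance]

def all_entities_masking(utterance):
    # Mask every entity span, keeping only the context tokens.
    return [tok if lab is None else MASK for tok, lab in utterance]

utt = [("Mike", "Per"), ("wants", None), ("to", None), ("go", None),
       ("to", None), ("school", "Loc"), ("on", None), ("Sunday", "Day")]
class_specific_masking(utt, "Per")
# ['Mike', 'wants', 'to', 'go', 'to', '[MASK]', 'on', '[MASK]']
all_entities_masking(utt)
# ['[MASK]', 'wants', 'to', 'go', 'to', '[MASK]', 'on', '[MASK]']
\end{verbatim}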
\subsection{Class-specific and Contextual Semantic Prototype}
\label{sec:proto}
After decoupling utterances with different masking strategies, we elaborate on the procedure to construct class-specific (cs) and contextual semantic (ctx) prototypes.
We obtain the hidden representation of the input sentences by encoding $\boldsymbol x_{\rm cs}$ and $\boldsymbol x_{\rm ctx}$ respectively. For the class-specific prototype, we compute it for each class by averaging all the entity representations which share the same entity type. For the contextual semantic prototype, we calculate the mean values of all the contextual entity representations excluding [MASK] tokens for each class. Formally, given a dataset $\mathcal D$, $\mathcal Y$ is its set of labels, $\mathcal E_{y_i}$ is the set of all entities in $\mathcal D$ with label $y_i$, we have:
\begin{gather}
\boldsymbol h_{\boldsymbol e}^{\rm cs}=\frac1{n-m+1}\sum_{k=m}^{n}\boldsymbol h_{ \boldsymbol x_{\rm cs}[k]}\\
\boldsymbol h^{\rm ctx}_{\boldsymbol e}= \underset{\boldsymbol x_{\rm ctx}[k]\ne{\rm [MASK]}}{\mathbb E}\boldsymbol h_{\boldsymbol x_{\rm ctx}[k]}\\
\boldsymbol c_{y_{i}}=\frac1{|\mathcal E_{y_i}|}\sum_{\boldsymbol e \in \mathcal E_{y_i} }\boldsymbol h_{\boldsymbol e}
\label{eq:3}
\end{gather}
where $[m, n]$ denotes the span of $\boldsymbol e$ and $\boldsymbol x[k]$ represents the $k$-th token of $\boldsymbol x$.
The representation of an entity should be close to its class-specific prototype and context prototype, so we optimize the following losses:
\begin{gather}
\mathcal L_{\rm cls} = \sum_{(\boldsymbol e, y)\in \mathcal D} {\rm CE}(y, {\rm softmax}( -\boldsymbol d))
\label{eq:4}\\
\boldsymbol d=(dis(\boldsymbol h_{\boldsymbol e}, \boldsymbol c_{y_1}),dis(\boldsymbol h_{\boldsymbol e}, \boldsymbol c_{y_2}),\dots,dis(\boldsymbol h_{\boldsymbol e}, \boldsymbol c_{y_n}))
\label{eq:5}
\end{gather}
where the superscript (cs or ctx, omitted in Equations (\ref{eq:4})--(\ref{eq:5})) selects which decoupled representation and prototype are used\footnote{We use the inner product as the distance function $dis(\cdot, \cdot)$.}, and CE is the cross-entropy function. Note that we use Equations (\ref{eq:3}--\ref{eq:5}) for computing both class-specific prototypes and contextual semantic prototypes.
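As a minimal sketch of the prototype construction, the computation can be written as follows, assuming the token-level hidden states have already been produced by the encoder (tensor shapes and helper names are illustrative):
\begin{verbatim}
import torch

def entity_repr_cs(hidden_cs, span):
    # Eq. (1): mean of the span tokens in the class-specific input.
    m, n = span
    return hidden_cs[m:n + 1].mean(dim=0)

def entity_repr_ctx(hidden_ctx, mask_positions):
    # Eq. (2): mean over all non-[MASK] tokens of the context input.
    keep = [i for i in range(hidden_ctx.size(0))
            if i not in mask_positions]
    return hidden_ctx[keep].mean(dim=0)

def prototype(entity_reprs):
    # Eq. (3): mean of all entity representations of one class.
    return torch.stack(entity_reprs).mean(dim=0)
\end{verbatim}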
\subsection{\emph{Instance-level} and \emph{Proto-level} Contrastive Learning}
\label{sec:cl}
Contrastive learning aims to learn an efficient representation space by bringing samples closer to their semantically similar positive samples and pushing them away from other samples.
On this basis, we design instance-level and prototype-level contrastive learning objectives separately to construct a more decentralized and isotropic representation space.
Instance-level contrastive learning pulls together entity representations of the same class in a batch and pushes apart entity representations of different classes; prototype-level contrastive learning pulls an entity representation closer to the prototype of its class and further away from other prototypes:
\begin{gather}
\mathcal L_{\rm ins}=\sum_{(\boldsymbol e,y_i)\in\mathcal D}-\log\frac{\exp\{S(\boldsymbol h_{\boldsymbol e}, \boldsymbol h^+_{\boldsymbol e})/\tau\}}{\sum_j\exp\{S(\boldsymbol h_{\boldsymbol e}, \boldsymbol h_{\boldsymbol e_j})/\tau\}}\\
\mathcal L_{\rm proto}=\sum_{(\boldsymbol e,y_i)\in\mathcal D}-\log\frac{\exp\{S(\boldsymbol h_{\boldsymbol e}, \boldsymbol c_{y_i})/\tau\}}{\sum_j\exp\{S(\boldsymbol h_{\boldsymbol e},\boldsymbol c_{y_j})/\tau\}}
\end{gather}
where
$S(\cdot, \cdot)$ is the score function\footnote{We use the cosine similarity as the similarity score function.}, $\boldsymbol h^+_{\boldsymbol e}$ can be $\boldsymbol h^{\rm cs}_{\boldsymbol e}$ or $\boldsymbol h^{\rm ctx}_{\boldsymbol e}$, and $\tau$ is the temperature hyperparameter.
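A batched sketch of the two objectives, assuming cosine similarity and entity representations stacked into tensors (all names are illustrative):
\begin{verbatim}
import torch
import torch.nn.functional as F

def proto_level_cl(reprs, labels, prototypes, tau=0.1):
    # reprs: (B, D); prototypes: (C, D); labels: (B,) class indices.
    sims = F.cosine_similarity(reprs.unsqueeze(1),
                               prototypes.unsqueeze(0), dim=-1) / tau
    return F.cross_entropy(sims, labels)

def instance_level_cl(reprs, positives, all_reprs, tau=0.1):
    # positives: the decoupled views (h^cs or h^ctx) of each entity.
    pos = F.cosine_similarity(reprs, positives, dim=-1) / tau
    sims = F.cosine_similarity(reprs.unsqueeze(1),
                               all_reprs.unsqueeze(0), dim=-1) / tau
    return (-pos + torch.logsumexp(sims, dim=1)).mean()
\end{verbatim}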
\subsection{Training and Inference}
\label{sec:train}
In the training phase, our optimization targets a linear combination of the losses mentioned in Sections \ref{sec:proto} and \ref{sec:cl} on dataset $\mathcal D$:
\begin{equation}
\mathcal L=\sum_{i}\alpha_i\cdot\mathcal L_i
\end{equation}
where $\mathcal L_i$ is some specific loss mentioned above and $\alpha_i$ is its weight hyperparameter.
In the inference stage, we calculate the distance vector of one sample from the class-specific prototypes and the context prototypes respectively based on Equation \eqref{eq:5}, and combine the two distance vectors to calculate the classification logits:
\begin{equation}
\hat{y}=\underset{y_i}{\arg\max}\{{\rm softmax}(-\beta\cdot\boldsymbol d_{\rm cs}-(1-\beta)\cdot\boldsymbol d_{\rm ctx})\}
\end{equation}
where $\beta$ is the weight hyperparameter.
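The inference rule itself reduces to a weighted sum of the two distance vectors; a sketch, with $\beta$ as the tunable weight:
\begin{verbatim}
import torch

def predict(d_cs, d_ctx, beta=0.5):
    # d_cs, d_ctx: (C,) distance vectors from Eq. (5).
    logits = torch.softmax(-beta * d_cs - (1 - beta) * d_ctx, dim=-1)
    return int(torch.argmax(logits))
\end{verbatim}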
\begin{table*}[!tbp]\huge
\centering
\renewcommand\arraystretch{1.0}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{lrrrrrrrrrr}
\toprule
\multicolumn{1}{l}{\multirow{2}[4]{*}{\textbf{Models}}} & \multicolumn{5}{c}{\textbf{Intra}} & \multicolumn{5}{c}{\textbf{Inter}} \\
\cmidrule(r){2-6} \cmidrule(r){7-11} & \multicolumn{2}{c}{ \textbf{1$\sim$2-shot}} & \multicolumn{2}{c}{\textbf{5$\sim$10-shot}} & \multicolumn{1}{c}{\multirow{2}[2]{*}{\textbf{Avg.}}} & \multicolumn{2}{c}{\textbf{1$\sim$2-shot}} & \multicolumn{2}{c}{\textbf{5$\sim$10-shot}} & \multicolumn{1}{c}{\multirow{2}[2]{*}{\textbf{Avg.}}} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){7-8} \cmidrule(r){9-10} & \multicolumn{1}{c}{5 way} & \multicolumn{1}{c}{10 way} & \multicolumn{1}{c}{5 way} & \multicolumn{1}{c}{10 way} & & \multicolumn{1}{c}{5 way} & \multicolumn{1}{c}{10 way} & \multicolumn{1}{c}{5 way} & \multicolumn{1}{c}{10 way} & \\
\cmidrule(r){1-1} \cmidrule(r){2-11}
ProtoBERT & $20.76_{ \pm 0.84}$ & $15.05_{ \pm 0.44}$ & $42.54_{ \pm 0.94}$ & $35.40_{ \pm 0.13}$ & 28.44 & $38.83_{ \pm 1.49}$ & $32.45_{ \pm 0.79}$ & $58.79_{ \pm 0.44}$ & $52.92_{ \pm 0.37}$ & 45.75\\
NNShot & $25.78_{ \pm 0.91}$ & $18.27_{ \pm 0.41}$ & $36.18_{ \pm 0.79}$ & $27.38_{ \pm 0.53}$ & 26.90 & $47.24_{ \pm 1.00}$ & $38.87_{ \pm 0.21}$ & $55.64_{ \pm 0.63}$ & $49.57_{ \pm 2.73}$ & 47.83\\
StructShot & $30.21_{ \pm 0.90}$ & $21.03_{ \pm 1.13}$ & $38.00_{ \pm 1.29 }$ & $26.42_{ \pm 0.60}$ & 28.92 & $51.88_{ \pm 0.69}$ & $43.34_{ \pm 0.10 }$ & $57.32_{ \pm 0.63}$ & $49.57_{ \pm 3.08}$ & 50.53 \\
ESD\cite{wang2021enhanced} & $36.08_{ \pm 1.60}$ & $30.00_{ \pm 0.70}$ & $52.14_{ \pm 1.50}$ & $42.15_{ \pm 2.60 }$ & 40.09 & $59.29_{ \pm 1.25}$ & $52.16_{ \pm 0.79}$ & $69.06_{ \pm 0.80}$ & $64.00_{ \pm 0.43}$ & 61.13\\
MAML-ProtoNet*\cite{ma2022decomposed} & $46.28_{ \pm 0.85}$ & $40.39_{ \pm 0.46}$ & $59.69_{ \pm 0.57}$ & $53.80_{ \pm 0.25}$ & 50.04 & $64.18_{ \pm 0.35}$ & $58.65_{ \pm 0.43}$ & $70.95_{ \pm 0.47}$ & $66.35_{ \pm 0.05}$ & 65.90\\
\textbf{Ours} & \bm{$49.45_{ \pm 0.71}$} & \bm{$42.97_{ \pm 0.31}$} & \bm{$60.94_{ \pm 0.57}$} & \bm{$55.83_{ \pm 0.22}$} & \textbf{52.30} & \bm{$64.92_{ \pm 0.12}$} & \bm{$60.61_{ \pm 0.35}$} & \bm{$72.23_{ \pm 0.31}$ } & \bm{$68.55_{ \pm 0.09}$} & \textbf{66.58}\\
\bottomrule
\end{tabular}%
}
\caption{F1 scores with standard deviations on Few-NERD for both inter and intra settings. * indicates our reproduction results with the same settings in MAML-ProtoNet.}
\vspace{-0.2cm}
\label{tab:table1}%
\end{table*}%
\begin{table}[!tbp]\huge
\centering
\renewcommand\arraystretch{1.0}
\resizebox{0.47\textwidth}{!}{
\begin{tabular}{lrrrrr}
\toprule
\multicolumn{1}{l}{\multirow{1}[4]{*}{\textbf{Models}}} & \multicolumn{5}{c}{\textbf{1-shot}} \\
\cmidrule{2-6} & \multicolumn{1}{c}{News} & \multicolumn{1}{c}{Wiki} & \multicolumn{1}{c}{Social} & \multicolumn{1}{c}{Mixed} & \multicolumn{1}{c}{\textbf{Avg.}} \\
\midrule
TransferBERT & $4.75_{ \pm 1.42}$ & $0.57_{ \pm 0.32}$ & $2.71_{\pm 0.72}$ & $3.46_{ \pm 0.54}$ & 2.87 \\
SimBERT & $19.22_{ \pm 0.00}$ & $6.91_{ \pm 0.00}$ & $5.18_{ \pm 0.00}$ & $13.99_{\pm 0.00}$ & 11.33 \\
Matching Network & $19.50_{ \pm 0.35}$& $4.73_{ \pm 0.16}$ & $17.23_{ \pm 2.75}$ & $15.06_{ \pm 1.61}$ & 14.13 \\
ProtoBERT & $32.49_{ \pm 2.01}$ & $3.89_{ \pm 0.24}$ & $10.68_{ \pm 1.40}$ & $6.67_{ \pm 0.46}$ & 13.43\\
L-TapNet+CDT & $44.30_{ \pm 3.15}$ & $12.04_{ \pm 0.65}$ & $20.80_{\pm 1.06}$ & $15.17_{ \pm 1.25}$ & 23.08\\
MAML-ProtoNet & $46.09_{ \pm 0.44}$ & $17.54_{ \pm 0.98}$ & $25.14_{\pm 0.24}$ & $34.13_{ \pm 0.92}$ & 30.03 \\
\textbf{Ours} & \bm{$48.12_{ \pm 0.11}$} & \bm{$20.24_{ \pm 0.53}$ } & \bm{$29.13_{\pm 0.47}$} & \bm{ $38.17_{ \pm 0.21}$} & \textbf{33.92} \\
\bottomrule
\end{tabular}%
}
\caption{F1 scores with standard deviations under 1 shot setting on Cross-NER. The best results are in bold.}
\label{tab:table2}%
\end{table}%
\begin{table}[htbp]\Large
\centering
\renewcommand\arraystretch{0.9}
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{l|ccc}
\toprule
\multicolumn{1}{c|}{\multirow{1}[2]{*}{\textbf{Method}}} & \multicolumn{2}{c}{\textbf{Few-NERD}} & \multicolumn{1}{c}{\textbf{CrossNER}} \\
& Intra & Inter & 1-shot \\
\midrule
\textbf{Ours} & 52.30 & 66.58 & 33.92 \\
\midrule
w/o class-specific proto & 51.54 & 66.15 & 32.14 \\
w/o contextual semantic proto & 51.83 & 66.32 &31.98\\
w/o proto-level CL & 51.66 & 65.93 & 31.57 \\
w/o instance-level CL & 52.07 & 66.21 & 32.66\\
\bottomrule
\end{tabular}%
}
\caption{The ablation study results (average F1 score
\%) for Few-NERD and CrossNER. }
\label{tab:table4}%
\vspace{-0.2cm}
\end{table}%
\vspace{-0.2cm}
\section{Experiment}
\label{sec:page}
\subsection{Dataset}
We evaluate our method on two widely used few-shot benchmarks, \textbf{Few-NERD}~\cite{ding2021few} and \textbf{Cross-NER}~\cite{hou2020few}. Specifically, Few-NERD is annotated with a hierarchy of 8 coarse-grained and 66 fine-grained entity types. It contains two subtasks: 1) \textbf{Intra}, where all entities in the train/dev/test sets belong to different coarse-grained types; 2) \textbf{Inter}, where only the fine-grained entity types are mutually disjoint across the splits. \textbf{Cross-NER} contains four domains from \textbf{CoNLL-2003}~\cite{sang2003introduction} (News), \textbf{GUM}~\cite{zeldes2017gum} (Wiki), \textbf{WNUT-2017}~\cite{derczynski2017results} (Social), and \textbf{Ontonotes}~\cite{pradhan2013towards} (Mixed). We randomly select two domains for training, one for validation, and the remaining one for testing.
\vspace{-5pt}
\subsection{Baselines and implementation detail}
For the baselines, we mainly compare two groups of models with our approach: 1) \textbf{One-stage NER paradigms}: ProtoBERT\cite{snell2017prototypical}, StructShot\cite{yang2020simple}, NNShot\cite{yang2020simple}, CONTAINER\cite{das2021container} and L-TapNet+CDT\cite{hou2020few}; 2) \textbf{Two-stage paradigms}: ESD\cite{wang2021enhanced} and MAML-ProtoNet\cite{ma2022decomposed}.
We use BERT-base-uncased from Huggingface Library \cite{wolf2020transformers} and AdamW\cite{loshchilov2017decoupled} with a learning rate of 3e-5. We adopt the standard N-way K-shot setting \cite{ding2021few} and align the task definition with previous work\cite{ma2022decomposed}. We set 1\% linear warmup steps at both the training and finetuning stages for all experiments. The batch size is set to 32 and the max sequence length is set to 128. We keep the dropout rate at 0.3.
\subsection{Results and Analysis}
\textbf{Main Result.}
Table \ref{tab:table1} shows the main results of PSDC on Few-NERD compared to previous state-of-the-art methods\footnote{To make a fair comparison, we use Few-NERD Arxiv v6 version \cite{ding2021few} which revises mistakes of the previous version. We follow the corresponding results reported by MAML-ProtoNet\cite{ma2022decomposed} in their GitHub repository.}.
The results show that PSDC achieves the best performance under all few-shot settings. Furthermore, compared with MAML-ProtoNet\cite{ma2022decomposed}, the overall average results of PSDC improve significantly, by 2.26\% and 0.68\% on Few-NERD Intra and Inter, respectively. It is notable that we achieve this improvement only by optimizing the entity classification stage, which demonstrates that our approach can effectively lead the model to capture generalized semantic information for inference under low-resource conditions.
\begin{table}[htbp]\large
\centering
\small
\renewcommand\arraystretch{1.0}
\resizebox{0.46\textwidth}{!}{
\begin{tabular}{c|l}
\toprule[1.1pt]
\textbf{Baseline} & \makecell[l]{$\textcolor[RGB]{0,119,51}{\rm hirschman}_{\textcolor[RGB]{0,119,51}{[person-scholar]}}$ was named \\ poet-in-residence with friends of the\\ \textcolor{red}{san francisco public} $\textcolor{red}{\rm library}_{\textcolor{red}{[person-scholar]}}$ in 2009 } \\
\midrule
\textbf{PSDC} &\makecell[l]{$\textcolor[RGB]{0,119,51} {\rm hirschman}_{\textcolor[RGB]{0,119,51}{[person-scholar]}}$ was named \\ poet-in-residence with friends of the\\ \textcolor[RGB]{0,119,51}{san francisco public} $\textcolor[RGB]{0,119,51}{\rm library}_{\textcolor[RGB]{0,119,51}{[building-library]}}$ in 2009 } \\
\midrule[1.1pt]
\textbf{Baseline} & \makecell[l]{She was runner-up to \textcolor{red}{tirunesh} $\textcolor{red}{\rm dibaba}_{\textcolor{red}{[person-other]}}$ \\ at \textcolor[RGB]{0,119,51}{the great manchester } $\textcolor[RGB]{0,119,51}{\rm run}_{\textcolor[RGB]{0,119,51}{[event-sportsevent]}}$ . } \\
\midrule
\textbf{PSDC} & \makecell[l]{She was runner-up to \textcolor[RGB]{0,119,51}{tirunesh} $\textcolor[RGB]{0,119,51}{\rm dibaba}_{\textcolor[RGB]{0,119,51}{[person-other]}}$ \\ at \textcolor[RGB]{0,119,51}{the great manchester } $\textcolor[RGB]{0,119,51}{\rm run}_{\textcolor[RGB]{0,119,51}{[event-sportsevent]}}$ . } \\
\bottomrule[1.1pt]
\end{tabular}%
}
\caption{Cases from Few-NERD. Wrong and correct labels are marked in red and green, respectively.}
\label{tab:case}%
\vspace{-0.6cm}
\end{table}%
\textbf{Cross Domain Scenarios.}
To further explore the transfer ability of our framework, we conduct experiments on Cross-NER (see Table \ref{tab:table2}). Compared with Few-NERD, Cross-NER adopts 4 different NER datasets, which implies a larger distribution gap between source and target domains. Our method consistently outperforms the state-of-the-art baseline, by 3.89\% in the overall average result.
This suggests that PSDC is highly competent in solving confusing cases in complicated situations, highlighting the necessity of integrating the different kinds of semantic information contained in each case.
\textbf{Ablation Studies.}
We conduct ablation experiments on Few-NERD and CrossNER to investigate the contribution of each component of the proposed approach. As shown in Table \ref{tab:table4}, removing any part of our approach results in varying degrees of performance decline. This is consistent with our conjecture, since class-specific prototypes and contextual semantic prototypes provide relatively orthogonal semantic information from two perspectives; removing either makes the semantic space more chaotic and thus hurts classification. The experiment also illustrates the effect of proto-level and instance-level contrastive learning, which play an important role in integrating the two kinds of semantic information and enhancing the collapsed semantic space.
\textbf{Case Studies.}
PSDC successfully alleviates the problems of class confusion and entity surface mention memorizing. To verify this, we randomly sample 100 instances from the outputs and select two representative cases, shown in Table \ref{tab:case}. The baseline model misclassifies entity spans in both cases because it confuses two entity types in the same sentence or ignores contextual information. In contrast, our method resolves these confusing cases, which demonstrates its effectiveness.
\section{Conclusion}
\label{sec:foot}
In this paper, we propose a prototypical semantic decoupling method via joint contrastive learning (PSDC) for few-shot NER. Specifically, we decouple class-specific prototypes and contextual semantic prototypes to lead the model to focus on two different kinds of semantic information for inference. Besides, we further introduce joint contrastive learning objectives to better integrate the two kinds of decoupled information and mitigate the collapse of the projected space. Extensive experiments and analyses demonstrate the effectiveness of our method.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.13763",
"language": "en",
"timestamp": "2023-02-28T02:29:35",
"url": "https://arxiv.org/abs/2302.13763",
"yymm": "2302"
} | \section{Introduction}
With the ever-growing Internet and rapid development of web browser techniques, the web browser has become an essential tool in people's daily life. When a user visits websites, for online shopping, social networking, online banking, etc., the destination will inevitably be leaked along a number of routers. These routers may accidentally observe or intentionally collect users' behaviors, and further infer sensitive information that violates individuals' privacy with increasing severity~\cite{wang2013improved}. To prevent such privacy leakage, the Tor browser was developed based on The Onion Routing technology to enable anonymous communication; it is designed to conceal the identity of users by encryption in the application layer of the communication protocol stack~\cite{dingledine2004tor}. However, an attacker can still attempt to compromise a user's information by observing patterns in the sequence of packets, e.g., by means of website fingerprinting (WF) attacks~\cite{cai2014systematic,chen2010side,murdoch2005low,sirinam2018deep}.
A WF attack aims to reveal a user's browsing behavior by analyzing the exposed traffic patterns, even if they are encrypted~\cite{cai2012touching}. For example, handcrafted features, such as the order, direction, size, and length of each connection, are extracted to represent Tor traffic and fed to classification algorithms, such as support vector machines (SVM)~\cite{panchenko2016website}, random forests~\cite{wang2014effective}, and k-nearest neighbors (k-NN)~\cite{hayes2016k}, to launch attacks. However, such attacking methods require massive pre-processing based on expert knowledge and lose efficiency under dummy-based defensive algorithms. For example, adaptive padding~\cite{perry2015padding} and its follow-up, WTF-PAD~\cite{juarez2016toward}, have been shown to be effective against attacks based on handcrafted features with reasonable overheads, and have been adopted in Tor. Furthermore, Wang and Goldberg proposed Walkie-Talkie~\cite{203876}, a low-overhead defensive algorithm, to beat such attack algorithms. On the attack side, in contrast, deep learning-based algorithms, which ease implementation by automatically extracting traffic features, have been proposed and become increasingly popular~\cite{sirinam2018deep,Rimmer_2018,sirinam2019triplet}.
In~\cite{sirinam2018deep}, a deep fingerprinting attack using convolutional neural networks was proposed, and significant performance gains were achieved in both closed- and open-world scenarios. In detail, nearly 99\% accuracy was achieved by using only the direction of the packets, following Wang \emph{et al.}'s methodology~\cite{wang2014effective}. To survive such deep learning attacks, an effective but expensive mechanism is to add random perturbations to the traffic packets. Recently, an explainable artificial intelligence (XAI) based algorithm was proposed in~\cite{9556572}, which analyzes the leakage points to design a high-efficiency traffic perturbation method.
All of these methods focus on the important link of network traffic. When collecting and processing Tor network traffic, the adversary can either directly capture TCP/IP packet instances, in which case each traffic instance carries two features, packet size and packet direction (e.g., 544, -1088, 1088, where the numbers denote bytes and the sign denotes the direction of the data flow), or parse the TCP/IP traffic to obtain Tor cells of a consistent size (i.e., 512 bytes per cell), which leaves only the packet-direction feature (e.g., 1, -1, -1, 1, 1). However, even though Tor cells conceal the packet-size feature after parsing, one or more Tor cells are still converted into TCP/IP instances with packet sizes in the actual network transmission~\cite{wang2013improved}. Furthermore, in the face of the most common defense of adding random noise packets, WF attacks lack an effective countermeasure. For example, as shown in Figure~\ref{wf_noise}, we evaluate the accuracy of the WF attack when adding random noise packets, and the results show that when the noise probability is greater than 20\%, the attack performance is greatly reduced. Therefore, an intriguing question arises: what if the attacker can utilize the information on packet size and design an advanced attacking method to identify the added packets?
Motivated by this, in this paper, we reconsider the attack and defense methods in WF. In detail, we take the traffic packet size as well as the direction into account, and first design a filter-based attacking method to differentiate real and perturbed traffic packets, showing its effectiveness using real-life datasets collected via three mainstream web browsers. To defend against such an attack, we further propose a list-based defensive mechanism, which can mitigate the proposed attack while achieving high efficiency in terms of transmission overhead. Specifically, our main contributions are listed as follows.
\begin{itemize}
\item We first illustrate a detailed example of how random-traffic-packet-based defensive mechanisms can combat the WF attack when the factor of packet size is considered.
\item To fully utilize the benefits of packet size, we then, from the viewpoint of the attacker, propose a filter-assisted attack.
\item To complete our study, a list-based defensive mechanism is designed to defend against such attacks. In addition, to further relax the transmission overhead, a list-based splitting defensive algorithm is proposed, which provides a configurable tradeoff between the defense rate and the injected overhead.
\item We show details on collecting real traffic packets via three mainstream web browsers to construct the experimental datasets.\footnote{Related codes can be found in https://github.com/guduin/wf} Extensive experiments are conducted to show the effectiveness of the proposed attacking as well as the defensive mechanisms.
\end{itemize}
\begin{figure}[t]
\centerline{\includegraphics[width=0.35\textwidth, height=3cm]{wf_noise.pdf}}
\caption{WF Attack Accuracy under Random Noise Packets}
\label{wf_noise}
\end{figure}
\section{Background}
The website fingerprinting attack, as a de-anonymization technique, has been proven effective in identifying websites. To defend against such attacks, a number of defense algorithms have been proposed, which mainly attempt to cover unique or routine traffic distributions by adding dummy packets. In the process of attack and defense, the direction, size, length, and timing of traffic can be used as extracted features. Many studies use both packet direction and size, while others use packet direction only.
\subsection{Methods with Packet Direction and Size}
Cui \emph{et al.}~\cite{9355590} evaluated the WF attack method and extended it to multiple web page scenarios. They used the TCP-layer packets directly (including the sizes and directions of network packets). Applying the same features, Yin \emph{et al.}~\cite{9514394} studied the problem of overlapping noise in network traffic extraction and proposed an automated multi-tab WF attack method. When a user opens multiple pages consecutively in a short time, the overlapping of subsequent traffic degrades the performance of traditional WF attacks, and the proposed method can filter out the subsequent overlapping traffic.
In addition, Zuo \emph{et al.}~\cite{8067534} proposed a novel method for WF attack, which combines website fingerprinting and DNA-based fingerprints in biology to design attack models with the help of the profile hidden Markov model (PHMM). They used the size and direction of packets and symbolized them with two indicators as features. Shusterman \emph{et al.} in~\cite{9072556} proposed a novel traffic acquisition method by monitoring the cache occupancy channel instead of the traditional traffic patterns. They monitored the cache size by sending JavaScript codes to the target's router, achieving more resistance to network-based fingerprinting attacks.
In the study of defense methods, the first work based on this strategy, BuFLO, was proposed in~\cite{6234422}, but it failed to conceal coarse features, such as time duration, size, and total length, except at intolerable overheads. Furthermore, Gulmezoglu \emph{et al.}~\cite{9556572} proposed a defensive approach based on XAI, which analyzes the leakage points in ML classification tasks and then perturbs these points to obscure the traffic features. Similar to Shusterman \emph{et al.}~\cite{9072556}, the authors in~\cite{9556572} used data collected via the cache side channel. Eric \emph{et al.}~\cite{8416437} proposed a defense method based on clustering, which merges a group of similar websites into an indistinguishable maximum set, and achieved a relatively high defense rate at an expensive cost in transmission overhead.
From these studies, we can see that although network traffic has many characteristics, most studies choose packet size and packet direction, or the cache side channel, as the data source.
\subsection{Methods with Packet Direction Only}
Extracting more traffic features means that more information must be inferred in advance, and processing more information also occupies more resources. If multiple features are not properly integrated, or non-typical features are incorporated, the performance of WF will be degraded. Therefore, many researchers have gone one step further and dropped packet size, keeping packet direction only. Using this minimal feature set (i.e., packet direction) has become a popular topic~\cite{wang2014effective, sirinam2018deep, Rimmer_2018, he2018deep}.
Since Wang \emph{et al.}~\cite{wang2014effective} proved that using packet directions alone can also achieve top accuracy, more and more research tends to use packet direction solely. For example, Sirinam \emph{et al.}~\cite{sirinam2018deep} proposed a deep fingerprinting (DF) attack technology based on a convolutional neural network (CNN). Dazzling results, with more than 98\% accuracy, were achieved on Tor traffic, and obvious performance gains were also obtained against the WTF-PAD~\cite{juarez2016toward} and Walkie-Talkie~\cite{203876} defense measures. Rimmer \emph{et al.}~\cite{Rimmer_2018} followed the direction-only approach, collected a dataset comprising more than three million network traces, and found that the performance achieved was comparable to previous studies. Meanwhile, the implicit features automatically learned through their approach are more resilient to the dynamic changes of web content over time. After that, He \emph{et al.}~\cite{he2018deep} used Rimmer's datasets and DL, achieving more than 99\% accuracy. Their experimental results show that deep learning is an efficient and robust website fingerprinting attack technique.
The above research shows that using packet direction alone can achieve good attacking results. However, the limited information degrades the attack's awareness of the traffic. In addition, the aforementioned algorithms rely heavily on clean datasets and thus suffer from directly added random noise (such as the RPD algorithm in Figure~\ref{system_model}). Therefore, in this work, we prefer to use more information to assist the attack in order to achieve stronger robustness and anti-interference capability.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth,height=4cm]{system_model.pdf}
\caption{System Model}\label{system_model}
\end{figure}
\section{Overview}
\subsection{Adversary Model}
Although the aim of Tor is to prevent adversaries from distinguishing which site a client is visiting, WF attacks, typically based on deep learning techniques, can undermine this protection by analyzing traffic patterns. To deploy such an attack, adversaries first need to capture the traffic patterns from the current website to the destination, and extract unique features for each transition of each website. Several features, such as transmission time and size with directions~\cite{panchenko2011website,wang2014effective,sirinam2018deep}, packet length and its frequencies~\cite{herrmann2009website}, distance-based scores~\cite{wang2013improved}, and the number of bursts~\cite{wang2014effective}, are then used to train a supervised classifier. Finally, the adversary can collect new transitions via Tor or other mainstream web browsers and launch WF attacks.
In this work, we follow the assumption in~\cite{sirinam2018deep} that adversaries only have access to the link between the user and the connected network within the used web browser and can capture the traffic packets on that link. As mentioned in~\cite{sirinam2018deep}, positions from which a potential adversary can launch WF attacks include: eavesdroppers on the client's local network, local system administrators, internet service providers (ISP), autonomous systems (AS) between the client and the entry network, and the operators of the entry network. In Figure~\ref{system_model}, we illustrate the adversary scenario: the user surfs the Internet by clicking links via Tor or other anonymous networks, and the adversary intends to extract the destination of each click by analyzing the traffic pattern between the client and the anonymous network. To this end, an adversary first accesses a list of potential websites through Tor or other web browsers and collects traffic traces between the local user and the destinations. Then, the traffic traces are labeled with their corresponding websites and classified by their unique features (e.g., packet length, traffic direction, and traffic size) using supervised learning methods (e.g., random forest, k-NN, and SVM) or DL-based methods (e.g., SDAE, LSTM, and CNN).
\subsection{A Toy Example of Random Packets Defense}
In this subsection, we first illustrate, from the defender's viewpoint, the basic process of random packets defense (RPD). As shown in Figure~\ref{system_model}, two features (packet size and direction) are prominent during the collection. To degrade the attack performance, a direct way is to disturb the distributions of these features by adding random packets. The execution logic of the random packet generator is as follows:
\begin{equation}
P_r = (-1)^{RANDI(1, 2)} \times RANDI(S_m, S_M) {\rm ,}
\end{equation}
\noindent that is, output a packet $P_r$ with a random direction and size, where the size is a random integer between the minimum size $S_m$ and the maximum size $S_M$. Then, the defender can run Algorithm~\ref{random} to start the defense. In detail, by inserting random traffic packets into the original trace, the internal connections between features are interrupted. Each perturbation considers two factors: the injection position and the injected traffic size with direction. For example, for each traffic bar, the defender sets a probability $P_t$ to determine whether a perturbation should be inserted, and a traffic bar of random size is inserted when a random number drawn from $[0, 1]$ is smaller than $P_t$.
In practice, RPD is easy to implement and deploy. However, from the statistical characteristics of packets, the inserted random packets will be significantly different from the original traffic. If the attacker takes advantage of this vulnerability, RPD may be disabled. Therefore, a high effective and comprehensive defensive mechanism should be proposed.
\begin{algorithm}[t]
\small
\caption{Random Packets Defense (RPD) Algorithm}
\label{random}
\LinesNumbered
\KwIn{Original traffic vector $V_{ORG}$}
\KwOut{Random Packets Defense traffic vector $V_{RPD}$}
\ForEach{$Packet$ in $V_{ORG}$}{
Output $Packet$ to $V_{RPD}$\\
\If{random number \textless $P_t$}{
Generate a $Random Packet$\\
Output $Random Packet$ to $V_{RPD}$\\
}
}
\end{algorithm}
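For reference, Algorithm~\ref{random} can be rendered in a few lines of Python; the size bounds and probability below are placeholders for illustration, not the values used in our experiments:
\begin{verbatim}
import random

def random_packet(s_min=64, s_max=1514):
    # Random direction (sign) and random size in [s_min, s_max].
    return random.choice((-1, 1)) * random.randint(s_min, s_max)

def rpd(trace, p_t=0.2):
    # Insert a random packet after each real one with probability p_t.
    out = []
    for pkt in trace:
        out.append(pkt)
        if random.random() < p_t:
            out.append(random_packet())
    return out
\end{verbatim}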
\begin{figure*}[t]
\centering
\begin{subfigure}{0.25\linewidth}
\centering
\includegraphics[width=0.9\linewidth, height=3cm]{size_edge.pdf}
\caption{Microsoft Edge}
\end{subfigure}
\begin{subfigure}{0.25\linewidth}
\centering
\includegraphics[width=0.9\linewidth,, height=3cm]{size_chrome.pdf}
\caption{Google Chrome}
\end{subfigure}
\begin{subfigure}{0.25\linewidth}
\centering
\includegraphics[width=0.9\linewidth,, height=3cm]{size_firefox.pdf}
\caption{Mozilla Firefox}
\end{subfigure}
\caption{Packet Size Distribution of Different Browsers}
\label{size}
\end{figure*}
\section{Proposed Attack and Defense Method}
In this section, we first analyze the shortcomings of RPD and then propose an enhanced attack method to break it. Finally, a novel defense method is proposed to deal with the enhanced attack.
\subsection{Filter Assisted Attack}
Destroying traffic characteristics is the main principle of anti-WF-attack defenses. As discussed in the previous section, adding noise packets randomly may seem a straightforward and efficient method. In order to observe the packet size distribution of different browsers, we captured 8000 traffic traces from 100 websites in three mainstream web browsers, i.e., Microsoft Edge, Google Chrome, and Mozilla Firefox. Afterward, we calculate the relationship between each packet size and its occurrence probability, as illustrated in Figure~\ref{size}. The abscissa is the packet size (outbound packets are defined as positive, and inbound packets as negative), and the ordinate is the probability of the corresponding packet size. We can clearly see that the distribution of packet sizes is extremely unbalanced. Therefore, if these statistical characteristics are not taken into account by the defense, newly inserted interference packets can easily be filtered out.
\begin{algorithm}[t]
\small
\caption{Filter Assisted Attack (FAA) Algorithm}
\label{filter}
\LinesNumbered
\KwIn{Lots of original traffic vector $V_{LORG}$}
\KwIn{Random Packet Defense traffic vector $V_{RPD}$}
\KwOut{Filter Assisted Attack traffic vector $V_{FAA}$}
$Statistics$ of $V_{LORG}$ with $S_p$\\
$Sort$ of $Statistics$ in descending by $P_p$\\
$L$ gets the first $X$ packets of $Sort$
\ForEach{$Packet$ in $V_{RPD}$}{
\If{$Packet$ Size not in $L$}{
continue
}
Output $Packet$ to $V_{FAA}$\\
}
\end{algorithm}
\begin{figure}[t]
\centerline{\includegraphics[width=0.4\textwidth]{traffic_filter.pdf}}
\caption{Random Packets Defense (RPD) through Filter Assisted Attack (FAA)}
\label{traffic_filter}
\end{figure}
Based on the above observation, we design an enhanced attack method, called the filter-assisted attack (FAA), to break RPD. The algorithm is shown in Algorithm~\ref{filter}, and the main idea is as follows:
\begin{itemize}
\item \textbf{Step 1.} (Lines 1-3) Analyze a large amount of network traffic $V_{LORG}$ and perform a statistical analysis of all packet sizes. Obtain the packet sizes with the highest frequencies, and take the first $X$ of them as the filter list $L$.
\item \textbf{Step 2.} (Lines 4-9) Filter the random-packets-defense traffic $V_{RPD}$: packets whose sizes are not in the filter list are discarded. Then, output the filtered traffic $V_{FAA}$.
\end{itemize}
Its working model is shown in Figure~\ref{traffic_filter}. Due to the difference in packet sizes, the randomly added noise is filtered out by the designed filter.
Because random noise packets are uniformly distributed in size, much of the interference falls outside the filter list, so these disturbances are filtered out and FAA gains robustness to random noise. Although the filter cannot completely remove the injected random noise, the overall noise level is reduced.
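A compact sketch of Algorithm~\ref{filter} in Python (the list size X is a tunable parameter; the value below is only an example):
\begin{verbatim}
from collections import Counter

def build_filter_list(clean_traces, x=50):
    # Top-X most frequent signed packet sizes over many clean traces.
    counts = Counter(p for trace in clean_traces for p in trace)
    return {size for size, _ in counts.most_common(x)}

def faa(defended_trace, filter_list):
    # Keep only packets whose signed size appears in the list.
    return [p for p in defended_trace if p in filter_list]
\end{verbatim}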
\subsection{List Assisted Defense}
\begin{algorithm}[t]
\small
\caption{List Assisted Defense (LAD) Algorithm}
\label{list}
\LinesNumbered
\KwIn{Lots of original traffic vector $V_{LORG}$}
\KwIn{Original traffic vector $V_{ORG}$}
\KwOut{List Assisted Defense traffic vector $V_{LAD}$}
$Statistics$ of $V_{LORG}$ with $S_p$\\
$Sort$ of $Statistics$ in descending by $P_p$\\
$L$ gets the first $X$ packets of $Sort$\\
$P_L$ gets $P_p$ of $L$
\If{base method = insert}{
\ForEach{$Packet$ in $V_{ORG}$}{
Output $Packet$ to $V_{LAD}$\\
\If{random number \textless $P_t$}{
Select $List Packet$ from $L$ according to $P_L$\\
Output $List Packet$ to $V_{LAD}$\\
}
}
}
\If{base method = split}{
\ForEach{$Packet$ in $V_{ORG}$}{
\If{random number \textless $P_t$}{
Select $List Packet$ from $L$ according to $P_L$\\
\If{$Packet$ - $List Packet$ $\geq$ $S_m$}{
Output $List Packet$ to $V_{LAD}$\\
Output $Packet$ - $List Packet$ to $V_{LAD}$\\
}
}
}
}
\end{algorithm}
\begin{figure}[t]
\centerline{\includegraphics[width=0.35\textwidth]{traffic_list.pdf}}
\caption{List Assisted Defense (LAD) through Filter Assisted Attack (FAA)}
\label{traffic_list}
\end{figure}
The packet size of RPD deviates from the distribution of normal packet sizes, so randomly injected patterns are easy to filter out directly. Therefore, against such attacks, we further propose the list-assisted defense (LAD). Meanwhile, as the network overhead should also be carefully considered, we provide two methods, one based on probabilistic packet insertion and another based on probabilistic packet splitting, with the latter intended to alleviate the transmission overhead.
Its algorithm is as shown in Algorithm~\ref{list}. The main idea of this defense method is as follows:
\begin{itemize}
\item \textbf{Step 1.} (Lines 1-4) This part is similar to FAA, but in addition the defender obtains the list of probabilities $P_L$ associated with $L$. $P_L$ guides the defender to add noise packets according to the original probability distribution of the traffic.
\item \textbf{Step 2.} (Lines 5-13) If the defender chooses the packet insertion method, a list packet $P_l$ is appended with a determined probability after the output of an original packet. The list packet is drawn from $L$ according to $P_L$ to ensure that its size and probability are consistent with the original traffic. The operation proceeds as the following equation:
\begin{equation}
P_l = {\rm Array}[L \times P_L]_{RANDI(0, N)} {\rm .}
\end{equation}
\item \textbf{Step 3.} (Lines 14-24) If the defender chooses the packet splitting method, a large traffic packet is split into two smaller packets whose sizes are selected from the list. The newly generated first packet ($P_f$) and second packet ($P_s$) are processed as the following equation:
\begin{equation}
\begin{cases}
P_f = P_l,\ P_s = P - P_l,& {\rm if\ } P - P_l \geq S_m {\rm ;}\\
P_f = P,\ P_s = {\rm None},& {\rm if\ } P - P_l < S_m {\rm .}
\end{cases}
\end{equation}
\end{itemize}
When interfering, the defender inserts packets from $L$ according to the probability distribution $P_L$. In this way, the randomness of the interference is preserved while avoiding removal due to differing statistical characteristics. Thus, when the traffic passes through FAA again, only limited noisy packets are filtered out. Its working model is shown in Figure~\ref{traffic_list}.
Inserting packets is easy to implement and does not affect normal communication, but because the inserted packets are usually useless noise, the overhead is large. To reduce this overhead, packet splitting has the advantage that all packets participate in the communication. As described in~\cite{socolofsky1991tcp}, each packet consists of a header and a payload, and the network itself splits the payload. If the defender controls this behavior, a variety of padding methods can be obtained, which greatly increases the uncertainty and complexity of network traffic. It is therefore feasible to use this method against WF attacks at a very limited extra cost in packet headers.
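Both LAD variants can be sketched as follows, reusing the list of sizes $L$ and its probabilities $P_L$ from the same statistics as the FAA filter; the splitting condition on direction and the minimum fragment size $S_m$ follow Algorithm~\ref{list}, while the default parameter values are placeholders:
\begin{verbatim}
import random

def lad_insert(trace, sizes, probs, p_t=0.2):
    # Insert a list-drawn packet after each packet with prob. p_t.
    out = []
    for pkt in trace:
        out.append(pkt)
        if random.random() < p_t:
            out.append(random.choices(sizes, weights=probs)[0])
    return out

def lad_split(trace, sizes, probs, p_t=0.2, s_min=64):
    # Split a packet into a list-drawn part plus a remainder.
    out = []
    for pkt in trace:
        if random.random() < p_t:
            part = random.choices(sizes, weights=probs)[0]
            # Split only if both fragments keep the direction of pkt
            # and the remainder is at least s_min bytes.
            if pkt * part > 0 and abs(pkt) - abs(part) >= s_min:
                out.extend([part, pkt - part])
                continue
        out.append(pkt)
    return out
\end{verbatim}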
\section{Data Collection}
In order to conduct model training and performance evaluation on WF attacks and defenses, it is essential to capture the traffic data of each website.
In this experiment, we use \emph{windump} for packet capture, and Python with PyAutoGUI for automation. The flow chart is shown in Figure~\ref{data_collection}, and the main steps are described as follows.
\begin{figure}[t]
\centerline{\includegraphics[width=0.45\textwidth,height=4cm]{data_collection.pdf}}
\caption{Data Collection Flow Chart}
\label{data_collection}
\end{figure}
\begin{itemize}
\item \textbf{Step 1:} list URLs. In order to traverse the websites and capture traffic data, we first enumerate a list of website URLs. A common way to obtain such a list is from the traffic rankings of major websites. From a website traffic ranking\footnote{\href{https://top.chinaz.com/all/}{https://top.chinaz.com/all/}}, we collect the URLs of the most popular sites as the target websites for subsequent capture.
\item \textbf{Step 2:} access URLs. Once the URLs of the target websites are obtained, we can use Python to read the URL in turn and use PyAutoGUI to automatically type the URL into the browser for access. In addition, to ensure traffic consistency, we will disable the browser's cache for each instance.
\item \textbf{Step 3:} deploy the packet capture tool. When a website is opening, we use \emph{windump} to capture the traffic packets. In order to filter out the interference of packets generated by other computer programs, we also need to set up a filter, achieved by using URL as a parameter.
\item \textbf{Step 4:} stop capturing and saving files. After the site is loaded, we will stop this capture and save the file. To determine whether the website has finished loading, PyAutoGUI is used to monitor the pixels of the loading status icon on the screen. We also set a timeout for sites that have not finished loading. After capture, the observed packets will be stored as \emph{.pcap} files.
\item \textbf{Step 5:} data processing. In order to carry out DL training, we need to process the \emph{.pcap} files. First, we use the \emph{dpkt} module in Python to parse each \emph{.pcap} file iteratively and extract the size and direction of every packet. For an emitted packet, we define its value as +size; for a received packet, as -size. We end up with one vector per website visit. Considering that the vectors have different lengths, we trim them and unify them to a fixed length for the input of the DL model (a sketch of this step is given after the list). Finally, we store the processed data in \emph{.csv} format.
\end{itemize}
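As a sketch of the parsing step, assuming the monitored host's IP address is known (padding with zeros to a fixed length is one possible choice for unifying vector lengths):
\begin{verbatim}
import socket
import dpkt

def pcap_to_vector(path, client_ip, length=5000):
    vec = []
    with open(path, "rb") as f:
        for _, buf in dpkt.pcap.Reader(f):
            eth = dpkt.ethernet.Ethernet(buf)
            if not isinstance(eth.data, dpkt.ip.IP):
                continue
            ip = eth.data
            # Emitted packets are +size, received packets are -size.
            sign = 1 if socket.inet_ntoa(ip.src) == client_ip else -1
            vec.append(sign * ip.len)
    return (vec + [0] * length)[:length]
\end{verbatim}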
So far, the data collection and processing work has been completed. In this experiment, we collected 100 URLs and repeated each visit 100 times, resulting in 100$\times$100 \emph{.pcap} files. After data pre-processing, we obtain a dataset of 100$\times$100 data vectors.
For the open world, we collected an additional 1000 URLs and visited each URL once, resulting in a 1000$\times$1 dataset. In addition, in order to test the adaptability of WF to different browsers, we applied three mainstream web browsers, Microsoft Edge, Google Chrome, and Mozilla Firefox, to the above collection.
\section{Evaluation}
\subsection{Experimental Setup}
Our experiment is based on Windows 10 platform, the Microsoft Edge browser version is 105.0.1343.50, the Google Chrome browser version is 104.0.5112.102 and the Mozilla Firefox browser version is 105.0.1. We run our scripts and code in Anaconda 4.5.11, Jupyter Notebook with Python 3.7.0, and use a DL framework with TensorFlow as the back end and Keras as the front end.
In this experiment, we follow the work of Sirinam~\cite{sirinam2018deep}, Wang~\cite{wang2022snwf}, Rimmer~\cite{Rimmer_2018} \emph{et al.}, and take the CNN model as the experimental basis. Different from most two-dimensional image recognition tasks~\cite{jordan2015machine}, the input of this experiment is a one-dimensional vector. Therefore, Conv1D and MaxPooling1D are used instead of Conv2D and MaxPooling2D, and the size of the convolution kernel is changed from the classical (2, 2) to (5, 1). In the four convolution--convolution--pooling units, we gradually increase the number of convolution kernels to capture more features, using 32, 64, 128, and 256 kernels, respectively. After each layer, we add the ReLU activation function to increase the nonlinearity of the model, which prevents it from degenerating into a linear structure. In the last layer, the SoftMax function maps the network to 100 classification outputs. After the model is established, we compile it with the categorical cross-entropy loss and the Adam optimizer~\cite{sirinam2018deep}\cite{wang2022snwf}. The collected data are divided into training and test sets at a ratio of 8:2; that is, 80 visits of each of the 100 websites are used for training, and the remaining 20 visits for testing.
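A minimal Keras sketch of this architecture is given below; the input length and pooling size are assumptions for illustration, while the kernel counts, kernel size, activations, loss, and optimizer follow the description above:
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_len=5000, n_classes=100):
    model = keras.Sequential([keras.Input(shape=(input_len, 1))])
    for filters in (32, 64, 128, 256):
        # One convolution-convolution-pooling unit.
        model.add(layers.Conv1D(filters, 5, padding="same",
                                activation="relu"))
        model.add(layers.Conv1D(filters, 5, padding="same",
                                activation="relu"))
        model.add(layers.MaxPooling1D(pool_size=4))
    model.add(layers.Flatten())
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
\end{verbatim}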
\subsection{Metrics}
\subsubsection{Defensive Efficiency}
In order to comprehensively evaluate the defense effect, we define the defense efficiency (DE) as the reciprocal of the average classification accuracy, taken over noise-insertion probabilities ranging from 0\% to 100\%:
${\rm DE} = \frac{N}{\sum_{i=1}^{N}{\rm Accuracy}_i}{\rm ,}$
\noindent where $N$ is the number of test points between 0\% and 100\%. A larger DE indicates a stronger defense.
\subsubsection{Overhead}
The network cost of defense is a factor that cannot be ignored by any WF defensive method. Therefore, we define ${\rm Overhead} = \frac{\sum\ {\rm Extra\ Packets\ Size}}{\sum\ {\rm Original\ Packets\ Size}}$ to evaluate the overhead of defense.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{random_filter_edge.pdf}
\caption{Microsoft Edge}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{random_filter_chrome.pdf}
\caption{Google Chrome}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{random_filter_firefox.pdf}
\caption{Mozilla Firefox}
\end{subfigure}
\caption{Filter Assisted Attack (FAA) on Random Packets Defense (RPD) Result}
\label{random_filter}
\end{figure*}
\renewcommand\arraystretch{0.7}
\begin{table}[t]
\small
\caption{Defensive Efficiency of Random Packets Defense (RPD) and Filter Assisted Attack (FAA)}
\label{random_de}
\begin{center}
\resizebox{0.95\linewidth}{!}{
\begin{tabular}{@{}c|c|c|c|c@{}}
\toprule
\textbf{\ \ Browser} & \textbf{Method} & \textbf{Small} & \textbf{Medium} & \textbf{Large\ \ } \\ \midrule
\multirow{2}{*}{ Microsoft Edge} & RPD Under DF & 5.56 & 5.88 & 5.88 \\ \cmidrule(l){2-5}
& RPD Under FAA & 2.33(58\%↓) & 2.13(64\%↓) & 1.82(69\%↓)\ \ \\ \midrule
\multirow{2}{*}{ Google Chrome} & RPD Under DF & 5.88 & 6.25 & 6.25 \\ \cmidrule(l){2-5}
& RPD Under FAA & 2.56(56\%↓) & 2.17(65\%↓) & 1.89(70\%↓)\ \ \\ \midrule
\multirow{2}{*}{ Mozilla Firefox}& RPD Under DF & 4.76 & 5.26 & 5.88 \\ \cmidrule(l){2-5}
& RPD Under FAA & 2.04(57\%↓) & 2.17(59\%↓) & 2.00(66\%↓)\ \ \\ \bottomrule
\end{tabular}
}
\end{center}
\end{table}
\renewcommand\arraystretch{0.7}
\begin{table}[t]
\small
\caption{Defensive Efficiency of List Assisted Defense (LAD) and Filter Assisted Attack (FAA)}
\label{list_de}
\begin{center}
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{@{}c|c|c|c@{}}
\toprule
\textbf{\ \ Browser} & \textbf{Method} & \textbf{Insert} & \textbf{Split$^1$\ \ } \\ \midrule
\multirow{2}{*}{ Microsoft Edge} & LAD Under DF & 6.25 & 5.56 \\ \cmidrule(l){2-4}
& LAD Under FAA & 5.26(16\%↓) & 3.57(36\%↓)\ \ \\ \midrule
\multirow{2}{*}{ Google Chrome} & LAD Under DF & 6.25 & 5.88 \\ \cmidrule(l){2-4}
& LAD Under FAA & 5.56(16\%↓) & 3.57(36\%↓)\ \ \\ \midrule
\multirow{2}{*}{ Mozilla Firefox}& LAD Under DF & 6.25 & 5.00 \\ \cmidrule(l){2-4}
& LAD Under FAA & 5.88(6\%↓) & 3.57(29\%↓)\ \ \\ \bottomrule
\multicolumn{4}{l}{\footnotesize $^1$ This parameter is rescaled according to packets probability.}
\end{tabular}
}
\end{center}
\end{table}
\subsubsection{ROC Curve}
In order to evaluate the model both simply and comprehensively, we adopt the closed-world and open-world assumptions. The closed-world assumption is commonly used to evaluate models, while the open-world assumption is more realistic.
In the open-world assumption, determining whether an instance is in the monitored set yields one of four outcomes:
TP (True Positive: identified as positive, actually positive, correct),
TN (True Negative: identified as negative, actually negative, correct),
FP (False Positive: identified as positive, actually negative, incorrect), and
FN (False Negative: identified as negative, actually positive, incorrect).
After recording the discrimination results over the whole test dataset, we obtain ${\rm TPR} = \frac{\rm TP}{\rm TP+FN}$ and ${\rm FPR} = \frac{\rm FP}{\rm FP+TN}$, which describe the discrimination ability of the model.
When judging whether an instance is positive, we set a series of thresholds between 0 and 1 and record the TPR and FPR at each threshold. Connecting the recorded points, with TPR as the ordinate and FPR as the abscissa, yields the ROC curve. The closer the ROC curve is to the upper-left corner, the higher the TPR that can be obtained at a given FPR.
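The thresholding procedure just described can be sketched in a few lines of Python (a self-contained illustration with synthetic scores and labels; \texttt{roc\_points} is our own helper, not part of our evaluation code):
\begin{verbatim}
import numpy as np

def roc_points(scores, labels, num_thresholds=101):
    """Sweep thresholds in [0, 1]; return (FPR, TPR) pairs."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    points = []
    for thr in np.linspace(0.0, 1.0, num_thresholds):
        pred = scores >= thr
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return points

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)           # 1 = monitored site
scores = np.clip(0.3 + 0.4 * labels              # informative synthetic scores
                 + 0.3 * rng.standard_normal(1000), 0.0, 1.0)
curve = roc_points(scores, labels)
print(curve[::25])                               # a few (FPR, TPR) points
\end{verbatim}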
\subsection{Filter Assisted Attack on Random Packets Defense}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{list_filter_edge.pdf}
\caption{Microsoft Edge}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{list_filter_chrome.pdf}
\caption{Google Chrome}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{list_filter_firefox.pdf}
\caption{Mozilla Firefox}
\end{subfigure}
\caption{List Assisted Defense (LAD) on Filter Assisted Attack (FAA) Results}
\label{list_filter}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{roc_edge.pdf}
\caption{Microsoft Edge}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{roc_chrome.pdf}
\caption{Google Chrome}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{roc_firefox.pdf}
\caption{Mozilla Firefox}
\end{subfigure}
\caption{Random Packets Defense (RPD), List Assisted Defense (LAD) and Filter Assisted Attack (FAA) ROC Curve in Open World}
\label{roc}
\end{figure*}
\begin{figure}[t]
\centerline{\includegraphics[width=0.35\textwidth,height=4cm]{overhead.pdf}}
\caption{Defense Traffic Overhead}
\label{overhead}
\end{figure}
Based on the CNN model, we first conduct traffic interference on the test dataset using RPD. We select three packet-size ranges, namely large packets (-1514 to 1514), medium packets (-1000 to 1000), and small packets (-500 to 500), to evaluate the influence of the interference packet size on the model.
The final accuracy curves are shown as the solid lines in Figure~\ref{random_filter}. It can be observed that as the insertion probability of interference packets increases, the classification accuracy of DF~\cite{sirinam2018deep} declines significantly. When the probability of random packets is greater than 20\%, the WF attack accuracy drops to around 20\%. Comparing the curves for large, medium, and small packets, we also find that the packet size range has a limited impact on the accuracy degradation. Therefore, in RPD, we can fill in with as many small packets as possible, achieving the lowest network overhead with negligible loss of defensive performance.
Then, we apply the FAA method to carry out enhanced attacks on the above RPD traffic; the resulting accuracy curves are shown as the dashed lines in Figure~\ref{random_filter}. It can be seen that once FAA is applied, the decline in recognition accuracy slows down significantly. In addition, the DE of RPD with and without FAA is provided in Table~\ref{random_de}. We can see that FAA significantly reduces the defense efficiency of RPD, with DE reduced by 56\% to 70\% across browsers and injected packet sizes.
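To make the mechanism concrete, the following Python sketch shows RPD-style noise injection as described above: with probability $p$, a dummy packet whose signed size is drawn uniformly from the chosen range is inserted after each real packet. This is our own illustrative reconstruction (\texttt{rpd\_inject} is a hypothetical helper), not the actual defense implementation.
\begin{verbatim}
import random

def rpd_inject(trace, p, size_limit):
    """trace: list of signed packet sizes (sign encodes direction)."""
    noisy = []
    for pkt in trace:
        noisy.append(pkt)
        if random.random() < p:
            size = 0
            while size == 0:                 # avoid zero-size dummies
                size = random.randint(-size_limit, size_limit)
            noisy.append(size)
    return noisy

random.seed(0)
trace = [1514, -590, 1514, -52, 1380]        # a toy trace
print(rpd_inject(trace, p=0.4, size_limit=500))  # "small" packet range
\end{verbatim}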
\subsection{List Assisted Defense on Filter Assisted Attack}
In this experiment, we interfere with the traffic according to the LAD method; the results are provided in Figure~\ref{list_filter} and Table~\ref{list_de}. The solid lines represent the effect of LAD, while the dashed lines correspond to FAA. It can be seen that as the packet insertion probability increases, the classification accuracy drops significantly. Among the variants, LAD based on packet insertion has the best defensive effect: even when the proposed FAA is applied, the accuracy decline is barely slowed (e.g., the drop in DE is at most 16\%). By contrast, LAD based on packet splitting exhibits a somewhat slower accuracy decline under FAA (e.g., the drop in DE is about 30\%). The reason is that packet splitting cannot guarantee that the sizes of both newly generated packets are in the list, so one of them has a relatively high probability of being detected. Therefore, its resistance to FAA is slightly worse than that of LAD based on packet insertion.
\subsection{Defense Traffic Overhead}
In Figure~\ref{overhead}, we report the traffic overhead of RPD and LAD. We select packet probabilities of 20\%, 40\%, and 60\% as test points and compare the overhead of the defense measures. As can be observed, the overhead of RPD with large, medium, and small packets decreases proportionally with packet size. Since LAD packet insertion must preserve the packet-size distribution of the original traffic, the range of inserted packets is the same as that of RPD's large packets, and so is its overhead. By contrast, the interference of LAD based on packet splitting comes from the traffic itself; it only needs to insert extra packet headers, so its overhead is far lower than that of the other defense methods. In general, LAD based on packet insertion has the best defensive performance but a relatively high overhead, while LAD based on packet splitting strikes a better balance between defense ability and overhead.
\subsection{Open World Evaluation}
In this subsection, we evaluate the ROC curves of RPD and LAD under FAA. The results are shown in Figure~\ref{roc}. By comparison, we find that without any defense measures the ROC curve of the WF attack is the highest. When we apply RPD or LAD, the ROC curve drops significantly. However, when FAA is used to enhance the attack, RPD is defeated, while LAD based on packet insertion still performs satisfactorily, which is consistent with the results on the closed-world dataset.
\section{Conclusion}
In this paper, we have first studied the influence of random noise on WF attacks under the traditional random packet defense. We have then proposed a filter-assisted attack that mitigates the noise and enhances attack efficiency. Finally, we have investigated a list-assisted defense against the filter-assisted attack. Our experimental results show that the list-assisted defense based on packet insertion achieves the best performance and can resist the filter-assisted attack, but its network overhead is close to that of random large packets, whereas the list-assisted defense based on packet splitting strikes a balance between performance and overhead, proving to be an efficient and feasible method.
\begin{acks}
This work is partially supported by the National Key R\&D Program of China (2021YFB3100700), the National Natural Science Foundation of China (62272228, 62002170, 62076125, U20B2049, U20B2050, U21A20467), Shenzhen Science and Technology Program (Grant No.JCYJ20210324134408023), Youth Foundation Project of Zhejiang Lab (No. K2023PD0AA01), and Research Initiation Project of Zhejiang Lab (No. 2022PD0AC02).
\end{acks}
\clearpage
\nocite{*}
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.13676",
"language": "en",
"timestamp": "2023-02-28T02:27:08",
"url": "https://arxiv.org/abs/2302.13676",
"yymm": "2302"
} | \section{INTRODUCTION}
The importance of precision measurement in physics and other sciences has made it a long-pursued goal for the vast majority of scientific researchers. Compared to classical precision measurements, quantum metrology can significantly enhance the sensitivity of parameter estimation by utilizing the distinct features of quantum effects such as entanglement~\cite{1}, squeezing~\cite{2} and quantum criticality~\cite{3,4,5,6,7}. For instance, the properties of the equilibrium state may change significantly when the physical parameters change slightly near the critical point. This sensitivity of critical behavior provides a powerful resource for estimating physical parameters.
In the past few years, quantum criticality has attracted growing interest and many quantum sensing protocols based on quantum criticality have been proposed~\cite{8,9,10,11,12,13,14,15,16,17,18,19,20,21,22}. However, the time required to prepare the ground state close to the critical point also diverges, which obliterates the advantages provided by quantum criticality. Recent results suggest that this stringent requirement for state preparation can be relaxed by a dynamical method~\cite{23}, which shows, taking the quantum Rabi model (QRM) as an explicit example, that a divergent scaling of the quantum Fisher information (QFI) can be achieved for general initial states. The work of Ref.~\cite{23} dealt with the standard (isotropic) QRM, in which the rotating and counterrotating terms have equal coupling strengths. However, in real light-matter interaction systems, the available coupling strengths are typically much smaller than the frequencies, so that the counterrotating-wave terms have a negligible effect on the system dynamics. A promising strategy to overcome this restriction is to introduce a parametric drive, which can effectively transform the rotating-wave coupling into an asymmetric combination of rotating- and counterrotating-wave couplings~\cite{add1,add2}. As such, an investigation of criticality-enhanced sensing based on the anisotropic quantum Rabi model (AQRM) is of practical relevance.
Recent years have witnessed impressive explorations on different aspects of the AQRM, such as the enhanced squeezing~\cite{24}, the QFI~\cite{25}, and the quantum phase transition (QPT)~\cite{26,27,28,29}. To date, several creative schemes have been proposed to realize the AQRM, including the ones utilizing a two-dimensional quantum well~\cite{25}, trapped ions~\cite{30}, and superconducting circuit~\cite{31,32,33}.
In this article, we investigate quantum sensing in the AQRM and analyze its applicability for different anisotropy ratios between the rotating-wave and counterrotating-wave interaction terms. We analytically derive the formula for the QFI of the AQRM and its divergent scaling. We find that the rotating-wave and counterrotating-wave interaction terms have symmetric effects on criticality-enhanced quantum sensing in the limit of an infinite ratio of qubit frequency to field frequency. At finite frequency ratios, however, higher-order corrections break this equilibrium and shift the maximum of the inverted variance, located at the isotropic QRM, toward the regime where the rotating-wave interaction prevails.
\section{The QFI of critical quantum dynamics}
The performance of quantum sensing is determined by the sensitivity of the state with respect to changes of a parameter, which can be quantified by the QFI, introduced by extending the classical Fisher information to the quantum regime~\cite{34,35}. The QFI for the estimation of the parameter $\alpha$ takes the relatively simple form
\begin{eqnarray}\label{Eq1}
\mathscr{F}_\alpha= 4{\rm{Var}}\left[h_\alpha\right]_{\left\vert\Psi\right\rangle},
\end{eqnarray}
where $\rm{Var}\left[\ldots\right]_{\left\vert\Psi\right\rangle}$ is the variance with respect to the initial state $\left\vert\Psi\right\rangle$ and $h_\alpha =
iU^{\dagger}_{\alpha}\partial_{\alpha}U_{\alpha} =-i\partial_{\alpha}U^{\dagger}_{\alpha}U_{\alpha}$ is the transformed local generator of parametric translations of $U_{\alpha} = \exp\left(-iH_{\alpha}t\right)$ with respect to $\alpha$~\cite{36+,37,38}, where $H_{\alpha} = H_{0}+\alpha H_{1}$ represents a family of parameter $(\alpha)$-dependent Hamiltonians satisfying the eigenoperator equation~\cite{37}
\begin{eqnarray}\label{Eq2}
\left[H_{\alpha},\Gamma\right] = \sqrt{\Delta}\Gamma,
\end{eqnarray}
where $\Gamma=i\sqrt{\Delta}A-B$, with $A=-i\left[H_0,H_1\right]$, $B=-\left[H_{\alpha},\left[H_0,H_1\right]\right]$, and $\Delta$ depending on the parameter $\alpha$. This type of Hamiltonian $H_{\alpha}$ may have an equally spaced energy spectrum with energy gap $\varepsilon\thicksim\sqrt{\Delta}$ when $\Delta>0$, while $\sqrt{\Delta}$ becomes imaginary if $\Delta<0$. The normal-to-superradiant phase transition occurs at the critical point defined by $\Delta=0$.
The transformed local generator can be written as
\begin{eqnarray}
h_{\alpha} = H_1t + \frac{{\rm{cos}}(\sqrt{\Delta}t)-1}{\Delta}A - \frac{{\rm{sin}}(\sqrt{\Delta}t)-\sqrt{\Delta}t}{\Delta^{3/2}}B.
\end{eqnarray}
It can be seen that $h_{\alpha}$ becomes divergent as $\Delta\to0$ if $\sqrt{\Delta}t\backsimeq\mathcal{O}(1)$, which represents a signature of critical quantum dynamics. Substituting the transformed local generator into Eq.~(\ref{Eq1}), we obtain the QFI as follows:
\begin{eqnarray}\label{Eq4}
\mathscr{F}_{\alpha}(t)\backsimeq4\frac{[{\rm{sin}}(\sqrt{\Delta}t)-\sqrt{\Delta}t]^{2}}{\Delta^3}{\rm{Var}}\left[B\right]_{\left\vert\Psi\right\rangle}.
\end{eqnarray}
This clearly shows that the QFI diverges under the condition $\sqrt{\Delta}t\backsimeq\mathcal{O}(1)$. The requirement of ground-state preparation can be avoided because this scaling of the QFI results from the dynamical evolution of the quantum system itself; it applies to any general initial state $\left\vert\Psi\right\rangle$ (pure or mixed) provided that ${\rm{Var}}\left[B\right]_{\left\vert\Psi\right\rangle}\backsimeq\mathcal{O}(1)$. A prominent feature of this scheme is that any Hamiltonian satisfying Eq.~(\ref{Eq2}) can be employed for this kind of quantum sensing~\cite{23}.
\section{Critical quantum sensing in the AQRM}
The AQRM describes the interaction between a qubit and a bosonic field mode with asymmetric rotating- and counterrotating-wave coupling strengths. The system dynamics is governed by the Hamiltonian $(\hbar =1)$
\begin{eqnarray}
\mathcal{H} &=&\omega a^{\dagger}a + \frac{\Omega}{2}\sigma_{z}+\lambda_{1}(a\sigma_{+} +a^{\dagger}\sigma_{-}) \nonumber\\
&&+ \lambda_{2}(a\sigma_{-} +a^{\dagger}\sigma_{+}),
\end{eqnarray}
where $\sigma_{\pm}=\left(\sigma_{x} \pm i\sigma_{y}\right)/2$ and $\sigma_{z}$ are Pauli operators of the qubit with the transition frequency $\Omega$, and $a^{\dagger}$ $(a)$ is a creation (annihilation) operator for the bosonic field with the frequency $\omega$. $\lambda_{1}$ and $\lambda_{2}$ respectively characterize the coupling strengths of the rotating-wave and counterrotating-wave interactions between the qubit and the bosonic field. The ratio $\lambda_1/\lambda_2$ captures the distinctive feature of the AQRM, as compared to the isotropic model that has been investigated in the context of criticality-enhanced quantum sensing~\cite{23}. Without loss of
generality, we set $ \lambda_{1} $ and $ \lambda_{2} $ to be real.
In the limit of $\eta =\Omega/\omega \rightarrow \infty$, we can obtain an effective low-energy Hamiltonian,
\begin{eqnarray}\label{Eq6}
\mathcal{H}_{np}^{\downarrow} &=& \left(\omega - \frac{\lambda_{1}^2+\lambda_{2}^2}{\Omega}\right)a^{\dagger}a-\frac{\lambda_{1}\lambda_{2}}{\Omega}(a^{\dagger2}+a^2) \nonumber\\
&&-\frac{\lambda_{2}^2}{\Omega}-\frac{\Omega}{2}.
\end{eqnarray}
Eq.~(\ref{Eq6}) can be diagonalized as $H_{np}^{\downarrow} = \varepsilon_{np}a^{\dagger}a + E_{np}$, with the energy gap $\varepsilon_{np} = \omega\sqrt{\Delta_{g}}/2$, and the ground-state energy $E_{np}=\frac{1}{2}\left(\varepsilon_{np}-\omega+\frac{\lambda_{1}^{2}-\lambda_{2}^{2}}{\Omega}-\Omega\right)$, where $\Delta_g=4(1-g^2)(1-\gamma^2g^2)$, $g = (\lambda_{1}+\lambda_{2})/\sqrt{\omega\Omega}$, and $\gamma = (\lambda_{1}-\lambda_{2})/(\lambda_{1}+\lambda_{2})$. The energy gap $\varepsilon_{np}$ is real only when $g<1$ and vanishes at $g=1$, locating the critical point at which the QPT occurs.
For this Hamiltonian, the QFI regarding estimation of the parameter g is
\begin{small}
\begin{eqnarray}\label{Eq7}
\mathscr{F}_g(t)\backsimeq16g^{2}\xi^{2}\mu^{2}\frac{[{\rm{sin}}(\sqrt{\Delta_{g}}\omega t)-\sqrt{\Delta_{g}}\omega t]^{2}}{\Delta_{g}^3}{\rm{Var}}\left[P^2\right]_{\left\vert\varphi\right\rangle_c},
\end{eqnarray}
\end{small}where $\xi=\gamma^{2}-1$, $\mu=1-\gamma^2g^2$, $P = i(a^{\dagger} - a)/\sqrt{2}$ and $\left\vert\varphi\right\rangle_c$ is the initial state of the bosonic field (see Appendix \ref{A} for a proof). It can be found that when $g\to1$ (i.e., $\Delta_{g}\to0$), $\mathscr{F}_{g}\to\infty$. This would allow us to estimate the parameter with a precision enhanced by critical quantum dynamics. The quantum Cram\'{e}r-Rao bound (QCRB)~\cite{39} associated with the QFI characterizes how well a parameter can be estimated from a probability distribution and gives the ultimate precision of the quantum parameter estimation. Here, we study quantum sensing based on such an AQRM and explore two experimentally feasible measurement methods to achieve the precision of the same order as QCRB.
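As a numerical illustration of Eq.~(\ref{Eq7}), the short Python sketch below (our own; \texttt{qfi} is not code from this work) evaluates the QFI for several anisotropy ratios as $g\to1$; the divergence as $\Delta_g\to0$ is evident. Here ${\rm Var}[P^2]$ is treated as an $\mathcal{O}(1)$ input; it equals $1/2$ for the Fock vacuum.
\begin{verbatim}
import numpy as np

def qfi(g, gamma, omega_t, var_P2=0.5):
    """Eq. (7): QFI for estimating g in the AQRM."""
    xi = gamma**2 - 1.0
    mu = 1.0 - gamma**2 * g**2
    delta = 4.0 * (1.0 - g**2) * (1.0 - gamma**2 * g**2)
    phase = np.sqrt(delta) * omega_t
    return (16.0 * g**2 * xi**2 * mu**2 * var_P2
            * (np.sin(phase) - phase)**2 / delta**3)

for ratio in (1.0, 2.0, 4.0):                    # lambda_1 / lambda_2
    gamma = (ratio - 1.0) / (ratio + 1.0)
    for g in (0.9, 0.99, 0.999):
        print(f"ratio={ratio}, g={g}: F_g = {qfi(g, gamma, 10.0):.3e}")
\end{verbatim}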
\section{Measurement schemes for AQRM-based sensing}
The first method is based on quadrature measurements of the bosonic field by standard homodyne detection~\cite{40}. Without loss of generality, the system is assumed to be initially in a product state of the qubit state $\left\vert\downarrow\right\rangle_{q}$ and the field superposition state $\left\vert\varphi\right\rangle_c$: $\left\vert\Psi\right\rangle = \left\vert\downarrow\right\rangle_{q}\otimes\left\vert\varphi\right\rangle_c$, with $\left\vert\varphi\right\rangle_c = (\left\vert0\right\rangle+i\left\vert1\right\rangle)/\sqrt{2}$. After evolution for a time $t$ under the Hamiltonian, the mean value of the quadrature $X$, defined as $X = (a + a^{\dagger})/\sqrt{2}$, is
\begin{eqnarray}
\left\langle{X}\right\rangle_{t} = \sqrt{2}\Delta_{g}^{-\frac{1}{2}}\mu {\rm{sin}}(\sqrt{\Delta_{g}}\omega t/2),
\end{eqnarray}
with the variance
\begin{eqnarray}
\left(\Delta X\right)^{2} = 1 -2g^{2}\xi^{2}\mu^{2}\Delta_{g}^{-1}\left[1-{\rm{cos}}\left(\sqrt{\Delta_{g}}\omega t\right)\right].
\end{eqnarray}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{df1-eps-converted-to.pdf}
\caption{(a) Mean value of the quadrature $X$ at different working points during an evolution time $\tau=2\pi/(\sqrt{\Delta_{g_w}}\omega)$ as a function of $g$. The inset shows the corresponding susceptibility at different working points, $\chi(g_w)\equiv\chi_{g}(\tau)\vert_{g=g_w}=4\sqrt{2}\pi g_w\Delta_{g_w}^{-3/2}$, with $\lambda_{1}/\lambda_{2}=2$. (b) QFI $\mathscr{F}_g(t)$ as a function of the evolution time $t$. Inset: the local maximum of the inverted variance $\mathscr{I}_g(\tau)$ after an evolution time $\tau=2\pi/(\sqrt{\Delta_{g}}\omega)$ reaches the same order as $\mathscr{F}_g(\tau)$, with $\lambda_{1}/\lambda_{2}=2$.
}
\label{f1}
\end{figure}The inverted variance defined as
\begin{eqnarray}
\mathscr{I}_{g}=\chi_{g}^{2}(t)/\left(\Delta X\right)^{2},
\end{eqnarray}
can be used to quantify the precision of the parameter estimation, where $\chi_{g}(t) = \partial_{g}\left\langle{X}\right\rangle_{t}$ is the susceptibility of $\left\langle{X}\right\rangle_{t}$ with respect to the parameter $g$; it exhibits divergent behavior as $g\to1$ for different anisotropy ratios $\lambda_1/\lambda_2$ (Fig.~\ref{f1}(a)), a feature analogous to that of the standard QRM with $\lambda_1/\lambda_2=1$. The precision of the parameter estimation reaches the QCRB when $\mathscr{I}_{g}(t)=\mathscr{F}_{g}(t)$. The inverted variance achieves its local maximum as
\begin{eqnarray}
\mathscr{I}_{g}(\tau_{k})\backsimeq32\pi^{2}g^{2}\mu^{4}\Delta_{g}^{-3}k^{2}
\end{eqnarray}
(see Appendix \ref{B} for a proof), at $t = \tau_{k} = 2k\pi/\left(\sqrt{\Delta_{g}}\omega\right)\left(k\in\mathbb{Z}^{+}\right)$. The QFI at the same time can be obtained from Eq.~(\ref{Eq7}) as
\begin{eqnarray}
\mathscr{F}_{g}(\tau_{k})\backsimeq 64\pi^{2}g^{2}\xi^{2}\mu^{2}\Delta_{g}^{-3}k^{2}{\rm{Var}}\left[P^2\right]_{\left\vert\varphi\right\rangle_c}.
\end{eqnarray}
It can be seen from Fig.~\ref{f1}(b) that $\mathscr{I}_{g}(\tau_{k})$ can reach the same order as $\mathscr{F}_g(\tau_k)$ for different anisotropy ratios $\lambda_1/\lambda_2$,
though it is reduced as $\lambda_1/\lambda_2$ increases. Note that the optimal case occurs for the isotropic condition $\lambda_1/\lambda_2 = 1$.
We stress that the results for the AQRM considered here, like those for the QRM, also hold without requiring particular initial states of the bosonic field~\cite{23}.
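As a consistency check of these expressions, one can differentiate $\left\langle X\right\rangle_{t}$ numerically at fixed $t=\tau_k$; since $(\Delta X)^2=1$ exactly at $\tau_k$, the inverted variance there reduces to $\chi_g^2$. The Python sketch below (our own verification; agreement with the closed form improves as $g\to1$) compares the two.
\begin{verbatim}
import numpy as np

def delta(g, gamma):
    return 4 * (1 - g**2) * (1 - gamma**2 * g**2)

def mean_X(g, gamma, omega_t):
    mu = 1 - gamma**2 * g**2
    d = delta(g, gamma)
    return np.sqrt(2) * mu * np.sin(np.sqrt(d) * omega_t / 2) / np.sqrt(d)

g, gamma, k = 0.995, 1.0 / 3.0, 1       # lambda1/lambda2 = 2 -> gamma = 1/3
omega_tau = 2 * np.pi * k / np.sqrt(delta(g, gamma))  # t = tau_k, held fixed
dg = 1e-7
chi = (mean_X(g + dg, gamma, omega_tau)
       - mean_X(g - dg, gamma, omega_tau)) / (2 * dg)
# (Delta X)^2 = 1 exactly at tau_k, so the inverted variance is chi^2.
mu = 1 - gamma**2 * g**2
closed_form = 32 * np.pi**2 * g**2 * mu**4 * k**2 / delta(g, gamma)**3
print(chi**2, closed_form)              # agree up to O(1 - g) corrections
\end{verbatim}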
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{df2-eps-converted-to.pdf}
\caption{Inverted variance $\mathscr{I}(g_w)$ after an evolution time $\tau=4\pi/(\sqrt{\Delta_{g_w}}\omega)$. The inset shows the working points selected under the condition $\mathcal{L}(g)=0.5$. The initial state is $\left\vert\Psi\right\rangle =(c_{\uparrow}\left\vert\uparrow\right\rangle_{q} +c_{\downarrow}\left\vert\downarrow\right\rangle_{q})\otimes\left\vert\varphi\right\rangle_c$. Without loss of generality, we assume $2c_{\uparrow}^{*}c_{\downarrow}=1$ and $\left\vert\varphi\right\rangle_c =\left\vert0\right\rangle $. As examples, we consider three cases: (a) $\lambda_{1}/\lambda_{2}=1$, (b) $\lambda_{1}/\lambda_{2}=2$ and (c) $\lambda_{1}/\lambda_{2}=4$.}
\label{f2}
\end{figure}
Another method is to directly measure the qubit to extract information about the parameter. Without loss of generality~\cite{41,42,43}, we assume that the initial state of the system is the product state $\left\vert\Psi\right\rangle =(c_{\uparrow}\left\vert\uparrow\right\rangle_{q} +c_{\downarrow}\left\vert\downarrow\right\rangle_{q})\otimes\left\vert\varphi\right\rangle_c$. The mean value of the qubit observable $\langle\sigma_{x}\rangle$ is
\begin{eqnarray}
\langle\sigma_{x}\rangle =2{\rm{Re}}[c_{\uparrow}^{*}c_{\downarrow}\mathcal{G}(g,t)],
\end{eqnarray}
where $\mathcal{G}(g,t) = \langle\varphi\vert u_{\uparrow}^{\dagger}u_{\downarrow}\vert\varphi\rangle_{c}$ is the Loschmidt amplitude~\cite{42}, and $u_{\uparrow}$ ($u_{\downarrow}$) is the evolution operator of the bosonic field when the qubit is in the state $\left\vert\uparrow\right\rangle_{q}$ ($\left\vert\downarrow\right\rangle_{q}$). The inverted variance corresponding to $\langle\sigma_{x}\rangle$ is
\begin{eqnarray}
\mathscr{I}_{g}=\left(\partial_{g}\langle\sigma_{x}\rangle\right)^{2}/{\rm{Var}}[\sigma_{x}].
\end{eqnarray}
In Fig.~\ref{f2}, we assume $2c_{\uparrow}^{*}c_{\downarrow}=1$ and plot the inverted variance $\mathscr{I}_{g_w}=\mathscr{I}(\tau)\vert_{g=g_w}$ at the working point, with an evolution time $\tau=4\pi/(\sqrt{\Delta_{g_w}}\omega)$, for the estimation of the parameter $g$ based on the observable $\langle\sigma_{x}\rangle$ for three anisotropy ratios. The points satisfying $\mathcal{L}(g_w)=0.5$ are selected as working points, where $\mathcal{L}=L(g)-\lfloor L(g)\rfloor$ with $L(g) =\left\{[(1+g^{2})(1+\gamma^{2}g^{2})]/[(1-g^{2})(1-\gamma^{2}g^{2})]\right\}^{1/2}$. Under this condition, the mean value of the observable is $\langle\sigma_{x}\rangle\backsimeq0$.
It can be found that $\mathscr{I}_{g_w}$ scales as $\Delta_{g_w}^{-3}$, showing a divergent feature close to the critical point. It is worth noting that similar results can be obtained from other general initial states, such as coherent states and superpositions of Fock states (see Appendix \ref{C}). Obviously, compared to homodyne detection with the QRM (Fig.~\ref{f2}(a)), for the AQRM the number of working points satisfying $\mathcal{L}(g_w)=0.5$ grows with the anisotropy ratio $\lambda_1/\lambda_2$, as shown in Figs.~\ref{f2}(b) and (c).
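The working points can be located numerically: $L(g)$ increases monotonically on $(0,1)$ and diverges at $g=1$, so each equation $L(g)=n+1/2$ $(n=1,2,\ldots)$ has a unique root. A minimal Python sketch of this root finding (our own illustration; \texttt{working\_points} is a hypothetical helper):
\begin{verbatim}
import numpy as np

def L(g, gamma):
    return np.sqrt((1 + g**2) * (1 + gamma**2 * g**2)
                   / ((1 - g**2) * (1 - gamma**2 * g**2)))

def working_points(gamma, n_max=5, tol=1e-12):
    """Roots of L(g) = n + 1/2 on (0, 1), found by bisection."""
    roots = []
    for n in range(1, n_max + 1):
        lo, hi = 1e-6, 1 - 1e-12     # L(lo) ~ 1, L(hi) -> infinity
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if L(mid, gamma) < n + 0.5:
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return roots

for ratio in (1.0, 2.0, 4.0):        # lambda_1 / lambda_2
    gamma = (ratio - 1) / (ratio + 1)
    print(ratio, [f"{g:.4f}" for g in working_points(gamma)])
\end{verbatim}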
To quantify the performance of the present sensing protocol, we use Ramsey interferometry as the benchmark, which works by sandwiching a free evolution of duration $\tau$ between two Ramsey pulses, each performing a $\pi/2$ rotation on the qubit, whose frequency is to be estimated. The qubit, starting from the initial state $\left\vert\downarrow\right\rangle$, has a probability of being populated in $\left\vert\uparrow\right\rangle$ given by
\begin{eqnarray}
P_{\uparrow} = (1/2)(1+ {\rm{cos}}\theta),
\end{eqnarray}
where $\theta = \omega_q \tau$, with $\omega_q$ characterizing the qubit's transition frequency. This probability is related to the Bloch vector component $\sigma_{z}$ by $P_{\uparrow}= (1+\left\langle\sigma_{z}\right\rangle)/2$. The interferometer is most susceptible to variations of $\omega_q$ around the bias point~\cite{qs}, where $\theta_o=(n+1/2)\pi$ and the susceptibility is
\begin{eqnarray}
\left\vert\partial P_{\uparrow}/\partial\theta\right\vert_{\theta=\theta_o}=1/2.
\end{eqnarray}
This implies that a longer evolution time is preferred for improving the susceptibility, which, however, would introduce more severe decoherence. At this point, the standard deviation associated with the measurement of $P_{\uparrow}$ is $\Delta P_{\uparrow}=1/2$, which leads to the inverted variance
\begin{eqnarray}
(\partial P_{\uparrow}/\partial\theta)^2/(\Delta P_{\uparrow})^2=1.
\end{eqnarray}
In distinct contrast to this result, the present protocol exhibits a much higher inverted variance near the critical point, as shown in Fig.~\ref{f2}. We note that both sensing protocols work within a limited parameter range. For Ramsey interferometry, the detection has a limited linear range, $\left\vert\delta\omega_q\tau\right\vert<\pi/2$, where $\delta\omega_q$ is the deviation of the qubit frequency from the reference point and $\tau$ denotes the free evolution time~\cite{qs}. Beyond this range, phase wrapping occurs, breaking the one-to-one correspondence between the transition probability and the qubit frequency.
\section{Finite-frequency scaling}
We now turn to investigating the finite-frequency effect. We derive a high-order correction to the effective low-energy Hamiltonian up to fourth order in $\omega/\Omega$:
\begin{eqnarray}
\mathcal{H}_{np}^{\Omega}&=&\mathcal{H}_{np}^{\downarrow} + \frac{1}{\Omega^{3}}(DC)^2- \frac{\omega}{\Omega^2}(\lambda_{1}^2a^{\dagger}a - \lambda_{2}^2aa^{\dagger}),
\end{eqnarray}
where $C=\lambda_{1}a + \lambda_{2}a^{\dagger}$ and $D =\lambda_{2}a + \lambda_{1}a^{\dagger}$; compared with Eq.~(\ref{Eq6}), the high-order correction means that the Hamiltonian no longer contains only even terms for the bosonic field. We find that the odd-order terms break the equilibrium between the rotating-wave and counterrotating-wave interactions and shift the maximum point of the inverted variance, located at the QRM, toward the regime where the rotating-wave coupling prevails. The inverted variance including the high-order correction, obtained
by homodyne detection of the bosonic field mode, can be written as
\begin{eqnarray}\label{Eq16}
\mathscr{I}_{lab}^g(\tau) &=& \frac{g^2\pi^2}{2(1-g^2)^3} + \frac{g^2\pi^2(2+g^4)}{(1-g^2)^4\eta}\gamma \nonumber\\
&&+ \frac{g^2\pi^2E}{4(1-g^2)^6\eta^2}\gamma^2 + \mathcal{O}(\gamma^3),
\end{eqnarray}
where $E = 4g^{10} + 4(2+\eta^2) + 2g^{8}(3\eta^2-8) - 2g^2(9\eta^2+4) - 2g^6(11\eta^2 + 10)+g^4(30\eta^2+32-\pi ^2)$ (see Appendix \ref{D}). It can be seen from Fig.~\ref{f_bias} that the shift of the maximum point of the inverted variance increases as $g\to1$; moreover, when the limit $\Omega/\omega \to \infty$ is not reached, the influence of $\eta$ is non-negligible.
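The shift can be illustrated directly from the truncated expansion: the Python sketch below (our own illustration; Fig.~\ref{f_bias} is instead obtained from the full dynamics, as described in its caption) scans the second-order truncation of Eq.~(\ref{Eq16}) in $\gamma$ at fixed $g$ and $\eta$, showing an optimum at $\gamma>0$ that moves back toward the isotropic point $\gamma=0$ as $\eta$ grows.
\begin{verbatim}
import numpy as np

# Second-order truncation of Eq. (16) in gamma (our own illustration).
def inv_variance_lab(g, gamma, eta):
    E = (4 * g**10 + 4 * (2 + eta**2) + 2 * g**8 * (3 * eta**2 - 8)
         - 2 * g**2 * (9 * eta**2 + 4) - 2 * g**6 * (11 * eta**2 + 10)
         + g**4 * (30 * eta**2 + 32 - np.pi**2))
    a = g**2 * np.pi**2 / (2 * (1 - g**2)**3)
    b = g**2 * np.pi**2 * (2 + g**4) / ((1 - g**2)**4 * eta)
    c = g**2 * np.pi**2 * E / (4 * (1 - g**2)**6 * eta**2)
    return a + b * gamma + c * gamma**2

g = 0.9
gammas = np.linspace(0.0, 0.3, 3001)
for eta in (200.0, 500.0, 1000.0):
    best = gammas[np.argmax(inv_variance_lab(g, gammas, eta))]
    print(f"eta={eta}: optimal gamma ~ {best:.4f}")  # shrinks as eta grows
\end{verbatim}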
\begin{figure}[]
\centering
\includegraphics[width=0.5\textwidth]{f3_z1-eps-converted-to.pdf}
\caption{Optimal ratio $\lambda_{1}/\lambda_{2}$ at the maximum point of the QFI for a finite frequency ratio. At finite frequency, the evolution time is not exactly $\tau=2\pi/\left(\sqrt{\Delta_{g}}\omega\right)$ and $\Delta_g=4(1-g^2)(1-\gamma^2g^2)$ no longer applies, so we take as the result the maximum of $\mathscr{I}_{lab}^g(t)$ over evolution times $0.9\tau\le t\le1.0\tau$.}
\label{f_bias}
\end{figure}
\section{Conclusion}
To summarize, we have investigated the criticality-enhanced quantum sensing based on the dynamical evolution of the AQRM. We show the divergent scaling of the QFI under different anisotropic ratios, which weakens as the anisotropy ratio increases. We find that the influences of the rotating-wave and counterrotating-wave interaction terms are symmetric when the ratio of the qubit transition frequency to the field frequency is infinite. For a finite frequency ratio, the equilibrium between the rotating-wave and counterrotating-wave interaction is broken, and a bias in the couplings favors improvement of the precision.
\section*{ACKNOWLEDGMENTS}
This work was supported by the National Natural Science Foundation of China (Grant No. 12274080, No. 11874114, No. 11875108), the National Youth Science Foundation of China (Grant No. 12204105), the Educational Research Project for Young and Middle-aged Teachers of Fujian Province (Grant No. JAT210041), and the Natural Science Foundation of Fujian Province (Grant No. 2021J01574, No. 2022J05116).
|
{
"arxiv_id": "2302.13742",
"language": "en",
"timestamp": "2023-02-28T02:29:02",
"url": "https://arxiv.org/abs/2302.13742",
"yymm": "2302"
} | \section{\label{sec:intro}Introduction}
Quantum field theory has revealed unexpected and non-intuitive lessons about the way nature works. Arguably, one of the most notorious results of this paradigm is the Reeh-Schlieder theorem \cite{reehschlider}. It applies to free and interacting theories alike. To discuss its consequences in the simplest possible context, we will restrict to free, real scalar field theories in $D+1$-dimensional Minkowski spacetimes. This restriction ensures that the concepts discussed here cannot be attributed to the interactions of the field theory under consideration; they are intrinsic properties of any quantum field theory.
Consider operators of the form $\hat \Phi_F:= \int dV\, F(x)\, \hat \Phi(x)$, where $F(x)$ is a smooth function and $dV$ the spacetime volume element. These are called smeared field operators, and $F(x)$ are smearing functions (the smearing ensures that $\hat \Phi_F$ is a well-defined operator in the Hilbert space\footnote{In the sense that it maps states to other states. This is not the case without smearing; for instance, $\hat \Phi(x)$ acting on the vacuum produces a state with infinite norm, $\langle 0|\hat \Phi(x)\hat \Phi(x)|0\rangle\to \infty$, which is clearly not part of the Hilbert space. Smeared field operators do not have this problem and are suitable candidates for the elementary observables of the theory, from which one can generate the full algebra of observables.}). It is well-known that the Hilbert space of the theory can be generated from states of the form
\begin{equation} \label{state} |\Psi\rangle =\hat \Phi_{F_1}\hat \Phi_{F_2}\cdots \hat \Phi_{F_N}|0\rangle\, ,\end{equation}
in the sense that any state can be approximated arbitrarily well by such states, for appropriate choices of smearing functions $F_1(x),\cdots, F_N(x)$. This is not surprising, and simply tells us that we can create any excitation of the field by acting with an appropriate combination of operators. Intuitively, one can imagine creating an excitation with support in a small laboratory by acting with a suitable set of smeared operators supported within the laboratory.
What is rather surprising ---and this is the content of the Reeh-Schlieder theorem--- is that
one can generate the entire Hilbert space from states of the form \eqref{state} {\em even if we restrict the smearing functions to be supported within an arbitrarily small open set of Minkowski spacetime}. In simple words, one can excite the field in an arbitrary corner of the universe by acting on the vacuum with operators supported exclusively within our small lab! (One cannot use this fact, however, to produce faster-than-light communication~\cite{Witten:2018,Fewster:2018qbm,Bostelmann:2020unl}.)
Although puzzling at first, this is reminiscent of the properties of maximally entangled states in quantum mechanics~\cite{Witten:2018}. Consider two quantum mechanical systems with Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$ of the same dimension $n$, and let $|\Psi\rangle$ be a pure, maximally entangled state. It is well known that {\em every} state in $\mathcal{H}_A\otimes \mathcal{H}_B$ can be obtained by acting on $|\Psi\rangle$ with an operator {\em restricted to subsystem $A$}:
\begin{eqnarray}
&\forall& |\alpha\rangle \in \mathcal{H}_A\otimes \mathcal{H}_B\ {\rm there \ exist} \ \hat O_A \ {\rm such \ that } \nonumber \\ \nonumber
& & |\alpha\rangle=\hat O_A \otimes \hat{\mathbb{I}}_B |\Psi\rangle \, ,
\end{eqnarray}
where $\hat{\mathbb{I}}_B$ is the identity operator in $\mathcal{H}_B$ (see, for instance, \cite{nielsen2002quantum}). \footnote{This is true not only for maximally entangled states, but also for any state whose Schmidt form
\begin{equation}
|\Psi\rangle=\sum_i^n c_i\, |i\rangle_A| i\rangle_B\,, \end{equation}
has {\em all} coefficients $c_i$ different from zero. These are sometimes called totally entangled states (maximally entangled states correspond to $c_i=1/\sqrt{n}$ for all $i$).
The proof goes as follows. One basis state $|i\rangle_A|j\rangle_B \in \mathcal{H}_A\otimes \mathcal{H}_B$ can be obtained from $|\Psi\rangle$ by acting on it with the operator $\frac{1}{c_j}|i
\rangle\langle j|_A\otimes \hat{\mathbb{I}}_B$. Similarly, we can create any other basis element in $\mathcal{H}_A\otimes \mathcal{H}_B$. Hence,
for each and every state in $\mathcal{H}_A\otimes \mathcal{H}_B$, there is a linear combination of such operators whose action on $|\Psi\rangle$ produces the desired state, and such linear combination can be written in the form $\hat O_A\otimes \hat{\mathbb{I}}_B$. Note that this argument fails if any of the coefficients $c_j$ are equal to zero.}
This is similar to the content of the Reeh-Schlieder theorem for quantum field theory, if we identify the field degrees of freedom inside our small lab with subsystem A and all the rest with subsystem B. The Reeh-Schlieder theorem reveals that the vacuum state is an extraordinarily rich state regarding its entanglement structure \cite{Verch:2004vj}. In particular, it has been shown that the Reeh-Schlieder theorem implies that, if A and B are subsystems made of all the field degrees of freedom contained within two regions of spacetime $V_A$ and $V_B$, respectively, and the two regions are spacelike separated, subsystems A and B are always entangled when the field is prepared in the vacuum \cite{Verch:2004vj,Hollands:2017dov}. The entanglement content of quantum field theory has been reinforced by calculations of the geometric entanglement entropy associated with an open region of space $V$ (see \cite{Sorkin:1985bu,Bombelli:1986rw,Srednicki:1993im,Solodukhin:2011gn,Bianchi:2012ev,Bianchi:2019pvv}, and references therein).
These results have taught us a profound lesson about quantum field theory: entanglement is ``ubiquitous" in the vacuum; and since the short-distance behavior is the same for all states, entanglement is equally ubiquitous in any other state (in Minkowski spacetime, every state with bounded energy satisfies the Reeh-Schlieder property \cite{haag_local_1996}), reflecting the fact that entanglement between spatially separated regions is an intrinsic property of quantum field theory.
The results summarized so far involve subsystems containing {\em infinitely many} degrees of freedom (typically, all the field modes supported within a region $V$). Although this is of interest to understand the conceptual and mathematical content of quantum field theory, it would be desirable to extend the discussion to {\em finite dimensional} subsystems of this theory. This is the goal of this paper. This extension is of direct practical interest, since experimentalists have access only to a finite set of such field modes.
\begin{figure*}
\centering
\begin{tikzpicture}
\node at (0,0) { \includegraphics[width=\textwidth]{Figs/intro_quantum_field_and_detector_v3.png}};
\node[rotate=-40] at (-7.25,-2.75) {$\Sigma_t$};
\node at (-7,3) {Input};
\node at (0.5,3) {Output};
\node at (7,3) {Lattice};
\end{tikzpicture}
\caption{Representation of a quantum field in a portion of a Cauchy hypersurface, $\Sigma_t$, that a detector might access at an instant of time. The detector is made of a finite number of pixels (represented by the red circles on top of $\Sigma_t$), and captures a simplified (smeared) version of the field in each of its pixels, as represented by the output of the detector on the laptop. The smearing process provides a way to define a lattice theory out of the continuum field theory, by assigning the smeared field in each of the pixels of the detector with a lattice node, as depicted in the rightmost part of the figure.}
\label{fig:intro_quantum_field_and_detector}
\end{figure*}
There is a common belief that entanglement is ubiquitous in quantum field theory, even if we restrict to finite dimensional subsystems. In particular, it is usually taken for granted that {\em any pair} of field degrees of freedom are entangled in the vacuum state. This intuition is supported by the following fact. Given a fixed field mode compactly supported in a region B of spacetime, if we choose an arbitrary compact region A separated from the first, the Reeh-Schlieder theorem guarantees that there is at least one mode within region A that is entangled to the fixed mode in region B, when the field is in the vacuum state \cite{Hollands:2017dov}. However, the theorem does not tell us how many modes in A are entangled with the fixed mode in B, or how complicated such modes are. Since region A hosts infinitely many modes, the belief that any pair of modes, one in A and one in B, are entangled in the vacuum, is an (unjustified) extrapolation of the actual content of the Reeh-Schlieder theorem. The primary goal of this paper is to check whether this extrapolation is actually true. We find that it is not.
We proceed as follows. We construct a family of locally defined individual modes of a scalar field theory in $D+1$-dimensional Minkowski spacetime, by smearing the field and its conjugate momentum in space (see Sec.~\ref{sec:individual-modes} for the relation between smearing in space and in spacetime). The smearing function can be intuitively thought of as defining a ``pixel'' of the field theory: the support of the smearing determines the size of the pixel, corresponding to the maximum resolution of a detector, while the shape of the smearing function determines the resolution of the detector within the pixel. In this way, one can divide the space into disjoint pixels, each describing a single degree of freedom of the field theory (this is illustrated in Fig.~\ref{fig:intro_quantum_field_and_detector}). Any finite region contains a finite number of such pixels.
This strategy has the advantage that, given any two regions, each containing $N_A$ and $N_B$ degrees of freedom, one can use standard techniques in quantum mechanics of finite dimensional systems to quantify correlations and entanglement. All the difficulties and subtleties intrinsic to quantum field theory are removed. In particular, the calculations are free of the divergences that plague the calculation of the geometric entanglement entropy associated with a region in quantum field theory. A similar strategy has been used before in \cite{Martin:2015qta,martin_real-space_2021,Martin:2021qkg,Espinosa-Portales:2022yok,bianchi_entropy_2019} to evaluate mutual information, entropy, quantum discord and to search for violations of a type of Bell inequalities, with interesting applications in cosmology.
One can think of our ``pixelation'' of space as a way of defining a lattice field theory out of the continuum theory, with the crucial difference that one is not restricting the degrees of freedom before quantization---something we want to avoid, since the entanglement content of lattice field theory can be very different from the theory in the continuum; in particular, it is far from obvious if some analog of the Reeh-Schlieder theorem exists for lattice theory. Our strategy retains the benefits and simplicity of lattice field theories, while keeping the richness of the continuum. The lattice constructed in this way is defined by the capabilities of experimentalists, rather than by a drastic truncation of the degrees of freedom prior to quantization. (We extend this strategy in different directions, for instance, by allowing different pixels to overlap.)
The main lesson of this article is that entanglement is significantly less ubiquitous than one would have thought. In particular, for $D\geq 2$ we do not find entanglement between pairs of modes supported in non-overlapping regions, unless we fine-tune the family of field modes to maximize the contact between the subsystems (this fact can be explained by the analysis in \cite{deSLTorres:2023aws}, which shows that entanglement between regions is sharply concentrated close to the boundary). We also observe that entanglement is weaker in higher dimensions.
In the rest of this article, we proceed as follows. In Sec.~\ref{sec:2}, we describe the way we isolate individual field degrees of freedom that are localized in a region of space in a free scalar theory. We describe how to compute the reduced state describing a finite number of such modes, and how to obtain properties of interest such as von Neumann entropy, correlations, mutual information and entanglement. In Sec.~\ref{sec:3}, we apply this formalism to two modes belonging to a simple, yet physically interesting family of modes. We increase the number of modes in each subsystem in Sec.~\ref{sec:4}, and evaluate whether entanglement shows up between these ``richer" subsystems. In Sec.~\ref{sec:5}, we extend our analysis to a larger family of smearing functions. In Sec.~\ref{sec:6}, we discuss choices of pairs of field modes for which we do find entanglement. Finally, Sec.~\ref{sec:discussion} collects the main results of this article, discusses their relevance, and puts them in a larger perspective.
Throughout this paper, we use units in which ${\hbar=c=1}$.
\section{Subsystems, reduced states and entanglement}\label{sec:2}
Field theories describe physical systems with infinitely many degrees of freedom. In experiments, however, we only have access to a finite subset of them. We describe in this section the way we isolated individual field degrees of freedom localized in a region of space in a free scalar field theory (generalization to other types of free fields is straightforward). We then describe how to compute the reduced quantum state restricted to a finite set of such degrees of freedom when the field is prepared in the vacuum, and how to compute properties of interest from it, such as the von Neumann entropy, correlations, mutual information and entanglement.
\subsection{Defining individual modes of the field}
\label{sec:individual-modes}
In order to fix the basic concepts, let us consider first an analog situation in standard quantum mechanics. Let us consider a set of $N$ harmonic oscillators, and let $\hat {\vec r}:= (\hat x_1,\hat p_1,\cdots, \hat x_N,\hat p_N)$ be the vector of canonical operators. The canonical commutation relations can be succinctly written as $[\hat r^i,\hat r^j]=i\, \Omega^{ij}$, where $\Omega$ is the (inverse of the) symplectic structure of the classical phase space
\begin{equation*}
\Omega_N = \bigoplus_{i = 1}^N \Omega_2\,, \qquad \Omega_{2}= \left(\begin{matrix}
0 & 1 \\ -1 & 0\end{matrix}\right) \, .
\end{equation*}
A general observable that is linear in the canonical variables can be written as $\hat O_{\vec v}:= v_i \hat r^i$, (sum over repeated indices is understood) with $\vec v \in \mathbb{R}^{2N}$. Vectors $\vec v$ can be identified with elements of $\Gamma^*$, the dual of the classical phase space, establishing a correspondence between linear observables in the classical and quantum theories. Written in this way, all $\hat O_{\vec v}$'s have dimensions of action, and their commutation relations are given by the symplectic product of the corresponding $\vec v$'s
\begin{equation} [\hat O_{\vec v},\hat O_{\vec v'}]=i\, v_iv'_j\Omega^{ij}\, . \end{equation}
Any {\em non-commuting} pair of linear observables $(\hat O_{\vec v},\hat O_{\vec v'})$ {\em defines a subsystem with a single degree of freedom} (we will refer to subsystems like this as ``{\em modes} of the system'') ---more precisely, the subsystem is defined by the algebra generated by the pair $(\hat O_{\vec v},\hat O_{\vec v'})$ \cite{haag_local_1996}. For instance, subsystems corresponding to each individual oscillator are defined by the pairs $(\hat x_I,\hat p_I)$, $I=1,\cdots, N$. However, the definition is more general and includes
modes which are combinations of several oscillators (when the oscillators are coupled to each other by springs, the normal modes of the Hamiltonian are familiar examples of such combinations). This procedure provides a simple recipe to extract individual modes of our systems; a short numerical sketch of this symplectic bookkeeping is given below.
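As a concrete illustration, the following self-contained Python sketch (our own; function names are ours, with the ordering $(\hat x_1,\hat p_1,\cdots,\hat x_N,\hat p_N)$ assumed above) builds $\Omega_N$ and evaluates the commutator coefficient $v_i\Omega^{ij}w_j$ for a few linear observables.
\begin{verbatim}
import numpy as np

# Ordering (x1, p1, ..., xN, pN); Omega_N is block diagonal, and the
# commutator of two linear observables O_v, O_w is i * (v . Omega . w).
def omega(N):
    w2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(N), w2)

def commutator_coefficient(v, w, N):
    """Returns c such that [O_v, O_w] = i c."""
    return v @ omega(N) @ w

N = 2
x1 = np.array([1.0, 0.0, 0.0, 0.0])       # O_v = x_1
p1 = np.array([0.0, 1.0, 0.0, 0.0])       # O_w = p_1
p2 = np.array([0.0, 0.0, 0.0, 1.0])       # O_w = p_2
print(commutator_coefficient(x1, p1, N))  # 1.0: [x1, p1] = i
print(commutator_coefficient(x1, p2, N))  # 0.0: commuting, independent modes
\end{verbatim}
This idea can be extended to field theory as follows.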
A field theory hosts infinitely many degrees of freedom. This is true even if we restrict to an arbitrarily small open region of space. Intuitively, at each point in space $\vec x$, we have an independent pair of canonically conjugated operators $(\hat \Phi (\vec x),\hat \Pi (\vec x))$, and each such pair defines a single mode of the system; since any region contains infinitely many points, the region hosts as many independent modes. This is only heuristic because, as mentioned in the introduction, neither of the objects $\hat \Phi (\vec x)$ nor $\hat \Pi (\vec x)$ are well-defined operators. We need to smear them out. The standard procedure is to smear the covariant operator $\hat \Phi (x)$ against a function in spacetime\footnote{Convenient choices for smearing functions in Minkowski spacetime are functions in Schwartz space \cite{Schwartz} or functions of compact support sufficiently differentiable (see Appendix \ref{Ap:B}). We will restrict to the latter, since this will allow us to localize field modes in compact regions.} $ \hat \Phi_F:= \int dV F(x) \hat \Phi(x)$, with $F(x)$ a smooth function compactly supported in a region $V$. This is the set of linear observables in the theory ---in this covariant formulation, the conjugate momentum $\hat \Pi=\frac{d}{dt}\hat {\Phi}$ is not needed.
Given two operators defined in this way, their commutation relations are
\begin{equation} \label{covcom} [\hat \Phi_{F_1},\hat \Phi_{F_2}]=i\, \Delta(F_1,F_2) \, ,\end{equation}
where
\begin{equation} \Delta(F_1,F_2):=\int dV dV'\, F_1(x) F_2(x')\, \Delta(x,x')\, ,\end{equation}
and $\Delta(x,x'):=G_{\rm Ad}(x,x')-G_{\rm Ret}(x,x')$ is the difference between the advanced and retarded Green's functions of the Klein-Gordon equation. Eq.~\eqref{covcom} is simply the smeared version of the familiar covariant commutation relations $[\hat \Phi(x),\hat \Phi(x')]=i\, \Delta(x,x')$.
With these definitions, given any two smearing functions $F_1$ and $F_2$ compactly supported in a region $V$, and such that the associated field operators do not commute, the pair $(\hat \Phi_{F_1},\hat \Phi_{F_2})$ defines ---again, via the algebra it generates--- an individual mode of the system localized in region $V$. This strategy provides a simple way of extracting from the field theory individual degrees of freedom localized in a given region.
It is essential to keep in mind that there are infinitely many independent modes within any open region $V$, and that a non-commuting pair $(\hat \Phi_{F_1},\hat \Phi_{F_2})$ defines just one of them. Put plainly, one should not identify a region $V$ with a single mode.
To finish this subsection, we summarize how, in free field theories, the discussion above can be translated to a canonical picture, where instead of smearing fields in spacetime, one smears the field and its conjugate momentum only in space. This re-formulation looks closer to the example of $N$ harmonic oscillators given above, and we will use it in the rest of this article.
In the canonical picture, linear observables are operators of the form
\begin{equation}
\hat O_{f,g}:=\int_{\Sigma_t} d^Dx \, \big(g(\vec x)\, \hat \Phi(\vec x,t)-f(\vec x)\, \hat \Pi(\vec x,t)\big)\, ,
\end{equation}
where the integral is restricted to a Cauchy hypersurface $\Sigma_t$ of a $D+1$-dimensional Minkowski spacetime, which for simplicity in this paper will be chosen to be a hypersurface defined by a constant value of the time coordinate $t$ of any arbitrary inertial frame ---although nothing will change in the discussion if we use a more complicated choice. The functions $f(\vec x)$ and $g(\vec x)$ are compactly supported in a region $R$ of such Cauchy $t=$constant hypersurface, and $\hat \Pi(\vec x,t):=\frac{d}{dt} \hat \Phi(\vec x,t)$. As for the example of harmonic oscillators, we can identify pairs of functions $(f(\vec x),g(\vec x))$ with elements of the dual phase space $\Gamma^*$; then, all operators $ \hat O_{f,g}$ have dimensions of action. Operators defined from pairs of the form $(0,g(\vec x))$ are called pure field operators; similarly, pairs of the form $(f(\vec x),0)$ are called pure momentum operators.
The commutation relations in the canonical formulation are given by the symplectic product of the smearing functions
\begin{equation} [\hat O_{f,g},\hat O_{f',g'}]=i\, \Omega\big( (f,g), (f',g')\big):=i\, \int_t d^Dx\ (f g'-gf')\, . \nonumber \end{equation}
In this context, individual modes of the system localized in a region of space $R$ are selected by choosing two pairs of functions $(f,g)$ and $(f',g')$ supported within $R$ and such that the commutator in the previous equation is different from zero. {\em This is the way we will define localized field modes in this article}.
The simplest choice is a subsystem defined from a pure field and a pure momentum operator, $\hat O_1=\int_td^Dx \, g \, \hat \Phi$, $\hat O_2=\int_td^Dx\, f\, \hat \Pi$ such that $\int_td^Dx f g=1$, and consequently $[\hat O_1,\hat O_2]=i$. However, more general combinations will also be considered in this article.
The relation between the covariant and canonical pictures is given by the following map between functions $F$ of compact support in spacetime and pairs of functions $(f,g)$ compactly supported in space (see, for instance, \cite{Ashtekar:2021dab,Ashtekar:1975zn,Wald:1995yp} for further details). Recall that the commutator bi-distribution $\Delta(x,x')$ satisfies the field equations in both its variables. Hence, by smearing the $x'$ dependence of $\Delta$ with $F$, we are left with a solution of the field equations, $s(x):=\int dV \Delta(x,x') F(x')$. By reading Cauchy data from $s(x)$ corresponding to a $t$=constant hypersurface, $f(\vec x)=s(\vec x, t),g(\vec x)=\frac{d}{dt}s(\vec x,t)$, we obtain a map $F(x)\to (f(\vec x),g(\vec x))$, which in turn defines a map $\hat \Phi[F]\to \hat O_{f,g}$ between operators smeared in spacetime and operators smeared in space.\footnote{This map is onto but not invertible. The reason is that the operator-valued-distribution $\hat \Phi[x]$ has a Kernel, given by functions of the form $(\Box-m^2)G$, with $G$ a function of compact support in spacetime; i.e., $\hat \Phi(x)$ smeared with $(\Box-m^2)G$ vanishes for all $G$. In passing to the canonical formulation, this kernel is conveniently eliminated by the map $F(x)\to (f(\vec x),g(\vec x))$ defined by $\Delta(x,x')$, because $\Delta$ has the same Kernel as $\hat \Phi(x)$. Hence, while the map $F\to \hat \Phi_F$ has a Kernel, implying that different smearing functions do not necessarily define different operators,
the map $(f,g)\to \hat O_{f,g}$ has the advantage that is faithful.} This map preserves the commutation relations, because the properties of $\Delta$ guarantee that $\Delta(F,F')$ is mapped to $\Omega((f,g),(f',g'))$ (see \cite{Ashtekar:2021dab} for a simple proof).
\subsection{Finite-dimensional subsystems and reduced states}
Consider now a finite set of independent modes of the field. They are defined by a set of $2N$ operators $\hat O^i$, which can be straightforwardly normalized to satisfy
\begin{equation} [\hat O^i,\hat O^j]=i\, \Omega^{ij}\, .\end{equation}
The algebra generated by these observables is isomorphic to the algebra of a quantum mechanical system with $N$ bosonic degrees of freedom (mathematically, the associated Weyl algebra is a Type I von Neumann algebra). Hence, the difficulties intrinsic to field theories are removed by restricting to such a finite set of modes; one is in the realm of standard quantum mechanics to define reduced states, entropies and entanglement. Physically, we think of this $N$-dimensional subsystem as encoding the degrees of freedom that a particular experimentalist may be able to measure.
We now describe how to compute the {\em reduced quantum state} for a finite set of modes when the field is prepared in the vacuum state (see \cite{Martin:2021qkg,bianchi_entropy_2019} for previous similar calculations). It is well known that such a state is {\em always} mixed \cite{Verch:2004vj,Hollands:2017dov,Ruep:2021fjh}, something that we will confirm with several examples. This is an important message to keep in mind: in quantum field theory {\em all physically allowed} reduced states describing subsystems localized within a compact region of space are mixed; this is a drastic departure from standard quantum mechanics.
Given an arbitrary state $\hat \rho$ in the full theory, the task of finding the reduced state for a subsystem of $N$ modes is complicated. However, there is a significant simplification when $\hat \rho$ is a Gaussian state. This is the case for the standard vacuum in Minkowski spacetime; we will restrict in the following to such states, although the generalization to other Gaussian states is straightforward.
Recall that a Gaussian state in field theory is completely and uniquely characterized by its one- and two-point distributions, $\langle \hat \Phi(x) \rangle$ and $\langle \hat \Phi(x) \hat\Phi(x') \rangle$, respectively. From these distributions, one obtains the first and second moments for any mode of the field by smearing them out. For the vacuum, the one-point distribution is zero. Higher order correlators can all be obtained from $\langle \hat \Phi(x) \hat\Phi(x') \rangle$. Recall also that the reduced state of a Gaussian state is also Gaussian. Therefore, the reduced state for our $N$-mode subsystem is completely characterized by
\begin{equation} \langle \hat O^i\rangle\, \quad \text{and} \quad \langle \hat O^i \hat O^j\rangle \, . \end{equation}
For the vacuum state, $\langle \hat O^i\rangle=0$ for all $i$. Hence, the characterization of the reduced quantum state for our subsystem reduces to merely compute the second moments $\langle \hat O^i \hat O^j\rangle$, a task that we will do repeatedly in this article.
Furthermore, we can decompose the second moments in their symmetric and anti-symmetric parts
\begin{equation} \langle 0|\hat O^i\hat O^j|0 \rangle=\frac{1}{2} \, \braket{0|\{\hat O^i,\hat O^j\}|0} +\frac{1}{2} \, \braket{0| [\hat O^i ,\hat O^j]|0}\, , \end{equation}
where curly brackets represent the standard anti-commutator. Notice that the anti-symmetric part, the commutator, is equal to $i\,\Omega^{ij}$ and is state independent. Therefore, all information of the reduced state is actually encoded in the anti-commutator part. It is common to call that part the {\em covariance matrix} of the reduced state
\begin{equation} \sigma^{ij}:=\langle 0 | \{\hat O^i, \hat O^j\}|0\rangle \, . \end{equation}
Therefore, {\em the calculation of the reduced state of a $N$-mode subsystem when the field is in the vacuum reduces to computing the covariance matrix $\sigma$}. It completely and uniquely characterizes the reduced state (keeping in mind that the first moments are all zero).
Several interesting aspects of the reduced state can be readily obtained from $\sigma$ in an elegant manner. For instance, Heisenberg's uncertainty principle is encoded in the statement that the matrix $\sigma + i \Omega$ is positive-semidefinite\footnote{In the sense that $(\sigma^{ij} + i \, \Omega^{ij})\bar v_iv_j\geq 0$ for all $v_i\in \mathbb{C}^{2N}$.}, and the reduced state is pure if and only if the eigenvalues of $\sigma^{ik}\Omega_{kj}$ are all $\pm i$ (in which case the pair ($\sigma^{ij}, \Omega_{ij}$) defines a K\"{a}hler structure in the classical phase space \cite{Ashtekar:1975zn,Hackl:2020ken}). We will use this criterion to confirm that all reduced states we will obtain from the Minkowski vacuum are mixed.
The information contained in the covariance matrix $\sigma$ can be invariantly characterized by its {\em symplectic eigenvalues}, denoted by $\nu_I$, with $I=1,\cdots, N$, and defined as the modulus of the eigenvalues of the matrix $\sigma^{ik}\Omega_{kj}$, understood as a linear map in $\mathbb{C}^{2N}$. Many quantities of interest for us can be readily obtained from the symplectic eigenvalues. For instance, the von Neumann entropy of the reduced state is given by \cite{serafini2017quantum}
\begin{eqnarray} \label{S} S[\bm\sigma]=\sum_{I=1}^{N} \Big[\left( \frac{\nu_I+1}{2}\right) \log_2\left( \frac{\nu_I+1}{2}\right)\nonumber \\ -\left( \frac{\nu_I-1}{2}\right) \log_2\left( \frac{\nu_I-1}{2}\right) \Big ]. \end{eqnarray}
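As a concrete illustration of this recipe, the following minimal numerical sketch (ours; the routine names are illustrative and not from any published code) extracts the symplectic eigenvalues of a covariance matrix and evaluates the entropy formula \eqref{S}. It assumes the $2N\times 2N$ matrix is ordered in canonical pairs, so that $\Omega$ is block diagonal:
\begin{verbatim}
import numpy as np

def symplectic_eigenvalues(sigma):
    """Moduli of the eigenvalues of sigma.Omega, one per mode.
    (Illustrative helper, not from the paper.)"""
    n = sigma.shape[0] // 2
    # Symplectic form for the canonical-pair ordering (O^1, O^2 | ...)
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    eigs = np.linalg.eigvals(sigma @ omega)
    # Eigenvalues come in pairs +/- i*nu_I; keep one modulus per pair
    return np.sort(np.abs(eigs))[::2]

def entropy(sigma):
    """Von Neumann entropy of a Gaussian state, Eq. (S), in bits."""
    S = 0.0
    for nu in symplectic_eigenvalues(sigma):
        if nu > 1.0 + 1e-12:  # nu = 1 (pure mode) contributes zero
            S += ((nu + 1) / 2 * np.log2((nu + 1) / 2)
                  - (nu - 1) / 2 * np.log2((nu - 1) / 2))
    return S
\end{verbatim}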
\subsection{Correlations and Entanglement} \label{subsec:2}
Given two subsystems, $A$ and $B$, made of $N_A$ and $N_B$ modes, respectively, we will be interested in computing the correlations and entanglement between them when the field is prepared in the vacuum.
The correlations between concrete pairs of observables $\hat O^i$ and $\hat O^j$ can be computed straightforwardly, since they correspond to the elements of the covariance matrix.
On the other hand, the total amount of correlations between the two subsystems can be quantified by means of the {\em Mutual Information} $\mathcal{I}(A,B)$, given by
\begin{equation} \label{MI}
\mathcal{I}(A,B) = S_{A} + S_{B} - S_{AB}\,,
\end{equation}
where $S_A$, $S_B$ and $S_{AB}$ are the von Neumann entropies of subsystems $A$, $B$ and the joint system $AB$, respectively. We will check with concrete examples that the mutual information of generic subsystems is different from zero in field theory; this is expected, since correlations are ubiquitous. However, it is important to keep in mind that a non-zero mutual information $\mathcal{I}(A,B)$ {\em does not} imply that the two subsystems are entangled, since $\mathcal{I}(A,B)$ quantifies all correlations, of both classical and quantum origin. To evaluate whether the subsystems are entangled, we need to go beyond mutual information.
The evaluation of the entanglement between arbitrary subsystems is a subtle issue in quantum field theory; a rigorous strategy to quantify such entanglement has been proposed only recently in \cite{Hollands:2017dov}. The difficulty originates, of course, from the possibly infinite number of degrees of freedom each subsystem may have. Although subsystems made of infinitely many modes are important to understand the conceptual and mathematical structure of quantum field theory, in practical situations one has access to a finite number of modes. For finite-dimensional subsystems, one can apply the techniques developed in quantum mechanics to define and quantify entanglement. This is the strategy we follow in this paper.
Given a finite set of modes of the field, we are interested in dividing them into two subsystems $A$ and $B$ and evaluating whether ---and how much--- they are entangled when the field is prepared in the Minkowski vacuum. For this task, we need to find an appropriate entanglement quantifier. Entanglement entropy, commonly used in many applications in quantum mechanics, is, unfortunately, useless for our task: entanglement entropy is a quantifier of entanglement only when the total state describing the two subsystems is pure. As discussed above, this is never the case for a finite set of modes in quantum field theory.
Quantifying entanglement for mixed states is a subtle question, and there is generally not a simple necessary and sufficient criterion for entanglement. However, such a necessary and sufficient criterion {\em does exist} in restricted situations, such as the setup we investigate in this paper, as we explain now.
One easily-computable measure of entanglement for pure and mixed states alike is the {\em logarithmic negativity} (LogNeg) \cite{peres96, plenio05}, which we will denote by $E_{\mathcal{N}}$. A non-zero value of the LogNeg implies a violation of the Positivity of Partial Transpose (PPT) criterion \cite{plenio05}. This in turn implies that a non-zero value of the LogNeg is a sufficient condition for entanglement; but it is not necessary for general quantum states.
However, when restricting to Gaussian states and when, additionally, one of the subsystems is made of a single mode, regardless of the size of the other subsystem, the LogNeg is different from zero {\em if and only if} the state is entangled. Furthermore, under these circumstances the LogNeg is a faithful quantifier of entanglement, in the sense that higher LogNeg means more entanglement \cite{serafini2017quantum}.
The LogNeg is an upper bound for the entanglement that can be distilled from the system via local operations and classical communication.
For Gaussian quantum states, the value of the LogNeg has an operational meaning as the exact cost (where the cost is measured in Bell pairs or entangled bits) that is required to prepare or simulate the quantum state under consideration \cite{wilde2020ent_cost, wilde2020alpha_ln}.
The LogNeg for a Gaussian quantum state can be directly computed from its covariance matrix. Consider an $N_A+N_B$-mode Gaussian state $\hat{\rho}$ of a bi-partite system, with covariance matrix $\sigma_{AB}$, where $N_A$ and $N_B$ are the number of modes in each subsystem. The LogNeg for the bi-partition can be computed as
\begin{equation}\label{LogNeg}
E_{\mathcal{N}} = \sum_{J=1}^{N_A+N_B}\max\{0,-\log_{2} \tilde{\nu}_J\}\,,
\end{equation} where $\tilde{\nu}_J$ are the symplectic eigenvalues of $\tilde{\sigma}$, defined as
\begin{equation} \label{PT}
\widetilde{\sigma}_{AB} = \bm T\sigma_{AB}\bm T,
\end{equation}
where $\bm T= \mathbb{I}_{2N_A}\oplus\bm\Sigma_{N_B}$ and $\bm\Sigma_{N_B}=\oplus_{N_B} \sigma_z$ is a direct sum of $N_B$ $2\times2$ Pauli-z matrices. The relation between the LogNeg and the PPT criterion can be understood by noticing that $\tilde{\sigma}$ is actually the covariance matrix of the {\em partially transposed} density matrix $\hat{\rho}^{\top_B}$, where the transpose is taken only in the $B$ subsystem; therefore, the non-positivity of $\hat{\rho}^{\top_B}$
implies that some of the symplectic eigenvalues $\tilde \nu_J$ are smaller than one, producing $E_{\mathcal{N}} >0$ (see, e.g., Ref.~\cite{serafini2017quantum} for further details).
Observe also that a sufficient condition for quantum entanglement is $\min\{\tilde{\nu}_J\}<1$.
We will mostly use the LogNeg in situations in which it is faithful (that is, when $N_A=1$). Nevertheless, we will also analyze the LogNeg of bi-partitions of ``many versus many'' modes in Sec.~\ref{sec:5}, which will be of interest since $E_{\mathcal{N}} >0$ is always a sufficient condition for entanglement; in that case, however, it is not a necessary one, so $E_{\mathcal{N}} = 0$ does not imply the absence of entanglement. In any case, since the LogNeg is an upper bound for distillable entanglement, $E_{\mathcal{N}} = 0$ indicates that whatever entanglement may be contained in the system cannot be distilled.
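The same machinery evaluates the LogNeg numerically. The sketch below (again ours, reusing the \texttt{symplectic\_eigenvalues} routine above and the same mode ordering, with the $N_A$ modes of subsystem A listed first) builds the matrix $\bm T$, performs the partial transpose of Eq.~\eqref{PT}, and applies Eq.~\eqref{LogNeg}:
\begin{verbatim}
import numpy as np

def log_negativity(sigma_ab, n_a):
    """E_N of an (N_A + N_B)-mode Gaussian state, Eqs. (LogNeg)-(PT).
    (Illustrative helper, not from the paper.)"""
    n_total = sigma_ab.shape[0] // 2
    # T = I_{2 N_A} (+) Sigma_{N_B}: flip the sign of the momenta in B
    t = np.ones(2 * n_total)
    t[2 * n_a + 1::2] = -1.0
    sigma_pt = np.diag(t) @ sigma_ab @ np.diag(t)
    nus = symplectic_eigenvalues(sigma_pt)  # from the sketch above
    return sum(max(0.0, -np.log2(nu)) for nu in nus)
\end{verbatim}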
\section{Correlations and entanglement between two degrees of freedom} \label{sec:3}
In this section, we apply the formalism presented above to a simple yet illustrative situation: two modes supported in disjoint regions of space, each defined by a pure field and a pure momentum operator. We evaluate the correlations, entropy, mutual information, and entanglement between the two modes.
We use a family of smearing functions which, for massless fields, permits us to derive analytical expressions in any number of space dimensions $D>1$. (The $D=1$ case requires special attention, since one needs to introduce a mass to avoid infrared divergences; we solve the massive case numerically.)
We find no entanglement between the two modes for all values of $D>1$, i.e., we find that the reduced state is separable for the family of modes used in this section.
We generalize this calculation to include a larger number of modes
in Sec.~\ref{sec:4}, and to other smearing functions and more general definitions of modes in Sec.~\ref{sec:5}.
\subsection{Smearing functions, correlations and covariance matrix}
Consider two $D$-dimensional balls, A and B, with radius $R$ in a $(D+1)$-dimensional Minkowski spacetime, and let $\rho$ be the distance between their centers in units of $R$. The balls are assumed to be disjoint, so that $\rho>2$ (see Fig.\ \ref{fig:setup_2dof}).
\begin{figure}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=0.4\textwidth]{Figs/Mink_cov/subsystems_2dof_OB.png}};
\node at (-3,-2.75) {$\Sigma_t$};
\node at (-1.5,-2.25) {$A$};
\node at (-1.5,2) {$(\hat{\Phi}_A,\hat{\Pi}_A)$};
\node at (-2.25,0.5) {$f_A(\vec{x})$};
\node at (1.75,2.) {$f_B(\vec{x})$};
\node at (1.,-0.5) {$B$};
\node at (1,2.75) {$(\hat{\Phi}_B,\hat{\Pi}_B)$};
\draw[<->] (-1.3,-1.25)--(1,0.3);
\draw[->] (-1.3,-1.25)--(-0.4,-1.25);
\node[below] at (0,- 0.475) {$\rho$};
\node[below] at (-0.85,-1.25) {$R$};
\end{tikzpicture}
\caption{Illustration of two space-like separated balls of radius $R$ in a $t=$ constant Cauchy hypersurface in $(D+1)$-dimensional Minkowski spacetime. A function $f_{A(B)}(\vec{x})$ compactly supported in region A (B) defines a single field mode $(\hat{\Phi}_{A(B)}, \hat{\Pi}_{A(B)})$, as shown in Eq.~\eqref{eq:phiA2dof_def}. }
\label{fig:setup_2dof}
\end{figure}
We consider in this section two modes, supported within regions A and B, respectively, and defined as follows. The mode in A is defined by a pair of non-commuting operators of the form
\begin{eqnarray}\label{eq:phiA2dof_def}
\hat{\Phi}_A &:=& \int \mathrm{d}^D x \, f_A(\vec x) \,\hat{\Phi}(\vec{x})\, , \nonumber \\
\hat{\Pi}_A &:=& c \int \mathrm{d}^D x \, f_A(\vec x) \,\hat{\Pi}(\vec{x})\, ,
\end{eqnarray}
where $f_A(\vec x)$ is a function compactly supported in region A and
$c$ is an arbitrary constant with dimensions of inverse energy (the Mutual Information and the entanglement between two subsystems will not depend on the value of $c$, since changing $c$ amounts to performing a symplectic transformation restricted to one subsystem, and these quantities are invariant under such ``local'' transformations). In this section, we denote the pair of non-commuting operators defining the modes of interest as $(\hat{\Phi}_A,\hat{\Pi}_A)$, rather than $(\hat{O}_A^{1},\hat{O}_A^{2})$ as in the last section, in order to emphasize that we choose them to be a pure field and a pure momentum operator, respectively. Mode B is similarly defined by using a function $f_B(\vec x)$ compactly supported in region B.
For the smearing functions $f_i(\vec x)$, $i=A,B$, in this section we will use the following one-parameter family of {\em non-negative} functions
\begin{equation}\label{eq:family_test_funcs1}
f^{(\delta)}_i(\vec x)= A_{\delta}\ \left(1 -\frac{|\vec{x} - \vec{x}_i|^2}{R^2}\right)^{\delta}\, \Theta\left( 1 - \frac{|\vec{x} - \vec{x}_i|}{R}\right)\,,
\end{equation} where $\vec x_i$ is the center of the ball $i$ and $\Theta(x)$ is the Heaviside step function ---which ensures that $f^{(\delta)}_i(\vec x)$ is compactly supported within a ball of radius $R$ centered at $\vec x_i$; $A_{\delta}$ is a normalization constant determined below and $\delta$ a positive real number. Fig.~\ref{fig:W_delta_vs_r} shows the shape of $f^{(\delta)}_i(\vec x)$ for some values of $\delta$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figs/Mink_cov/W_delta_vs_r_legenddown_norm2.pdf}
\caption{\label{fig:W_delta_vs_r}
Shape of the smearing functions $f^{(\delta)}_i(\vec x)$ for a few values of $\delta$. Since these functions are spherically symmetric around their center $\vec x_i$, we plot them versus the dimensionless radial coordinates $r:=|\vec x-\vec x_i|/R$. Note that the larger $\delta$ is, the less support $f^{(\delta)}_i$ has near the boundary.}
\end{figure}
The parameter $\delta$ determines the differentiability class of $f^{(\delta)}_i(\vec x)$. For example, for $\delta=0$, $f^{(\delta)}_i(\vec x)$ reduces to the Heaviside function, which is discontinuous. For $\delta=1$, the function is continuous, but its first derivative is not. The differentiability class of $f^{(\delta)}_i(\vec x)$ is $C^{\delta-1}$ for integer $\delta$. In order for the smeared operators $\hat{\Phi}_i$ and $\hat{\Pi}_i$ to be well-defined, it suffices to choose $\delta\geq 1$, as we will see below by explicitly computing their quantum moments in the vacuum. Furthermore, although these and other smearing functions we use in this article are not infinitely differentiable, we argue in Appendix \ref{Ap:B} that this is actually not a restriction: there always exist smooth functions of compact support defining modes whose physical properties agree with those of the modes we actually use, up to arbitrarily high accuracy.
In the following, we will explore modes defined from $f^{(\delta)}_i(\vec x)$ for different finite $\delta\geq 1$, including non-integer values.
An advantage of this family of smearing functions is that their Fourier transform has a simple expression in terms of Bessel functions (see Appendix~\ref{app:details}). This is true in any spatial dimension $D$, and makes it possible to obtain analytical expressions for the quantities of interest for all $D$. Later in this article, we consider other families of smearing functions, including infinitely differentiable ones. The results are qualitatively similar, although in those cases we perform calculations numerically.
The commutators among the four operators $\hat\Phi_i$, $\hat\Pi_i$, with $i=A,B$, are
\begin{eqnarray}
&[\hat{\Phi}_A,\hat{\Phi}_B]&=[\hat{\Pi}_A,\hat{\Pi}_B]= 0 \\
&[\hat{\Phi}_A,\hat{\Pi}_B] &=[\hat{\Phi}_B,\hat{\Pi}_A]=i\, c \int d^Dx\, f^{(\delta)}_A\, f^{(\delta)}_B=0\,. \nonumber
\end{eqnarray}
The last integral vanishes, because $f^{(\delta)}_A(\vec x)$ and $f^{(\delta)}_B(\vec x)$ are supported in disjoint regions. On the other hand,
\begin{equation} [\hat{\Phi}_i,\hat{\Pi}_i]=i\, c \int d^Dx\, (f^{(\delta)}_i)^2 \neq 0\, , \ \ i=A,B.\end{equation}
We fix the (dimensionful) constant $A_{\delta}$ in the definition of $f^{(\delta)}_i$ by demanding that $[\hat{\Phi}_i,\hat{\Pi}_i]=i$. This implies that
\begin{equation}
A_{\delta} = c^{-1/2}\, R^{-D/2}\pi^{-D/4} \sqrt{\frac{\Gamma(1 + D/2 +2 \delta)}{\Gamma(1+2\delta)}}\,.
\end{equation}
The covariance matrix of the reduced state for the two modes of interest can be readily obtained by computing vacuum expectation values of symmetrized products of $\hat{\Phi}_i$ and $\hat{\Pi}_i$. It is easy to check that
$\braket{\{\hat{\Phi}_i,\hat{\Pi}_j\}} =0$ in the Minkowski vacuum, and that $\braket{\hat{\Phi}_A} =\braket{\hat{\Phi}_B}=: \braket{\hat{\Phi}}$ and $\braket{\hat{\Pi}_A} =\braket{\hat{\Pi}_B}=: \braket{\hat{\Pi}}$, since we are using the same smearing function in both regions. With this, the covariance matrix of the total system takes the form
\begin{equation}\label{sAB}
\sigma_{AB} = \left(\begin{matrix}
\sigma_{A}^{\mathrm{red}} & C \\
C^T & \sigma_{B}^{\mathrm{red}}
\end{matrix} \right) \,,
\end{equation}
where
\begin{equation}\label{sA} \sigma_{A}^{\mathrm{red}}=\sigma_{B}^{\mathrm{red}} = 2\left(\begin{matrix}
\braket{\hat{\Phi}^2} & 0\\
0 & \braket{\hat{\Pi}^2}
\end{matrix}
\right) \,
\end{equation}
is the covariance matrix of each mode, and $C = \mathrm{diag}(\braket{\{\hat{\Phi}_A, \hat{\Phi}_B\}},\braket{\{\hat{\Pi}_A, \hat{\Pi}_B\}})$ describes their correlations.
When the field is massless and for $D>1$, we obtain (see Appendix \ref{app:details} for details of the calculation)
\begin{equation}\label{phicorr}
\braket{\{\hat{\Phi}_i, \hat{\Phi}_j\}} = 2\, N_{\delta}^2\,\frac{R}{c}\, \left\{ \begin{matrix}
\mathcal{J}^{D}(-1,\delta) & i=j\\
\mathcal{L}^{D}(-1,\delta, \rho)& i\neq j
\end{matrix}\right.\,,
\end{equation}
\begin{equation}\label{picorr}
\braket{\{\hat{\Pi}_i, \hat{\Pi}_j\}} = 2N_{\delta}^2\,\frac{c}{R}\, \left\{ \begin{matrix}
\mathcal{J}^{D}(1,\delta) & i=j\\
\mathcal{L}^{D}(1,\delta, \rho)& i\neq j
\end{matrix}\right.\,,
\end{equation}
where
\begin{equation}\begin{split}
\mathcal{J}^{D}(\lambda, \delta)&=2^{-1-2\delta +\lambda}\frac{\Gamma\left(\frac{D+\lambda}{2}\right)\Gamma\left(1+2\delta - \lambda\right)}{\Gamma\left(1+\delta-\frac{\lambda}{2} \right)^2 \Gamma\left(\frac{D-\lambda}{2}+2\delta+1 \right)}\,,
\end{split}
\end{equation}
\begin{equation}\begin{split}
\mathcal{L}^{D}(\lambda, \delta, \rho) =&\rho^{-(D+\lambda)} \frac{ \Gamma\left(\frac{D+\lambda}{2}\right)\Gamma\left(D/2\right)}{2^{1+2\delta-\lambda}\Gamma\left(\frac{D}{2}+1 + \delta\right)^2\Gamma\left(-\frac{\lambda}{2}\right)}\times \\
&{}_3F_2\left[\begin{matrix}
1+\frac{\lambda}{2},\frac{D+\lambda}{2},\frac{D+1}{2}+\delta\\
\frac{D}{2} + 1+\delta, D+1+2\delta
\end{matrix}\,; \frac{4}{\rho^2}\right]\,,
\end{split}
\end{equation} and
\begin{equation}
N_{\delta}^2 = \frac{2^{2\delta} \Gamma\left(1 + \frac{D}{2} + 2\delta\right)\Gamma\left(1 + \delta\right)^2}{\Gamma\left(1+2\delta\right)\Gamma\left(D/2\right)}\,.
\end{equation}
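These closed-form coefficients are easy to evaluate numerically. As a hedged sketch (ours; the function names are illustrative), the snippet below implements $\mathcal{J}^{D}$, $\mathcal{L}^{D}$ and $N^2_{\delta}$ with {\em mpmath}, whose \texttt{hyper} routine provides the generalized hypergeometric function ${}_3F_2$:
\begin{verbatim}
from mpmath import mp, gamma, hyper

mp.dps = 30  # working precision in decimal digits

def J(D, lam, delta):
    """Self-correlation coefficient J^D(lambda, delta)."""
    return (2**(-1 - 2*delta + lam) * gamma((D + lam)/2)
            * gamma(1 + 2*delta - lam)
            / (gamma(1 + delta - lam/2)**2
               * gamma((D - lam)/2 + 2*delta + 1)))

def L(D, lam, delta, rho):
    """Cross-correlation coefficient L^D(lambda, delta, rho), rho > 2."""
    pref = (rho**(-(D + lam)) * gamma((D + lam)/2) * gamma(D/2)
            / (2**(1 + 2*delta - lam) * gamma(D/2 + 1 + delta)**2
               * gamma(-lam/2)))
    return pref * hyper([1 + lam/2, (D + lam)/2, (D + 1)/2 + delta],
                        [D/2 + 1 + delta, D + 1 + 2*delta], 4/rho**2)

def N2(D, delta):
    """Normalization constant N_delta^2."""
    return (2**(2*delta) * gamma(1 + D/2 + 2*delta) * gamma(1 + delta)**2
            / (gamma(1 + 2*delta) * gamma(D/2)))
\end{verbatim}
For instance, \texttt{2*N2(3,1)*L(3,-1,1,3.0)} evaluates $\braket{\{\hat\Phi_A,\hat\Phi_B\}}$ in units of $R/c$ for $D=3$, $\delta=1$ and $\rho=3$.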
A few comments are in order. First of all, from these expressions one can check that, as expected, the field and momentum self-correlations are positive and bounded functions of $\delta$, for $\delta \geq 1$ (for $\delta=0$, the momentum self-correlations diverge).
On the other hand, these expressions show that the correlations between both modes behave, for large separations $\rho\gg 1$, as
\begin{equation}
\braket{\{\hat{\Phi}_A, \hat{\Phi}_B\}} =\rho ^{-(D-1)}\frac{R}{c}\,\Big( u(\delta, D) + \mathcal{O}(\rho^{-2})\Big)\,,
\end{equation} and
\begin{equation}
\braket{\{\hat{\Pi}_A, \hat{\Pi}_B\}}= -\rho^{-(D+1)}\frac{c}{R}\,\Big(v(\delta, D)+\mathcal{O}(\rho^{-2}) \Big)\,
\end{equation}
for $D>1$, where $$u(\delta, D) = \frac{2^{ -2\delta -1} \Gamma \left(\frac{D-1}{2}\right) \Gamma (\delta +1) \Gamma \left(\frac{1}{2} (D+4 \delta +2)\right) }{\Gamma \left(\delta +\frac{1}{2}\right) \Gamma \left(\frac{D}{2}+\delta +1\right)^2}$$ and $$v(\delta, D) = \frac{2^{-2 \delta} \delta \Gamma \left(\frac{D+1}{2}\right) \Gamma (\delta ) \Gamma \left(\frac{1}{2} (D+4 \delta +2)\right)}{\Gamma \left(\delta +\frac{1}{2}\right) \Gamma \left(\frac{D}{2}+\delta +1\right)^2}$$ are positive functions\footnote{The correlator $\braket{\{\hat{\Pi}_A, \hat{\Pi}_B\}}$ is, therefore, negative. This, in turn, implies that the sub-matrix $C$ of $\sigma_{AB}$ has a negative determinant, even though we are using non-negative smearing functions. Consequently, the covariance matrix $\sigma_{AB}$ in \eqref{sAB} describes a Gaussian state which is not manifestly separable, according to Simon's separability criterion \cite{SimonSeparability2000}. Additional calculations are needed to show that this state is indeed separable.} that depend on $\delta$ and the spacetime dimension.
The dependence $\braket{\{\hat{\Phi}_A, \hat{\Phi}_B\}} \sim\rho ^{-(D-1)}$ and $\braket{\{\hat{\Pi}_A, \hat{\Pi}_B\}}\sim -\rho^{-(D+1)}$ is precisely what is expected, providing a good check of our expressions.
Fig.~\ref{fig:Correlations_D1D2D3} shows these correlations for $D=2$ and $D=3$ dimensions of space and for a massless field. One can see that, although at small separations $\rho\approx 2$ the cross-correlations depend on the details of the smearing functions, in particular on the value of $\delta$, at large separations they behave as expected for all $\delta\geq 1$.
We have also included in Fig.~\ref{fig:Correlations_D1D2D3} the $D=1$ case. As mentioned before, in this case one needs to introduce a mass $m$ to the field, to avoid infrared divergences. When $m\neq0$, we do not find closed analytical expressions for the correlation functions, and the results presented in Fig.~\ref{fig:Correlations_D1D2D3} have been obtained numerically.
\begin{figure*}[t]
\hspace*{-.75cm}
\includegraphics[width=1.05\textwidth]{Figs/Mink_cov/Correlations_D1D2D3_v3.pdf}
\caption{Correlations between two field-modes versus the dimensionless distance between the centers of the spherical regions where each mode is supported. $\rho=2$ corresponds to the two regions touching each other. The plots for $D=2$ and $D=3$ describe the correlations of a massless field and are obtained analytically, while for $D=1$ we introduce a small mass $\mu =mR= 10^{-2}$ to avoid infrared divergences, and compute the correlations numerically. All plots show that correlations depend on the details of the smearing functions, particularly on the value of the parameter $\delta$. On the other hand, at large separations the fall-off behavior of all correlations is as expected.}
\label{fig:Correlations_D1D2D3}
\end{figure*}
\subsection{Mutual Information and entropy}
The results in the previous subsection confirm that, within the family of modes we have considered, any pair with one mode supported in region $A$ and the other in region $B$ is correlated.
As discussed in Sec.~\ref{sec:2}, the total amount of correlations between the two modes can be quantified using their Mutual Information, defined in expression \eqref{MI}. To compute this quantity, we first need the von Neumann entropies of each subsystem separately, $S_A$ and $S_B$, and the entropy of the joint system, $S_{AB}$.
Since the reduced covariance matrices for the two subsystems are identical when the field is in the Minkowski vacuum, $\sigma^{\rm red}_A=\sigma^{\rm red}_B$, so are their entropies. Expression \eqref{S} shows that all we need to compute this entropy is the symplectic eigenvalue of $\sigma^{\rm red}_A$. Using the form of $\sigma^{\rm red}_A$ given in \eqref{sA}, this symplectic eigenvalue is, for $D>1$,
\begin{eqnarray}
& & \nu_{A} = 2 \sqrt{\braket{\hat{\Phi}^2}\braket{\hat{\Pi}^2}}=\\ \nonumber
& & \frac{\Gamma (\delta +1)^2 \Gamma \left(\frac{D}{2}+2 \delta +1\right) \sqrt{\frac{\Gamma \left(\frac{D-1}{2}\right) \Gamma \left(\frac{D+1}{2}\right) \Gamma (2 \delta ) \Gamma (2 \delta +2)}{\Gamma \left(\frac{1}{2} (D+4 \delta +1)\right) \Gamma \left(\frac{1}{2} (D+4 \delta +3)\right)}}}{\Gamma \left(\frac{D}{2}\right) \Gamma \left(\delta +\frac{1}{2}\right) \Gamma \left(\delta +\frac{3}{2}\right) \Gamma (2 \delta +1)} \,.
\end{eqnarray}
This quantity is larger than one, confirming that the reduced state corresponding to a single mode is a mixed quantum state \cite{bianchi_entropy_2019,Hollands:2017dov}.
From this, we obtain an analytical expression for $S_A$, which we plot in Fig.~\ref{fig:vNEntropy_vs_D}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/Mink_cov/vonNeumannS_1dof_vs_D.pdf}
\caption{Von Neumann entropy $S_A$ of a single mode as a function of the dimension of space $D$, for different values of $\delta$.}
\label{fig:vNEntropy_vs_D}
\end{figure}
We see that the entropy depends on $\delta$; hence it depends on the details of the smearing function. This is expected, since the smearing function actually defines the concrete mode whose entropy we are evaluating. We observe that the larger $\delta$ is, the smaller the entropy is. Larger $\delta$ corresponds to smearing functions with more weight around the center of the region and less support close to the boundary. In other words, we find that modes supported closer to the boundary have larger entropy.
We also observe, interestingly, that $S_A$ decreases monotonically with $D$. In the limit $D\to \infty$, for a fixed $\delta$, we have
\begin{equation}
\lim_{D\to \infty} \nu^2_{A}= \frac{\Gamma (2 \delta ) \Gamma (\delta +1)^4 \Gamma (2 \delta +2)}{\Gamma \left(\delta +\frac{1}{2}\right)^2 \Gamma \left(\delta +\frac{3}{2}\right)^2 \Gamma (2 \delta +1)^2} \,,
\end{equation}
from which we obtain a finite value of $S_A$.\footnote{It is intriguing to note that the double limit $\delta \to \infty$ and $D\to \infty$ produces $S_A\to 0$ (pure state). However, since the $\delta \to \infty$ limit of our smearing functions \eqref{eq:family_test_funcs1} produces a Dirac-delta distribution, we do not find a clear physical interpretation for this mathematical result.}
It is tempting to interpret $S_A$ as a quantifier of the entanglement between a single mode in region A and the rest of the degrees of freedom of the field theory (infinitely many, some supported within A and some outside).
However, as emphasized in \cite{Hollands:2017dov}, such an interpretation is an unjustified extrapolation of results in standard quantum mechanics, because there the Hilbert space of the total system is always a product of the Hilbert spaces of the two subsystems; this is not true in quantum field theory if the subsystems A and B are complementary.
The second ingredient entering the expression for the Mutual Information is the entropy of the joint system of the two modes, $S_{AB}$. This quantity can also be obtained analytically for $D>1$, by plugging into Eq.~\eqref{S} the two symplectic eigenvalues of $\sigma_{AB}$:
\begin{equation} \label{symeigs}
\nu_{\pm} = \sqrt{(2\braket{\hat{\Phi}^2} \pm \braket{\{\hat{\Phi}_A, \hat{\Phi}_B\}} )(2\braket{\hat{\Pi}^2} \pm \braket{\{\hat{\Pi}_A, \hat{\Pi}_B\}} )} \, .
\end{equation}
Notice that, although both the field and the momentum correlations depend on the radius $R$ of the regions where the modes are supported, this dependence cancels out in the combinations of correlation functions appearing in \eqref{symeigs}. Consequently, $\nu_{\pm}$ and the quantities derived from them, like mutual information, entropies and entanglement, remain invariant under rescalings.
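Putting the pieces together, the mutual information of the two-mode system can be computed directly from the functions \texttt{J}, \texttt{L} and \texttt{N2} of the sketch above; the factors of $R/c$ indeed drop out of the products below. This is our own illustrative code, not a unique implementation:
\begin{verbatim}
from mpmath import sqrt, log, mpf

def S_nu(nu):
    """Entropy contribution of one symplectic eigenvalue, Eq. (S)."""
    if nu <= 1:
        return mpf(0)
    return ((nu + 1)/2 * log((nu + 1)/2, 2)
            - (nu - 1)/2 * log((nu - 1)/2, 2))

def mutual_information(D, delta, rho):
    """I(A,B) of Eq. (MI) for the two modes, massless field, D > 1."""
    Nd2 = N2(D, delta)
    nu_A = 2*Nd2*sqrt(J(D, -1, delta) * J(D, 1, delta))
    nu_p = 2*Nd2*sqrt((J(D, -1, delta) + L(D, -1, delta, rho))
                      * (J(D, 1, delta) + L(D, 1, delta, rho)))
    nu_m = 2*Nd2*sqrt((J(D, -1, delta) - L(D, -1, delta, rho))
                      * (J(D, 1, delta) - L(D, 1, delta, rho)))
    return 2*S_nu(nu_A) - S_nu(nu_p) - S_nu(nu_m)
\end{verbatim}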
We plot the mutual information $\mathcal{I}_{AB}$ versus the distance between the two regions in Fig.~\ref{fig:MI_vsrho_D1D2D3_v2}, for $D=1$ (left panel), $D=2$ (middle panel) and $D=3$ (right panel), for different values of $\delta$. (As in the previous subsection, the $D=1$ case is computed numerically.)
\begin{figure*}[t]
\hspace{-1cm}
\includegraphics[width=1.05\textwidth]{Figs/Mink_cov/MI_vsrho_D1D2D3_v3.pdf}
\caption{Mutual information for two field-modes versus the dimensionless distance between the centers of the spherical regions where each mode is supported. $\rho=2$ corresponds to the two regions touching each other. The plots for $D=2$ and $D=3$ correspond to a massless field and are obtained analytically, while for $D=1$ we introduce a small mass $\mu\equiv m R= 10^{-2}$ to avoid infrared divergences, and compute the mutual information numerically. All plots show that the short-distance behavior of the mutual information depends on the details of the smearing functions through the value of the parameter $\delta$. However, at large distances the dependence on $\delta$ decreases and the fall-off behavior is as expected. }
\label{fig:MI_vsrho_D1D2D3_v2}
\end{figure*}
We observe that the mutual information is finite and its short-distance behavior ($\rho \gtrsim 2$) depends on the details of the smearing functions. However, its long-distance behavior ($\rho \gg 2$) is given by $\mathcal{I}(A,B) \sim \rho^{-2(D-1)}$ for $D>1$ (we have explicitly checked this up to $D=10$, and expect this fall-off behavior to be true for all $D$). This is the expected result, and is compatible with results obtained previously in \cite{Martin:2015qta,Martin:2021qkg,Espinosa-Portales:2022yok} for $D=3$.
An important lesson we extract from this analysis is that, for a fixed distance between the regions supporting two modes, the total correlations (classical and quantum) between them are weaker the larger the dimension $D$ of space is.
\subsection{Entanglement}
We use the LogNeg to evaluate whether the correlations between the two single modes discussed in the previous subsection contain any entanglement. The LogNeg, defined in \eqref{LogNeg}, is obtained from the symplectic eigenvalues of $\tilde \sigma_{AB}$, the partial transposition of the covariance matrix $\sigma_{AB}$, which are given by
\begin{equation}\label{tnu}
\tilde{\nu}_{\pm} = \sqrt{(2\braket{\hat{\Phi}^2} \pm \braket{\{\hat{\Phi}_A, \hat{\Phi}_B\}} )(2\braket{\hat{\Pi}^2} \mp \braket{\{\hat{\Pi}_A, \hat{\Pi}_B\}} )}\,.
\end{equation}
Recall that the LogNeg is different from zero only if at least one of these symplectic eigenvalues is smaller than one. By comparing this expression with \eqref{symeigs}, and by taking into account that $\braket{\{\hat{\Pi}_A, \hat{\Pi}_B\}}$ is negative while $\braket{\{\hat{\Phi}_A, \hat{\Phi}_B\}}$ is positive for our smearing functions, one can see that $\tilde \nu_+ \geq \nu_+$, while
$\tilde \nu_- \leq \nu_+$ (recall, $\nu_{\pm}$ are the symplectic eigenvalues of $\sigma_{AB}$). Since both $\nu_{\pm}$ are larger than one, only
$\tilde \nu_-$ can possibly contribute to the LogNeg.
Furthermore, we also see from the expression above that $\tilde{\nu}_-$ can be smaller than one only if the momentum cross-correlation $\braket{\{\hat{\Pi}_A, \hat{\Pi}_B\}}$ is ``negative enough'', and that the LogNeg (if different from zero) must fall off with the distance between the regions at a rate dictated by the dimension $D$ of space. For this reason, we organize the discussion in the rest of this section in terms of the number of spatial dimensions.
\subsubsection{$D=1$}
As discussed above, in $D=1$ we compute the correlation functions numerically; from them we evaluate $\tilde \nu_-$ and, via Eq.~\eqref{LogNeg}, the LogNeg. We are interested in understanding: (i) whether the LogNeg is different from zero; and, if the answer is in the affirmative, (ii) how the LogNeg depends on the mass $m$ of the field, on the distance between the regions A and B, and on the details of the smearing function. The answers to these questions are contained in Figs.~\ref{fig:LND1mu} and \ref{LND1rho}.
On the one hand, Fig.~\ref{fig:LND1mu} shows the LogNeg versus the dimensionless mass $\mu$,
for a fixed distance $\rho$ between the two regions and for different values of $\delta$. Since the LogNeg decreases with $\rho$, we choose in this plot the minimum possible value of $\rho$, namely $\rho=2$. The main messages we extract from this figure are the following: (1) For $\delta=1$, we find that the LogNeg is different from zero and, consequently, the two modes {\em are entangled}. (2) For $\delta\gtrsim 1.7$, the LogNeg vanishes for all values of the mass $\mu$. This shows that the LogNeg is very sensitive to the shape of the smearing function. The smearing functions $f^{(\delta)}$ that we use in this section have more support close to the boundary of the region for smaller $\delta$. Since correlations fall off with distance, it is therefore expected that modes with larger support close to the boundary ($\delta$ close to $1$) are more correlated (and entangled) than modes defined with larger $\delta$. Fig.~\ref{fig:LND1mu} confirms that this intuition is correct and, furthermore, shows that pairs of modes supported in disjoint regions {\em are not entangled at all} if we use $\delta\gtrsim 1.7$. (3) The LogNeg between these two modes is very sensitive to the mass of the field. For $\delta=1$, the LogNeg reaches a maximum when $\mu\ll1$, decreases very fast around $\mu \approx 1$, and vanishes completely when $\mu\gtrsim 10$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/Mink_cov/LogNeg_vs_mass_rho2_1D_logspacing.pdf}
\caption{LogNeg for $D=1$ as a function of the dimensionless mass, $\mu=m R$ ($R$ is the radius of the regions of support of A and B), when the regions $A$ and $B$ are kept at a fixed dimensionless distance $\rho=2$. The LogNeg reaches a maximum when $\mu\ll 1$, decreases monotonically, and vanishes when the mass of the field reaches a threshold that depends on the details of the smearing function. The LogNeg is zero for all $\mu$ if $\delta\gtrsim 1.7$.}
\label{fig:LND1mu}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/Mink_cov/LogNeg_vs_rho_1D_mu0p01_v3.pdf}
\caption{LogNeg for $D=1$ between the two field-modes as a function of the dimensionless distance between the centers of the spherical regions where each mode is supported, for a fixed dimensionless mass of the scalar field ($\mu=10^{-2}$) and for different smearing functions. }
\label{LND1rho}
\end{figure}
On the other hand, Fig.~\ref{LND1rho} shows how the LogNeg changes with the distance $\rho$ between the regions A and B. This plot is computed for a small value of the dimensionless mass, $\mu=10^{-2}$, for which we know from the previous plot that the LogNeg is close to its maximum. Again, we find that the LogNeg is different from zero only for $\delta$ close to one. We observe that the LogNeg falls off rapidly with the distance $\rho$, and vanishes completely beyond $\rho\approx 2.2$ (i.e., when the separation between the boundaries of the two regions is about 20\% of their radius). Note that this fall-off is much faster than the one we obtained for the Mutual Information, $\mathcal{I}_{AB}\sim \rho^{-1}$.
\subsubsection{$D> 1$}
When the number of spatial dimensions is larger than one, we can compute the LogNeg analytically for massless fields (this is the most interesting case, since entanglement is expected to be largest when $\mu\to 0$, as we saw above). Substituting the values of the correlation functions reported in \eqref{phicorr} and \eqref{picorr} into expression \eqref{tnu}, we see that $\tilde \nu_-$ is larger than one for all distances $\rho$ and all values of $\delta$, including $\delta=1$. Consequently, we obtain
\begin{equation}
E_{\mathcal{N}} = 0\, \quad \forall \,\delta\geq 1 \text{ and }\, \forall D>1\,.
\end{equation}
The fact that we find less entanglement in $D\geq 2$ than in $D=1$ is consistent with correlations being stronger in lower dimensions, as shown in the previous subsection.
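This conclusion is straightforward to reproduce numerically. In the same hedged spirit as the earlier sketches (ours, reusing \texttt{J}, \texttt{L} and \texttt{N2} from above), the massless two-mode LogNeg for $D>1$ reduces to a check on $\tilde\nu_-$, Eq.~\eqref{tnu}:
\begin{verbatim}
from mpmath import sqrt, log

def log_neg_two_modes(D, delta, rho):
    """E_N of the two-mode state from nu_tilde_minus, Eq. (tnu)."""
    Nd2 = N2(D, delta)
    nu_t = 2*Nd2*sqrt((J(D, -1, delta) - L(D, -1, delta, rho))
                      * (J(D, 1, delta) + L(D, 1, delta, rho)))
    return max(0, -log(nu_t, 2))
\end{verbatim}
Scanning $\rho>2$ and $\delta\geq 1$ with this routine returns zero throughout, in agreement with the analytical statement above.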
In summary, we conclude that the correlations captured by the Mutual Information computed in the previous subsection are mainly classical correlations and do not originate from entanglement. For the family of modes we have explored in this section, only in the special case of $D=1$, small mass $\mu$, $\delta$ close to one, and small separation between the two regions do we find that the two modes are entangled. {\em In all other cases, the reduced state is separable.}
In Sec.~\ref{sec:5}, we extend the family of smearing functions and show that these conclusions are not peculiar to the special smearings used in this section. In Sec.~\ref{sec:6}, we will show how we can find entanglement between two modes, by carefully selecting their region of support.
\begin{figure*}
\centering
\begin{tikzpicture}
\node at (0,6) {\includegraphics[width=\textwidth]{Figs/Mink_cov/hexagonal_layers_list.pdf}};
\draw[ultra thin,dash pattern=on 2pt off 2.5pt] (-7.25,4.8)--(-7.85,5.5);
\draw[ultra thin,dash pattern=on 2pt off 2.5pt] (-5.46,4.8)--(-6.15,5.5);
\draw[ultra thin,dash pattern=on 2pt off 2.5pt] (-4.3,4.8)--(-4.65,5.5);
\draw[ultra thin,dash pattern=on 2pt off 2.5pt] (-3.51,4.8)--(-3.45,5.5);
\draw[ultra thin,dash pattern=on 2pt off 2.5pt] (-2.9, 4.8) -- (-2.3,5.5);
\draw[ultra thin,dash pattern=on 2pt off 2.5pt] (-2.4,4.8) --(-1.24,5.5);
\node at (0.1,0.2) {\includegraphics[width=.99\textwidth]{Figs/Mink_cov/LogNeg_vsNB_hexagonal_layers_delta1.pdf}};
\end{tikzpicture}
\caption{LogNeg between subsystems A and B for different values of the number of modes $N_B$ in subsystem B and $N_A=1$, in two spatial dimensions, $D=2$. The $N_B$ modes are distributed in space as illustrated in the top part of the figure, where the orange central disk represents the region of support of the single mode in A, and the blue disks are the regions where each of the $N_B$ modes are supported. We use smearing functions $f^{(\delta)}$ with $\delta=1$ in this plot.
The plot shows that A and B are entangled as long as $N_B\geq 5$. }
\label{fig:LogNeg_vs_Ndof_subsystem2_hexagon_2dims}
\end{figure*}
\section{Enlarged subsystems}\label{sec:4}
In this section, we extend the previous calculations by enlarging the number of modes in our two subsystems, with the goal of studying whether such an enlargement leads to the emergence of entanglement. We will show that it does in $D=2$; in contrast, for larger dimensions we have not been able to find entanglement in any of the multi-mode configurations described below. We work in this section in $D\geq 2$ spatial dimensions with massless scalar fields, and use the same smearing functions as in the previous section, for which all results can be obtained analytically. We will use smearing functions with $\delta=1$, since this is the case for which entanglement is most likely to appear. The extension to other values of $\delta$ is straightforward.
Our setup in this section consists of $N_A+N_B$ modes, defined in disjoint regions in a manner similar to the last section. Having more than two modes raises the question of how to distribute them in space. We will restrict attention to configurations that are most likely to show bipartite entanglement, namely those for which the two subsystems are as close as possible. Since we are considering spherical regions, this problem is tantamount to packing spheres as densely as possible in $D$-dimensional space.
\subsection{$D=2$}
\savebox{\mybox}{\includegraphics[width=0.5\textwidth]{Figs/Mink_cov/LogNeg_vs_rho_delta1_hexagon_2dims.pdf}}
\begin{figure*}
\centering
\begin{minipage}{0.45\textwidth}
\subfigure[]{
\centering
\vbox to \ht\mybox{%
\vfill
\hspace{-1cm}
\begin{tikzpicture}
\node at (0,0) { \includegraphics[width=1.05\textwidth]{Figs/Mink_cov/Hexagon_vs_distance.pdf}};
\node at (2.825,0.45) {$\rho$};
\draw[->] (-1.92,0)--(-1.92,-0.65);
\node at (-2.1,-0.35) {$R$};
\draw[->] (2.25,0)--(2.25,-0.39);
\node at (2.08,-0.15) {$R$};
\end{tikzpicture}
\vfill
}
}
\end{minipage}\begin{minipage}{0.45\textwidth}
\subfigure[]{
\centering
\usebox{\mybox}
}
\end{minipage}
\caption{(a) Configuration we use to test how the LogNeg varies with distance in $D=2$. The blue disks represent the regions of support of the 6 modes forming subsystem B, while the orange disk is where the single mode in A is supported. (b) LogNeg between subsystems A and B, corresponding to the configuration shown in Fig.~\ref{fig:LogNeg_vs_rho_hexagon_scheme}(a), versus $\rho$, defined as the distance between centers in units of the radius (hence, $\rho\geq 2$). We obtain the covariance matrix for this system analytically. However, since for $N_B=6$ it is a relatively large matrix, we compute its symplectic eigenvalues numerically; this is why we only show $E_{\mathcal{N}}$ for a discrete set of $\rho$'s (blue dots).}
\label{fig:LogNeg_vs_rho_hexagon_scheme}
\end{figure*}
A natural way of generalizing the results of the last section is to add new modes to subsystem B while minimizing the distance to subsystem A ---the latter will be kept composed of a single mode for the moment, i.e.\ $N_A=1$. This is achieved in $D=2$ in the way shown in Fig.~\ref{fig:LogNeg_vs_Ndof_subsystem2_hexagon_2dims}, namely by locating the modes in subsystem B in a hexagonal configuration around mode A~\cite{circle_packing}. Fig.~\ref{fig:LogNeg_vs_Ndof_subsystem2_hexagon_2dims} shows the results for the LogNeg between A and B as a function of $N_B$, when the distance between modes is as small as possible (the expression for $E_{\mathcal{N}}$ is lengthy and not particularly illuminating, so we do not write it explicitly). The main feature of Fig.~\ref{fig:LogNeg_vs_Ndof_subsystem2_hexagon_2dims} is that the LogNeg becomes {\em different from zero if} $N_B>4$. It is interesting to see that, in contrast to $D=1$, in two spatial dimensions we need to enlarge our subsystems to be able to capture any entanglement.
Fig.~\ref{fig:LogNeg_vs_Ndof_subsystem2_hexagon_2dims} also shows that the LogNeg saturates as the number of ``layers'' in subsystem B increases, in the sense that adding more layers does not change its value. The interpretation here is that the outer layers are too far away from subsystem A to contribute to the entanglement.
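For reproducibility, the disk centers of these hexagonal arrangements can be generated in a few lines. The sketch below (ours, with hypothetical helper names) returns the center of the disk supporting mode A together with the centers of the $6k$ disks in each surrounding layer $k$:
\begin{verbatim}
import numpy as np

def hex_centers(n_layers, R=1.0):
    """Centers of touching disks of radius R: one central disk (mode A)
    plus n_layers hexagonal rings around it (ring k holds 6k disks).
    (Hypothetical helper, not from the paper.)"""
    a = np.array([2.0*R, 0.0])         # lattice vectors of the
    b = np.array([R, np.sqrt(3.0)*R])  # triangular lattice
    pts = []
    for i in range(-n_layers, n_layers + 1):
        for j in range(-n_layers, n_layers + 1):
            if max(abs(i), abs(j), abs(i + j)) <= n_layers:
                pts.append(i*a + j*b)
    return np.array(pts)

centers = hex_centers(2)  # 1 + 6 + 12 = 19 centers
\end{verbatim}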
\savebox{\mybox}{\includegraphics[width=0.5\textwidth]{Figs/Mink_cov/LogNeg_vs_NB_alternating_line.pdf}}
\begin{figure*}
\centering
\subfigure[]{
\begin{minipage}{0.45\textwidth}
\centering
\vbox to \ht\mybox{%
\vfill
\hspace{-1cm}
\includegraphics[width=1.05\textwidth]{Figs/Mink_cov/Alternating_line_NB10.pdf}
\vfill
}
\end{minipage}}~\subfigure[]{\begin{minipage}{0.45\textwidth}
\centering
\usebox{\mybox}
\end{minipage}}
\caption{(a) Configuration in $D=2$ consisting of $N_A=N_B=10$ modes placed along a straight line. The orange discs represent the regions of support of the modes forming subsystem A, while the blue discs represent the regions of support of the modes that constitute subsystem B. (b) LogNeg between subsystems A and B, corresponding to the configuration shown in Fig.~\ref{fig:Alternating_line_NB10}(a), as a function of the number of modes $N_B$ ($N_A$) within subsystem B (A). }
\label{fig:Alternating_line_NB10}
\end{figure*}
Now that we have found a configuration containing entanglement, we study how this entanglement changes with the ``distance'' between the two subsystems. Such a distance can be varied in many different ways. As an illustrative example, we consider the configuration depicted in Fig.~\ref{fig:LogNeg_vs_rho_hexagon_scheme}(a). In Fig.~\ref{fig:LogNeg_vs_rho_hexagon_scheme}(b) we plot the LogNeg versus the distance $\rho$ for this system, containing $N_B=6$ modes in subsystem B. We observe that the LogNeg falls off very fast, vanishing completely while the separation between the surfaces of the regions is still less than 10\% of their radius.
Next, we consider configurations where both subsystems, A and B, are made of multiple modes (recall that when both $N_A$ and $N_B$ are larger than one, a non-zero value of the LogNeg is a sufficient but not a necessary condition for entanglement). In this case, there are plenty of geometric configurations that one can consider. In what follows, we mention two that we find particularly interesting. The first one is depicted in Fig.~\ref{fig:Alternating_line_NB10}, and consists of $2N$ disjoint regions placed along a straight line, with alternating regions belonging to each subsystem. This configuration is interesting because, for $N=N_A=N_B\geq 2$, the LogNeg is different from zero and grows linearly with $N$, as shown in Fig.~\ref{fig:Alternating_line_NB10}.
\savebox{\mybox}{\includegraphics[width=0.5\textwidth]{Figs/Mink_cov/LogNeg_vs_rho_Two_Hexagons_diff_distances.pdf}}
\begin{figure*}
\centering
\subfigure[]{
\begin{minipage}{0.45\textwidth}
\centering
\vbox to \ht\mybox{%
\vfill
\hspace{-1cm}
\begin{tikzpicture}
\node at (0,0) { \includegraphics[width=1.05\textwidth]{Figs/Mink_cov/Two_Hexagons_diff_distances.pdf}};
\node at (2.,0.15) {$\rho$};
\end{tikzpicture}
\vfill
}
\end{minipage}}~\subfigure[]{\begin{minipage}{0.45\textwidth}
\centering
\usebox{\mybox}
\end{minipage}}
\caption{(a) $D=2$ configuration of two subsystems made of $N_A=N_B$ modes. (b) LogNeg versus the distance $\rho$ (in units of the radius of the individual small regions) corresponding to the $D=2$ configuration of Fig.~\ref{fig:Two_Hexagons_diff_distances}(a). The small kink around $\rho \sim 0.14$ is a boundary effect caused by the concrete geometric configuration we use in this example.}
\label{fig:Two_Hexagons_diff_distances}
\end{figure*}
A second configuration we explore consists of A and B each made of a hexagonal cell with $N$ disjoint modes, and separated from each other as shown in Fig.~\ref{fig:Two_Hexagons_diff_distances}. This figure shows that for this configuration the LogNeg decreases rapidly with the separation, and completely vanishes when the distance $\rho$ (defined as depicted in Fig.~\ref{fig:Two_Hexagons_diff_distances}, and measured in units of the radius of the individual disks) is larger than $0.3$.
This configuration is inspired by the type of systems considered in lattice field theory. The behavior of the LogNeg in $1+2$ dimensional Minkowski spacetime has been investigated in that context \cite{Klco:2021biu,Klco:2020rga}. In lattice field theory, each field degree of freedom ``lives'' at a node of the lattice, and subsystems A and B are made of $N_A$ and $N_B$ modes, respectively. One can then consider two regions, each containing $N$ nodes, and evaluate the entanglement between the two regions when the field is prepared in the vacuum.
Numerical computations \cite{Klco:2021biu,Klco:2020rga} have revealed that entanglement between two regions in lattice field theory has many similarities with our findings. In particular, lattice calculations also show that, for a finite number of modes, the LogNeg becomes zero abruptly beyond some threshold separation distance. The fall off of the LogNeg with distance is found to be exponential~\cite{Klco:2021biu}. Although one cannot compare our calculations with the result of lattice field theory in a detailed manner, mainly because we have included only a finite number of modes in our calculations, our results indicate that in the continuum theory the LogNeg for a finite number of modes falls off significantly faster with the distance between the two regions than its counterpart in lattice field theory. It would be interesting to have a more detailed comparison, but this is beyond the scope of this work.
\subsection{$D=3$}
Next, we apply the same strategy to the case of $D=3$ spatial dimensions; namely, we generalize the results of Sec.~\ref{sec:3} by adding new modes to subsystem B, supported in disjoint regions located close to subsystem A. In $D=3$, the densest regular arrangement of spheres can be achieved by placing them either in a face-centered cubic (FCC) configuration or in a hexagonal close-packed (HCP) configuration. We choose the latter. We illustrate one such configuration in Fig.~\ref{fig:1p5HCP_Cell}.
\begin{figure*}
\centering
\subfigure[\label{fig:1p5HCP_Cell}]{
\includegraphics[width=0.4\textwidth]{Figs/Mink_cov/1p5HCP_Cell.pdf}
\subfigure[\label{fig:thick_net}]{
\includegraphics[width=0.4\textwidth]{Figs/Mink_cov/thick_net.pdf}
}\\
\subfigure[ \label{fig:super_sandwich_config}]{
\includegraphics[width=0.6\textwidth]{Figs/Mink_cov/super_sandwich_config.PNG}
}
\caption{(a) Configuration of modes in $D=3$ in a hexagonal close-packed (HCP) arrangement, with a single mode in subsystem A and $N_B=18$ modes in subsystem B. (b) Example of a configuration we used to compute the LogNeg in $D=3$. The blue spheres represent the regions of support of the $N_B = 1088$ modes, while the orange sphere is where the single mode in subsystem A is supported. (c) Configuration in $D=3$. The blue spheres represent the regions of support of the $N_B = 1922$ modes, while the orange spheres represent the regions of support of the $N_A=961$ modes in subsystem A. We find that the LogNeg vanishes in this configuration. }
\label{fig:sphere_configs_3D}
\end{figure*}
We have computed analytically the covariance matrix for this setup, including up to $N_B=1088$ modes in subsystem B, while keeping $N_A=1$ (see Fig.~\ref{fig:thick_net}). From this covariance matrix, we have computed the LogNeg (with the assistance of software for symbolic calculations, such as {\em Mathematica}) for a massless, non-interacting scalar field, and for the family of smearing functions introduced in \eqref{eq:family_test_funcs1}.
Contrary to the situation in $D=2$, in $D=3$ we have found that the LogNeg is {\em zero} for $N_B$ up to 1088. This is true even for smearing functions with $\delta$ equal to or close to one, for which we found entanglement in $D=2$ for $N_B\geq 5$.
This result confirms the trend we found in the previous subsections: entanglement is ``weaker'' or ``sparser'' in higher dimensions.
We have further extended our calculations by also increasing the number of modes in subsystem A. One such configuration is shown in Fig.~\ref{fig:super_sandwich_config}. We have increased the number of modes up to $N_A=961$ and $N_B=1922$, and obtained a LogNeg equal to zero in all cases.
Entanglement for $D=3$ is too weak or diluted to be captured using the finite number of modes we have used so far. In the next two sections, we extend the family of modes and the way they are distributed in space.
\section{Other smearing functions}\label{sec:5}
In this section, we extend our calculations in different directions, with the goal of checking whether the absence of entanglement between pairs of modes supported in disjoint regions in $D\geq 2$ (or finite sets of them for $D\geq 3$) is a peculiarity of the concrete family of modes we have used so far. We generalize our calculations (i) by considering other families of smearing functions, including some without a definite sign, and (ii) by mixing field and momentum operators. In none of these cases have we found entanglement between subsystems A and B in $D \geq 2$ for pairs of modes supported in disjoint spherical regions.
\subsection{Other non-negative smearing functions}
In addition to the smearing functions introduced in Eq.~\eqref{eq:family_test_funcs1}, we have explored the following families of non-negative smearing functions, all spherically symmetric around a center $\vec x_i$, and where $r$ is the distance to the center in units of the radius $R$ of the region of support:
\begin{enumerate}
\item
$ h^{(n)} \left(r\right) = A_n\, \Theta\left(1- r\right) \, \cos^n\left(\frac{\pi}{2}r\right)\,, \quad n\in \mathbb{N},\ n>1\,, $
with $A_{n}$ a normalization constant. This family has been used before in \cite{Reznik:2002fz}.
\item
$ g\left(r\right) = A \, \Theta\left(1- r\right) \, \exp\left(- \frac{1}{1-r^2}\right)\,. $
\item $$ w^{(\delta)}\left(r\right) = A_{\delta} \begin{cases}
1 & 0<r\leq 1\\
-\frac{1}{\delta}(r-1) +1 & 1<r\leq 1+\delta\\
0 & r > 1+ \delta
\end{cases}\,. $$
This family has been used before in \cite{martin_real-space_2021}.
\item $j^{(n)}(r) = A_{n}\, \Theta\left(1- r\right) \left(1 - r^n \right)\,,\quad n\in\mathbb{N},\ n>1\,.$
\end{enumerate}
All these functions are continuous ($g(r)$ is actually smooth). We argue in Appendix~\ref{app:smooth_functions} that this is sufficient for our purposes.
\subsection{Smearing functions without a definite sign}
We have explored the following family of $L^2$-orthonormal functions of compact support
\begin{equation}
k^{(n)}(r)=
\frac{\sin(2\pi n r)}{2\pi n r}\,\Theta(1-r)\, .\label{SF3D.sinc}
\end{equation}
These functions are continuous as well. When they are used to define modes supported in disjoint regions, we find no entanglement between them. In the next section, we will use these functions to define modes supported {\em in the same region} (this is possible since these functions are orthogonal to each other for different values of $n$), in which case we do find entanglement (see Sec.~\ref{overlapping} below).
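A quick numerical check of this orthogonality, in our own hedged sketch for $D=3$ (where the angular integration contributes a factor of $4\pi$), is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def k(n, r):
    # np.sinc(x) = sin(pi x)/(pi x), so this is sin(2 pi n r)/(2 pi n r)
    return np.sinc(2 * n * r)

def inner(n, m):
    """L^2 inner product over the unit ball in D = 3.
    (Illustrative helper, not from the paper.)"""
    val, _ = quad(lambda r: 4*np.pi * r**2 * k(n, r) * k(m, r), 0.0, 1.0)
    return val

print(inner(1, 2))  # ~ 0: modes with different n commute
print(inner(2, 2))  # > 0: fixes the normalization constant
\end{verbatim}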
\subsection{Combinations of field and momentum}
The modes of the field used in the calculations presented in previous sections were all defined from pairs of operators of the form of a pure-field and a pure-momentum operator, $(\hat \Phi_i, \hat \Pi_i)$, both constructed from the same smearing function (up to a constant factor $c$ in $\hat \Pi_i$, needed to ensure that both operators have the same units).
Here we extend these calculations by considering modes defined from (the algebra generated by) pairs of canonically conjugate operators of the following form
\begin{align} \label{modesmix}
\hat{O}^{(1)}_A & = \frac{1}{\sqrt{2N}} \sum_{i=1}^{N}\left(\hat{\Phi}^{(2i-1)}_A - \hat{\Pi}^{(2i)}_A\right)\, , \\
\hat{O}^{(2)}_A & = \frac{1}{\sqrt{2N}}\sum_{i=1}^N \left(\hat{\Phi}^{(2i)}_A + \hat{\Pi}^{(2i-1)}_A\right)\, ,
\end{align}
where $\hat{\Phi}^{(n)}_A$ and $\hat{\Pi}^{(n)}_A$ indicate field and momentum operators, respectively, smeared with the element $k^{(n)}(r)$ of the family of orthonormal functions written in Eq.~\eqref{SF3D.sinc}.
Therefore, each of these two operators $\hat{O}^{(i)}_A$ is made by combining pure-field and pure-momentum operators, each constructed from a different smearing function. The orthonormality of the smearing functions $k^{(n)}(r)$ guarantees that $\hat{O}^{(1)}_A$ and $\hat{O}^{(2)}_A$ are canonically conjugate.
Subsystem B is defined in the same way, with support in a spherical region as close as possible to the support of subsystem A. We have explored different values of $N$, from 1 to 10 in \eqref{modesmix}, and we have not found entanglement between the two subsystems in any case. \\
The analysis of this section reveals that the absence of entanglement in $D\geq 2$ between pairs of modes supported in two disjoint spherical regions is not unique to the smearing functions we have chosen, but rather a generic fact.
\section{\label{sec:6}Pairs of entangled modes}
The analysis so far shows that, for $D\geq 2$, finding entanglement between two field modes is not an easy task, or at least not as easy as one might have thought. In fact, for none of the modes explored so far have we found pair-wise entanglement in $D\geq 2$. We have presented results for a massless scalar field, but introducing a mass only makes entanglement weaker. Does this mean that it is actually impossible to find pairs of modes of the field which are entangled? Absolutely not. Entanglement is intrinsic to every quantum state of multi-mode systems, even in ordinary quantum mechanics, in the sense that, for every quantum state, one can find subsystems that are entangled \cite{Zanardi:2004zz,Agullo:2022ttg}.
As a simple illustrative example, consider two uncoupled harmonic oscillators prepared in the ground state (the product of the ground states of each oscillator). This is a product state, and therefore the subsystems defined by $(\hat x_A,\hat p_A)$ and $(\hat x_B,\hat p_B)$ are obviously not entangled. However, it is not difficult to find other partitions of the system for which entanglement shows up in the ground state. A simple choice is given by the two subsystems $(\hat x_1,\hat p_1)$ and $(\hat x_2,\hat p_2)$, where
\begin{eqnarray}
\hat x_1&=&\cosh z\, \hat x_A+\sinh z\,\hat x_B\, , \nonumber \\
\hat p_1&=&\cosh z\, \hat p_A-\sinh z\,\hat p_B\, ,\nonumber \\
\hat x_2&=&\sinh z\, \hat x_A+\cosh z\,\hat x_B\, ,\nonumber \\
\hat p_2&=&-\sinh z\,\hat p_A+\cosh z\,\hat p_B\, , \nonumber
\end{eqnarray}
with $z\in \mathbb{R}$. Subsystems $(\hat x_1,\hat p_1)$ and $(\hat x_2,\hat p_2)$ are obtained by mixing the original oscillators, but each pair defines a legitimate mode of the system. It is straightforward to check that the LogNeg between these two modes is $E_{\mathcal{N}}= \frac{2 \, |z|}{\ln 2}$; therefore, there is entanglement between these two modes for all $z\neq 0$, and it grows monotonically with $|z|$.
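This value is easy to verify numerically with the Gaussian toolbox sketched earlier: in the ground state the covariance matrix of $(\hat x_A,\hat p_A,\hat x_B,\hat p_B)$ is the identity, and the new modes are obtained from it by a symplectic map. The snippet below (ours, reusing the \texttt{log\_negativity} routine sketched in Sec.~\ref{subsec:2}) reproduces $2|z|/\ln 2$:
\begin{verbatim}
import numpy as np

z = 0.3
c, s = np.cosh(z), np.sinh(z)
# Symplectic map from (x_A, p_A, x_B, p_B) to (x_1, p_1, x_2, p_2)
S = np.array([[c,  0, s,  0],
              [0,  c, 0, -s],
              [s,  0, c,  0],
              [0, -s, 0,  c]])
sigma = S @ S.T  # ground-state covariance matrix is the identity
print(log_negativity(sigma, n_a=1))  # -> 2*z/np.log(2) ~ 0.8656
\end{verbatim}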
This example reminds us of the well-known fact that entanglement is not a property of a quantum state alone; it is an attribute of a state {\em and} a choice of subsystems. Furthermore, for every quantum state of a multi-mode system there are choices of subsystems for which entanglement is present~\cite{ABRM_2022}. When we simply say that a quantum state is not entangled, it is because we implicitly assume a natural or physically preferred set of modes of the system.
Coming back to field theory in Minkowski spacetime, it is straightforward to find modes which are entangled in the vacuum, by simply mimicking the example of the two harmonic oscillators. Consider any pair of modes $(\hat \Phi_A, \hat \Pi_A)$ and $(\hat \Phi_B, \hat \Pi_B)$ considered in previous sections, for which we found that the reduced state is separable. From them, we can construct new modes
$(\hat \Phi_1, \hat \Pi_1)$ and $(\hat \Phi_2, \hat \Pi_2)$, where
\begin{eqnarray}
\hat \Phi_1&=&\cosh z\,\hat \Phi_A+\sinh z\, \hat \Phi_B\, , \nonumber \\
\hat \Pi_1&=&\cosh z\,\hat \Pi_A-\sinh z\,\hat \Pi_B\, ,\nonumber \\
\hat \Phi_2&=&\sinh z\, \hat \Phi_A+\cosh z\,\hat \Phi_B\, ,\nonumber \\
\hat \Pi_2&=&-\sinh z\, \hat \Pi_A+\cosh z\, \hat \Pi_B\, , \nonumber
\end{eqnarray}
with $z\in \mathbb{R}$. We have shown that there exists a minimum value of $|z|$ above which the new subsystems are entangled. (Contrary to the example of the harmonic oscillators, $|z|$ must be above a non-zero threshold. The reason is that, in field theory, the reduced state describing the two modes together is always mixed (non-zero entropy). This mixedness acts as a source of noise for entanglement, requiring a minimum amount of ``squeezing'' intensity $|z|$ to entangle the modes. The threshold value of $|z|$ depends on the smearing functions chosen to define the initial pair of modes.)
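A hedged numerical illustration of this threshold (our sketch, reusing \texttt{J}, \texttt{L}, \texttt{N2} and \texttt{log\_negativity} from the earlier sketches; the mixing acts separately on the field and momentum blocks, so the factors of $R/c$ again drop out of $E_{\mathcal{N}}$) is:
\begin{verbatim}
import numpy as np

def logneg_mixed(D, delta, rho, z):
    """E_N between (Phi_1, Pi_1) and (Phi_2, Pi_2) versus z.
    (Illustrative helper, not from the paper.)"""
    Nd2 = float(N2(D, delta))
    F0 = 2*Nd2*float(J(D, -1, delta)); F1 = 2*Nd2*float(L(D, -1, delta, rho))
    P0 = 2*Nd2*float(J(D,  1, delta)); P1 = 2*Nd2*float(L(D,  1, delta, rho))
    sigma = np.array([[F0, 0, F1, 0],  # basis (Phi_A, Pi_A, Phi_B, Pi_B)
                      [0, P0, 0, P1],
                      [F1, 0, F0, 0],
                      [0, P1, 0, P0]])
    c, s = np.cosh(z), np.sinh(z)
    S = np.array([[c, 0, s, 0], [0, c, 0, -s],
                  [s, 0, c, 0], [0, -s, 0, c]])
    return log_negativity(S @ sigma @ S.T, n_a=1)

# Scan z to locate the threshold, e.g. for D = 3, delta = 1, rho = 3:
# [logneg_mixed(3, 1, 3.0, z) for z in np.linspace(0.0, 2.0, 21)]
\end{verbatim}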
Although we can define in this way a new pair of modes, $(\hat \Phi_1, \hat \Pi_1)$ and $(\hat \Phi_2, \hat \Pi_2)$, that are entangled, these modes are not supported in disjoint regions, since each of them is a combination of a part supported in region A and a part supported in B. In this sense, these modes are non-local and somewhat unnatural.
In the remainder of this section, we describe other ---less obvious and therefore more informative--- examples of pairs of entangled modes in field theory.
\subsection{Independent modes with overlapping support}\label{overlapping}
Motivated by the intuition that entanglement falls off very rapidly with distance, we consider two modes defined in the same region of space. This can be done by defining each mode from a pair $(\hat \Phi_n,\hat \Pi_n)$ using the smearing functions in the family $k^{(n)}(r)$ defined in \eqref{SF3D.sinc}. These functions are orthogonal to each other for different values of $n$, hence they define independent (commuting) modes. We will use $n=n_A$ to define mode A, and $n=n_B\neq n_A$ for mode B.
We have found that two such modes {\em are entangled} in $D=3$. The entanglement is largest when $n_A=1$ and $n_B=2$, and decreases with both $n_A$ and $n_B$. These results are illustrated for $D=3$ in Fig.~\ref{fig.LN.vs.n2}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\textwidth]{additional notes/figures/LN_varying_n2.pdf}
\caption{LogNeg between two modes of the form $(\hat \Phi_i,\hat \Pi_i)$, $i=A,B$, defined in the same region of space, from the family of orthonormal functions $k^{(n)}(r)$ introduced in \eqref{SF3D.sinc}. System A is defined using $k^{(n_A)}(r)$ and similarly for system B, with $n_A\neq n_B$. We plot the LogNeg vs $n_B$, for $n_A=1$. Two messages can be extracted from this plot: (i) the two modes are entangled, and (ii) entanglement is larger the closer $n_A$ and $n_B$ are. }\label{fig.LN.vs.n2}
\end{center}
\end{figure}
\subsection{Rindler modes}
\begin{figure*}[t]
\centering
\subfigure[\label{fig:ball&shell}]{
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=0.4\textwidth]{additional notes/FigsPRM/Ball_shell.pdf}};
\node[white,rotate=55] at (1.25,2.2) {$d_B$};
\node[white, rotate=42] at (-1.25,-0.75) {$R_A=R_B$};
\end{tikzpicture}
}~\subfigure[]{
\begin{tikzpicture}
\node at (0,0) { \includegraphics[width=0.4\textwidth]{additional notes/FigsPRM/Ball_shell_distance.pdf}};
\node[white,rotate=75] at (0.75,1) {$R_B$};
\node[white, rotate=50] at (-1.2,-0.75) {$R_A$};
\node[white, rotate=-30] at (-2,1.75) {$d_B$};
\end{tikzpicture}
}
\caption{Regions of support for two field modes. The orange sphere is where the mode defining subsystem A is supported, while the mode in B is supported in the blue shell. The two panels illustrate the freedom we have in the distance between the sphere and the shell, as well as the thickness of the shell.}
\label{fig:ball&shell3D}
\end{figure*}
A well-known example of pairs of modes that are entangled in the vacuum is that of Rindler modes. For completeness, in this subsection we use our tools to check that the right-Rindler and left-Rindler modes are indeed entangled. This is an example of modes with support in disjoint regions which nevertheless are entangled.
Let $\hat{a}^{\mathrm{R}}_{\omega\vec{k}_{\perp}}$ and $\hat{a}^{\mathrm{L}}_{\omega\vec{k}_{\perp}}$ be standard right and left, respectively, Rindler annihilation operators (see, for instance, \cite{Crispino:2007eb}), where $\omega$ is the Rindler frequency, and $\vec{k}_{\perp} = (k_x,k_y)$ is the momentum in the directions perpendicular to the Rindler acceleration, which we assume to be in the $z$-direction. From these operators and their adjoints, we build two modes $(\hat{X}^{\mathrm{R}}_{\omega\vec{k}_{\perp}}, \hat{P}^{\mathrm{R}}_{\omega\vec{k}_{\perp}})$ and $(\hat{X}^{\mathrm{L}}_{\omega\vec{k}_{\perp}}, \hat{P}^{\mathrm{L}}_{\omega\vec{k}_{\perp}})$:
\begin{align}
\hat{X}^{\mathrm{R}}_{\omega\vec{k}_{\perp}} &= \frac{1}{\sqrt{2}}\left(\hat{a}^{\mathrm{R}}_{\omega\vec{k}_{\perp}} + \hat{a}^{\mathrm{R}\,\dagger}_{\omega\vec{k}_{\perp}} \right)\,,
\\
\hat{P}^{\mathrm{R}}_{\omega\vec{k}_{\perp}} &= \frac{-i}{\sqrt{2}}\left(\hat{a}^{\mathrm{R}}_{\omega\vec{k}_{\perp}} - \hat{a}^{\mathrm{R}\,\dagger}_{\omega\vec{k}_{\perp}} \right)\,,
\\
\hat{X}^{\mathrm{L}}_{\omega'\vec{k}'_{\perp}} &= \frac{1}{\sqrt{2}}\left(\hat{a}^{\mathrm{L}}_{\omega'\vec{k}'_{\perp}} + \hat{a}^{\mathrm{L}\,\dagger}_{\omega'\vec{k}'_{\perp}} \right)\,,
\\
\hat{P}^{\mathrm{L}}_{\omega'\vec{k}'_{\perp}} &= \frac{-i}{\sqrt{2}}\left(\hat{a}^{\mathrm{L}}_{\omega'\vec{k}'_{\perp}} - \hat{a}^{\mathrm{L}\,\dagger}_{\omega'\vec{k}'_{\perp}} \right)\, .
\end{align}
They are independent of each other (i.e., they commute) and satisfy canonical commutation relations, $[\hat{X}^{\mathrm{R}(\mathrm{L})}_{\omega\vec{k}_{\perp}},\hat{P}^{\mathrm{R}(\mathrm{L})}_{\omega'\vec{k}'_{\perp}}]=i\, \delta(\omega-\omega')\delta^2(\vec{k}_{\perp}-\vec{k}'_{\perp}) $.
Following the procedure introduced in section~\ref{sec:2}, we compute the components of the covariance matrix $\sigma_{RL}(\omega,\vec{k}_{\perp},\omega',\vec{k}'_{\perp})$ of the reduced system for these two modes, supported in different wedges, in the Minkowski vacuum. They are:
\begin{align}
\braket{ (\hat{X}^{\mathrm{R(L)}}_{\omega\vec{k}_{\perp}})^2} &= \frac{1}{2}\coth{\left(\pi\omega/a\right)}\,\delta^3(0)\,,
\\
\braket{ (\hat{P}^{\mathrm{R(L)}}_{\omega\vec{k}_{\perp}})^2} &= \frac{1}{2}\coth{\left(\pi\omega/a\right)}\,\delta^3(0)\,,
\end{align}
and
\begin{equation}
\braket{ \{\hat{X}^{\mathrm{R}}_{\omega\vec{k}_{\perp}},\hat{X}^{\mathrm{L}}_{\omega'\vec{k}'_{\perp}}\}} =\left[\sinh{(\pi\omega/a)}\right]^{-1}\delta(\omega - \omega')\delta^{2}(\vec{k}_{\perp} + \vec{k}'_{\perp})\,,
\end{equation} and \begin{equation}
\braket{ \{\hat{P}^{\mathrm{R}}_{\omega\vec{k}_{\perp}},\hat{P}^{\mathrm{L}}_{\omega'\vec{k}'_{\perp}}\}} = -\left[\sinh{(\pi\omega/a)}\right]^{-1}\delta(\omega - \omega')\delta^{2}(\vec{k}_{\perp} + \vec{k}'_{\perp}) \; .
\end{equation}
The constant $a$ is the acceleration of the Rindler frame. The presence of Dirac deltas is a result of the normalization of the modes and, as usual, it can be removed by using wave-packets instead of plane-waves.
Note that $\sigma_{RL}$ is ``diagonal'' in the labels $(\omega,\vec{k}_{\perp})$ and $(\omega',\vec{k}'_{\perp})$: right and left modes are correlated only when $\omega'=\omega$ and $\vec{k}'_{\perp}=-\vec{k}_{\perp}$, and any other pair of right and left modes is uncorrelated and unentangled, as expected.
From the previous expressions, we compute the partially transposed covariance matrix and its symplectic eigenvalues for $\omega'= \omega$ and $\vec{k}_{\perp} =-\vec{k}'_{\perp}$. The smallest of these eigenvalues is
\begin{equation}\label{tildenuR}
\tilde{\nu}_- = \delta^3(0) \, \tanh\left( \frac{\omega \pi }{2\,a}\right) \,.
\end{equation}
This eigenvalue is smaller than one for any $a\neq 0$. Consequently, right and left Rindler modes with the same labels $(\omega,\vec{k}_{\perp})$ are entangled in the Minkowski vacuum as long as $a\neq 0$ and their entanglement grows monotonically with $a$.
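As a numerical cross-check of \eqref{tildenuR}, one can strip the Dirac deltas, assemble the two-mode covariance matrix from the correlators above, and extract the smallest symplectic eigenvalue of its partial transpose. The sketch below uses the convention in which the vacuum covariance matrix is one half times the identity, so the quantity to compare with $\tanh(\pi\omega/2a)$ is $2\tilde{\nu}_-$:
\begin{verbatim}
import numpy as np

def nu_tilde_min(omega_over_a):
    # Smallest symplectic eigenvalue of the partially transposed covariance
    # matrix of a right/left Rindler pair (delta-normalization stripped;
    # vacuum convention <X^2> = <P^2> = 1/2, ordering (X_R,P_R,X_L,P_L)).
    x = np.pi * omega_over_a
    coth, csch = 1 / np.tanh(x), 1 / np.sinh(x)
    sigma = 0.5 * np.array([[coth, 0, csch, 0],
                            [0, coth, 0, -csch],
                            [csch, 0, coth, 0],
                            [0, -csch, 0, coth]])
    P = np.diag([1, 1, 1, -1])        # partial transpose on the left mode
    Om = np.array([[0, 1, 0, 0], [-1, 0, 0, 0],
                   [0, 0, 0, 1], [0, 0, -1, 0]])
    return np.min(np.abs(np.linalg.eigvals(1j * Om @ P @ sigma @ P)))

for w in (0.5, 1.0, 2.0):             # w = omega/a
    print(2 * nu_tilde_min(w), np.tanh(np.pi * w / 2))   # the pairs agree
\end{verbatim}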
This calculation refers to Rindler modes defined using plane-waves for which, strictly speaking, all quantities above blow up. The calculations can be made finite by replacing plane-waves by wave-packets, and the finite part of \eqref{tildenuR} should be understood as the limiting result when the support of the wave-packets tends to the entire Rindler wedge. It is an interesting question whether the use of wave-packets with finite support requires a minimum acceleration $a$ for right and left modes to be entangled.
\subsection{Entanglement between spherical shells}
Since correlations decrease with the distance between the two subsystems, we have explored other configurations for two modes, where the ``contact'' between A and B is maximized. Such is the case when subsystem A has support within a sphere, while subsystem B is supported in a spherical shell surrounding it. The region of support of each mode is illustrated in Fig.~\ref{fig:ball&shell3D}.
In this section, we construct modes from pairs of operators of the form $(\hat \Phi_i,\hat \Pi_i)$, $i=A,B$, similarly to Sec.~\ref{sec:3}, with the difference that in this section we use the following smearing functions
\begin{equation}
f_A(r;R_A)=
\cos^2\left(\frac{\pi}{2}\frac{r}{R_A}\right)\, \Theta(R_A-r)\, ,\end{equation}
for the sphere (subsystem A), and
\begin{equation}\begin{split}
&f_B(r;R_B,d_B)=\\
& \begin{cases}
\sin^2\left(\pi\frac{r-R_B}{d_B}\right) \qquad R_B\leq r\leq R_B+ d_B\\
0,\qquad \qquad \qquad \qquad \text{otherwise}
\end{cases}\end{split}\label{SFf2}
\end{equation}
for the shell (subsystem B).
These functions and their first derivatives are continuous.
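These smearing functions are straightforward to implement; the short sketch below (with our own parameter defaults) evaluates them and illustrates that both the values and the first derivatives vanish at the boundaries of the supports:
\begin{verbatim}
import numpy as np

def f_A(r, R_A=1.0):
    # cos^2 bump compactly supported on [0, R_A] (the sphere)
    return np.where(r < R_A, np.cos(0.5 * np.pi * r / R_A) ** 2, 0.0)

def f_B(r, R_B=1.0, d_B=0.5):
    # sin^2 bump compactly supported on [R_B, R_B + d_B] (the shell)
    inside = (r >= R_B) & (r <= R_B + d_B)
    return np.where(inside, np.sin(np.pi * (r - R_B) / d_B) ** 2, 0.0)

eps = 1e-6   # approaching the support boundaries, both functions -> 0
print(f_A(np.array([1.0 - eps])))             # vanishingly small
print(f_B(np.array([1.0 + eps, 1.5 - eps])))  # both vanishingly small
\end{verbatim}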
While the smearing function in the sphere depends only on one parameter, namely the radius $R_A$, the mode in the shell is parameterized by the inner radius $R_B$ and the thickness of the shell, $d_B$. We have checked that for a massless scalar field the quantities we evaluate in this section (the LogNeg) do not change if we re-scale these three parameters simultaneously. Therefore, we have only two independent parameters, which we choose to be $R_B$ and $d_B$ measured in units of $R_A$. Another freedom we play with is the dimension of space, $D$.
We compute the LogNeg between subsystems A and B and the way it changes with $R_B$, $d_B$ and $D$. The results of this section are obtained numerically.
First, Fig.~\ref{fig:LogNeg_vs_D_Rs1} shows the value of the LogNeg when there is no gap between the shell and the sphere $(R_A=R_B)$ (the optimal case) and for a fixed value of $d_B$. The LogNeg is evaluated as a function of the dimensionality of space $D$. The main lessons from this plot are that (i) the LogNeg is different from zero!, and (ii) it decreases in higher dimensions, completely vanishing for $D>6$ for the values of $d_B$ we have chosen.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{additional notes/FigsPRM/LogNeg_vs_D_Ra1Rb1.pdf}
\caption{LogNeg as a function of the spatial dimension, $D$, for the configuration of two modes depicted in Fig.~\ref{fig:ball&shell3D} (a), i.e., when there is no gap between the sphere and the shell, $R_A=R_B =1$. We choose $d_B=0.5$ for this plot.}
\label{fig:LogNeg_vs_D_Rs1}
\end{figure}
Next, we study how the LogNeg depends on the distance between A and B, as quantified by $R_B-R_A$. This is shown in Fig.~\ref{fig:LogNeg_vs_RbmRa_D2to6_delta0p5} for several values of $D$. As expected, the LogNeg quickly falls off with the distance between both subsystems, and disappears beyond some threshold distance. Once more, we see that entanglement is weaker in higher dimensional theories.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/LogNeg_vs_RbmRa_D2to6_delta0p5_v2.pdf}
\caption{LogNeg for the same configuration as in Fig.~\ref{fig:LogNeg_vs_D_Rs1}, now plotted versus the separation between the sphere and the shell, for different spatial dimensions, $D$. We used $d_B = 0.5$ for the thickness of the shell (all distances measured in units of $R_A$).}
\label{fig:LogNeg_vs_RbmRa_D2to6_delta0p5}
\end{figure}
Finally, we study how the LogNeg depends on the thickness of the shell, $d_B$. Fig.~\ref{fig:LogNeg_vs_dB_D3} shows a curious result: the LogNeg is different from zero in a finite interval, and vanishes when $d_B$ is either bigger or smaller than this interval.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{additional notes/FigsPRM/LogNeg_vs_dB_D3.pdf}
\caption{LogNeg as a function of the width of the shell, when there is no gap between the sphere and the shell. Interestingly, the LogNeg is different from zero only in a finite interval of $d_B$.}
\label{fig:LogNeg_vs_dB_D3}
\end{figure}
As a final curiosity, we plot in Fig.~\ref{fig:Several_shells} an ``onion''-like configuration, in which we increase the number of spherical shells and assign one mode per shell alternating between modes in subsystem A and B. Fig.~\ref{fig:LogNeg_Several_shells} shows what is expected: the LogNeg grows monotonically with the number of layers.
\savebox{\mybox}{\includegraphics[width=0.5\textwidth]{additional notes/figures/LNalt.pdf}}
\begin{figure*}
\centering
\subfigure[\label{fig:Several_shells}]{
\begin{minipage}{0.45\textwidth}
\centering
\vbox to \ht\mybox{%
\vfill
\includegraphics[width=0.8\textwidth]{Figs/Several_shells.pdf}
\vfill
}
\end{minipage}}~\subfigure[\label{fig:LogNeg_Several_shells}]{\begin{minipage}{0.45\textwidth}
\centering
\usebox{\mybox}
\end{minipage}}
\caption{(a) Regions of support of modes in an ``onion''-like configuration. The orange sphere and shells belong to subsystem A and the blue shells belong to subsystem B. (b) LogNeg as a function of the total number of degrees of freedom $N=N_A+N_B$. As expected from our previous results, the LogNeg grows with the number of shells.}
\end{figure*}
In summary, in this subsection we have (finally!) found configurations of two modes compactly supported in disjoint regions that are entangled in $D\geq 2$ spatial dimensions. Entanglement is fragile, in the sense that it disappears when increasing the distance between the modes or the dimension of space. These results contain a valuable message: entanglement can be found in pairs of disjoint and compactly supported modes, but one needs to carefully choose their spatial configuration. This is in consonance with recent results in \cite{deSLTorres:2023aws}, which indicate that the entanglement between regions $R_A$ and $R_B$ is largely concentrated in modes sharply supported near the boundaries.
\section{Discussion}\label{sec:discussion}
Entanglement in quantum field theory has been discussed in detail from diverse perspectives. The entanglement entropy associated with a region of space $R$ is perhaps the most studied quantity in this context \cite{Calabrese:2004eu}; it has played an important role in many developments in theoretical physics, ranging from black holes \cite{Solodukhin:2011gn,Srednicki:1993im,Bombelli:1986rw,Sorkin:1985bu} to the quantum nature of spacetime \cite{Jacobson:1995ab,Ryu:2006bv,Bianchi:2012ev}. This entropy also presents some inconveniences. In the first place, it is intrinsically divergent, requiring a regulator to extract a finite value from it, a procedure that introduces ambiguities. Moreover, the Hilbert space of the field theory is {\em not} of the form $\mathcal{H}_R\otimes \mathcal{H}_{\bar R}$, with $\mathcal{H}_R$ the Hilbert space of the field degrees of freedom within region $R$ and $\mathcal{H}_{\bar R}$ the analog for the region $\bar R$ complementary to $R$. This implies that one is outside the standard realm to define entanglement in quantum mechanics, making it unclear how to interpret this entropy in terms of entanglement between region $R$ and its complement \cite{Hollands:2017dov}.
On the other hand, starting from the Reeh-Schlieder theorem \cite{reehschlider}, it is possible to show that the field degrees of freedom supported within two compact regions of space, $R_A$ and $R_B$, which are separated from each other, are entangled for any state satisfying the Reeh-Schlieder property. The separation guarantees that entanglement is finite \cite{Verch:2004vj,Hollands:2017dov}. In Minkowski spacetime, the Reeh-Schlieder property holds for any state of finite energy, including the vacuum, implying that entanglement is ubiquitous in this theoretical paradigm.
These results, although of great conceptual interest, involve subsystems with infinitely many degrees of freedom and do not tell us how much finite sets of field modes are entangled, or even if they are entangled at all. The primary goal of this paper is to introduce a strategy to address these questions and answer them in some concrete examples.
The calculations in this article are restricted to a free scalar theory in $D+1$-dimensional Minkowski spacetime. We extract individual degrees of freedom (modes) of the field, localized in a compact region of space, by smearing the field and its conjugate momentum against functions of compact support. Each mode defines an algebra isomorphic to the algebra of an ordinary harmonic oscillator. This strategy provides a way of extracting a finite set of modes out of the field in a local and covariant manner, to which the standard tools of quantum mechanics to evaluate entanglement can be applied. In particular, our strategy is free of divergences plaguing other approaches.
The focus on a finite number of modes is motivated by the finite capabilities of observers. The resulting system has all the benefits of a lattice field theory regarding conceptual and computational simplicity, while keeping the richness and subtleties of the continuum theory, since, in contrast to lattice field theory, we do not truncate the degrees of freedom prior to quantization.
The concrete family of modes under consideration depends on the choice of smearing functions. We have explored different families in this work, and have focused on results that are common to all of them. For some of our smearing functions, we obtain results analytically in $D$ spatial dimensions. For other smearings, we proceed numerically. We have tested our numerical tools against the analytical results when they are available, and checked that they agree to high precision.
Using the Gaussianity of the Minkowski vacuum, we have computed the reduced state describing a finite number of modes, and evaluated its entropy, Mutual Information, and entanglement. In particular, we have checked that the reduced state describing finite dimensional subsystems is always mixed, in agreement with general results \cite{Ruep:2021fjh}.
The main lesson from our analysis is that it is difficult to find pairs of modes supported in disjoint regions of space separated by a non-zero distance which are entangled. The difficulty increases with the dimensionality of space. Namely, in $D=1$ it is relatively easy to find pairs of such modes which are entangled, but this task becomes increasingly challenging for $D\geq 2$. In fact, we find that the regions of support of two modes need to be carefully chosen to find any entanglement for $D\geq 2$. One example of a configuration for which we find entanglement is when one mode is supported within a sphere while the other mode is supported on a spherical shell surrounding the sphere. This configuration is efficient in minimizing the distance between both modes, and is able to capture pair-wise entanglement. (Another configuration in which we have found pair-wise entanglement for $D\geq 2$ is when two independent modes coexist in the same region of support.)
In the cases where we find entanglement between a pair of modes, or between two subsystems each made of a finite number of them, we have checked that the entanglement quickly disappears when the distance between the subsystems increases.
Hence, we conclude that entanglement in field theory is not as prevalent as normally thought. It is ubiquitous when considering subsystems containing infinitely many modes, but not for finite dimensional systems. This is an important lesson which, to the best of our knowledge, has not been pointed out before. Furthermore, this result is compatible with the Reeh-Schlieder theorem, which guarantees that given a field mode supported in a region $R_A$, and a second region $R_B$, there exists at least one mode in $R_B$ that is entangled with the given mode in $R_A$ \cite{Hollands:2017dov}. However, the theorem does not tell us how many modes in $R_B$ are entangled with the given mode in $R_A$, or how complicated such a mode is. Our results show that one needs to carefully select the mode in $R_B$ to find any entanglement. This is compatible with, and complementary to, the results recently obtained in \cite{deSLTorres:2023aws}, which indicate that the entanglement between regions $R_A$ and $R_B$ is largely concentrated in modes sharply supported near the boundaries.
We finish by pointing out a potential tension between our results and the protocol of entanglement harvesting in quantum field theory~\cite{Reznik:2002fz,Reznik:2003mnx,Pozas-Kerstjens:2015gta}. This protocol couples the field theory to two non-relativistic systems, which play the role of detectors. These detectors are turned on only for a finite amount of time, and are separated in space in such a way that they remain spatially separated during the interval they are on. The detectors are prepared in their respective ground states, so the initial state of the system made of the two detectors is a product state with no entanglement. After the interaction, one is interested in knowing whether the two detectors end up in an entangled state. If they do, because the detectors do not interact with each other, nor do they interact via the field because they remain spatially separated, the only possible origin of the entanglement is entanglement in the field itself which has been swapped to the detectors by means of the interaction. In this sense, the detectors ``harvest'' entanglement from the field.
A careful inspection of the protocol reveals that each detector is coupled to a {\em single degree of freedom of the field}, that is defined from the operator $\hat \Phi[f]$ smeared with the function $f(\vec x)$ describing the support of the detector. Our calculations reveal that pairs of such modes of the field that are separated by a non-zero distance, are generically not entangled. In particular, we have checked that they are not entangled if we use the smearing functions $f(\vec x)$ normally used in the harvesting protocol.
From this point of view, it is difficult to understand where the entanglement in the detectors is coming from, since the field modes to which they are coupled are not entangled. We do not have an answer to this question at the moment, and further work is needed to unravel whether the entanglement in the detectors is an artifact of their non-relativistic nature or their non-local coupling to the field, or whether it is genuine entanglement harvested from the field in a subtle manner.
\acknowledgments
The content of this paper has benefited enormously from discussion with: A.~Ashtekar, E.~Bianchi, B.~Elizaga-Navascues, A.~Delhom, G.~Garcia-Moreno, S.~Hollands, E.~Martin-Martinez, J. Polo-Gomez, A. del Rio, K. Sanders, and V. Vennin. I.A.\ and D.K.\ are supported by the NSF grant PHY-2110273, and by the Hearne Institute for Theoretical Physics. S. N. is supported by the Universidad de Valencia, within the Atracci\'o de Talent Ph.D fellowship No. UV-INV- 506 PREDOC19F1-1005367.
\section{Introduction}
\label{sec:introduction}
\subsection{Associative Polymers}
\label{sec:Associative Polymers}
Associative polymers are polymer chains decorated with functional groups, or stickers, which can form thermo-reversible bonds with each other \cite{RubColby2003,RubDob97}. This reversibility distinguishes them from polymers which have been chemically cross-linked, practically permanently \cite{Martin1991-od}. With a sufficiently high number of bonds or cross-links, individual polymer chains become united in a system-spanning mesh capable of bearing load, and the system has become a gel \cite{Flory1974-tm}. In a permanent chemical gel, this load is supported indefinitely, but in a reversible "physical" gel, thermally activated dissociations release stress over time and the solution will eventually flow \cite{Douglas2018-xh}. In chemical gels, the onset of gelation is generally triggered by an increase in concentration or the extent of the cross-linking reaction \cite{Winter:2000gw,Nicolai1997-su}. In physical gels, concentration, temperature, and sticker strength are typical control variables \cite{Parada2018-fx}. In this work these controls are collectively referred to as triggering variables. In both classes of gel, architecture of the unbound precursor chain (branches and sticker locations) can also play a role in the material properties \cite{Chassenieux2011-fv}.
Physical gels have attracted attention due to their combination of practically elastic behavior, ultimate fluidity, and adjustable properties\cite{Parada2018-fx}. Since the relaxation is driven by thermal activation, physical gels have temperature-dependent rheological properties. Further, solution states such as pH and concentration \cite{Yan2004-zl,Furikado2014-vr} which can be modified after formulation can also affect the gel without depending on the preparation state. These qualities grant physical gels an adaptability that is impossible in chemical gels. There are also many different associative groups to decorate the polymers with \cite{Webber2016-hj,Kloxin2013-ci,Wang2015-kv,Voorhaar:2016jt}, which make the sticker strength and functionality design variables as well. The proliferation of control parameters on physical gels makes them ideal materials for applications that call for rapid prototyping or precisely tuned rheological properties \cite{Yan2004-zl,Furikado2014-vr,Parada2018-fx}. They have two other valuable qualities as well. The associative mechanism for gelation means that if a gel is torn apart by excessive load, the reversible bonds will dissociate rather than the polymer chains breaking, and the stickers will then form new associations to rebuild the mesh in a self-healing process \cite{Stukalin2013-hu,Herbst2013-gi,Chaudhuri2016-lf,Holten-Andersen2011-ke}. In addition, the rheology of associative polymers is driven by the statistical physics of reversible bonds instead of a particular chemistry. This means that applications with limited chemical options (such as medical technology \cite{Graham1987-mp}) are readily developed.
The focus of this work is on the effects of polymer concentration and "sticker strength" (the activation energy needed to break a reversible bond) on the rheology of a physical gel. Schematic impressions of the effects of these parameters are shown in Fig.~\ref{fig:Goft Schematics}. Broadly, there are obvious limiting behaviors for these two parameters, but a clear understanding of the crossover between these limits is not yet established. In the limit of high sticker strength, the behavior of a physical gel must approach that of the thoroughly studied chemical gels. That is, at low concentration, stickers can only bond with other stickers on the same chain, and no network forms. At very high concentration, all stickers are bound, most to different chains, and a robust elastic network is formed. In the limit of low sticker strength, associative polymers must behave as undecorated "homopolymers" which do not form a gel at any concentration. Somewhere in between these sticker strength limits there must be a point at which the signatures of gelation disappear. In this work, simulations of a model system are used to resolve this point and describe the differences in behavior on either side of it. A curious discovery from this investigation has been that a system-spanning network can exist, and yet not exhibit the rheological signatures that are conventionally used to identify a gel.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{fig1.pdf}
\caption{Schematic representations of the hypothesized stress responses of physical gels and their dependence on the number and strength of reversible bonds. (a) Given a fixed sticker strength, the blue curves represent changing the density of bonds (for example, by changing the polymer concentration). Darker shades correspond to a “weaker” gel with a lower elastic plateau. (b) Given a fixed concentration, red curves represent changing the bond lifetime (for example, by varying the interaction energy). Darker shades approach the “evanescent” gel behaviour, in which the elastic plateau submerges into the single chain response. In both figures, the broken lines represent the hypothesized viscoelastic response of the network, which is not readily separable from the single chain response, so only their combined response (solid coloured lines) is observed.}
\label{fig:Goft Schematics}
\end{figure*}
It is necessary to clarify promptly some terminology about gels and associations. The long-time (or low-frequency) plateau in the elastic modulus of a physical gel (labelled $G_\epsilon$ in Fig.~\ref{fig:Goft Schematics}) is sometimes referred to as $G_0$, but this notation sets aside the relaxation processes within the polymer chains. Since this work is in a regime where both phenomena are relevant, we denote the (persistent but ultimately finite-lived) elastic modulus of a physical gel as $G_\epsilon$. $G_0$ would then be the instantaneous shear modulus, including the whole polymer contribution, as indicated in Fig.~\ref{fig:Goft Schematics}. As a second point, the term "weak gel" has been used to describe physical gels generally. This term is applied because a physical gel can exhibit a lower apparent elastic modulus than a chemical gel with similar molecular weight, concentration, and bond density. In some cases, the term "weak gel" has been used to specifically refer to an associative polymer solution which exhibits no observable elastic plateau, but this manuscript explicitly avoids that usage. There is a distinct difference between a weak sticker and a weak gel. The strength of a gel is a reference to the height of its elastic modulus, as represented schematically in Fig.~\ref{fig:Goft Schematics}(a). The strength of a sticker is effectively a reference to the lifetime of a bond, as in Fig.~\ref{fig:Goft Schematics}(b). If one assumes a particular polymer concentration and sticker strength produce a robust gel, then a reduction of the concentration reduces the number of bonds, weakening the gel. If instead the concentration is fixed but the sticker strength is reduced, the high concentration of available stickers will ensure that the number of bonds remains high. The height of the elastic modulus is therefore only marginally affected by the strength of the stickers. A system with stickers so weak that the gel-like behavior is disrupted is therefore more suitably referred to with a temporal qualifier, so the term "evanescent" is used.
\subsection{Gel Transition}
\label{sec:Gel Transition}
Experimentally, gelation in polymeric solutions is typically characterised by applying small amplitude oscillatory shear and calculating the viscoelastic response in terms of the storage modulus ($G'$), loss modulus ($G''$), and loss tangent ($\tan\,\delta=G''/G'$)~\cite{WinterGel,Bromberg,Dennis09,polym9110556,Ruyumbeke2017,SumanJOR,Indei17}. The dynamic moduli as functions of the oscillation frequency, $\omega$, for an unentangled homopolymer solution are well described by the spectrum of relaxation times predicted by the Zimm model (or, in simulations neglecting hydrodynamic interactions, the Rouse model), where there exists a terminal flow regime in the limit of low frequency. However, multi-sticker associative polymer solutions may exhibit broad power law relaxation spectra~\cite{Mewis2001,Andersen2014,Wagner2017} with no readily discernible characteristic relaxation time scale, such that $G'\sim G''\sim \omega^{n}$ for a wide range of frequency~\cite{WinterGel,SumanJOR}. The onset of gelation for such systems is identified by plotting the loss tangent ($\tan\,\delta$) as a function of the triggering variable for a range of frequency, and then identifying the threshold value of the triggering variable for which $\tan\,\delta$ becomes independent of frequency~\cite{WinterGel,SumanJOR}.
Apart from the dynamic moduli, gelation in associative polymer solutions is also characterized based on the scaling of zero-shear rate viscosity ($\eta_{p0}$) and terminal relaxation time ($\tau$) with the triggering variable such as concentration or temperature~\cite{Bromberg,SumanJOR,RnSdynamics}. For instance, the divergence of the zero-shear rate viscosity and terminal relaxation time at the onset of the sol-gel transition are prominent dynamic signatures of gelation for strong elastically active gels~\cite{SumanJOR}. However, for weak stickers, instead of divergence, both the zero shear viscosity and the terminal relaxation time can exhibit non-divergent power-law scaling in the gel regime~\cite{RnSdynamics,RnS2001,Bromberg}. Another characteristic of the gel transition is the appearance of an equilibrium shear modulus as the system becomes mechanically rigid. The magnitude of this modulus (visually, the height of a plateau in $G(t)$ or $G'(\omega)$) increases with the triggering variable in accordance with percolation theory.
\subsection{Sticky Theories}
\label{sec:Sticky Theories}
The mechanical rigidity of a gel originates from the concentration of elastically active chains within the network. Elastically active chains are distinguished from dangling or looping chains or free clusters. A dangling chain is only connected to the network at one end. Looping chains connect back to the same point in the network. Free clusters (and free chains) are not associated to the system-spanning network. All of these other types of chains can fully relax any stress on a finite time scale, but each elastically active chain contributes to the overall elastic modulus of the network. The relationship between the concentration of elastically active chains $\nu_e$ and the shear modulus $G_\epsilon$ is often expressed as $G_\epsilon=\nu_ek_BT$. However, $\nu_e$ is not directly measurable in experiment. To the authors' knowledge, neither has it been evaluated explicitly by previous simulation studies, likely due to the subtlety of distinguishing elastically active chains from fully relaxable ones. The number of elastically active chains is formally equivalent to the circuit rank of the elastic network, which can be extracted from the bond configuration in a simulation. $\nu_e$ is usefully related to the rheological signatures of gelation. Below the gel point, $\nu_e$ is zero by definition, since there is no system-spanning network. At the gel point, $\nu_e$ is still technically zero, since the circuit rank at the percolation transition is zero. At this point there is a scale free distribution of relaxation times, since the fractal nature of the network amounts to a scale free distribution of dangling chain lengths. As more bonds form beyond the gel point, new circuits in the network introduce mechanical rigidity, so $\nu_e$ (and therefore $G_\epsilon$) increases. This bond-concentration-dependent elastic modulus is what is represented in Fig.~\ref{fig:Goft Schematics}(a).
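To make this counting concrete, the sketch below (assuming a hypothetical input format in which each inter-chain association is recorded as a pair of chain indices) computes the circuit rank $E-V+C$ of the bond network with networkx; to obtain $\nu_e$ proper, one would restrict the graph to the system-spanning cluster before counting:
\begin{verbatim}
import networkx as nx

def circuit_rank(num_chains, bonds):
    # Number of independent cycles of the bond network, E - V + C.
    # `bonds` holds one (chain_i, chain_j) pair per inter-chain
    # association; intra-chain associations (i == j) add no cycles here.
    G = nx.MultiGraph()
    G.add_nodes_from(range(num_chains))
    G.add_edges_from((i, j) for i, j in bonds if i != j)
    C = nx.number_connected_components(G)
    return G.number_of_edges() - G.number_of_nodes() + C

# toy example: 4 chains bonded in a ring plus one cross-bond -> rank 2
print(circuit_rank(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
\end{verbatim}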
In associative polymer solutions, formation of a percolating network does not always guarantee the existence of a gel with a measurable elastic modulus. Polymer chain length, monomer concentration, and the density of sticky groups along the polymer backbone are all parameters that modify the viscoelastic response~\cite{Ruyumbeke2017,Indei17}. It is interesting to note that depending on the elasticity of the network formed after gelation, the dynamic moduli ($G'$ and $G''$) may show power law scaling with no characteristic relaxation or a distinct plateau with a discernible relaxation time along with a terminal flow regime~\cite{Ruyumbeke2017,Indei17,Bromberg}. For instance, $G'$ and $G''$ can exhibit a strong elastic response for high molecular weight, yet show a terminal flow behaviour at low molecular weight with the same spacing and strength of stickers~\cite{Ruyumbeke2017}. Such work demonstrates that the crossover between gel-like and fluid-like behavior is multi-faceted. There are not always established guiding principles for determining under which conditions a gel could form.
Since the unique rheological properties in solutions of associative polymers arise from the statistics of sticker association, \citet{AritraStatJoR} have thoroughly investigated the static properties related to the scaling of intra-chain and inter-chain associations and different static signatures of gelation based on percolation, maxima in the free-chain concentration, and onset of bimodality in the cluster size distribution. However, the relationship of these static signatures with the viscoelasticity and dynamic signatures of gelation is currently unknown. The key findings from the study on the static signatures of gelation are that each of these different signatures occur at different values of scaled concentration, $c/c^*$, and these concentrations are independent of the chain length ($N_b$) for a given distance between stickers on a chain ($\ell$) and sticker strength ($\epsilon_{st}$). In the present work, this ambiguity is dispelled by combining microscopic structural measurements with conventional rheological measurements to reveal the effect that changing sticker strength has on gelation.
The paper is organised as follows. Several aspects of our simulation methods are discussed in the \nameref{sec:Model} section. The subsection \nameref{sec:sticker algorithm} is noted as a pivotal development which was instrumental to this work. The \nameref{sec:Results} section presents the data obtained from these simulations. The subsection \nameref{sec:sticker algorithm} specifically presents a method of calculating the concentration of elastically active chains explicitly from simulation configurations. In the \nameref{sec:Conclusion} we discuss several insights and conjectures drawn from the simulation data.
\section{Model for Associative Polymer Solutions} \label{sec:Model}
\subsection{Governing Equation}
\label{sec:governing equation}
A bead-spring chain model was used to simulate solutions of associative polymers using Brownian dynamics. The governing equation in BD simulations is a stochastic differential equation describing the evolution of the position vector $\Vector{r}_\mu(t)$ of a bead $\mu$ with time $t$. The Euler integration algorithm for the non-dimensional version of this It\^{o} stochastic differential equation is given in its most general form as~\cite{Stoltz2006},
\begin{equation}
\label{eqn:governing}
\begin{split}
\Vector{r}_\mu(t+\Delta t)=&\Vector{r}_\mu(t)+(\pmb{\kappa}\cdot\Vector{r}_\mu(t))\Delta t\\
&+\frac{\Delta t}{4}\sum_{\nu=1}^N\Tensor{D}_{\mu\nu}\cdot\left(\Vector{F}^\Vector{s}_\nu+\Vector{F}^\Vector{SDK}_\nu\right)\\
&+\frac{1}{\sqrt{2}}\sum_{\nu=1}^N\Tensor{B}_{\mu\nu}\cdot\Vector{\Delta W}_\nu
\end{split}
\end{equation}
There are three qualitatively different terms in this equation. The term containing $\pmb{\kappa}$ accounts for the unperturbed solvent flow field $\Vector{v}$ through $\pmb{\kappa}=(\nabla\Vector{v})^\top$. The second term accounts for the forces acting on each bead, where $\Vector{F}^\Vector{s}$ and $\Vector{F}^\Vector{SDK}$ are the net forces on each particle due to springs and pairwise interactions, which will be detailed later. The diffusion tensor $\Tensor{D}_{\nu \mu} = \delta_{\nu \mu} \Tensor{\delta} + (1-\delta_{\nu \mu})\pmb{\varOmega}(\Vector{r}_{\nu}-\Vector{r}_{\mu})$, where $\Tensor{\delta}$ and $\delta_{\mu \nu}$ represent a unit tensor and Kronecker delta respectively, and $\pmb{\varOmega}$ is the Rotne-Prager-Yamakawa (RPY) hydrodynamic interaction tensor. The final term introduces multiplicative noise as an implicit solvent to create Brownian motion. Efficiently computing the effects of fluctuating hydrodynamic interactions in these last two terms is non-trivial and will be detailed below.
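To make the time-stepping explicit, the following minimal sketch implements one Euler step of the governing equation in the free-draining case, $\Tensor{D}_{\mu\nu}=\delta_{\mu\nu}\Tensor{\delta}$ (so that $\Tensor{B}$ is also the identity), which sidesteps the RPY machinery; the function and argument names are ours:
\begin{verbatim}
import numpy as np

def bd_step(r, forces, dt, kappa=None, rng=None):
    # One Euler-Maruyama step of the governing equation, free-draining
    # case only (D = identity, hence B = identity). `r` is an (N, 3)
    # array of bead positions; `forces(r)` returns the net spring + SDK
    # force on each bead; `kappa` is the flow-gradient tensor of the text.
    rng = rng or np.random.default_rng()
    drift = 0.25 * forces(r)                  # the (dt/4) D . F term
    if kappa is not None:
        drift += r @ kappa.T                  # the kappa . r term
    dW = np.sqrt(dt) * rng.standard_normal(r.shape)  # Wiener increments
    return r + drift * dt + dW / np.sqrt(2.0)
\end{verbatim}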
$\Vector{F}^\Vector{SDK}$ includes contributions from the excluded volume (EV) interaction between all overlapping pairs of beads, as well as the associative interaction between reversibly bound pairs of stickers. In principle, the EV interaction can interpolate between good solvent and poor solvent limits, but in this work, it was fixed in the good solvent limit.
The SDK (Soddemann--D\"unweg--Kremer) potential is used to model the excluded volume and associative interactions for a multi-sticker polymer solution with sticker functionality, $\varphi=1$. A FENE spring force law with $b$-parameter equal to $50.0$ is used to represent the connector force between adjacent beads. Simulations with hydrodynamic interactions are performed with $h^*=0.2$.
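For reference, a minimal sketch of the FENE connector force and the SDK pair potential is given below. The cutoff value is an assumption on our part, and the cosine-well form of the SDK potential is quoted from the literature; backbone pairs take the well depth $\epsilon_{bb}$ and bound sticker pairs take $\epsilon_{st}$:
\begin{verbatim}
import numpy as np

B_FENE = 50.0    # FENE b-parameter used in this work
R_CUT = 1.82     # SDK cutoff radius (an assumed value)

def fene_force(q):
    # Connector force F = Q / (1 - Q^2/b) for spring vectors q of
    # shape (..., 3), in nondimensional units.
    q2 = np.sum(q * q, axis=-1, keepdims=True)
    return q / (1.0 - q2 / B_FENE)

def sdk_potential(r, eps):
    # Purely repulsive LJ core for r < 2^(1/6); cosine well of depth
    # eps out to R_CUT; zero beyond. alpha, beta enforce continuity of
    # the potential at both matching points.
    rmin2 = 2.0 ** (1.0 / 3.0)
    alpha = np.pi / (R_CUT ** 2 - rmin2)
    beta = np.pi - alpha * rmin2
    core = 4.0 * (r ** -12 - r ** -6 + 0.25) - eps
    well = 0.5 * eps * (np.cos(alpha * r ** 2 + beta) - 1.0)
    return np.where(r ** 2 <= rmin2, core,
                    np.where(r <= R_CUT, well, 0.0))
\end{verbatim}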
\subsection{Dynamic Functional Associations}
\label{sec:sticker algorithm}
By utilizing various polymer chemistries, associative groups can be implemented with a wide variety of properties. Two key parameters to describe this interaction are the energy barrier against dissociation (bond strength) and the number of associations that a functional group can form (functionality). If the association is based on hydrophobic interactions, the bond strength varies smoothly with the length of a hydrophobic block, and the functionality is very high. For other chemistries, the functionality is generally 1-3, and the strength depends on chemistry and pH. Another feature of associative systems is the notion of "species" of stickers which either only associate with themselves, or only with a particular other species.
In this work, if a pair of beads are associated, they simply interact with an SDK potential with a larger value of the well depth than the “backbone” excluded volume interaction that most bead pairs feel. The decision to bind or unbind a pair is made using a standard Metropolis Monte Carlo acceptance rule. If two stickers are within the cutoff distance of each other, the change in energy $\Delta E$ if the bond state were changed is calculated. A pseudo-random number is drawn from a uniform distribution between 0 and 1. If the random number is less than $\exp(-\Delta E/k_BT)$, then the change of state is carried out. In an update sweep, each existing bond attempts to break in this manner, then bond formations are attempted. If two stickers are within the cutoff distance, but at least one of them already has as many bonds as the functionality setting permits, then the bond formation for the new pair is not attempted.
The bond update algorithm is detailed in Algorithm 1. A simulation contains $N_{stick}$ stickers each with functionality $f$. The algorithm maintains an $N_{stick}\cross f$ array in which the $i$th row contains the list of indices of stickers associated with sticker $i$. There is a persistent copy of this array called the previous-partners list, which is accessible to the machinery within HOOMD for calculating pairwise forces based on a neighbour list. This persistent array is initialized to the no-bonds state, then in each update step, Algorithm 1 is applied.
\begin{algorithm}
\caption{Association Update}
\label{alg:associations}
\begin{algorithmic}[1]
\State possible pair list $\gets$ conventional neighbour list
\State new pair list $\gets \varnothing$
\ForAll{stickers $i$}
\State $N_{bonds}[i]\gets0$
\EndFor
\State shuffle possible pair list
\ForAll{pairs $(i,j) \in$ possible pair list}
\State $\Delta E_{ij} \gets U_{bound}(r_{ij})-U_{unbound}(r_{ij})$
\If {$(i,j) \in$ previous pair list}
\If {$X\sim U(0,1)>\exp(\Delta E_{ij})$}
\State append $(i,j)$ to new pair list
\State $N_{bonds}[i]\gets N_{bonds}[i]+1$
\State $N_{bonds}[j]\gets N_{bonds}[j]+1$
\EndIf
\ElsIf{$N_{bonds}[i]<functionality$ \textbf{ and } $N_{bonds}[j]<functionality$}
\If {$X\sim U(0,1)<\exp(-\Delta E_{ij})$}
\State append $(i,j)$ to new pair list
\State $N_{bonds}[i]\gets N_{bonds}[i]+1$
\State $N_{bonds}[j]\gets N_{bonds}[j]+1$
\EndIf
\EndIf
\EndFor
\State previous pair list $\gets$ new pair list
\end{algorithmic}
\end{algorithm}
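A Python rendering of Algorithm~\ref{alg:associations} may help clarify the bookkeeping. In this sketch the energy differences $\Delta E_{ij}=U_{bound}(r_{ij})-U_{unbound}(r_{ij})$ (in units of $k_BT$) are assumed to be precomputed, and the containers are simplifications of the HOOMD-side arrays described above:
\begin{verbatim}
import numpy as np

def update_associations(pairs, prev_bonds, delta_E, functionality, rng):
    # `pairs`: candidate sticker pairs from the neighbour search;
    # `prev_bonds`: set of frozensets, the currently bound pairs;
    # `delta_E[i, j]`: U_bound(r_ij) - U_unbound(r_ij), in units of kT.
    n_bonds = {}                   # bonds accumulated in this sweep
    new_bonds = set()
    for k in rng.permutation(len(pairs)):   # shuffled pair list
        i, j = pairs[k]
        dE = delta_E[i, j]
        if frozenset((i, j)) in prev_bonds:
            # existing bond: breaks with probability exp(dE), else kept
            if rng.random() > np.exp(dE):
                new_bonds.add(frozenset((i, j)))
                n_bonds[i] = n_bonds.get(i, 0) + 1
                n_bonds[j] = n_bonds.get(j, 0) + 1
        elif (n_bonds.get(i, 0) < functionality
              and n_bonds.get(j, 0) < functionality):
            # unbound pair in range: binds with probability exp(-dE)
            if rng.random() < np.exp(-dE):
                new_bonds.add(frozenset((i, j)))
                n_bonds[i] = n_bonds.get(i, 0) + 1
                n_bonds[j] = n_bonds.get(j, 0) + 1
    return new_bonds
\end{verbatim}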
This association update sweep is applied once before each Brownian dynamics time step. If the BD simulation were frozen and the MC bond update applied iteratively, it would sample the Boltzmann distribution for the possible bond configurations of the system given that instantaneous particle position configuration. This is clear because all possible bond state changes are directly reversible, and have the complementary probabilities required for detailed balance. As BD and MC update steps are interleaved, it is expected that the infinitesimal changes in particle positions during a BD step will only infinitesimally change the possible bond configuration energies. Therefore, the Boltzmann distribution from one update step to the next will be similar enough for the MC bond update steps to approximate it.
\subsection{Parameter Values}
\label{sec:parameter values}
The simulations carried out to study the dynamic properties consist of multi-sticker associative polymer solutions at finite concentrations with $f=4$ stickers per chain and spacer length, $\ell=4$ or 0, implying the total number of beads per chain $N_b=(\ell+1)f+\ell=24$ or 4. All simulations used an athermal solvent quality for the backbone, $\epsilon_{bb}=0$, and a range of sticker strengths $\epsilon_{st}$. All relevant parameters are compiled in Table~\ref{tab:parameters}. For the specific system $f=4, \ell=4, \epsilon_{st}=5$, the three static signatures of gelation are observed at $c_{g_1}/c^* \approx 0.3$, $c_{g_2}/c^* \approx 1.0$ and $c_{g_3}/c^* \approx 0.5$, where the subscripts $g_1$, $g_2$ and $g_3$ denote gelation signature at percolation transition, onset of bimodality and free chain maxima, respectively~\cite{AritraStatJoR}. A typical simulation consists of a pre-equilibration run of about $3$ to $4$ Rouse relaxation times for a system of chains with only backbone monomers and no stickers, followed by the introduction of stickers and an equilibration run of about $5$ Rouse relaxation times. Finally, sampling is carried out over a production run of about $5$ to $20$ Rouse relaxation times. During the production run, dynamic properties are calculated as a function of time from each independent trajectory. Ensemble averages and error of mean estimates of different dynamic properties are then computed over a collection of 500 to 1000 such independent trajectories. All simulations with HI (with $h^*=0.2$) have been carried out with a non-dimensional time-step $\Delta t=0.005$. Simulations are carried out using the non-dimensional time scale $\lambda_H$ and length scale $l_H$.
\begin{table*}
\centering
\begin{tabular}{l | l | l}
Parameter Name & Symbol & Values \\
\hline
Backbone interaction strength & $\epsilon_{bb}$ & 0 \\
Sticky beads per chain & $f$ & 4 \\
Spacer beads between stickers & $\ell$ & 4,0 \\
Sticker strength & $\epsilon_{st}$ & 3-12 \\
Hydrodynamic interaction parameter & $h^*$ & 0,0.2 \\
Beads per chain ($(\ell+1)f+\ell$) & $N_b$ & 4,24 \\
Concentration & $c/c^*$ & 0,0.1-10 \\
Integration time step & $\Delta t$ & 0.0001, 0.001, 0.005 \\
Simulation duration & $t$ & 1000 \\
Polymer chains per simulation & $N_c$ & 8-30, 300, 3000 \\
Independent simulation instances & $N_\mathrm{run}$ & 32-1000 \\
Step Strain & $\gamma$ & 0.2 \\
Step Strain rate & $\dot{\gamma}$ & $10^6$
\end{tabular}
\caption{Parameter values used in this work.}
\label{tab:parameters}
\end{table*}
\section{Results}
\label{sec:Results}
The bulk of the data examined in this work is extracted from the linear viscoelastic stress response $G(t)$. In BD simulations, this response can be measured either using the stress auto correlation function in equilibrium, or by deforming the simulation volume to perform a step strain protocol and measuring the resulting stress. Examples of the resulting relaxation curves are presented for a few conditions in Fig.~\ref{fig:Goft Weak and Goft Dense}. Fig.~\ref{fig:Goft Weak and Goft Dense}(a) shows data from simulations with relatively weak stickers, $\epsilon_{st}$=5, ranging from the dilute limit to twice the overlap concentration. These responses exhibit a Rouse-like distribution of relaxation modes, as well as a modest push toward longer relaxation times with increased concentration. In contrast, Fig.~\ref{fig:Goft Weak and Goft Dense}(b) shows data from simulations just above $c^*$ with a range of $\epsilon_{st}${} from 3 to 12. The curves for higher sticker strengths exhibit a more notable response at later times. It is expected that for sufficiently high concentration and sticker strength, $G(t)$ should include a plateau due to the elastic response of the network. This interpretation is plausible for the $\epsilon_{st}$=12 case in Fig.~\ref{fig:Goft Weak and Goft Dense}(b), but more data are needed for a clear picture.
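For reference, the equilibrium route amounts to the Green--Kubo relation $G(t)=(V/k_BT)\langle\sigma_{xy}(t_0+t)\,\sigma_{xy}(t_0)\rangle_{t_0}$; a minimal sketch, assuming a single stress time series as input (in practice the average runs over many independent trajectories), is:
\begin{verbatim}
import numpy as np

def relaxation_modulus(stress_xy, volume, dt, kT=1.0, max_lag=None):
    # G(t) from the shear-stress autocorrelation of one trajectory.
    s = np.asarray(stress_xy)
    n = len(s)
    max_lag = max_lag or n // 2
    G = np.empty(max_lag)
    for lag in range(max_lag):
        G[lag] = np.mean(s[: n - lag] * s[lag:])   # <s(t0+t) s(t0)>
    return np.arange(max_lag) * dt, volume * G / kT
\end{verbatim}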
\subsection{Strong Stickers}
\label{sec:Strong Stickers}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig2.pdf}
\caption{Simulated linear viscoelastic stress response $G(t)$ (a) for sticker strength $\epsilon_{st}=5$ while varying $c/c^*$ and (b) for $c/c^*=1.2$ while varying $\epsilon_{st}$ in the absence of HI.}
\label{fig:Goft Weak and Goft Dense}
\end{figure*}
In order to reduce the computational cost of data collection, a system of very short chains is modelled so that the single-chain relaxation modes are limited to short times, rendering the response of the network more distinguishable at later times. Fig.~\ref{fig:Goft and hyperscaling} shows $G(t)$ for a system of 4-bead chains, in which each bead is a sticker. Fig.~\ref{fig:Goft and hyperscaling}(a) clearly demonstrates the transition from a sol phase with a very short terminal relaxation time, to a critical gel with a power-law relaxation process, to a gel with a long-lasting elastic plateau. Measuring directly how long this plateau persists before dissociations release stress would require much longer simulations, but the computational expense of BD limits the duration of routine runs. In the present analysis, $G(t)$ curves are fit with the functional form
\begin{equation}
G(t)=\left[\mathbb{G}E_\alpha\left(-\frac{\mathbb{G}}{\mathbb{V}}t^\alpha\right)+G_\epsilon\right]\exp\left(-\frac{t}{\tau_\epsilon}\right),
\label{eqn:MLphys}
\end{equation}
where $E_\alpha(z)$ is the Mittag-Leffler function
\begin{equation}
E_\alpha(z)=\sum_{k=0}^\infty\frac{z^k}{\Gamma({\alpha}k+1)}.
\end{equation}
This form captures the phenomenology of a broad distribution of relaxation modes due to chain segment relaxation processes, in addition to a single extra mode due to the load supported by a rigid network. The Mittag-Leffler function approaches a stretched exponential at short times and a power law at long times; it has been derived using fractional calculus as a model for viscoelastic materials [ref McKinley]. Here it is employed purely as a phenomenological model to extract properties from $G(t)$ data, multiplied by an exponential to account for the terminal flow behavior inherent in physical gels and to extract the longest relaxation time. The zero-shear viscosity in the sol phase is measured using the integral of $G(t)$, and the elastic modulus in the gel phase is taken from the fit parameter $G_\epsilon$. The resulting scaling of these quantities near the gel point is presented in Fig.~\ref{fig:Goft and hyperscaling}(b). The lines show power law fits to these measurements, which identify the gel point under these conditions as $c_g${}$/c^*\approx0.32$.
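A sketch of this fitting procedure is given below. The truncated power series used for $E_\alpha(z)$ is adequate only for the moderate arguments reached in these fits, and the initial guesses handed to the optimizer are arbitrary placeholders:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gammaln

def mittag_leffler(z, alpha, kmax=100):
    # E_alpha(z) by its power series, evaluated in log space; the
    # alternating series loses accuracy for large |z|.
    z = np.asarray(z, dtype=float)
    k = np.arange(kmax)
    terms = np.exp(np.multiply.outer(np.log(np.abs(z) + 1e-300), k)
                   - gammaln(alpha * k + 1))
    signs = np.where(z[..., None] < 0, (-1.0) ** k, 1.0)
    return np.sum(signs * terms, axis=-1)

def G_model(t, G_net, V_net, alpha, G_eps, tau_eps):
    # the fit form: Mittag-Leffler relaxation plus a network plateau,
    # both cut off by terminal flow
    return ((G_net * mittag_leffler(-(G_net / V_net) * t ** alpha, alpha)
             + G_eps) * np.exp(-t / tau_eps))

# typical usage on (hypothetical) simulation arrays t_data, G_data:
# popt, pcov = curve_fit(G_model, t_data, G_data,
#                        p0=[1.0, 1.0, 0.5, 0.1, 100.0])
\end{verbatim}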
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig3.pdf}
\caption{Conventional rheological signatures of gelation with concentration as a triggering variable. (a) $G(t)$ exhibits a power law relaxation near $c_g$, and a growing plateau as $c$ is increased further. (b) The polymer contribution to the zero shear viscosity $\eta_{p0}$ (blue circles) measured as the integral of $G(t)$ diverges on approach to $c_g${} from below. The gel elastic modulus $G_\epsilon$ (red squares) extracted from $G(t)$ fits increases as a power law above $c_g$.}
\label{fig:Goft and hyperscaling}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=\textwidth]{fig4.pdf}
\caption{ (a) Storage $G'(\omega)$ (solid) and loss $G''(\omega)$ (dashed) moduli as functions of dimensionless frequency, $\omega \lambda_H$ at fixed $\epsilon_{st}${} and varied $c/c^*$. The characteristic frequency $\omega_1$ is indicated at the intersection of $G'$ and $G''$. $G_\epsilon$ and $G_0$ are indicated as the low- and high-frequency limits, respectively, of $G'$. The low-frequency, low-concentration slopes of $G'$ and $G''$ are labelled to indicate terminal flow. (b) The loss tangent $\tan\delta=G''/G'$, for the same values of $c/c^*$. (c) $\tan\delta$ as a function of concentration. The intersection of curves for various frequencies indicates the gel point.}
\label{fig:modulii and tandelta}
\end{figure*}
Small amplitude oscillatory shear experiments are typically used to measure the frequency dependent storage and loss moduli $G'(\omega)$ and $G''(\omega)$. Fig.~\ref{fig:modulii and tandelta}(a) shows a construction of these moduli from simulation data via the Fourier transform of $G(t)$. For numerical stability, the fit to Eqn.~\ref{eqn:MLphys} is used instead of the raw data. These curves exhibit all of the conventional features of gel-forming systems. At low concentration, the terminal flow regime is observed at low frequency. The characteristic frequency $\omega_1$ is also identified at the intersection of $G'$ and $G''$. Near the critical point (the green line for $c/c^*=0.32$), the moduli are parallel power laws over a broad range of frequencies. Above $c_g$, the elastic modulus is seen as a low-frequency plateau in $G'$. The gel transition is sometimes located using the frequency-independence of the loss tangent $\tan \delta=G''/G'$, as plotted in Figs.~\ref{fig:modulii and tandelta}(b) (vs frequency) and (c) (vs concentration). The flatness of $\tan \delta(\omega)$ near $c/c^*\approx0.32$, or equivalently the intersection of $\tan \delta(c)$ curves for various frequencies at $c/c^*\approx0.3$, are consistent with the indications drawn from $\eta_{p0}$ and $G_\epsilon$.
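This construction amounts to the one-sided Fourier transform $G^*(\omega)=i\omega\int_0^\infty G(t)\,e^{-i\omega t}\,dt$, i.e., $G'(\omega)=\omega\int_0^\infty G(t)\sin(\omega t)\,dt$ and $G''(\omega)=\omega\int_0^\infty G(t)\cos(\omega t)\,dt$. A minimal numerical sketch, assuming $G(t)$ has decayed within the available time window (which the fitted form guarantees), is:
\begin{verbatim}
import numpy as np

def dynamic_moduli(t, G, omegas):
    # Storage and loss moduli from the relaxation modulus G(t),
    # sampled on a grid t that extends well past the terminal decay.
    Gp = np.array([w * np.trapz(G * np.sin(w * t), t) for w in omegas])
    Gpp = np.array([w * np.trapz(G * np.cos(w * t), t) for w in omegas])
    return Gp, Gpp
\end{verbatim}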
\subsection{Weak Stickers}
\label{sec:Weak Stickers}
Returning now to the weaker stickers represented in Fig.~\ref{fig:Goft Weak and Goft Dense}(a), the properties of associative polymers are again considered as a function of concentration, despite the absence of a clear plateau due to an elastic network. The scaling of the terminal relaxation time ($\tau$) with concentration is often used to characterize the dynamic signatures of gelation in associative polymer solutions~\cite{SumanJOR,RnSdynamics}. Fig.~\ref{fig:tausVc} presents several dynamical measurements of this system, in an effort to discover any macroscopically observable evidence of the divergence seen in Fig.~\ref{fig:Goft and hyperscaling}(b). Measurements are reported of time scales extracted from the end-to-end unit vector auto-correlation ($\tau_{Re}$), the stress relaxation ($\tau_G$), the inverse of the characteristic frequency ($\tau_\omega$), and the zero-shear viscosity ($\tau_\eta$). All of these time scale measurements $\tau_i$ are normalized by their respective dilute limits $\tau_i^0$. The dependence of all these measurements on concentration collapses to a power law with an exponent near 0.6 above $c/c^*\approx0.5$, as opposed to the divergence leading up to $c_g${} observed with higher $\epsilon_{st}$. Interestingly, the maximum in the free-chain concentration for this system found in Ref.~\cite{AritraStatJoR}, which occurs at $c/c^*=0.5$, coincides with the cross-over concentration of the $\tau\sim c^{0.6}$ regime.
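As an example of one of these measurements, $\tau_{Re}$ can be extracted from the terminal tail of the end-to-end unit vector auto-correlation; the fitting window in the sketch below is an ad hoc choice of ours:
\begin{verbatim}
import numpy as np

def tau_from_autocorr(t, C):
    # Longest relaxation time from C(t) = <u(t).u(0)>: log-linear fit
    # of the terminal single-exponential tail.
    mask = (C > 1e-3) & (C < np.exp(-1))   # tail window (ad hoc)
    slope, _ = np.polyfit(t[mask], np.log(C[mask]), 1)
    return -1.0 / slope
\end{verbatim}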
\begin{figure}
\centering
\includegraphics[width=240pt]{fig5.pdf}
\caption{Multiple independent measurements of the longest relaxation time in a solution of associative polymers with $\epsilon_{st}$=5 show self-consistent scaling with $c/c^*$, but no indication of the diverging time scale expected on approach to the gel transition.}
\label{fig:tausVc}
\end{figure}
In the mean-field theory developed by Rubinstein and Semenov~\citep{RnSdynamics} for associative polymer solutions, the zero shear viscosity ($\eta_{p0}$) in the post gel regime, close to the gel-point, is shown to scale with the relative distance, $\Delta$, from the gelation concentration ($c_g$), where $\Delta = (c-c_g)/c_g$, as
\begin{equation}
\label{DyEq:eta0_gel}
\eta_{p0} \sim \Delta^{1/(3\nu-1)},
\end{equation}
\noindent{}where $\nu=0.6$ is the Flory exponent, which implies that the exponent $1/(3\nu-1)=1.25$. Considering $c_g/c^*=0.5$, we find $\eta_{p0}$ to scale with a slope of $1.25$ in the post-gel regime, as shown in Fig.~\ref{fig:viscosityScaling}. Note that $\eta_{p0}$ is not scaled with the number density of polymer chains, $n_p$ (which is related to the monomer concentration $c$), in order to bring out the concentration dependence explicitly. This is one of the dynamic signatures of gelation which is in agreement with the prediction of the mean-field theory~\citep{RnSdynamics} and with the observations of Bromberg~\citep{Bromberg} in experiments on thermo-reversible hydrogels.
\begin{figure}
\centering
\includegraphics[width=240pt]{fig6.pdf}
\caption{Concentration scaling of the polymer contribution to the zero shear-rate viscosity for a solution of associative polymers with $\epsilon_{st}$=5. The value $c_g/c^*$=0.5 has been estimated based on the scaling regimes in Fig.~\ref{fig:tausVc} to enable this comparison with theory~\citep{RnSdynamics} and experiment~\citep{Bromberg}.}
\label{fig:viscosityScaling}
\end{figure}
The scaling of the ratio $\tau/\tau_0$ with $c/c^*$ for associative polymer solutions is compared with that of homopolymer solutions, where the terminal relaxation time for homopolymers is estimated from the end-to-end unit vector auto-correlation function. Note that for homopolymers we have performed only a few simulations in the dilute regime; data have been acquired from the work of Huang et al.~\cite{Huang2010} (yellow diamonds in Fig.~\ref{fig:tausVc}) for the purpose of comparison with associative polymers. Compared to associative polymer solutions, the normalised relaxation time for the homopolymers goes through a broad cross-over with a scaling exponent of $(2-3\nu)/(3\nu-1) = 0.25$, where the Flory exponent $\nu=0.6$, a well known scaling law for the relaxation time in the semi-dilute unentangled regime~\cite{deGennes,Huang2010}. It is interesting to note that the concentration dependence of homopolymer and associative polymer relaxation times is roughly similar until the associative solution crosses over to the gel scaling regime. It should also be noted that if the strong-sticker simulations could be run for a few more orders of magnitude in time, the dissociation of the network would be captured and the scaling of the terminal time scale could be analyzed.
Rubinstein and Semenov~\cite{RnSdynamics,RnS2001} have proposed the sticky Rouse model, which predicts the scaling of the relaxation time, $\tau_{\text{relax}}$, in unentangled solutions of associative polymers well above the gel point. According to this prediction,
\begin{equation}
\label{DyEq:tauRelax}
\tau_{\text{relax}} \approx \tau_{\text{bond}}\,(f\,p_2)^2,
\end{equation}
\noindent{}where $\tau_{\text{bond}}$ is the bond lifetime of associated stickers, $f$ is the number of stickers per chain and $p_2$ is the fraction of inter-chain associated stickers. From its definition, the product $f\,p_2$ is essentially the total number of inter-chain associated stickers in a chain. In order to verify this prediction for our model, we first calculate the average bond lifetime of associated stickers by considering all possible associating pairs for the systems of associative polymer solutions at different concentrations. According to the mean-field theory~\cite{RnSdynamics,RnS2001}, $\tau_{\text{bond}}$ is effectively a function of only the sticker strength, $\epsilon_{st}$; however, from the Brownian dynamics simulations we find that at a constant value of sticker strength, $\epsilon_{st}=5.0$, the bond lifetime varies weakly with concentration, both with and without HI. The average bond lifetime of the stickers is somewhat higher for simulations carried out with HI. This behaviour may be attributed to the influence of the back flow generated by the long-range hydrodynamic perturbations, which increases the contact time between the stickers. It is noteworthy that the values of the bond lifetime are of the same order as the large-scale relaxation times. The ratio $\tau_i/(\tau_i^0\,\tau_{\text{bond}})$ is plotted as a function of $f\,p_2$ at different values of monomer concentration in the pre- and post-gel regimes, as shown in Fig.~\ref{fig:fp2Scaling}. Here, the $\tau_i$ are the large-scale relaxation times estimated from the various methods discussed earlier, and the $\tau_i^0$ are the corresponding values of the relaxation time in the dilute limit. In order to investigate the effect of HI on the relaxation dynamics of the associative polymer solutions, results from the simulations with HI are compared with those obtained from simulations without HI. From Fig.~\ref{fig:fp2Scaling}, it is clear that in the HI case the ratio $\tau_i/(\tau_i^0\,\tau_{\text{bond}})$ is independent of the method used to evaluate the relaxation time. Moreover, the scaling of this ratio with $f\,p_2$ goes through a cross-over and follows an asymptotic exponent of $2$ for a range of concentrations well above the gel-point ($c_g/c^*=0.5$), in agreement with the prediction of the sticky Rouse model given in Eq.~(\ref{DyEq:tauRelax}). At low concentration, relaxation appears to be faster with HI, but at high concentration HI is screened and the two conditions converge. This may indicate that neglecting HI increases the observed $c_g$.
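For clarity, the bond-lifetime measurement reduces, for each sticker pair, to recording the duration of every contiguous association event and averaging over all pairs and events. A minimal Python sketch for a single pair's bound/unbound time series (the function name and inputs are illustrative, not taken from the actual analysis code) is:
\begin{verbatim}
# Sketch: association-event durations from a 0/1 bound time series.
import numpy as np

def bond_lifetimes(bound, dt):
    """Durations of contiguous bound (True) runs, in units of dt."""
    b = np.asarray(bound, dtype=int)
    edges = np.diff(np.concatenate(([0], b, [0])))  # +1 open, -1 close
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return (ends - starts) * dt

lifetimes = bond_lifetimes([0, 1, 1, 1, 0, 1, 1, 0], dt=0.5)
# lifetimes -> [1.5, 1.0]; tau_bond is the mean over all pairs/events
\end{verbatim}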
\begin{figure}
\centering
\includegraphics[width=240pt]{fig7.pdf}
\caption{Scaling of terminal relaxation times with the number of inter-chain associations per chain $fp_2$. Horizontal lines indicate values of $fp_2$ corresponding to $c/c^*$=0.5 and 2 for these values of $f,\ell,$ and $\epsilon_{st}$.}
\label{fig:fp2Scaling}
\end{figure}
As shown in Fig.~\ref{fig:fp2Scaling}, in the no-HI case the normalised relaxation times computed from $R_e$, $G(t)$, and $\eta_{0p}$ match each other; however, the relaxation time $\tau_{1/\omega}=1/\omega_1$, calculated from the intersection of $G'$ and $G''$, diverges from the universal curve. This divergence arises from the difference in the scaling of $G'$ and $G''$ at intermediate frequencies, depending on whether or not the model includes HI. In the Rouse model, where HI is not considered, the dynamic moduli follow the power law $G'\sim G''\sim \omega^{0.5}$ in the intermediate range of frequencies, whereas in the Zimm model, which includes HI, $G'\sim G''\sim \omega^{0.6}$. The point of intersection of $G'$ and $G''$ is therefore very sensitive to the slope of the dynamic moduli at intermediate frequencies. Since $\tau_{1/\omega}$ is estimated from this intersection point, there exists a significant difference in the prefactor of $\tau_{1/\omega}$ calculated with and without HI. It is worth noting, however, that in the limit of high concentration ($c/c^* > 1$) in the post-gel regime, the normalised relaxation times in the no-HI case also follow the same asymptotic scaling as predicted by the mean-field theory. Furthermore, except for the estimate of the relaxation time based on the intersection of $G'$ and $G''$, the HI and no-HI cases follow the same universal curve in the limit of high concentration in the post-gel regime, indicating screening of HI due to the formation of a dense gelation network.
\subsection{Identification of Elastically Active Chains}
\label{sec:Elastically Active}
The data presented thus far for strong stickers have exhibited the standard rheological signatures of gelation in the permanent-associations limit. Meanwhile the weaker stickers have shown a terminal flow behavior that agrees with theories of thermoreversible associations, but fails the rheological test for the formation of a mechanically rigid network. These results raise two questions. First, how do the weak stickers exhibit the viscosity scaling predicted by assuming the presence of a system-spanning network, without exhibiting the elastic response expected from such a network? Second, at what level of sticker strength does the elastic response of the network become apparent? It is noted that simply treating the sticker strength as the triggering variable, in order to find a gel point $\epsilon_g$ at a fixed concentration, is uninteresting because the dissociation energy barrier is often determined by chemistry and is not always trivial to adjust. A useful answer to the second question requires a systematic prediction for whether or not a gel is possible at any concentration, given a particular $\epsilon_{st}$. To resolve this ambiguity about the existence of a spanning network based on rheological data, the microscopic network structure must be analyzed carefully to aid the interpretation of macroscopic measurements.
The number of elastically active chains in a simulation volume is usually described as the number of independent cycles in a network, or the number of branch points in the network. An important detail of this description is that only those cycles or branches which are attached to the network by at least three independent paths may contribute to the elastic modulus of the network. Here we present an algorithm to identify the elastically active elements in the simulated molecular network. An illustration of this process is presented in Fig.~\ref{fig:network simplification}. Consider, for example, a localized, highly inter-connected bundle of polymers which is connected to the system-spanning network by only a single strand. This bundle, though it contains many branches and cycles, is ultimately a dangling end which is free to relax after a deformation. Consider further the case that the bundle is connected by two strands. Now the bundle as a whole acts merely as a single bridging chain, and all of the structures internal to the bundle can still relax. Only if the bundle is connected to the network by at least three independent paths may it contribute to the rigidity of the network. All of these types of sub-network are apparent in Fig.~\ref{fig:network simplification}(a).
\begin{figure*}
\includegraphics[width=\textwidth]{fig8.pdf}
\caption{Depiction of the process used to efficiently enumerate the elastically active elements in a super-molecular network.}
\label{fig:network simplification}
\end{figure*}
The enumeration of the triply-connected network elements can be accomplished with off-the-shelf graph analysis tools. A graph must be constructed of the super-molecular network, using the $N_{bead}$ beads as nodes and $N_{connect}$ springs and associations as edges. Existing algorithms can identify the tri-connected components of such graphs in $O(N_{bead}+N_{connect})$ time. However, the pre-factor on this computational cost is large enough that systematically applying the algorithm to the full simulation data set is unnecessarily time-consuming. The process is sped up dramatically by first simplifying the network graph to only include branch points as nodes and the bridging chains between them as single edges. This process of simplifying the network graph is depicted in Fig.~\ref{fig:network simplification}(b). The algorithm used is detailed in Alg.~\ref{alg:Elastic}.
\begin{algorithm}
\caption{Identifying Elastically Active Chains}
\label{alg:Elastic}
\begin{algorithmic}[1]
\State $M_{ij}(t_1,t_2) \gets 1$ if $(i,j)$ in pair list at both $t_1$ and $t_2$ or $(i,j)$ are connected by a spring
\State Graph $G$ is defined by adjacency matrix $M$
\State Graph $H$ is the largest connected component of $G$
\While{any nodes in $H$ have degree $<3$}
\State replace all branchless paths in $H$ with a single edge
\State remove duplicate edges from $H$
\EndWhile
\State Construct $S$, the SPQR tree of $H$
\State Graph $T$ is the largest node of $S$
\State Return (edges in $T$) - (nodes in $T$) + 1
\end{algorithmic}
\end{algorithm}
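To make steps 3--7 of Alg.~\ref{alg:Elastic} concrete, the following Python sketch (using the networkx library) prunes dangling ends and contracts bridging chains until only branch points of degree at least three remain. It is a hedged illustration of the simplification loop only, under the interpretation that dead-end paths are pruned when branchless paths are replaced; the SPQR-tree analysis of steps 8--10 requires a specialized library and is not reproduced here.
\begin{verbatim}
# Sketch of the simplification loop: prune dangling ends and contract
# degree-2 beads until only branch points (degree >= 3) remain.
import networkx as nx

def simplify_network(G):
    # Step 3: keep the largest connected component.
    H = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    changed = True
    while changed:                       # steps 4-7
        changed = False
        leaves = [n for n in H if H.degree(n) <= 1]
        if leaves:                       # dangling ends carry no stress
            H.remove_nodes_from(leaves)
            changed = True
        for n in list(H.nodes):
            if n in H and H.degree(n) == 2:
                nbrs = list(H.neighbors(n))
                if len(nbrs) == 2:       # skip self-loops
                    H.remove_node(n)     # contract the bridging bead;
                    H.add_edge(*nbrs)    # nx.Graph keeps no duplicate edge
                    changed = True
    return H
# Circuit rank of the final graph: edges - nodes + 1 (step 10).
\end{verbatim}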
Fig.~\ref{fig:network simplification}(a) depicts the single large graph $H$ acquired in step 3. This graph contains several dangling ends and hierarchies of bridging chains. Fig.~\ref{fig:network simplification}(b) depicts the first application of step 5. All nodes, and only the nodes, which were junctions in the previous step have been kept. Sequences of beads between junctions have been replaced with a single graph edge. Duplicate bonds between junctions have been drawn for visual clarity, but the actual graph constructed in the algorithm keeps only one such edge. Therefore, upon successive applications of step 5, nested bridging chains are successively simplified until only a single edge remains, as shown in Fig.~\ref{fig:network simplification}(c). However, there still remains the possibility that a tri-connected subgraph is only connected to the rest of the network by two edges. One such cluster is circled in Fig.~\ref{fig:network simplification}(c). Naively, one would have to test for such clusters by removing every possible pair of edges (an $O(N_{edge}^2)$ task) and checking if the graph is still connected (an $O(N_{edge})$ task). However, an algorithm exists to analyze the tri-connected components of a graph in linear time using a data structure called an SPQR tree. This algorithm was adapted into a function which receives the bi-connected graph in Fig.~\ref{fig:network simplification}(c) and returns the lists of nodes in each tri-connected component of the graph. As shown in Fig.~\ref{fig:network simplification}(d), the largest tri-connected component is retained, and any smaller components are replaced with a single edge. The circuit rank of this final graph is the number of elastically active chains in a simulation volume. Note that the actual simulation networks contain thousands of beads, and even the final network can contain dozens of nodes. Due to the periodic, three-dimensional nature of the simulation, the networks are in general non-planar and difficult to visually interpret.
By applying this algorithm to simulation data and taking a time average over pairs of snapshots with the same $\Delta t=t_2-t_1$, a curve is constructed for the concentration of elastically active chains which have persisted for a length of time. The $\Delta t=0$ limit of this measurement is the equilibrium concentration of elastically active chains. A key realisation is that in order for an elastic element to exhibit a stress response, it must have maintained its connection to the network since the strain was applied. Fig.~\ref{fig:elastic emergence} shows the resulting curves for the concentration of persistent elastically active chains, $\nu_e(\Delta t)$, as well as $G(t)$. The immediate observation is that increasing polymer concentration increases the instantaneous concentration of elastically active chains, even at lower sticker strengths. But there is an ``association turnover'' with weak stickers, meaning that while a percolated network exists at any moment, the bonds within it are constantly dissociating, relaxing, and forming new associations in a relaxed configuration. This causes the elastic plateau observed for stronger stickers to recede to shorter times instead of dropping to lower heights. Fig.~\ref{fig:elastic emergence}(a) emphasizes the ``evanescent'' behavior, meaning that the elastic response of the network decays faster than the segmental relaxations of even single polymer chains, rendering it invisible, buried within the polymer relaxation spectrum.
\begin{figure*}
\includegraphics[width=\textwidth]{fig9.pdf}
\caption{Comparison of the rheologically measured $G(t)$ with the microscopically measured $\nu_e(\Delta t)$ for (a) $\epsilon_{st}$=6, (b) $\epsilon_{st}$=9, and (c) $\epsilon_{st}$=12. Weak stickers form an evanescent gel, which contains mechanically rigid structures that dissociate before their elastic response is observable. As $\epsilon_{st}${} is increased, the elastic plateau emerges from the single-chain relaxations. Solid lines through $G(t)$ data represent fits using Eqn.~\ref{eqn:MLphys}. Broken lines through $\nu_e(\Delta t)$ represent fits with a single exponential.}
\label{fig:elastic emergence}
\end{figure*}
These measurements of the concentration of elastically active chains make it possible to estimate $c_g${} as a function of $\epsilon_{st}${} by using $\nu_e$ as a proxy for $G_\epsilon$ and fitting $\nu_e(c)$ with a power law, $\nu_e(0)=A(c/c_g-1)^z$, similar to Fig.~\ref{fig:Goft and hyperscaling}(b). Further, by considering the values of $\nu_e$ at different time scales, we can estimate the $c_g${} that would be observed if rheological measurements were limited to a particular time scale. Fig.~\ref{fig:nue vs c} displays the dependence of $\nu_e$ on concentration, sticker strength, and time scale. The investigated time scales are chosen relative to the longest Rouse relaxation time of a 24-bead chain, the reasoning being that a process faster than this time scale will be difficult to distinguish from the polymer modes. In different complex fluids there might be other relaxation processes at play that compete with the dissociation mechanism.
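The power-law fit used for this estimate is a standard nonlinear least-squares problem; a minimal Python sketch (with placeholder data and an illustrative initial guess, not the values used in the analysis) is:
\begin{verbatim}
# Sketch: extract c_g by fitting nu_e(c) = A*(c/c_g - 1)^z.
import numpy as np
from scipy.optimize import curve_fit

def nu_e_model(c, A, c_g, z):
    x = np.clip(c / c_g - 1.0, 1e-12, None)     # nu_e = 0 below c_g
    return A * x**z

c = np.array([0.6, 1.0, 1.5, 2.0])              # c/c* (placeholders)
nu_e = np.array([0.0009, 0.010, 0.028, 0.052])  # placeholder data

popt, pcov = curve_fit(nu_e_model, c, nu_e, p0=[0.01, 0.5, 1.5])
# popt -> approximately (A, c_g/c*, z) = (0.01, 0.5, 1.5)
\end{verbatim}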
\subsection{Competition of Time Scales}
\label{sec:Competition of Time Scales}
\begin{figure*}
\includegraphics[width=\textwidth]{fig10.pdf}
\caption{Extraction of $c_g${} from measurements by (a) finding the intersection (crosses) of the inter-chain conversion $p_2$ with the quantity $1/[(1-p_1)f-1]$; or fitting with the form $\nu_e(c)=A(c/c_g-1)^z$ at (b) $\Delta t=0$ and (c) $\Delta t=\lambda_1$, where $\lambda_1$ is the slowest Rouse mode for a 24 bead chain. After dissociations reduce $\nu_e$ to zero, no fit is possible.}
\label{fig:nue vs c}
\end{figure*}
Some notes must be made about the values of $c_g${} extracted in Fig.~\ref{fig:nue vs c}. The ability to extract a value of $c_g${} is contingent on having at least three different values of $c/c^*${} with a non-zero value of $\nu_e$, in order to constrain the fit. With so few data points (due to limitations of computational resources), meaningful uncertainty estimates on these fit parameters are not available. The purpose of this exercise is to demonstrate that $c_g${} may be identified systematically, rather than to resolve its value precisely. It is also notable that once the observation time scale passes beyond the dissociation time scale, $\nu_e$ drops to 0 and the gel network is not rheologically apparent ($\epsilon_{st}$=3 in Fig.~\ref{fig:nue vs c}(b), and $\epsilon_{st}$=3 and 6 in (c)).
\begin{figure*}
\includegraphics[width=\textwidth]{fig11.pdf}
\caption{(a) Dependence of $c_g${} on $\epsilon_{st}$. The curves compare the apparent gel point when considering coarse static information ($p_2$), detailed instantaneous information ($\nu_e(t=0)$), or dynamic measurements at various time scales. (b) Dependence on $\epsilon_{st}${} of the time scale $\tau_\nu$ for the dissociation of elastically active chains. Horizontal lines indicate terminal relaxations predicted by Rouse and Zimm theory.}
\label{fig:cg vs epsilon}
\end{figure*}
Fig.~\ref{fig:cg vs epsilon} collects the values of $c_g${} as a function of $\epsilon_{st}${} extracted from the $\nu_e$ data. The curves for different time scales suggest that, so long as an elastic network persists at all, the observed value of $c_g${} is not significantly impacted by the observation time scale. Meanwhile, the absence of a $c_g${} value for lower $\epsilon_{st}${} values, even when considering time scales much shorter than the single chain relaxation time, emphasizes the difficulty of rheologically detecting the presence of a network of weak stickers. For reference, a purely static estimate of $c_g${} is also included in Fig.~\ref{fig:cg vs epsilon}, using the criterion adapted by Dobrynin from the work of Flory and Stockmayer, namely that the fraction of inter-chain associations $p_2$ at the gel point is $p_2=1/[(1-p_1)f-1]$. This criterion is applied because it is a method of extracting an estimate of $c_g${} from static properties using very few concentration samples. Together, the curves in Fig.~\ref{fig:cg vs epsilon} show that, as one might intuitively expect, $c_g${} generally increases with $\epsilon_{st}${}, and that static or instantaneous measurements can detect a network at much lower sticker strength than dynamic measurements can, due to interference from other relaxation processes.
The absence of the typical signatures of gelation in a system with weak stickers can be explained by considering the time scale for dissociations. Fig.~\ref{fig:competition of time scales} depicts the concentration dependence of two different relaxation time measurements for both strong and weak stickers. One time scale is the rheologically observed terminal relaxation time $\tau_G$. The other time scale is the renormalized bond lifetime $\tau_M$. This time scale is measured by constructing an association matrix $M$ for each snapshot of the simulation; that is, $M_{ij}(t)=1$ if and only if sticker $i$ is bound to sticker $j$ at time $t$. Specifically, just as $p_2$ is the fraction of stickers which are associated to a different chain, we define $M_2$, which includes only inter-chain associations. We then define an autocorrelation function
\begin{equation}
C_M(\Delta t)=\langle M_2(t)\cdot M_2(t+\Delta t)\rangle
\end{equation}
where $\langle A\cdot B\rangle$ indicates an ensemble, time, and element-wise average of the element-wise product of $A$ and $B$. This autocorrelation function decays as a single exponential, from which we extract the time scale $\tau_M$.
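Operationally, $C_M$ is evaluated by stacking the $M_2$ matrices for all snapshots and averaging element-wise products over time origins. A minimal Python sketch, with random placeholder matrices standing in for simulation snapshots, is:
\begin{verbatim}
# Sketch: autocorrelation of the inter-chain association matrix M2.
import numpy as np

def c_m(M2, lag):
    """M2: (T, n, n) array of 0/1 matrices; average over time origins."""
    T = M2.shape[0]
    return np.mean([np.mean(M2[t] * M2[t + lag])
                    for t in range(T - lag)])

rng = np.random.default_rng(0)
M2 = (rng.random((50, 8, 8)) < 0.1).astype(int)  # placeholder snapshots
curve = [c_m(M2, k) for k in range(10)]          # fit decay -> tau_M
\end{verbatim}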
\begin{figure}
\includegraphics[width=240pt]{fig12.pdf}
\caption{The associative pair lifetime $\tau_M$ limits the growth of the longest observed rheological time scale $\tau_G$. When $\tau_M$ is short, the rapid growth of $\tau_G$ leading up to the gel transition is suppressed. A power law with exponent $-3$ represents scaling theory.}
\label{fig:competition of time scales}
\end{figure}
This association lifetime is compared to the rheologically observed time scales. Fig.~\ref{fig:competition of time scales} shows that the bond lifetimes for $\epsilon_{st}$=12 are several orders of magnitude above the dilute terminal flow timescale. However, when $\epsilon_{st}$=5, the bond lifetimes are unsurprisingly lower. The proximity of the bond lifetime to the single polymer relaxation time means that, even if there are instantaneously large associated clusters, the rapid growth of the cluster relaxation time is not observable because the clusters dissociate before that time scale is reached.
\section{Conclusion}
\label{sec:Conclusion}
Dynamic signatures of gelation in associative polymer solutions have been investigated using Brownian dynamics simulations. Dynamic and linear viscoelastic properties, such as the zero-shear rate viscosity, the storage ($G'$) and loss ($G''$) moduli, the large-scale relaxation time, and the bond lifetime, are used to characterise the viscoelasticity of the associative polymer solutions in the pre- and post-gel regimes. Results obtained for the scaling of the zero-shear rate viscosity and terminal relaxation time are compared with the scaling prediction of the mean-field theory for associative polymers when the energy barrier against dissociation is relatively low. For stronger stickers, this scaling prediction may still be followed, but the relaxation times in question are outside of our observational window. The concentration corresponding to the gelation crossover for $\epsilon_{st}$=5 coincides with the maximum in the free chain concentration, which is one of the static measures of gelation. For $\epsilon_{st}$=12, several conventional rheological signatures of gelation were observed, including a diverging instantaneous viscosity, a growing plateau in the elastic modulus, power laws in $G(t)$, $G'$, and $G''$, and the frequency independence of $\tan \delta$.
In order to study the effect of HI on the dynamic properties in the pre-gel and post-gel regimes, we have compared the results from simulations carried out with and without HI. A key finding of this study is that the relaxation time, as a function of the number of inter-chain associated stickers per chain, follows the scaling prediction of the sticky Rouse model~\cite{RnSdynamics,RnS2001} in the post-gel regime. Moreover, the same asymptotic scaling is also obtained in the limit of high concentration in the post-gel regime from simulations without HI, indicating the screening of hydrodynamic interactions due to the formation of a dense network.
It has been verified by analyzing the structure and dynamics of the associated network directly that the viscoelastic response of physical gels at long times corresponds to the persistence of load-bearing structures within the associative network. By monitoring the status of the network in addition to the whole solution's stress response, we have shown that the network's response can be overwhelmed by the distribution of relaxation modes inherent in polymeric systems. This situation leads to a competition of time scales between the lifetime of the network and the terminal relaxation time of the unassociated polymer. If the association time scale is much greater, then the classical signatures of gelation established with chemical gels are present, at least approximately. If the association time scale is near or less than the single polymer's, then only the concentration scaling of terminal flow behavior can reveal the evanescent gel beneath the other relaxations.
This study was confined to very short chains, with $f=4$ stickers per chain and $\ell=0$ or 4 spacers between stickers. Many experimental systems have dozens of stickers separated by huge lengths of polymer, so future studies should consider the effects of $f$ and $\ell$ on the gel transition systematically. An important consideration in modern applications is the nonlinear rheology of complex fluids. Associative polymers are known to have unique responses to high flow rates because the reversible bonds can be torn apart, yet recover quickly when the fluid is at rest. With this study's simulation framework in place, future studies should investigate the nonlinear behavior of these associative polymers, and whether the ``evanescent'' gel exhibits a more pronounced response in different flow conditions.
\section*{Acknowledgements}
This research was supported under Australian Research Council's Discovery Projects
funding scheme (project number DP190101825). It was undertaken with the assistance
of resources from the National Computational Infrastructure (NCI Australia), an NCRIS
enabled capability supported by the Australian Government. This work also employed computational facilities at Monash University: the MASSIVE and MonARCH systems.
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{37}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Rubinstein and Colby(2003)Rubinstein, and Colby]{RubColby2003}
Rubinstein,~M.; Colby,~R.~H. \emph{Polymer Physics}; Oxford University Press,
Oxford, 2003\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rubinstein and Dobrynin(1997)Rubinstein, and Dobrynin]{RubDob97}
Rubinstein,~M.; Dobrynin,~A. Solutions of Associative Polymers. \emph{Trends in
Polymer Science} \textbf{1997}, \emph{5}, 181--186\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Martin and Adolf(1991)Martin, and Adolf]{Martin1991-od}
Martin,~J.~E.; Adolf,~D. The sol-gel transition in chemical gels. \emph{Annu.
Rev. Phys. Chem.} \textbf{1991}, \emph{42}, 311--339\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Flory(1974)]{Flory1974-tm}
Flory,~P.~J. Introductory lecture. \emph{Faraday Discuss. Chem. Soc.}
\textbf{1974}, \emph{57}, 7\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Douglas(2018)]{Douglas2018-xh}
Douglas,~J.~F. Weak and strong gels and the emergence of the amorphous solid
state. \emph{Gels} \textbf{2018}, \emph{4}, 19\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Winter and Chambon(1986)Winter, and Chambon]{Winter:2000gw}
Winter,~H.~H.; Chambon,~F. {Analysis of Linear Viscoelasticity of a
Crosslinking Polymer at the Gel Point}. \emph{J. Rheol.} \textbf{1986},
\emph{30}, 367--382\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Nicolai \latin{et~al.}(1997)Nicolai, Randrianantoandro, Prochazka, and
Durand]{Nicolai1997-su}
Nicolai,~T.; Randrianantoandro,~H.; Prochazka,~F.; Durand,~D. Viscoelastic
relaxation of polyurethane at different stages of the gel formation. 2.
{Sol-Gel} transition dynamics. \emph{Macromolecules} \textbf{1997},
\emph{30}, 5897--5904\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Parada and Zhao(2018)Parada, and Zhao]{Parada2018-fx}
Parada,~G.~A.; Zhao,~X. Ideal reversible polymer networks. \emph{Soft Matter}
\textbf{2018}, \emph{14}, 5186--5196\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chassenieux \latin{et~al.}(2011)Chassenieux, Nicolai, and
Benyahia]{Chassenieux2011-fv}
Chassenieux,~C.; Nicolai,~T.; Benyahia,~L. Rheology of associative polymer
solutions. \emph{Curr. Opin. Colloid Interface Sci.} \textbf{2011},
\emph{16}, 18--26\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Yan \latin{et~al.}(2004)Yan, Springsteen, Deeter, and
Wang]{Yan2004-zl}
Yan,~J.; Springsteen,~G.; Deeter,~S.; Wang,~B. The relationship among pKa, pH,
and binding constants in the interactions between boronic acids and
diols---it is not as simple as it appears. \emph{Tetrahedron} \textbf{2004},
\emph{60}, 11205--11209\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Furikado \latin{et~al.}(2014)Furikado, Nagahata, Okamoto, Sugaya,
Iwatsuki, Inamo, Takagi, Odani, and Ishihara]{Furikado2014-vr}
Furikado,~Y.; Nagahata,~T.; Okamoto,~T.; Sugaya,~T.; Iwatsuki,~S.; Inamo,~M.;
Takagi,~H.~D.; Odani,~A.; Ishihara,~K. Universal reaction mechanism of
boronic acids with diols in aqueous solution: kinetics and the basic concept
of a conditional formation constant. \emph{Chemistry} \textbf{2014},
\emph{20}, 13194--13202\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Webber \latin{et~al.}(2016)Webber, Appel, Meijer, and
Langer]{Webber2016-hj}
Webber,~M.~J.; Appel,~E.~A.; Meijer,~E.~W.; Langer,~R. Supramolecular
biomaterials. \emph{Nat. Mater.} \textbf{2016}, \emph{15}, 13--26\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kloxin and Bowman(2013)Kloxin, and Bowman]{Kloxin2013-ci}
Kloxin,~C.~J.; Bowman,~C.~N. Covalent adaptable networks: smart, reconfigurable
and responsive network systems. \emph{Chem. Soc. Rev.} \textbf{2013},
\emph{42}, 7161--7173\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang and Heilshorn(2015)Wang, and Heilshorn]{Wang2015-kv}
Wang,~H.; Heilshorn,~S.~C. Adaptable hydrogel networks with reversible linkages
for tissue engineering. \emph{Adv. Mater.} \textbf{2015}, \emph{27},
3717--3736\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Voorhaar and Hoogenboom(2016)Voorhaar, and
Hoogenboom]{Voorhaar:2016jt}
Voorhaar,~L.; Hoogenboom,~R. {Supramolecular polymer networks: hydrogels and
bulk materials}. \emph{Chem. Soc. Rev.} \textbf{2016}, \emph{45},
4013--4031\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Stukalin \latin{et~al.}(2013)Stukalin, Cai, Kumar, Leibler, and
Rubinstein]{Stukalin2013-hu}
Stukalin,~E.~B.; Cai,~L.-H.; Kumar,~N.~A.; Leibler,~L.; Rubinstein,~M.
Self-healing of unentangled polymer networks with reversible bonds.
\emph{Macromolecules} \textbf{2013}, \emph{46}, 7525--7541\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Herbst \latin{et~al.}(2013)Herbst, Dohler, Michael, and
Binder]{Herbst2013-gi}
Herbst,~F.; Dohler,~D.; Michael,~P.; Binder,~W.~H. Self-healing polymers via
supramolecular forces. \emph{Macromol. Rapid Commun.} \textbf{2013},
\emph{34}, 203--220\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chaudhuri \latin{et~al.}(2016)Chaudhuri, Gu, Klumpers, Darnell,
Bencherif, Weaver, Huebsch, Lee, Lippens, Duda, and Mooney]{Chaudhuri2016-lf}
Chaudhuri,~O.; Gu,~L.; Klumpers,~D.; Darnell,~M.; Bencherif,~S.~A.;
Weaver,~J.~C.; Huebsch,~N.; Lee,~H.-P.; Lippens,~E.; Duda,~G.~N.;
Mooney,~D.~J. Hydrogels with tunable stress relaxation regulate stem cell
fate and activity. \emph{Nat. Mater.} \textbf{2016}, \emph{15},
326--334\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Holten-Andersen \latin{et~al.}(2011)Holten-Andersen, Harrington,
Birkedal, Lee, Messersmith, Lee, and Waite]{Holten-Andersen2011-ke}
Holten-Andersen,~N.; Harrington,~M.~J.; Birkedal,~H.; Lee,~B.~P.;
Messersmith,~P.~B.; Lee,~K. Y.~C.; Waite,~J.~H. pH-induced metal-ligand
cross-links inspired by mussel yield self-healing polymer networks with
near-covalent elastic moduli. \emph{Proc. Natl. Acad. Sci. U. S. A.}
\textbf{2011}, \emph{108}, 2651--2655\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Graham(1987)]{Graham1987-mp}
Graham,~N.~B. In \emph{Hydrogels in Medicine and Pharmacy}; Peppas,~N.~A., Ed.;
CRC Press: Boca Raton, FL, 1987; Vol.~2\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Winter(2016)]{WinterGel}
Winter,~H.~H. \emph{Encyclopedia of Polymer Science and Technology}; John
Wiley \& Sons, 2016; pp 1--15\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bromberg(1998)]{Bromberg}
Bromberg,~L. Scaling of Rheological Properties of Hydrogels from Associating
Polymers. \emph{Macromolecules} \textbf{1998}, \emph{31}, 6148--6156\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Thomas \latin{et~al.}(2009)Thomas, DePuit, and Khomami]{Dennis09}
Thomas,~D.~G.; DePuit,~R.~J.; Khomami,~B. Dynamic simulations of individual
macromolecules in oscillatory shear flow. \emph{Journal of Rheology}
\textbf{2009}, \emph{53}, 275--291\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wilson and Baljon(2017)Wilson, and Baljon]{polym9110556}
Wilson,~M.~A.; Baljon,~A. R.~C. Microstructural Origins of Nonlinear Response
in Associating Polymers under Oscillatory Shear. \emph{Polymers}
\textbf{2017}, \emph{9}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Brassinne \latin{et~al.}(2017)Brassinne, Cadix, Wilson, and van
Ruymbeke]{Ruyumbeke2017}
Brassinne,~J.; Cadix,~A.; Wilson,~J.; van Ruymbeke,~E. Dissociating sticker
dynamics from chain relaxation in supramolecular polymer networks—The
importance of free partner! \emph{Journal of Rheology} \textbf{2017},
\emph{61}, 1123--1134\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Suman and Joshi(2020)Suman, and Joshi]{SumanJOR}
Suman,~K.; Joshi,~Y.~M. On the universality of the scaling relations during
sol-gel transition. \emph{Journal of Rheology} \textbf{2020}, \emph{64},
863--877\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ozaki \latin{et~al.}(2017)Ozaki, Narita, Koga, and Indei]{Indei17}
Ozaki,~H.; Narita,~T.; Koga,~T.; Indei,~T. Theoretical Analysis of Critical
Flowable Physical Gel Cross-Linked by Metal Ions and
Polyacrylamide-Derivative Associating Polymers Containing Imidazole Groups.
\emph{Polymers} \textbf{2017}, \emph{9}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mewis \latin{et~al.}(2001)Mewis, Kaffashi, Vermant, and
Butera]{Mewis2001}
Mewis,~J.; Kaffashi,~B.; Vermant,~J.; Butera,~R.~J. Determining Relaxation
Modes in Flowing Associative Polymers Using Superposition Flows.
\emph{Macromolecules} \textbf{2001}, \emph{34}, 1376--1383\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Holten-Andersen \latin{et~al.}(2014)Holten-Andersen, Jaishankar,
Harrington, Fullenkamp, DiMarco, He, McKinley, Messersmith, and
Lee]{Andersen2014}
Holten-Andersen,~N.; Jaishankar,~A.; Harrington,~M.~J.; Fullenkamp,~D.~E.;
DiMarco,~G.; He,~L.; McKinley,~G.~H.; Messersmith,~P.~B.; Lee,~K. Y.~C.
Metal-coordination: using one of nature{'}s tricks to control soft material
mechanics. \emph{J. Mater. Chem. B} \textbf{2014}, \emph{2}, 2467--2472\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wagner and McKinley(2017)Wagner, and McKinley]{Wagner2017}
Wagner,~C.~E.; McKinley,~G.~H. Age-dependent capillary thinning dynamics of
physically-associated salivary mucin networks. \emph{Journal of Rheology}
\textbf{2017}, \emph{61}, 1309--1326\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rubinstein and Semenov(1998)Rubinstein, and Semenov]{RnSdynamics}
Rubinstein,~M.; Semenov,~A.~N. Thermoreversible Gelation in Solutions of
Associative Polymers. 2. Linear Dynamics. \emph{Macromolecules}
\textbf{1998}, \emph{31}, 1386--1397\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rubinstein and Semenov(2001)Rubinstein, and Semenov]{RnS2001}
Rubinstein,~M.; Semenov,~A.~N. Dynamics of Entangled Solutions of Associative
Polymers. \emph{Macromolecules} \textbf{2001}, \emph{34}, 1058--1068\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Santra \latin{et~al.}(2021)Santra, Dünweg, and
Ravi~Prakash]{AritraStatJoR}
Santra,~A.; Dünweg,~B.; Ravi~Prakash,~J. Universal scaling and
characterization of gelation in associative polymer solutions. \emph{Journal
of Rheology} \textbf{2021}, \emph{65}, 549--581\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Stoltz \latin{et~al.}(2006)Stoltz, de~Pablo, and Graham]{Stoltz2006}
Stoltz,~C.; de~Pablo,~J.; Graham,~M. Concentration dependence of shear and
extensional rheology of polymer solutions: {B}rownian dynamics simulations.
\emph{J. Rheol.} \textbf{2006}, \emph{50}, 137--167\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Huang \latin{et~al.}(2010)Huang, Winkler, Sutmann, and
Gompper]{Huang2010}
Huang,~C.~C.; Winkler,~R.~G.; Sutmann,~G.; Gompper,~G. Semidilute Polymer
Solutions at Equilibrium and under Shear Flow. \emph{Macromolecules}
\textbf{2010}, \emph{43}, 10107--10116\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[de~Gennes(1979)]{deGennes}
de~Gennes,~P.~G. \emph{Scaling Concepts in Polymer Physics}; Cornell University
Press, Ithaca, 1979\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section{Raw Data and Fitting}
\label{sec:Raw Data and Fitting}
In this section, we present raw simulation data for various observables and the fitting methods used to compute numerical measurements.
\subsection{Stress Autocorrelation $C(\Delta t)$ and Relaxation modulus $G(t)$}
\label{sec:Coft and Goft}
Figures \ref{fig:Goft_Nb4_h0}-\ref{fig:Goft_est5} show comprehensive plots of the stress autocorrelation $C(\Delta t)$ measured at equilibrium and the shear relaxation modulus $G(t)$ measured after a step strain $\gamma=0.2$, with strain rate $\dot{\gamma}=10^6$. The initial post-step stress was found to depend linearly on the strain at this and higher step sizes, but at higher $\gamma$ some associated pairs were broken apart during the strain.
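The sum-of-exponentials fits shown as solid lines in these figures are standard nonlinear regressions; a minimal two-mode Python sketch with synthetic data (the number of modes and the initial guess are illustrative) is:
\begin{verbatim}
# Sketch: fit G(t) with a sum of Maxwell modes g_i*exp(-t/tau_i).
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, *p):
    g, tau = np.array(p[0::2]), np.array(p[1::2])
    return np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

t = np.linspace(0.0, 10.0, 200)
G = 1.0 * np.exp(-t / 0.5) + 0.3 * np.exp(-t / 3.0)  # synthetic data

popt, _ = curve_fit(multi_exp, t, G, p0=[1.0, 0.4, 0.2, 2.0])
# popt -> (g_1, tau_1, g_2, tau_2), approximately (1.0, 0.5, 0.3, 3.0)
\end{verbatim}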
\begin{figure*}
\includegraphics[width=\textwidth]{figS1.pdf}
\caption{All $G(t)$ data for $\ell=0$, $h^*=0$.}
\label{fig:Goft_Nb4_h0}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS2.pdf}
\caption{All $G(t)$ data for $\ell=0$, $h^*=0.2$.}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS3.pdf}
\caption{All $G(t)$ data using the HOOMD code for $\ell=4$, $h^*=0$.}
\end{figure*}
\begin{figure}[tbh]
\begin{center}
\resizebox{12cm}{!}{\includegraphics*[width=4cm]{figS4.pdf}}
\end{center}
\vskip-15pt
\caption{\small{Non-dimensionalized stress auto-correlation function, $C^*(t)$, for systems of associative polymers with $N_b=24$, $\ell=4$, $f=4$, $\epsilon_{bb}=0$ and $\epsilon_{st}=5.0$ at different values of scaled monomer concentration, $c/c^*$, in the dilute and semi-dilute regimes. The solid black lines are fits to the simulation data using a sum of exponentials. The inset shows the entire range of $c/c^*$ covered in the study.}}
\label{fig:StressAuto}
\end{figure}
\begin{figure}[tbh]
\begin{center}
\resizebox{11cm}{!}{\includegraphics*[width=4cm]{figS5.pdf}}
\end{center}
\vskip-15pt
\caption{Non-dimensionalized stress auto-correlation function, $C^*(t)$, for systems of associative polymers with $N_b=24$, $\ell=4$, $f=4$, $\epsilon_{bb}=0$ and $\epsilon_{st}=5.0$ at different values of scaled monomer concentration, $c/c^*$, in the dilute and semi-dilute regimes. The simulations are performed without HI. The solid black lines are fits to the simulation data using a sum of exponentials.}
\label{fig:GtSPnoHI}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{figS6.pdf}
\caption{All $C(\Delta t)$ data using the previous code for $\ell=0$, $\epsilon_{st}=5$. After validation that $C(\Delta t)=G(t)$ under these conditions, data are re-plotted for comparison with $G(t)$ under other conditions.}
\label{fig:Goft_est5}
\end{figure*}
Figure~\ref{fig:GstrSP} demonstrates a fundamental rheological discrepancy between models with and without hydrodynamic interactions: when HI is neglected, there is a modest but non-negligible change in the distribution of relaxation times. This difference is represented here using the magnitude of the complex modulus.
\begin{figure}[tbh]
\begin{center}
\resizebox{11cm}{!}{\includegraphics*[width=4cm]{figS7.pdf}}
\end{center}
\vskip-15pt
\caption{Effect of hydrodynamic interaction on the scaling of complex modulus.}
\label{fig:GstrSP}
\end{figure}
\subsection{End-to-end $\hat{R_e}$ Autocorrelation}
\label{sec:Re auto}
Figures \ref{fig:ReAutocorr}-\ref{fig:ReACF_Nb24_h0} contain comprehensive plots of simulation data for the end-to-end unit vector autocorrelation. Curves in Fig.~\ref{fig:ReAutocorr} from simulations with $\epsilon_{st}=5$ and 0 have been fit with a single exponential to extract the relaxation time.
\begin{figure*}[tbh]
\centerline{
\begin{tabular}{c c}
\includegraphics[width=85mm]{figS8a.pdf}
& \includegraphics[width=85mm]{figS8b.pdf} \\
(a) & (b) \\
\end{tabular}
}
\caption{\small{End-to-end unit vector auto-correlation function at different monomer concentrations, $c/c^*$, for (a) associative polymer solutions with $N_b=24$, $\ell=4$, $f=4$, $\epsilon_{bb}=0$ and $\epsilon_{st}=5.0$ and (b) homopolymer solutions in athermal solvent ($\epsilon_{bb}=0$) with chain length $N_b=24$. Symbols represents the simulation data and dashed lines are the exponential fit at different concentrations.}
\label{fig:ReAutocorr}}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS9.pdf}
\caption{All $R_e$ ACF data for $\ell=0$, $h^*=0$.}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS10.pdf}
\caption{All $R_e$ ACF data for $\ell=0$, $h^*=0.2$.}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS11.pdf}
\caption{All $R_e$ ACF data using the HOOMD code for $\ell=4$, $h^*=0$.}
\label{fig:ReACF_Nb24_h0}
\end{figure*}
\subsection{Cluster Size Distribution $P(m)$}
\label{sec:Pofm}
Clusters of associated chains are identified in simulation trajectories by constructing an adjacency matrix of associated chains and identifying the connected components of the resulting graph. The resulting distributions of cluster sizes are presented in Figs.~\ref{fig:CustDist_exp}-\ref{fig:ClustDist_Nb24_h0}. Fig.~\ref{fig:CustDist_exp} in particular demonstrates the exponential tail in the cluster size distribution when sticker strength and concentration are modest. Figs.~\ref{fig:CustDist_PL}-\ref{fig:ClustDist_Nb24_h0} demonstrate the evolution of this distribution to a power law on approach to the gel transition, and to a bimodal distribution in the gel state. Note in Fig.~\ref{fig:ClustDist_Nb4_h0} that a power-law and even a bimodal distribution are possible even with $\epsilon_{st}=0$, at high concentration. This is due to the fact that stickers with $\epsilon_{st}=0$ have equal probability of being bound or unbound, so an instantaneous cluster is still possible, though such a cluster would have no rheological effect.
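A minimal Python sketch of this measurement, using scipy's sparse-graph tools with illustrative chain indices, is:
\begin{verbatim}
# Sketch: cluster-size distribution from the chain adjacency matrix.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_sizes(chain_pairs, n_chains):
    """chain_pairs: list of (i, j) chains joined by an association."""
    i, j = np.array(chain_pairs).T
    A = csr_matrix((np.ones(len(i)), (i, j)),
                   shape=(n_chains, n_chains))
    n_comp, labels = connected_components(A, directed=False)
    return np.bincount(np.bincount(labels))   # index m: clusters of size m

sizes = cluster_sizes([(0, 1), (1, 2), (3, 4)], n_chains=6)
# sizes -> [0, 1, 1, 1]: one monomer chain, one dimer, one trimer
\end{verbatim}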
\begin{figure*}[ptbh]
\centerline{
\resizebox{\textwidth}{!}{ \begin{tabular}{cc}
\includegraphics[width=9cm,height=!]{figS12a.pdf}
&
\includegraphics[width=9cm,height=!]{figS12b.pdf} \\[5pt]
(a) &
(b) \\
\multicolumn{2}{c}{\includegraphics[width=9cm,height=!]{figS12c.pdf} }\\[5pt]
\multicolumn{2}{c}{(c)} \\
\end{tabular} }}
\caption{\small{Chain cluster size distribution in semi-log scale for systems of associative polymers with spacer length $\ell=4$, $\epsilon_{bb}=0$ and $\epsilon_{st}=5.0$ at (a) $c/c^*=0.3$, $N_b=44$, (b) $c/c^*=0.5$, $N_b=44$ and (c) $c/c^*=0.6$, $N_b=34$. The solid red symbols are simulation data and the exponential decay of the cluster size distribution is shown by the dashed black lines.}
\label{fig:CustDist_exp}}
\end{figure*}
\begin{figure*}[ptbh]
\centerline{
\begin{tabular}{c c}
\includegraphics[width=73mm]{figS13a.pdf}
& \includegraphics[width=74mm]{figS13b.pdf} \\
(a) & (b) \\
\includegraphics[width=74mm]{figS13c.pdf}
& \includegraphics[width=71mm]{figS13d.pdf} \\
(c) & (d) \\
\end{tabular}
}
\caption{\small{Chain cluster size distribution in log-log scale for systems of associative polymers with spacer length $\ell=4$, $\epsilon_{bb}=0$ and $\epsilon_{st}=5.0$ at different values of $c/c^* = \lbrace 0.3,\, 0.5,\, 0.8,\, 0.9\rbrace$ and different chain lengths, $N_b$. The power-law behaviour of the cluster size distribution is shown by the solid black lines in (c) and (d).}
\label{fig:CustDist_PL}}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS14.pdf}
\caption{All cluster size distribution data for $\ell=0$, $h^*=0$. $c/c^*=$ 0.06 for purple triangles, 0.12 for purple diamonds, 0.25 for blue triangles, 0.5 for green squares, and 1 for red circles.}
\label{fig:ClustDist_Nb4_h0}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS15.pdf}
\caption{All cluster size distribution data for $\ell=0$, $h^*=0.2$.}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS16.pdf}
\caption{All cluster size distribution data using the HOOMD code for $\ell=4$, $h^*=0$.}
\label{fig:ClustDist_Nb24_h0}
\end{figure*}
\section{Derived Simulation Measurements}
Here we present a selection of numerical measurements extracted from simulations by fitting simulation data.
\subsection{Loss Tangent}
The stress relaxation data for $\epsilon_{st}=5.0$ were fit with a series of exponentials, then transformed to the storage and loss moduli $G'$ and $G''$, the ratio of which is the loss tangent $\tan{\delta}$. These measurements showed the signature frequency-independence near the gel transition for the case of $\epsilon_{st}=5.0$. However, Fig.~\ref{fig:tandelta} demonstrates that when stickers are weak, this signature vanishes.
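For reference, the transform from the fitted modes to the dynamic moduli is the standard Maxwell-mode result: writing the fit as $G(t)=\sum_i g_i\, e^{-t/\tau_i}$,
\begin{equation*}
G'(\omega)=\sum_i g_i\,\frac{(\omega\tau_i)^2}{1+(\omega\tau_i)^2},
\qquad
G''(\omega)=\sum_i g_i\,\frac{\omega\tau_i}{1+(\omega\tau_i)^2},
\qquad
\tan\delta=\frac{G''(\omega)}{G'(\omega)}.
\end{equation*}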
\begin{figure}[tbh]
\begin{center}
\resizebox{11cm}{!}{\includegraphics*[width=4cm]{figS17.pdf}}
\end{center}
\vskip-15pt
\caption{\small{Variation of the loss tangent, $\tan\,\delta$, with the dimensionless frequency, $\omega\,\lambda_H$, at different values of $c/c^*$ for associative polymer solutions with $N_b=24$, $\ell=4$, $f=4$, $\epsilon_{bb}=0$ and $\epsilon_{st}=5.0$.}}
\label{fig:tandelta}
\end{figure}
\subsection{Bond Lifetime}
Bond lifetimes are measured by extracting the durations of bond events from simulation trajectories. Alternatively, the autocorrelation of bonded pairs provides a renormalized lifetime which includes bind-unbind-rebind events. Bond lifetimes measured by the average association duration are shown in Fig.~\ref{fig:tbond} as a function of concentration with and without HI for the case of $\epsilon_{st}=5.0$.
\begin{figure}[tbh]
\begin{center}
\resizebox{11cm}{!}{\includegraphics*[width=4cm]{figS18.pdf}}
\end{center}
\vskip-15pt
\caption{\small{Dimensionless mean bond lifetime, $t^*_{\text{bond}}$ as a function of concentration, $c/c^*$, for associative polymer solutions with $N_b=24$, $\ell=4$, $f=4$, $\epsilon_{bb}=0$ and $\epsilon_{st}=5.0$. The symbols are from simulations and the dashed line is best fit to the data.}}
\label{fig:tbond}
\end{figure}
Figures \ref{fig:M2acf_Nb4_h0}-\ref{fig:M2acf_Nb24_h0} present the autocorrelation function for the fraction of inter-chain associations at time $t$ which are also present at $t+\Delta t$. These data sets were fit with a single exponential, for which the only parameter is the relaxation time $\tau_M$. Surprisingly, there appears to be a power-law tail in this autocorrelation function, which is robustly excluded by the single-parameter fit. Note that this tail seems largely independent of both concentration and sticker strength, suggesting a generic characteristic of this measurement. This could be due to mechanisms such as sticker pairs dissociating, de-correlating completely, and then subsequently re-associating due to the finite number of potential partners in the simulation volume.
\begin{figure*}
\includegraphics[width=\textwidth]{figS19.pdf}
\caption{All $M_2$ ACF data for $\ell=0$, $h^*=0$.}
\label{fig:M2acf_Nb4_h0}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS20.pdf}
\caption{All $M_2$ ACF data for $\ell=0$, $h^*=0.2$.}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figS21.pdf}
\caption{All $M_2$ ACF data using the HOOMD code for $\ell=4$, $h^*=0$.}
\label{fig:M2acf_Nb24_h0}
\end{figure*}
\subsection{Instantaneous Shear Modulus}
The instantaneous shear modulus, extracted from simulations as the $t=0$ limit of $G(t)$, is presented in Fig.~\ref{fig:G0_v_c}. Note that, in some studies of associating polymer gels, the quantity $G_0$ refers to the short-time or high-frequency limit of the plateau produced by the gel network. In our simulations, faster relaxations due to single chain modes are observable, and are included in our measurement of $G_0$.
\begin{figure*}[tbh]
\centerline{
\begin{tabular}{c c}
\includegraphics[width=85mm]{figS22a.pdf}
& \includegraphics[width=85mm]{figS22b.pdf} \\
(a) & (b) \\
\end{tabular}
}
\caption{\small{(a) Scaled elastic modulus, $G_0/k_BT$, versus concentration, $c/c^*$, for the solutions of associative polymers with $N_b=24$, $\ell=4$, $f=4$, $\epsilon_{bb}=0$, $\epsilon_{st}=5.0$. (b) Elastic modulus, $G_0/k_BT$, as a function of the relative distance from the gelation concentration $\Delta = (c-c_g)/c_g$, near the gel-point in the post gel regime, where the gelation concentration is considered to be $c_g/c^*=0.5$. Symbols are the simulation data and the solid and broken lines indicate the corresponding scaling exponents.}
\label{fig:G0_v_c}}
\end{figure*}
\section{Validation Studies}
Here we present some data validating the accuracy of various numerical measurement strategies.
\subsection{Zero Shear-Rate Viscosity}
The measurements of the polymer contribution to the zero-shear-rate viscosity $\eta_{0p}$ presented in the main text are acquired by integrating a fit of the stress relaxation curve $G(t)$ over all time. This measurement is approximate in that our $G(t)$ data are discrete, noisy, and do not reach to infinite or infinitesimal times. The step strain protocol could in principle also introduce subtle nonlinearities. Further, the choice of fitting form imposes assumptions on the data which could be invalid under some conditions. To validate that these issues are not corrupting the results, steady shear simulations have been carried out at very low shear rates to extract the zero-shear-rate limit of the viscosity for comparison with the integral method. The resulting data are shown in Fig.~\ref{fig:eta_v_gam}.
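Note that for a fitted sum of exponentials the time integral is analytic, $\eta_{0p}=\sum_i g_i\,\tau_i$; a one-line Python sketch with placeholder mode values is:
\begin{verbatim}
# Sketch: eta_0p from the fitted G(t) = sum_i g_i*exp(-t/tau_i);
# the integral over all time evaluates analytically to sum_i g_i*tau_i.
import numpy as np

g = np.array([1.0, 0.3])     # mode amplitudes (placeholders)
tau = np.array([0.5, 3.0])   # mode relaxation times (placeholders)
eta_0p = np.sum(g * tau)     # = integral of G(t) from 0 to infinity
\end{verbatim}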
\begin{figure}[tbh]
\begin{center}
\resizebox{11cm}{!}{\includegraphics*[width=4cm]{figS23.pdf}}
\end{center}
\vskip-15pt
\caption{\small{Polymeric component of the non-dimensionalized shear viscosity, $\eta^*_p = \displaystyle\frac{\eta_p}{n_pk_BT\lambda_H}$, as a function of non-dimensional shear rate ($\dot{\gamma}\,\lambda_H$) for different values of monomer concentration, $c/c^*$. The open symbols are the values of $\eta^*_p$ calculated from shear flow simulations and filled symbols are the values of zero-shear rate viscosity, $\eta_p^{0^*}$, evaluated as the integral of $G(t)$}}
\label{fig:eta_v_gam}
\end{figure}
\subsection{Oscillatory Shear}
It is possible that the equilibrium stress autocorrelation function $C(\Delta t)$ does not capture the plateau modulus of a gel, since $C(\Delta t)$ must go to zero as $\Delta t$ goes to infinity in equilibrium. That is, there could be a finite offset between $C(\Delta t)$ and $G(t)$ for a gel. To validate that the absence of an apparent gel transition at $\epsilon_{st}=5$ was not due to such a concern, oscillatory shear flow simulations were carried out at several frequencies to verify that there was no discrepancy between the moduli measured by transforming $C(\Delta t)$ and those measured by perturbative simulations. Fig.~\ref{fig:DyModRouse} demonstrates the correctness of both the stress autocorrelation measurement and the oscillatory measurement by comparing with the exactly known Rouse spectrum of a 24 bead chain. Fig.~\ref{fig:DyModN79}(a) then demonstrates this validation for the non-trivial case of $N_b=79$ with several stickers. Lissajous curves for these oscillatory simulations are shown in Fig.~\ref{fig:DyModN79}(b).
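For completeness, the moduli in the OSF simulations can be obtained by projecting the steady-state stress onto the in-phase and out-of-phase components of the imposed strain. A minimal Python sketch, assuming $\gamma(t)=\gamma_0\sin(\omega t)$ and an averaging window spanning an integer number of periods (all names are illustrative), is:
\begin{verbatim}
# Sketch: G' and G'' from the stress response to gamma0*sin(omega*t),
# using sigma(t) = gamma0*(G'*sin(omega*t) + G''*cos(omega*t)).
import numpy as np

def dynamic_moduli(t, stress, gamma0, omega):
    """Assumes t spans an integer number of periods in steady state."""
    Gp  = 2.0 * np.mean(stress * np.sin(omega * t)) / gamma0  # storage
    Gpp = 2.0 * np.mean(stress * np.cos(omega * t)) / gamma0  # loss
    return Gp, Gpp
\end{verbatim}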
\begin{figure}[tbh]
\begin{center}
\resizebox{11cm}{!}{\includegraphics*[width=4cm]{figS24.pdf}}
\end{center}
\vskip-15pt
\caption{\small{Non-dimensionalized dynamic moduli ($G'^*$ \& $G''^*$) as a function of dimensionless frequency, $\omega\,\lambda_H$, for a dilute solution of Rouse chains with chain length, $N_b=24$. The filled (for $G'$) and open (for $G''$) symbols are from BD simulations at equilibrium and under oscillatory shear flow (OSF), where $\gamma_0$ is the amplitude of oscillation. The solid and broken lines are analytical solutions for $G'$ and $G''$, respectively, for Rouse chains in a dilute solution.}}
\label{fig:DyModRouse}
\end{figure}
\begin{figure*}[tbh]
\centerline{
\begin{tabular}{c c}
\includegraphics[width=85mm]{figS25a.pdf}
& \includegraphics[width=85mm]{figS25b.pdf} \\
(a) & (b) \\
\end{tabular}
}
\caption{(a) Non-dimensionalized dynamic moduli ($G'^*$ \& $G''^*$) as a function of dimensionless frequency, $\omega\,\lambda_H$, for associative polymer solutions with chain length, $N_b=79$, spacer length, $\ell=4$, $\epsilon_{bb}=0$ and $\epsilon_{st}=5.0$ at $c/c^*=1.0$. The filled (for $G'$) and open (for $G''$) symbols are from BD simulations at equilibrium and under oscillatory shear flow (OSF) at different oscillation frequencies, where $\gamma_0$ is the amplitude of oscillation. (b) Lissajous curve corresponding to the frequencies considered for the comparison of dynamic moduli calculated from equilibrium and OSF simulations.}
\label{fig:DyModN79}
\end{figure*}
\end{document}
\section{Introduction}
\input{paper_sections/1_introduction}
\section{Hypotheses and Methodological Approach}\label{sec:hypotheses_methods}
\input{paper_sections/2_hypotheses_method_approach}
\section{Institutional Setting \& Data Summary}\label{sec:institution}
\input{paper_sections/3_institutional_settings_data}
\section{Why Do Students Report Non-truthfully?}\label{sec:why_misreport}
\input{paper_sections/4_non_truthful_reporting_analysis}
\section{What Causes Payoff-relevant Omissions?}\label{sec:welfare_misreport}
\input{paper_sections/5_payoff_relevant_mistakes_analysis}
\section{How Does Non-truthful Reporting Affect Demand Estimation?}\label{sec:demand_estimate_misreport}
\input{paper_sections/6_demand_estimation_analysis}
\section{Robustness Analyses}\label{sec:robust_analyses}
\input{paper_sections/robustness_analyses}
\section{Literature Review}\label{sec:lit_review}
\input{paper_sections/literature_review}
\section{Discussion}\label{sec:discussion_and_conclusion}
\input{paper_sections/discussion_and_conclusion}
\newpage
\printbibliography[]
\section{Appendix}
\input{paper_sections/X_appendix}
\end{document}
\subsection{Data sources and measures}
We now outline our three sources of data. The first source of data is application and admission data from the centralized higher education admission system in Denmark. Importantly, the data contain students' reported preferences for study programs, i.e., their rank-ordered list (ROL). We use data from 2020 and 2021 for our analysis. The number of students applying is relatively stable across the years, although an unusually high number of students applied in 2020. The number of academic study programs has decreased since 2012 and ranges from around 320 to 350. About 50\% of the study programs receive more applications than their capacity, resulting in a cutoff level for those study programs. We refer to Table~\ref{tab:admission_summary_stats} in the appendix for further statistics on the admission system.
Our second source of data is Danish registry data on applicants and their parents. These data contain socioeconomic and demographic variables. A significant advantage of registry data is that it covers the entire population of Denmark, and it is maintained by Statistics Denmark for research purposes.
Finally, we conducted two waves of an online survey targeting students who applied to higher education in Denmark. The first wave was in 2020 and the second wave was in 2021. The timing of the surveys was such that students knew their eligibility score, but had no knowledge about the realized cutoff levels.\footnote{The students received an invitation to complete the survey three days after the application deadline. The survey was open for responses until the day before the students received a personal matching answer. The cutoff levels of every study program are announced on the same day students receive a matching answer. Thus, none of the survey respondents knew the realized cutoff levels at the time of answering the survey. However, the students did know their final eligibility score at the time of answering the survey.} The survey response rate was $15\%$. Table~\ref{tab:pop_vs_survey} compares the mean of indicator variables for the population and survey respondents.
In addition, we include a column for the sample of survey respondents included in the regression analyses. The regression sample includes survey respondents without any missing data.
\input{tables/pop_vs_survey_2021}
\paragraph{Explanatory variables}
From the registry, we include measures of students' age at admission, a woman indicator, eligibility score, and middle school GPA. In addition, we include the income percentile and years of education for students' parents.
From our survey we elicit a number of additional measures of students' perceptions and personal situations. We measure whether students `Perceive rejection as being a failure'. A similar concern is that low-ability students are more likely to report non-truthfully and omit preferred programs \citep{Chen2019Self-selectionChoice}. We address this concern by measuring `Confidence in own abilities'. We also explore whether skipping is more common among students who do not fully comprehend the admission process, for instance if they misperceive the admission procedure to be Immediate Acceptance \citep{Abdulkadiroglu2003SchoolApproach}. Finally, students who do not mind postponing admission to next year are presumably less concerned with not being admitted to a study program \citep{akbarpour2022centralized}, which we investigate by measuring `Willingness to postpone'. See the exact definition of the variables and how they are measured in Appendix~\ref{app:survey_var_describe}.
\paragraph{Selective participation}
An inherent concern of survey data is whether the participants are representative of the population of interest. In Table \ref{tab:pop_vs_survey} we report a two-sided Welch's t-test to compare the statistical differences in registry variables between our population and survey participants. Compared to the population, the survey respondents are marginally younger, more likely to be women, and have a somewhat higher eligibility score and middle school GPA. Moreover, they tend to report marginally more choices in their ROL. In Section \ref{sec:robust_analyses}, we apply a Weighted Least Squares approach suggested by \citet{Dutz2021SelectionSurveys} as a robustness check for selective participation. We conclude that selective participation does not affect our findings and conclusions.
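To make these two checks concrete, a minimal sketch of the workflow is given below. The column names (\texttt{survey\_respondent}, \texttt{eligibility\_score}, \texttt{non\_truthful}, and a pre-estimated participation \texttt{propensity}) are hypothetical placeholders, and the snippet is a sketch of the procedure rather than the estimation code behind the reported results.
\begin{verbatim}
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical input: one row per applicant, with a survey-respondent flag,
# registry variables, the outcome, and a pre-estimated participation propensity.
df = pd.read_csv("applicants.csv")

# Two-sided Welch's t-test (unequal variances) for one registry variable.
pop = df.loc[df["survey_respondent"] == 0, "eligibility_score"]
srv = df.loc[df["survey_respondent"] == 1, "eligibility_score"]
t_stat, p_val = stats.ttest_ind(srv, pop, equal_var=False)

# WLS robustness check: weight respondents by inverse participation propensity.
resp = df[df["survey_respondent"] == 1]
X = sm.add_constant(resp[["woman", "age", "eligibility_score"]])
y = resp["non_truthful"]
res = sm.WLS(y, X, weights=1.0 / resp["propensity"]).fit(cov_type="HC1")
print(t_stat, p_val)
print(res.params)
\end{verbatim}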
\subsection{Descriptive analysis of survey data}
We show descriptive statistics of our outcome and explanatory variables for our sample (and population where possible) in Table~\ref{tab:pop_vs_survey}. In terms of outcomes, 20\% of the students report non-truthfully, 12\% omit their most-preferred program, and 2\% make a payoff-relevant omission.
Most of the explanatory measures are well-established in the literature.\footnote{E.g., sociodemographics and academic achievements, as well as confidence and economic risk preferences.} However, we elicited a number of new measures which have not been analyzed before.
First and foremost, a striking 55\% respond that not getting admitted is a failure. This provides direct support for theories that incorporate disutility from rejection, either directly, e.g., from report-dependent preferences \citep{Meisner2022Report-DependentStrategy-Proofness}, or indirectly from Expectation-Based Loss Aversion \citep{Dreyfuss2022Expectations-BasedMechanisms}.
Moreover, 38\% respond that they find the admission system difficult to comprehend. This highlights the potential for policies that alleviate this problem.
Finally, 70\% of respondents are willing to postpone their admission by an additional year. This number is strikingly large and indicates that viewing the admissions in a single year in isolation may miss crucial dynamic aspects \citep{larroucau2020dynamic}.
\subsection{Summary of Danish admission to higher education}
\input{tables/admission_summary_stats}
\subsection{Detailed description of survey covariates} \label{app:survey_var_describe}
\noindent\textbf{Comprehension of admission process}: Agreement with the following statement: `It is generally difficult to comprehend how the admission process for higher education works.' We create an indicator variable for the students who agree or completely agree with this statement.
\noindent\textbf{Willingness to postpone}: Responds `Agree' or `Completely agree' to the statement: `I am willing to wait another year, if I do not get admitted to any of the study programs I applied to this year'.
\noindent\textbf{Perceives rejection as a failure}: Responds `Agree' or `Completely agree' to the following statement: `It is a failure to apply for a study program and subsequently be rejected'.
\noindent\textbf{Confidence in own abilities}: Reported 0 to 10 rating to the following question: `Are you generally a person, who has large confidence in your own abilities?'. An answer of 0 corresponds to `I have very little confidence to my abilities'. An answer of 10 corresponds to `I have very high confidence to my abilities'. Thus, a higher number indicates higher self-stated confidence in own abilities.
\noindent\textbf{Risk profile}: Reported 0 to 10 rating to the following question: `Are you generally a person, who is willing to take risk or do you actively avoid risks?'. An answer of 0 corresponds to `I am not at all willing to take any risk'. An answer of 10 corresponds to `I am very willing to take risk'. Thus, a higher number indicates a higher willingness to take risks.
\newpage
\subsection{Model output from Weighted Least Squares estimation}\label{app:results_WLS}
\begin{figure}[!ht]
\begin{subfigure}[b]{.9\linewidth}
\includegraphics[width=\linewidth]{plots/by_hypothesis/hyp_1_to_3_wls_no_fe.pdf}
\end{subfigure}
\caption{Individual characteristics and truthful behavior; testing Hypotheses \ref{hypothesis:beliefs}, \ref{hypothesis:SES}, and \ref{hypothesis:Personality} using WLS}
\label{fig:hyp_1_to_3_wls_no_fe}
\floatfoot{Notes: The figure contains estimated coefficients from model \eqref{eq:model_hyp_1_2_3} with 95\% confidence intervals using WLS instead of OLS. The dependent variable is one of two measures of non-truthful behavior, see the legend. Each entry on the vertical axis corresponds to an explanatory variable and the horizontal axis measures the variable's coefficient size. Standard errors of coefficients use robust estimation but are not clustered.}
\end{figure}
\begin{figure}[!ht]
\begin{subfigure}[b]{.8\linewidth}
\includegraphics[width=\linewidth]{plots/by_hypothesis/hyp_4_6_wls_no_fe.pdf}
\end{subfigure}
\caption{Payoff-relevant Omission of Most-preferred Program using WLS}
\label{fig:hyp_4_to_6_wls_no_fe}
\floatfoot{Notes: The figure above shows coefficients from model (\ref{eq:model_hyp_4_5_6}) using WLS instead of OLS. Coefficients are plotted with 95\% confidence intervals. The dependent variable is payoff-relevant omission of most-preferred program. Each entry on the vertical axis corresponds to an explanatory variable and the horizontal axis measures the variable's coefficient size. Standard errors of coefficients use robust estimation but are not clustered.}
\end{figure}
\subsection{Model output with fixed effects at study program level}\label{app:results_FE}
\begin{figure}[!ht]
\begin{subfigure}[b]{.9\linewidth}
\includegraphics[width=\linewidth]{plots/by_hypothesis/hyp_1_to_3_ols_fe.pdf}
\end{subfigure}
\caption{Individual characteristics and truthful behavior; testing Hypotheses \ref{hypothesis:beliefs}, \ref{hypothesis:SES}, and \ref{hypothesis:Personality}. Including study program fixed effects.}
\label{fig:hyp_1_to_3_ols_fe}
\floatfoot{Notes: The figure contains estimated coefficients from model (\ref{eq:model_hyp_1_2_3}) with 95\% confidence intervals. Compared to Figure~\ref{fig:hyp_1_to_3_ols_no_fe} the only difference is that this model is estimated with fixed effects at the study program level. The dependent variable is one of two measures of non-truthful behavior, see the legend. Each entry on the vertical axis corresponds to an explanatory variable and the horizontal axis measures the variable's coefficient size.
Standard errors of coefficients use robust estimation but are not clustered.}
\end{figure}
\begin{figure}[!ht]
\begin{subfigure}[b]{.8\linewidth}
\includegraphics[width=\linewidth]{plots/by_hypothesis/hyp_4_6_ols_fe.pdf}
\end{subfigure}
\caption{Payoff-relevant Omission of Most-preferred Program. Including study program fixed effects.}
\label{fig:model_hyp_4_5_6_fe}
\floatfoot{Notes: The figure above shows coefficients from model (\ref{eq:model_hyp_4_5_6}) where payoff-relevant omission of most-preferred program is the outcome. Compared to Figure~\ref{fig:regret_skip} the only difference is that this model is estimated with fixed effects at the study program level. Coefficients are estimated with 95\% confidence intervals using robust standard errors.}
\end{figure}
\input{tables/linreg_robustness_2021_survey_no_fe}
\input{tables/clogit_res_offset_both_surveys_no_FE}
\section{Introduction}
Measurements of quantum many-body systems induce nontrivial dynamical effects and produce exotic phases of matter.
One of the most interesting phenomena induced by measurements is the entanglement phase transition in hybrid random unitary circuits \cite{Li2018,Skinner2019,Li2019,Vasseur2019,Chan2019,Szyniszewski2019,Choi2020,Bao2020,Jian2020,Zabalo2020,Sang2021,Sang2021_v2,Nahum2021,Sharma2022,Fisher2022_rev}.
This phase transition phenomenon emerges in various hybrid circuits, including those whose time evolution is generated by a many-body Hamiltonian \cite{Fuji2020,Goto2020,Tang2020,Lunt2020,Turkeshi2021,Kells2022,Fleckenstein2022,KOH2022}.
The high entanglement of states generated by unitary time evolution is suppressed by the measurements.
Also, as typical non-equilibrium dynamical effects, the spread of entanglement and the scrambling of quantum information are suppressed.
Without time-evolution unitaries, the measurement-only quantum circuit (MoC) \cite{Lang2020,Ippoliti2021} also displays striking phenomena: a combination of multiple kinds of measurements, some of which are not commutative with each other, can induce novel phase transitions and generate non-trivial states such as a measurement-only thermal state that does not exhibit an area law of entanglement entropy \cite{Ippoliti2021}, symmetry-protected topological (SPT) states \cite{Lavasani2021,Klocke2022}, and topological order \cite{Lavasani2021_2}.
It should be remarked that these phase transitions in the MoC exhibit some universal behavior at transition points as reported in recent studies \cite{Ippoliti2021,Lavasani2021,Lavasani2021_2,Klocke2022}.
In the previous works \cite{Lavasani2021,Lavasani2021_2,Klocke2022}, sequential stabilizer projective measurements are applied to the system as a MoC and the emergence of non-trivial states is observed.
There, interestingly enough, the resultant phase diagram of the MoC is similar, and in some cases almost identical, to that of the ground state of the Hamiltonian composed of the measured stabilizers.
For example, the phase diagram of the cluster spin Hamiltonian with local $X_j$ terms \cite{Zeng2016} can be reproduced in the MoC by varying the probability ratio of projective measurements between the cluster-spin and the local $X_j$ operators \cite{Lavasani2021}.
This result implies that the coefficient ratio between competing terms in the stabilizer Hamiltonian corresponds to the probability ratio between the projective measurements of the stabilizers, which
anti-commute with each other in the MoC.
In Ref.~\cite{Klocke2022}, an interesting conjecture is mentioned, namely that the steady state in the MoC including stabilizer measurements is close to the ground state obtained by an imaginary-time evolution of the corresponding stabilizer Hamiltonian.
However, details of the above interesting conjecture have not been studied yet, and further concrete examples (both analytical and numerical ones) clarifying the correspondence are still lacking.
In this work, we shall study the following subjects to clarify the above conjecture:
\begin{enumerate}
\item Based on the qualitative conjecture of parameter ratio-probability ratio correspondence (PRC) suggested in \cite{Lavasani2021,Klocke2022}, we investigate the PRC at a qualitative level by using the imaginary-time path integral formalism and the MoC of the Gottesman-Knill stabilizer simulation \cite{Gottesman1997,Aaronson2004,Nielsen_Chuang}. Comparing the path-integral formalism and the MoC,
we strengthen the conjecture. Some simple analytical examples are also shown.
Although rigorous mathematical proof for the PRC is not given in this work, our study supports the conjecture in a substantial way.
\item To investigate the PRC, we study an interesting system of great physical significance in high-energy physics and also condensed matter physics.
That is, we focus on a (1+1)-D $Z_2$ lattice gauge-Higgs model. Recently, the Higgs phase of this lattice gauge theory (LGT) \cite{Fradkin1979,Kogut1979} was suggested to have the properties of an SPT phase \cite{Verresen2022}, and the ground state phase diagram of the gauge-Higgs Hamiltonian was also studied in \cite{Borla2021,Verresen2022}.
\item Instead of working with the Hamiltonian system of the above model, we numerically study its mixed-state phase diagram by applying the mixed-state update methods of stabilizer dynamics employed in \cite{Gullans2020,Ippoliti2021} to examine the PRC.
That is, a MoC corresponding to the gauge-Higgs Hamiltonian is constructed by using the PRC as a guiding principle.
We draw the mixed-state phase diagram of the MoC and find its clear correspondence to the LGT Hamiltonian system.
This also indicates that the MoC with suitable stabilizer measurements produces interesting gauge-theoretical states predicted as a ground state of LGTs.
We further study phase transition criticality for some typical parameter sweeps by finite-size scaling (FSS) analysis. We comment on the critical exponents obtained via the MoC.
\end{enumerate}
The rest of this paper is organized as follows.
In Sec.~II, we shall discuss the PRC conjecture at a qualitative level. We show simple concrete examples for examination of the PRC, and discuss the extension of the PRC to the mixed-state case.
In Sec.~III, we shall introduce the Hamiltonian of the (1+1)-D $Z_2$ lattice gauge-Higgs model and briefly review its ground state properties. Then, we introduce the setup of the MoC for exploring the properties of the ground state of the gauge-theory Hamiltonian in some detail.
There, the PRC plays the role of a guiding principle.
In Sec.~IV, we show the results of the numerical study of the MoC corresponding to the (1+1)-D $Z_2$ lattice gauge-Higgs model. Detailed discussions on the numerical results and study of phase transition criticality are given.
Section~V is devoted to the conclusion.
\section{Conjecture of parameter ratio and probability ratio correspondence}
In this section, we start with a random-coupling Hamiltonian, each term of which is a stabilizer.
This type of Hamiltonian is expected to have a corresponding counterpart MoC.
That is, both systems share very similar ground-state phase diagrams, where the `ground state' of the MoC means the steady states appearing after a long time evolution.
In order to examine the conjecture, we first introduce the imaginary-time evolution and its path integral formalism. Second, we explain the setup of the corresponding MoC and consider an ensemble of steady states obtained by the time evolution of the MoC.
Even though the expressions for the ensemble of steady states and the time-evolution propagator are not mathematically rigorous, their descriptions are useful for comparing the MoC and the imaginary-time evolution of the Hamiltonian system.
In fact, we obtain a useful insight for the PRC. The flowchart of this section is shown in Fig.~\ref{Fig1} (a).
\subsection{Considered Hamiltonian}
We start by considering a general binary random-coupling stabilizer Hamiltonian in one dimension, defined as follows,
\begin{eqnarray}
H_{\rm stab}=\sum^{L-1}_{j=0}\sum^{M}_{\alpha=1}J^{\alpha}_{j}K^{\alpha}_{j},
\end{eqnarray}
where $L$ is the total number of sites $\{ j\}$ and $\alpha$ labels $M$ types of stabilizers. Stabilizers of the same type commute and square to the identity, i.e., $\{ K^{\alpha}_j\}$ satisfy $[K^{\alpha}_j,K^{\alpha}_k]=0$ and $(K^{\alpha}_j)^2=1$, while stabilizers of different types anti-commute, $[K^{\alpha}_j,K^{\beta}_k]\neq 0$ and $\{K^{\alpha}_j,K^{\beta}_k\}=0$ ($\alpha\neq \beta$) \cite{stab}.
The couplings are local and binary: for $\forall j$, $J^{\alpha}_{j}=\pm J^{\alpha}$ with $J^{\alpha}>0$.
In general, the model has a rich ground state phase diagram depending on the choice of the stabilizers and exhibits clear phase transitions on varying values of parameters.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{Fig1.pdf}
\end{center}
\caption{
(a) Flowchart of comparison between the imaginary-time evolution of $H^{'}_{\rm stab}$ and the measurement-only circuit.
(b) Schematic image of time evolution of the MoC.
As an example, three different types of stabilizers $K^{\alpha}_j$ ($\alpha=1,2,3$) are considered.
During a single time step, one of the three kinds of stabilizers is chosen with probability $p^\alpha$ and its projective measurement is carried out, where $\sum^3_{\alpha=1}p^\alpha=1$ and the position (site) of the measurement is chosen randomly with equal probability.}
\label{Fig1}
\end{figure}
\subsection{General setup of measurement-only circuit}
We construct a MoC counterpart to the above stabilizer Hamiltonian $H_{\rm stab}$ by using the PRC as a guiding principle.
In the MoC, we choose a single stabilizer among the different types of $K^\alpha_{j_{0}}$ with a probability $p^{\alpha}$ and choose a target site $j_0$ with equal probability $1/L$ at each time step.
The probabilities for the choice of the stabilizer type satisfy $\sum_{\alpha}p^\alpha=1$.
The setup is the same as that employed in the previous works \cite{Lavasani2021,Klocke2022}.
Then, we perform the projective measurement corresponding to the stabilizer $K^\alpha_{j_{0}}$.
We expect that after a large number of time steps, the state reaches a steady state in most cases. If $p^{\alpha'}$ with a particular $\alpha'$ is dominant, the steady state is stabilized by $K^{\alpha'}_j$, corresponding to a stabilizer state.
A schematic example of the $M=3$ case of the MoC (three different types of stabilizer projective measurements) is shown in Fig.~\ref{Fig1} (b).
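The sampling part of such a MoC is simple to state in code. The following Python sketch illustrates the setup above; it only draws the measurement pattern, while the projections themselves would be applied to the state by a stabilizer-tableau update (not shown).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_measurement_pattern(L, probs, t_steps):
    # One unraveling: at each time step pick a stabilizer type alpha with
    # probability probs[alpha] and a site j uniformly at random. A stabilizer
    # simulation would consume this list, applying P = (1 + K^alpha_j)/2.
    types = rng.choice(len(probs), size=t_steps, p=probs)
    sites = rng.integers(0, L, size=t_steps)
    return list(zip(types.tolist(), sites.tolist()))

# M = 3 example as in Fig. 1(b): p^1 + p^2 + p^3 = 1.
pattern = sample_measurement_pattern(L=16, probs=[0.2, 0.3, 0.5], t_steps=200)
\end{verbatim}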
\subsection{Parameter fixing of $H_{\rm stab}$ and simplified MoC}
In this work, we simulate MoCs (numerically) by employing simplified stabilizer circuits \cite{Gottesman1997,Aaronson2004}, in which the information of the overall sign of the observed value of the stabilizers by projective measurements is not stored as in many other previous studies \cite{Nahum2017,Lunt2020,Sang2021,Turkeshi2021,Lavasani2021,Klocke2022}.
In the practical simulation, we fix the measured value to $+1$ for all stabilizers in the MoC. In other words, this fixing means that the projective measurement of a stabilizer, denoted by $P^{\alpha}_j$, is fixed as $P^{\alpha}_j=\frac{1+K^{\alpha}_j}{2}$ at each time step. Corresponding to this setup of the MoC, the following Hamiltonian, instead of $H_{\rm stab}$, is to be considered in the subsequent discussion,
\begin{eqnarray}
H'_{\rm stab}=-\sum^{L-1}_{j=0}\sum^{M}_{\alpha=1}J^{\alpha}K^{\alpha}_{j},
\label{Hstab2}
\end{eqnarray}
where the binary random couplings have been set as $J^{\alpha}_j\to -J^{\alpha}$.
Previous studies \cite{Lavasani2021,Klocke2022} showed that the ground state phase diagram of $H_{\rm stab}$ or $H'_{\rm stab}$ is remarkably close to that of the MoC, which is determined by an ensemble average over the measurement patterns of the MoC. This fact implies that the ratio of parameters $\frac{J^\alpha}{J^{\beta}}$ is related to the ratio of probabilities $\frac{p^{\alpha}}{p^{\beta}}$, that is, $\frac{J^\alpha}{J^{\beta}}\longleftrightarrow\frac{p^{\alpha}}{p^{\beta}}$.
This relation is nothing but the explicit form of ``parameter ratio-probability ratio correspondence''.
In what follows, we study the conjecture of the PRC at a qualitative level by employing the imaginary-time path integral and by focusing on the averaged states in the MoC.
We further examine the PRC for small-size systems as concrete examples.
\subsection{Imaginary-time evolution}
For the stabilizer Hamiltonian $H'_{\rm stab}$, the ground state can be generated by imaginary-time evolution, which is used in various numerical methods, such as the path-integral quantum Monte Carlo method \cite{Avella2013}.
The imaginary-time evolution starting with a state $|\psi(0)\rangle$ generates a final state as
\begin{eqnarray}
|\psi(\tau)\rangle =e^{-\tau H'_{\rm stab}}|\psi(0)\rangle,
\end{eqnarray}
where $\tau$ is the imaginary-time interval (regarded as an inverse temperature). For sufficiently large $\tau$, we assume the final state $|\psi(\tau)\rangle$ reaches the ground state of $H'_{\rm stab}$.
We split the interval $\tau$ into $N$ segments ($N\gg 1$) and insert identities composed of a complete set of basis states,
\begin{eqnarray}
|\psi(\tau)\rangle &=&\sum_{\{\ell\}}
|\ell_{N}\rangle \langle \ell_{N}|e^{-\delta \tau H'_{\rm stab}}|\ell_{N-1}\rangle\nonumber\\
&&\cdots \langle \ell_{1}|e^{-\delta \tau H'_{\rm stab}}|\ell_{0}\rangle \langle \ell_{0}|\psi(0)\rangle\nonumber\\
&=&\sum_{\ell_{N}}|\ell_{N}\rangle \biggl[\sum_{{\bf \{\ell\}}-\ell_N}\prod^{N-1}_{j=0}G_{j+1,j}\langle \ell_{0}|\psi(0)\rangle\biggr],\\
G_{i,j}&\equiv& \langle \ell_{i}|e^{-\delta\tau H'_{\rm stab}}|\ell_j\rangle.
\end{eqnarray}
where $\delta \tau=\tau/N$, $\sum_{\ell}|\ell\rangle\langle\ell|=1$, i.e., $\{|\ell\rangle\}$ is a complete basis set, we have employed the Suzuki-Trotter decomposition \cite{Suzuki1976}, and $G_{j+1,j}$ is the propagator for a small discrete time step $\delta \tau$.
The above is a discrete imaginary-time path integral, and
the imaginary-time dynamics is governed by the propagator $G_{j+1,j}$.
\subsection{Time evolution of MoC and ensemble state}
As the next step, we turn to the MoC starting with a state $|\psi(0)\rangle$, where a sufficiently large number of discrete time steps, denoted by $t_N$, are performed. In the MoC, a measurement pattern of the time evolution is selected (called an unraveling; this is a single stochastic process).
Then, we assume that the final state reaches a steady state. This state can be written by \cite{Gullans2020}
\begin{eqnarray}
|\psi(t_N)_{{\vec \alpha},{\vec j}}\rangle &=& C_{{\vec \alpha},{\vec j}} Q_{{\vec \alpha},{\vec j}} |\psi(0)\rangle,\\
Q_{{\vec \alpha},{\vec j}}&=&P^{\alpha_{N}}_{j_N}P^{\alpha_{N-1}}_{j_{N-1}}P^{\alpha_{N-2}}_{j_{N-2}}\cdots P^{\alpha_{1}}_{j_1}, \\
P^{\alpha_{m}}_{j_m}&=&\frac{1}{2}(1+ K^{\alpha_{m}}_{j_m}).
\end{eqnarray}
Here the single measurement pattern is represented by the labels ${\vec \alpha}$ and ${\vec j}$, where ${\vec \alpha}=(\alpha_1, \alpha_2, \cdots, \alpha_{t_N})$, $\alpha_m (=1,\cdots, M)$ represents the type of the stabilizer at the $m$-th time step, chosen with probability $p^{\alpha_m}$, and ${\vec j}=(j_1, j_2, \cdots, j_{t_N})$, $j_m(=0,\cdots, L-1)$ represents the position of the projective measurement performed at the $m$-th time step. $P^{\alpha_m}_{j_m}$ is the projective measurement of type $\alpha_m$ at the $m$-th time step.
$C_{{\vec \alpha},{\vec j}}$ is a normalization constant of the state, which depends on the single measurement pattern $({\vec \alpha},{\vec j})$.
As in the imaginary-time evolution in the above, we insert many identities composed of a complete set of basis between neighboring projective operators,
\begin{eqnarray}
|\psi(t_N)_{{\vec \alpha},{\vec j}}\rangle &\propto&\sum_{{\bf \ell}}
|\ell_{N}\rangle \langle \ell_{N}|P^{\alpha_{N}}_{j_N}|\ell_{N-1}\rangle\nonumber\\
&&\times
\langle \ell_{N-1}|P^{\alpha_{N-1}}_{j_{N-1}}|\ell_{N-2}\rangle
\cdots
\langle \ell_{1}|P^{\alpha_{1}}_{j_{1}}|\ell_{0}\rangle \langle \ell_{0}|\psi(0)\rangle\nonumber\\
&=&\sum_{\ell_{N}}|\ell_{N}\rangle \biggl[\sum_{{\bf \{\ell\}}-\ell_{N}}\prod^{N-1}_{m=0}I_{m+1,m}\langle \ell_{0}|\psi(0)\rangle\biggr],\\
I_{m,m-1}&\equiv& \langle \ell_{m}|P^{\alpha_{m}}_{j_{m}}|\ell_{m-1}\rangle,
\end{eqnarray}
where we used $\sum_{\ell}|\ell\rangle\langle\ell|=1$ and $\{|\ell\rangle\}$ is the same basis set as used in the imaginary-time evolution above.
In what follows, we shall ignore the normalization factor of the sequence of projective measurements.
To compare the above state in the MoC with the ground state created by the imaginary-time evolution $|\psi(\tau)\rangle$, we further proceed with the above consideration of the MoC.
We take the ensemble average over many single measurement patterns, each creating a steady state $|\psi(t_N)\rangle$.
Here, we introduce a sample label $s$.
Each sample of the measurement pattern is labeled as
$({\vec \alpha},{\vec j}) \to ({\vec \alpha}^s,{\vec j}^s)$, where $s=1,2,\cdots, N_p$ and $N_p$ is the total number of the samples.
Then, an averaged steady state is described by
\begin{eqnarray}
&&\overline{|\psi(t_N)\rangle} = \frac{1}{N_p}\sum_{s}C_{{\vec \alpha}^s,{\vec j}^s} Q_{{\vec \alpha}^s,{\vec j}^s} |\psi(0)\rangle\nonumber\\
&&\propto \frac{1}{N_p}\sum_{s} Q_{{\vec \alpha}^s,{\vec j}^s} |\psi(0)\rangle\nonumber\\
&&=\frac{1}{N_p}\sum_{s} \sum_{\ell_{N}}|\ell_{N}\rangle \biggl[\sum_{{\bf \{\ell\}}-\ell_{N}}\prod^{N-1}_{m=0}I^s_{m+1,m}\langle \ell_{0}|\psi(0)\rangle\biggr]\nonumber\\
&&\stackrel{N_p\to\infty}{=}\sum_{\ell_{N}}|\ell_{N}\rangle \biggl[\sum_{{\bf \{\ell\}}- \ell_{N}}\prod^{N-1}_{m=0}\overline{I_{m+1,m}}\langle \ell_{0}|\psi(0)\rangle\biggr]\nonumber\\
&&\equiv \overline{Q(t_N)} |\psi(0)\rangle, \label{Qave}
\end{eqnarray}
where
\begin{eqnarray}
\overline{I_{m+1,m}}\equiv \langle \ell_{m+1}|\biggl(\sum_{\alpha,j}\frac{p^{\alpha}}{L}P^{\alpha}_{j}\biggl)|\ell_{m}\rangle.
\end{eqnarray}
From this form, the averaged state $\overline{|\psi(t_N)\rangle}$ is approximately determined by an ensemble averaged propagator $\overline{I_{m+1,m}}$.
Note that we have ignored the factor $C_{{\vec \alpha}^s,{\vec j}^s}$ here.
\subsection{Qualitative relationship of propagators and concrete relation between parameter ratio and probability ratio}
Now we compare the two propagators $G_{j+1,j}$ and $\overline{I_{m+1,m}}$ to obtain a relation between them.
It is expected that if the structure of these propagators is close, the obtained ground state $|\psi(\tau)\rangle$ must be close to the steady state $\overline{|\psi(t_N)\rangle}$.
That is,
\begin{eqnarray}
G_{j+1,j}\longleftrightarrow\overline{I_{m+1,m}}
\Longleftrightarrow |\psi(\tau)\rangle \longleftrightarrow \overline{|\psi(t_N)\rangle},
\end{eqnarray}
Here, $\longleftrightarrow$ means ``close structure''.
From the above observation, if $|\psi(\tau)\rangle \longleftrightarrow \overline{|\psi(t_N)\rangle}$ is correct, we can conclude
$G_{j+1,j}\longleftrightarrow\overline{I_{m+1,m}}$.
Then, by comparing the internal structure of the matrices $G_{j+1,j}$ with that of $\overline{I_{m+1,m}}$, we can obtain important insight and relationship between model parameters of $H'_{\rm stab} $ [$\{ J^{\alpha}\}$] and emergent probability $\{ p^{\alpha}\}$ and types of stabilizers in the corresponding MoC.
In general, it is difficult to find strict and rigorous relations between the model parameters of $H'_{\rm stab}$ [$\{ J^{\alpha}\}$] and the probabilities $\{ p^{\alpha}\}$ for a many-body system, due to the large Hilbert-space dimension and the correspondingly large propagator matrices.
However, we can find a qualitative relation if we consider a simple Hamiltonian and its corresponding MoC.
We study two concrete examples given as follows:
\uline{Case (I)}:
Single spin Hamiltonian,
$$
H^{'}_{{\rm stab} 1}=-J_1Z-J_2X,
$$
where $Z$ and $X$ are Pauli operators of single $1/2$-spin, $J_{1(2)}>0$.
Note that $Z$ and $X$ are different types of stabilizers, anti-commuting with each other, corresponding to the $M=2$, $L=1$ case in Eq.~(\ref{Hstab2}).
For this Hamiltonian, the propagator of the imaginary-time path integral is $G_{j+1,j}=\langle \ell_{j+1}|e^{-\delta\tau H^{'}_{\rm stab 1}}|\ell_j\rangle$, where the basis set is $\{|\ell_j\rangle\}=\{|\uparrow\rangle, \:\:|\downarrow\rangle \}$ with $Z|\uparrow\rangle=|\uparrow\rangle$ and $Z|\downarrow\rangle=-|\downarrow\rangle$.
The matrix form of $G_{j+1,j}$ is obtained by a direct calculation as
\begin{eqnarray}
(G_{j+1,j})\approx
\begin{bmatrix}
e^{\delta\tau J_1}\cosh(\delta \tau J_2) & e^{-\delta\tau J_1}\sinh(\delta \tau J_2) \\
e^{\delta\tau J_1}\sinh(\delta \tau J_2)
& e^{-\delta\tau J_1}\cosh(\delta \tau J_2)
\end{bmatrix},
\end{eqnarray}
where we have ignored the contribution from the commutators of $X$ and $Z$ (due to $\delta \tau \ll 1$).
Let us turn to the propagator of the MoC.
The MoC corresponding to $H^{'}_{{\rm stab} 1}$ includes single-site projective measurements of $Z$ and $X$ with probabilities $p^A$ and $p^B$, respectively, where $p^A+p^B=1$.
Then, the matrix form of the averaged propagator $(\overline{I_{m+1,m}})$ is given by
\begin{eqnarray}
(\overline{I_{m+1,m}})=
\begin{bmatrix}
\frac{1+p^A}{2} & \frac{p^B}{2} \\
\frac{p^B}{2}
& \frac{1-p^A}{2}
\end{bmatrix}.
\end{eqnarray}
We compare the components of the two matrices $(G_{j+1,j})$ and $(\overline{I_{m+1,m}})$.
The following four relations are then obtained (ignoring an overall factor),
\begin{eqnarray}
\mbox{First column:}\:&& e^{\delta\tau J_1}\cosh(\delta \tau J_2) \longleftrightarrow \frac{1+p^A}{2}, \nonumber\\
&&e^{\delta\tau J_1}\sinh(\delta \tau J_2)\longleftrightarrow \frac{p^B}{2}.\\
\mbox{Second column:}\:&& e^{-\delta\tau J_1}\sinh(\delta \tau J_2) \longleftrightarrow \frac{p^B}{2}, \nonumber\\
&&e^{-\delta\tau J_1}\cosh(\delta \tau J_2)
\longleftrightarrow \frac{1-p^A}{2}.
\end{eqnarray}
At first glance, we note that an increase (decrease) of the ratio $J_1/J_2$ corresponds to an increase (decrease) of $p^A/p^B$.
More precisely, for $\delta \tau \ll 1$ we expand each component up to $\mathcal{O}(\delta \tau)$ and reach the following relations,
\begin{eqnarray}
\mbox{First column:}\:
1+\delta\tau J_1 &\longleftrightarrow& \frac{1+p^A}{2}, \nonumber\\
\delta\tau J_2 &\longleftrightarrow& \frac{p^B}{2}.\\
\mbox{Second column:}\:
\delta \tau J_2 &\longleftrightarrow& \frac{p^B}{2}, \nonumber\\
1-\delta\tau J_1 &\longleftrightarrow& \frac{1-p^A}{2}.
\end{eqnarray}
By requiring $G_{j+1,j}=C_0(\overline{I_{m+1,m}})$, comparing the $(1,1)$-component with the $(2,2)$-component leads to $\delta\tau J_1=\frac{C_0}{2}p^A$, and comparing the $(1,2)$-component with the $(2,1)$-component leads to
$\delta\tau J_2=\frac{C_0}{2}p^B$.
Thus, we obtain
\begin{eqnarray}
\frac{J_1}{J_2} \longleftrightarrow \frac{p^A}{p^B}.
\end{eqnarray}
This is a concrete form of the PRC between the imaginary-time path integral formalism of $H'_{\rm stab1}$ and its corresponding counterpart MoC.
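For this single-spin example the PRC can also be checked numerically: when $J_1/J_2=p^A/p^B$, the ground state of $H^{'}_{{\rm stab} 1}$ and the dominant eigenvector of the averaged MoC propagator point in the same direction in the $x$-$z$ plane, so their overlap is exactly one. A minimal check (an illustration, not part of the main simulations) reads:
\begin{verbatim}
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

pA, pB = 0.7, 0.3
J1, J2 = 1.4, 0.6                       # chosen so that J1/J2 = pA/pB

# Ground state of H' = -J1 Z - J2 X.
H = -J1 * Z - J2 * X
gs = np.linalg.eigh(H)[1][:, 0]

# Dominant eigenvector of the averaged MoC propagator pA (1+Z)/2 + pB (1+X)/2.
I_avg = pA * (np.eye(2) + Z) / 2 + pB * (np.eye(2) + X) / 2
steady = np.linalg.eigh(I_avg)[1][:, -1]

print(abs(gs @ steady))                 # -> 1.0 when J1/J2 = pA/pB
\end{verbatim}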
\uline{Case (II)}:
As a second example, we consider a three-site cluster spin model,
\begin{eqnarray}
H^{'}_{\rm stab 2}=\sum^{2}_{j=0}[-J_1Z_{j-1}X_jZ_{j+1}-J_2X_j],
\end{eqnarray}
where periodic boundary conditions are imposed and $J_{1(2)}>0$.
The operators $ZXZ$ and $X$ are different types of stabilizers, which commute or anti-commute with each other depending on their locations, and the model corresponds to the $M=2$, $L=3$ case in Eq.~(\ref{Hstab2}).
We first consider the matrix propagator of the imaginary-time evolution, $G_{j+1,j}=\langle \ell|e^{-\delta\tau H^{'}_{\rm stab 2}}|\ell'\rangle$. Here, we employ eigenstates of $\{ X_j\}$ as a complete basis set, and therefore,
\begin{eqnarray}
\{|\ell\rangle\}&=&\{|+++\rangle, \:\:|-++\rangle, \:\:
|+-+\rangle, \:\:|++-\rangle,\:\:\nonumber\\
&&|--+\rangle, \:\:|-+-\rangle,\:\:
|+--\rangle, \:\:|---\rangle
\}.
\end{eqnarray}
$G_{j+1,j}$ is an $8\times 8$ matrix. Its components are approximately obtained by ignoring the contributions from the stabilizers' commutators in the Suzuki-Trotter decomposition. The full $8\times 8$ matrix is explicitly shown in Appendix A.
We turn to the propagator of the MoC.
By the PRC guiding principle,
the MoC corresponding to $H^{'}_{{\rm stab} 2}$ includes a single projective measurement of either $ZXZ$ or $X$, with probabilities $p^A$ and $p^B$, respectively, at each time step. Here, $p^A+p^B=1$ and the measurement site is chosen randomly with equal probability $1/L=1/3$.
Then, the $8\times 8$ full matrix of the averaged propagator $(\overline{I_{m+1,m}})$ is also directly calculable.
The full form is also shown in Appendix A.
Now, we employ the same strategy as in Case (I). That is, we compare the components of the two matrices $(G_{j+1,j})$ and $(\overline{I_{m+1,m}})$.
Fortunately, we find only five relations given by (the detailed calculation is shown in Appendix A)
\begin{eqnarray}
1+3\delta \tau J_2 &\longleftrightarrow& p^A/2+p^B, \nonumber\\
1+\delta \tau J_2 &\longleftrightarrow& p^A/2+2p^B/3, \nonumber\\
1-\delta \tau J_2 &\longleftrightarrow& p^A/2+p^B/3, \nonumber\\
1-3\delta \tau J_2 &\longleftrightarrow& p^A/2,\nonumber\\
\delta\tau J_1&\longleftrightarrow& p^A/6.
\end{eqnarray}
From the above relations, if we require $G_{j+1,j}=C_0(\overline{I_{m+1,m}})$, then $\delta\tau J_1=\frac{C_0}{6}p^A$, and the relations $1+3\delta \tau J_2 = C_0(p^A/2+p^B)$ and $1+\delta \tau J_2 = C_0(p^A/2+2p^B/3)$ lead to $\delta \tau J_2=\frac{C_0}{6}p^B$.
Hence, we have
\begin{eqnarray}
\frac{J_1}{J_2} \longleftrightarrow \frac{p^A}{p^B}.
\end{eqnarray}
This relation is the same as that obtained in Case (I).
This is the genuine PRC relation between the imaginary-time path integral formalism and the MoC.
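The averaged MoC propagator of Case (II) can likewise be generated numerically and compared with the relations above. The following sketch builds $(\overline{I_{m+1,m}})$ for $L=3$ and prints its elements in the $X$-eigenbasis used in the text; the probabilities $p^A=0.6$ and $p^B=0.4$ are arbitrary illustrative values.
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
Hd = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)  # Z-basis -> X-basis

kron = lambda ops: reduce(np.kron, ops)
pA, pB = 0.6, 0.4                       # illustrative probabilities, pA + pB = 1

# K^A_j = Z_{j-1} X_j Z_{j+1} (periodic) and K^B_j = X_j for L = 3 sites.
KA = [kron([X, Z, Z]), kron([Z, X, Z]), kron([Z, Z, X])]
KB = [kron([X, I2, I2]), kron([I2, X, I2]), kron([I2, I2, X])]

I_avg = sum(pA * (np.eye(8) + a) / 2 + pB * (np.eye(8) + b) / 2
            for a, b in zip(KA, KB)) / 3

U = kron([Hd, Hd, Hd])                  # columns are the X-eigenstates
M = U.T @ I_avg @ U
print(np.round(np.diag(M), 3))          # pA/2 + pB * (number of + spins)/3
print(np.round(M[0], 3))                # off-diagonal entries are +/- pA/6
\end{verbatim}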
\subsection{PRC for mixed-state dynamics}
We have strengthened the PRC conjecture in the previous subsections, following the previous works for pure-state evolution, which numerically imply the PRC in some parts of the phase diagrams of certain models \cite{Lavasani2021,Klocke2022}.
Taking the discussion one step further, we extend the above discussion of the pure state to the mixed state, in particular, starting with an infinite-temperature mixed state.
Under the imaginary-time evolution, the density matrix dynamics is given by
\begin{eqnarray}
\rho(\tau)=e^{-\tau H} \rho(0)e^{\tau H},
\end{eqnarray}
where $H$ is a Hamiltonian and a suitable normalization of $\rho(\tau)$ is assumed.
Here, we set $\rho(0)$ to an infinite-temperature state.
For sufficiently large $\tau$, the state $\rho(\tau)$ results in a ground state, which is a multiplet if the ground state of $H$ is degenerate.
A similar observation applies to the MoC for each single measurement pattern.
We consider the density matrix averaged over samples of measurement patterns.
If we employ the averaged time-evolution operator of the MoC $\overline{Q(t_N)}$ in Eq.(\ref{Qave}), the averaged time-evolved density matrix $\overline{\rho(t_N)}$ is approximately given by
\begin{eqnarray}
\overline{\rho(t_N)}\sim \overline{Q(t_N)} \rho(0)\overline{Q(t_N)}^{\dagger}.
\end{eqnarray}
One might expect that the PRC, similar to the pure-state system, holds for the above mixed-state system.
However, for mixed states in quantum circuits, $\overline{\rho(t_N)}$ is \textit{not} commonly used for the calculation of physical quantities such as entropy, entanglement entropy, etc.~\cite{Sharma2022}.
More precisely, in the MoC physical quantities are obtained for each single measurement path, and the results so obtained are averaged over various measurement patterns.
Then, rigorously speaking, it is a nontrivial question whether the PRC holds for the MoC of mixed states from the viewpoint of quantum-mechanical coherence. Therefore, it is very important to examine whether the PRC holds for mixed states in the MoC.
In this work, we address this problem by employing numerical methods. We shall study the MoC for a lattice gauge model as a concrete example, which has a very rich and interesting phase diagram.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{Fig2.pdf}
\end{center}
\caption{Lattice-link setting for the extended cluster model. Open boundary conditions are imposed.
The total number of matter sites is $L$ and the total number of gauge links is $L+1$.
}
\label{Fig2}
\end{figure}
\section{(1+1)-D $Z_2$ lattice gauge-Higgs model and its measurement-only circuit counterpart}
In the previous section, we discussed the PRC between the imaginary-time formalism of the Hamiltonian and the ensemble average of the MoC. Certain simple examples were investigated there.
We shall further examine and strengthen this guiding principle by investigating another model of great physical interest. In this section, we study a lattice gauge model called ``(1+1)-D $Z_2$ lattice gauge-Higgs model'' with open boundary conditions.
Its global ground state phase diagram was recently studied in detail \cite{Borla2021,Verresen2022}. The model includes interesting phases, and its study reveals an important relationship between gauge theory and topological order in condensed matter.
We address the following issue: based on the PRC guiding principle, whether or not a suitably chosen MoC generates steady states whose phase diagram is similar or identical to the ground state phase diagram of the target gauge-theory model.
We shall present a suitable setup of the MoC and clarify this issue.
\subsection{Model Hamiltonian proposed in Refs. \cite{Borla2021,Verresen2022}}
We introduce a gauge-lattice as shown in Fig.~\ref{Fig2}, where spin-$1/2$ degrees of freedom reside both on matter sites and gauge links in one spatial dimension.
Therefore, the total degrees of freedom are $L_t\equiv 2L+1$ spins.
We focus on the following cluster spin Hamiltonian \cite{Borla2021,Verresen2022},
\begin{eqnarray}
H_{Z_2}&=&\sum^{L-1}_{j=0}\biggl[-K_1\sigma^{x}_{j-1/2}X_{j}\sigma^{x}_{j+1/2}-K_2\sigma^z_{j+1/2}\biggl]\nonumber\\
&+&\sum^{L-1}_{j=0}\biggl[-J_1Z_{j}\sigma^{z}_{j+1/2}Z_{j+1}
-J_2X_{j}\biggl],
\label{HZ2}
\end{eqnarray}
where $X_j$ and $Z_j$ are Pauli operators defined on matter sites and $\sigma^x_{j\pm\frac{1}{2}}$ and $\sigma^z_{j\pm\frac{1}{2}}$ are also Pauli operators on gauge links.
We consider open boundary conditions throughout this work.
Note that the boundaries of the system are the gauge links as shown in Fig.~\ref{Fig2}.
The model has two important symmetries: (I) parity symmetry, $P\equiv \prod^{L-1}_{j=0}X_{j}$, and (II) magnetic symmetry, $W\equiv \prod^{L-1}_{j'=-1}\sigma^z_{j'+\frac{1}{2}}$, resulting in a $Z_2\times Z_2$ symmetry, which has been referred to as the key symmetry for SPT phases \cite{Son2011,Son2012,Bahri2015,Verrsen2017}.
The model $H_{Z_2}$ in (\ref{HZ2}) reduces to the well-known (1+1)-D $Z_2$ lattice gauge-Higgs model for $K_1/K_2\to \infty$, and it also has the SPT properties of condensed-matter physics in a certain parameter region.
More precisely, from the gauge-theoretical point of view, the $K_1$-term acts as an energetic penalty for breaking the Gauss' law constraint.
On the other hand, the $K_2$-term hinders fluctuations of the gauge field.
The $J_1$-term is a cluster term, interpreted as a matter-($Z_2$)gauge coupling; in the SPT literature it is also a topological stabilizer protected by $Z_2\times Z_2$ symmetry. The $J_2$-term acts as a chemical potential for the matter and is also regarded as a `transverse field' competing with the cluster term.
The above four terms are different types of stabilizers from the MoC point of view.
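For small systems, $H_{Z_2}$ can be diagonalized exactly, which makes the ground-state degeneracies discussed below easy to check. The following is a minimal exact-diagonalization sketch for $L=2$ matter sites ($L_t=5$ spins); in the sketch the $J_1$ cluster term is placed on the $L-1$ interior bonds, consistent with the stabilizer count $2L-1$ quoted below, and the couplings are illustrative only.
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def pauli(op, pos, n):
    # `op` acting on spin `pos` of the n-spin chain (link, site, link, ...).
    return reduce(np.kron, [op if k == pos else I2 for k in range(n)])

def H_Z2(K1, K2, J1, J2, L):
    n = 2 * L + 1                        # L matter sites + (L + 1) gauge links
    H = np.zeros((2 ** n, 2 ** n))
    for j in range(L):
        m = 2 * j + 1                    # matter site j sits at chain position m
        H -= K1 * pauli(X, m - 1, n) @ pauli(X, m, n) @ pauli(X, m + 1, n)
        H -= K2 * pauli(Z, m + 1, n)     # sigma^z on link j + 1/2
        H -= J2 * pauli(X, m, n)
    for j in range(L - 1):               # cluster term on the L - 1 interior bonds
        m = 2 * j + 1
        H -= J1 * pauli(Z, m, n) @ pauli(Z, m + 1, n) @ pauli(Z, m + 2, n)
    return H

evals = np.linalg.eigvalsh(H_Z2(K1=10., K2=0.1, J1=5., J2=1., L=2))
print(evals[:6])   # nearly four-fold degenerate low-energy multiplet (Higgs=SPT)
\end{verbatim}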
The ground state of $H_{Z_2}$ was studied in detail and its phase diagram was obtained \cite{Borla2021,Verresen2022}.
The model has four ground state phases: (1) Higgs=SPT phase,
(2) deconfinement phase, (3) ferromagnetic phase, (4) simple product phase.
For $K_1/K_2\gg 1$, the $K_1$-term is dominant.
This condition gives Gauss' law constraint $\sigma^{x}_{j-1/2}X_{j}\sigma^{x}_{j+1/2}=1$ for the Hilbert space \cite{LGT_simu}.
Under this condition, in the parameter region $J_1 > J_2$, the $J_1$ cluster term is dominant, leading to the emergence of the SPT phase protected by $Z_2\times Z_2$ symmetry.
This SPT phase is also interpreted as the Higgs phase, in which the matter parity symmetry, $P$, is spontaneously broken, as was recently suggested in \cite{Verresen2022}.
While for $J_1 < J_2$, the $J_2$-term is dominant.
The emergent phase is a topological 1D counterpart of the 2D toric code and is a deconfinement phase in the LGT, where a two-fold degeneracy appears in the ground state due to the long-range order $\langle\sigma^{x}_{j-1/2}\sigma^{x}_{j'+1/2}\rangle \neq 0$ via Gauss' law and the `condensation' $\langle X_j\rangle \neq 0$.
A domain wall, the low-energy excitation, moves freely there.
Interestingly enough, this phase can be regarded as a spontaneously broken phase of the $W$-symmetry.
Furthermore, the $K_2 > K_1$ regime is also interesting; there, Gauss' law is weakened and other phases emerge.
For $J_1 > J_2$, a ferromagnetic phase appears with spontaneously broken $Z_2$ symmetry, since $\sigma^z_{j+1/2}$ is frozen and the model reduces to a transverse-field Ising model~\cite{Borla2021}.
While for $J_1< J_2$, the $J_2$-term is dominant and, as a result, a trivial product state emerges, stabilized by $X_j$ and $\sigma^z_{j+1/2}$.
With open boundary conditions, the above four ground states exhibit different characters \cite{Borla2021,Verresen2022}, in particular, the degeneracy of these ground states is different.
In the Higgs=SPT phase, the ground state is four-fold degenerate due to the presence of a zero-energy edge mode at each edge.
This is directly observed by counting the number of stabilizers stabilizing the state.
For $K_1\to \infty$ and $J_1\to \infty$, the total number of stabilizers in the $K_1$ and $J_1$ terms is $2L-1$.
This leaves two redundant degrees of freedom, $L_t-(2L-1)=2$, inducing a four-fold ($=2^{L_t-(2L-1)}$) degeneracy.
In the topological phase for $J_2/J_1 \gg 1$, the ground state is two-fold degenerate since the total number of matter sites is smaller than that of the gauge links \cite{deconfinement_stab}.
For the ferromagnetic phase, the ground state is doubly degenerate since cat states occur.
For the product phase, the ground state is unique since the state is stabilized by all $K_2$ and $J_2$ terms.
The above ground state degeneracy is one of the properties of the Hamiltonian.
We shall show that states, which can be regarded as counterparts of the above four ground states, are produced by the MoC as mixed states by employing the mixed-state protocol.
It is expected that some of the four phases can be characterized by bulk non-local order parameters \cite{Verresen2022}.
For the Higgs=SPT phase, the bulk order can be characterized by the following decorated domain wall operator (DWO),
\begin{eqnarray}
G(i_0,j_0)=Z_{i_0}\biggl(\prod^{j_0-1}_{j=i_0}\sigma^z_{j+\frac{1}{2}}\biggr)Z_{j_0},
\end{eqnarray}
where $i_0$ and $j_0$ are two separated matter sites.
The Higgs=SPT phase has a finite expectation value of $G(i_0,j_0)$.
For the ferromagnetic phase, the bulk order can be characterized by spin-spin correlation operator,
\begin{eqnarray}
S(i_0,j_0)=Z_{i_0}Z_{j_0},
\end{eqnarray}
where $i_0$ and $j_0$ are two separated matter sites.
The ferromagnetic phase has a finite expectation value of $S(i_0,j_0)$.
We shall apply modified versions of these non-local order parameters in the numerics of the MoC, as shown later on.
The global ground state phase diagrams were analytically studied and obtained in \cite{Borla2021} (see Fig.~8 in \cite{Borla2021}, where the phase diagrams of a model related to $H_{Z_2}$ are shown).
There, the four phases are displayed in the $(J_1/J_2)$-$(K_1/K_2)$ plane, and the two phase boundaries are given by $J_1/J_2=1$ and $K_1/K_2=1$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{Fig3.pdf}
\end{center}
\caption{Schematic figure of the measurement-only circuit corresponding to the Hamiltonian $H_{Z_2}$. The black and red lines represent the matter sites and gauge links, respectively.
The blue and magenta shaded layers represent Gauss' law and matter layers, respectively.
A single time step includes the two layers.
}
\label{Fig3}
\end{figure}
\subsection{Circuit setup corresponding to $H_{Z_2}$}
We set up a MoC that is expected to produce qualitatively the same phase diagram as that of the ground state reviewed in the previous subsection.
To this end, we introduce a two-layered projective measurement in a single time step, as shown in Fig.~\ref{Fig3}. The two layers are a matter layer and a Gauss'-law layer.
We consider a system of $L$ matter sites with open boundary conditions, where $L+1$ gauge links exist as in Fig.~\ref{Fig2}.
The total number of degrees of freedom is therefore $L_t$.
Here, we introduce four different types of stabilizers and corresponding projective measurements,
which are defined as
\begin{eqnarray}
{\hat M}^{1a}_{j}&=&\sigma^{x}_{j-1/2}X_{j}\sigma^{x}_{j+1/2},\:\:\:
{\hat M}^{1b}_{j'}=\sigma^z_{j'+1/2},\\
{\hat M}^{2a}_{j}&=&Z_j\sigma^{z}_{j+1/2}Z_{j+1},\:\:\:
{\hat M}^{2b}_{j}=X_{j},
\end{eqnarray}
where $j=0,1,\cdots, L-1$ and $j'=-1,0,\cdots, L-1$.
The above four kinds of operators are included in $H_{Z_2}$ and they satisfy properties of stabilizer, i.e.,
$[\hat{M}^{k \alpha}_{i},\hat{M}^{k \alpha}_{j}]=0$ and $(\hat{M}^{k \alpha}_j)^2=1$ for $k=1,2$ and $\alpha=a,b$, and note that $\hat{M}^{k a}_{i}$ and $\hat{M}^{k b}_{j}$ anti-commute with each other for a pair of $(i,j)$. That is, projective measurements of $\hat{M}^{k a}_{i}$ and $\hat{M}^{k b}_{j}$ for $k=1,2$ are competitive with each other.
In the MoC, in each matter layer we measure the stabilizers ${\hat M}^{2a}_{j}$ and ${\hat M}^{2b}_{j}$ with probabilities $p^A$ and $p^B$, respectively, with $p^A+p^B=1$.
The measured site $j$ is chosen randomly with equal probability, similar to the case in Sec.II.B.
In each Gauss'-law layer, we measure the stabilizers ${\hat M}^{1a}_{j}$ and ${\hat M}^{1b}_{j'}$ with probabilities $p^C$ and $p^D$, respectively, with $p^C+p^D=1$.
The measured site $j$ is chosen again randomly with equal probability.
For this MoC, we take an infinite-temperature state (a mixed state) as the initial state, and the mixed state is then evolved for a large number of discrete time steps.
The reason for employing the mixed-state protocol is that the initial-state dependence present in pure-state updates is avoided and the degenerate character (ground-state multiplet) can be correctly captured.
We consider a long time evolution with the total number of steps $t_N=4(2L+1)$.
In general, the initial mixed state is purified by projective measurements.
We first obtain a steady state (mixed or pure state) in each measurement pattern (a single stochastic process) and calculate physical observables in the steady state.
Then, we gather many samples of steady states and physical observables as an ensemble and investigate the properties of the ensemble to compare them with the ground state properties of the target Hamiltonian.
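The random choices made in one two-layer time step can be sketched as follows; this is a schematic of the sampling only, and the projective measurements themselves would be applied by the stabilizer-tableau update described in the next section.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def one_time_step(L, pA, pC):
    # Random choices for one two-layer time step of the circuit in Fig. 3.
    ops = []
    # Matter layer: M^{2a}_j (ZsZ) with prob. pA, else M^{2b}_j (X), j = 0..L-1.
    j = int(rng.integers(0, L))
    ops.append(("M2a_ZsZ", j) if rng.random() < pA else ("M2b_X", j))
    # Gauss'-law layer: M^{1a}_j (sXs) with prob. pC at j = 0..L-1,
    # else M^{1b}_{j'} (sigma^z) at a random link j' = -1..L-1.
    if rng.random() < pC:
        ops.append(("M1a_sXs", int(rng.integers(0, L))))
    else:
        ops.append(("M1b_sz", int(rng.integers(-1, L))))
    return ops

L = 16
steps = [one_time_step(L, pA=0.7, pC=0.9) for _ in range(4 * (2 * L + 1))]
\end{verbatim}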
\section{Numerical Results of purification dynamics}
In this section, we shall show numerical demonstrations of the MoC defined in the previous section, and verify that the MoC generates steady stabilizer states whose phase diagram is similar to the ground state phase diagram of $H_{Z_2}$.
\subsection{Explanation of numerical calculation}
We make use of the stabilizer update algorithm \cite{Gottesman1997,Aaronson2004,Nielsen_Chuang} to simulate the MoC.
In particular, we employ the mixed-state update methods of stabilizer dynamics used in \cite{Gullans2020,Ippoliti2021}, in which the sign information of the updated stabilizers is not stored.
We start with the state at infinite temperature $\rho=\frac{1}{N_D}\hat{I}$, where $N_D$ is the Hilbert space dimension of the system ($N_D=2^{L_t}$).
Generally, the time evolution by sequential projective measurements of stabilizers purifies the initial mixed state (the rank of the density matrix decreases).
Over a long time period, a purified state emerges as a steady state, but it is not necessarily a genuine pure state, i.e., it is allowed to be a mixed state.
We expect that a steady mixed state corresponds to a multiplet of the ground states of $H_{Z_2}$.
More precisely, the rank $2^{N_{cs}}$ of a steady mixed state is related to the degeneracy of the ground state of $H_{Z_2}$, denoted by $N_{gd}$, as $2^{N_{cs}}=N_{gd}$.
In the stabilizer formalism, the rank is related to the dimension of code space $2^{N_{cs}}$ with $N_{cs} \equiv L_t-N_R$, where $N_R$ is total number of linearly-independent stabilizers generating the mixed state \cite{QI_text}.
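In practice, $N_R$ can be obtained as the GF(2) rank of the binary symplectic tableau of the accumulated stabilizer generators. A minimal sketch of this counting, independent of any particular tableau library, is:
\begin{verbatim}
import numpy as np

def gf2_rank(rows):
    # Gaussian elimination over GF(2); each row is the binary symplectic
    # vector (x|z) of length 2*L_t representing one stabilizer generator.
    A = np.array(rows, dtype=np.uint8) % 2
    r = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        r += 1
    return r

# Example: generators {Z0 Z1, Z1 Z2, Z0 Z2} on L_t = 3 qubits; the third is the
# product of the first two, so N_R = 2 and N_cs = L_t - N_R = 1.
tab = [[0, 0, 0, 1, 1, 0],
       [0, 0, 0, 0, 1, 1],
       [0, 0, 0, 1, 0, 1]]
N_R = gf2_rank(tab)
print(N_R, 3 - N_R)   # 2, 1
\end{verbatim}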
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.5cm]{Fig4.pdf}
\end{center}
\caption{Schematic figure of the phase diagram obtained by the MoC.
The mixed-state algorithm is employed.
The red and blue dashed lines are the phase boundaries obtained in the present work, $p^A\approx 0.5$ and $p^C\approx 0.5$.
The red and blue solid lines represent the typical parameter sweeps studied in detail.
}
\label{Fig4}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=18cm]{Fig5.pdf}
\end{center}
\caption{Phase transition behaviors of $\langle N_{cs}\rangle$ for various system sizes. The solid lines are fitting lines.
(a) The data for a strong Gauss' law case, $p^C=0.9$.
(b) The data for a weak Gauss' law case, $p^C=0.1$ ($p^D=0.9$), where the gauge dynamics is frozen, $\sigma^{z}_{j+1/2}\to 1$.
(c) The data for a Gauss' law sweep case with a fixed $p^A=0.9$.
(d) The data for a Gauss' law sweep case with a fixed $p^A=0.1$.
Right small panels: system-size dependence of $p^A_c$ and $p^C_c$ deduced from the peaks of the fitting lines.
We can extrapolate the critical probabilities for $1/L\to 0$. Here we used an exponential fitting form.}
\label{Fig5}
\end{figure*}
In the practical calculation of the target observables shown later, we employ $400$--$600$ different measurement patterns for various system sizes and values of the probabilities, and take an ensemble average of the saturation values of the observables at $t_N=4L$, where the state has reached a steady state (mixed or pure state).
\subsection{Physical observables}
To identify the phase of the state obtained by the MoC,
we first observe the code-space degree $N_{cs}$, obtained by counting the total number of linearly-independent stabilizers $N_{R}$.
In particular, for steady states we calculate its ensemble average, denoted by $\langle N_{cs}\rangle$, obtained from many samples of the measurement patterns.
In fact, $\langle N_{cs}\rangle$ is related to the average entropy of the state \cite{Gullans2020} and is also expected to be related to the degeneracy of the ground state of the corresponding Hamiltonian $H_{Z_2}$, as we explained above.
Furthermore, to examine whether the MoC dynamics generates the Higgs=SPT or ferromagnetic phase in the bulk as a steady state, we calculate the decorated domain-wall order (DWO)~\cite{Lavasani2021,Verresen2022}, which is defined as follows,
\begin{eqnarray}
({\rm DWO})^2 \equiv 2^{N_{cs}}\mathrm{tr}[\rho(t_N)G(i_0,j_0)\rho(t_{N})G(i_0,j_0)],
\end{eqnarray}
where
$$
G(i_0,j_0)=Z_{i_0}\biggl(\prod^{j_0-1}_{j=i_0}\sigma^z_{j+\frac{1}{2}}\biggr)Z_{j_0}.
$$
Here, by using linearly-independent stabilizer generators, the density matrix of the system state (mixed state) is given by
\begin{eqnarray}
\rho(t_N)=\prod^{N_R-1}_{\ell=0}\biggl( \frac{1+s^{\ell}(t_N)}{2}\biggr),
\end{eqnarray}
where $s^{\ell}(t_N)$ is the $\ell$-th of the $N_R$ updated (linearly-independent) stabilizers.
In the LGT, $G(i_0,j_0)$ is nothing but a gauge-invariant correlator of matter field (Higgs field) connected by Wilson string.
On the other hand, $({\rm DWO})^2$ is a kind of Edwards-Anderson-type string order used to detect the SPT phase \cite{Lavasani2021}.
We further calculate the following spin-glass long-range order parameter (SGO) to characterize the ferromagnetic phase,
\begin{eqnarray}
({\rm SGO})^2 \equiv 2^{N_{cs}}\mathrm{tr}[\rho(t_N)S(i_0,j_0)\rho(t_N)S(i_0,j_0)],
\label{SGO}
\end{eqnarray}
where $S(i_0,j_0)=Z_{i_0}Z_{j_0}$.
In the update of the MoC without storing the sign information of the stabilizers, the ferromagnetic phase appears as a spin-glass-like phase; thus, in what follows, we call this phase the ``spin-glass (SG) phase'' instead of the ferromagnetic phase.
The further practical calculation scheme in our numerics is explained in Appendix B.
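The structure of these squared order parameters can be verified on a tiny example with dense matrices: for a Pauli string $S$ and a stabilizer mixed state $\rho$, the quantity $2^{N_{cs}}\mathrm{tr}[\rho S\rho S]$ equals one when $S$ commutes with all stabilizer generators (in particular when $S$ belongs to the stabilizer group) and vanishes when $S$ anti-commutes with some generator. A two-qubit illustration (not the stabilizer-tableau code used in our simulations) reads:
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
kron = lambda ops: reduce(np.kron, ops)

S = kron([Z, Z])                        # S(i0, j0) = Z_{i0} Z_{j0}, L_t = 2

# Mixed state stabilized by the single generator Z0 Z1: N_R = 1, N_cs = 1.
N_cs = 1
rho = ((np.eye(4) + S) / 2) / 2 ** N_cs
print(2 ** N_cs * np.trace(rho @ S @ rho @ S))    # 1.0: S is a stabilizer

# With the generator X0 instead (anti-commutes with S), the order vanishes.
rho2 = ((np.eye(4) + kron([X, I2])) / 2) / 2 ** N_cs
print(2 ** N_cs * np.trace(rho2 @ S @ rho2 @ S))  # 0.0
\end{verbatim}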
\subsection{Phase diagram of steady state obtained by the MoC}
We start by observing $\langle N_{cs}\rangle$.
From the behavior of $\langle N_{cs}\rangle$, we verify that the MoC generates four different kinds of steady states and find the qualitative phase diagram in the $p^A$-$p^C$ plane shown in Fig.~\ref{Fig4}.
This phase diagram is very close to the ground state phase diagram of the Hamiltonian $H_{Z_2}$ proposed in \cite{Borla2021,Verresen2022}, in which two phase boundaries exist at $J_1/J_2=1$ and $K_1/K_2=1$, separating the ground state phases of the system $H_{Z_2}$.
The phase diagram of the steady states obtained by our numerics of the MoC also has two phase boundaries, at $p^{A}/p^{B}\sim 1$ and $p^{C}/p^{D}\sim 1$.
Therefore, our study confirms the PRC, i.e., the MoC with a suitable setting of projective measurement of stabilizers can generate (mixed) steady states that are very close to the gauge-theoretical ground states of $H_{Z_2}$ through long but finite-period evolution by the MoC.
We investigate details of the transition properties of the mixed states in the MoC.
The behavior of $\langle N_{cs}\rangle$ along the four typical lines (I)-(IV) in the parameter space, displayed in Fig.~\ref{Fig4}, is observed.
The results for various system sizes are shown in Fig.~\ref{Fig5}.
We find that all data exhibit a clear system-size dependence and the peaks of $\langle N_{cs}\rangle$ are located in the vicinity of $p^A$ or $p^C\sim 0.5$.
These peaks are obviously a signature of the phase transition.
Calculations in Fig.~\ref{Fig5} (a) are for the case of $p^C=0.9$, in which Gauss' law is strongly enforced.
We find that the value of $\langle N_{cs}\rangle$ clearly changes from $1$ to $2$ as $p^{A}$ increases.
This indicates that the mixed state exhibits transition from the deconfinement phase to the Higgs=SPT phase since $\langle N_{cs}\rangle=2$ shows the presence of four-fold degenerate stabilizer states, corresponding to the ground state degeneracy of the Higgs=SPT phase of $H_{Z_2}$ in open boundary case, whereas $\langle N_{cs}\rangle=1$ corresponds to two-fold degenerate states by the spontaneous breaking of the magnetic symmetry in $H_{Z_2}$.
See the data of Fig.~\ref{Fig5}(b) for $p^C=0.1$ ($p^D=0.9$).
Here, Gauss' law is weak, and the gauge variable is instead frozen as $\sigma^z_{j+1/2}\to 1$.
We observe that the value of $\langle N_{cs}\rangle$ clearly changes from $0$ to $1$ as $p^{A}$ increases.
This implies that the state changes from the product pure state stabilized by all $X_j$ and $\sigma^z_{j+1/2}$ to the SG phase, consisting of two-fold degenerate states stabilized by $Z_j\sigma^{z}_{j+1/2}Z_{j+1} \to Z_{j}Z_{j+1}$, which corresponds to the ground-state degeneracy of the ferromagnetic phase (cat states) of $H_{Z_2}$.
Next, see the data of Fig.~\ref{Fig5}(c), where we fix $p^A=0.9$ and vary the strength of Gauss' law.
We observe that the value of $\langle N_{cs}\rangle$ clearly changes from $1$ to $2$ as $p^{C}$ increases, implying that the SG phase transitions into the Higgs=SPT phase.
Also, see the data of Fig.~\ref{Fig5}(d), where we fix $p^A=0.1$ and vary the strength of Gauss' law.
We observe that the value of $\langle N_{cs}\rangle$ clearly changes from $0$ to $1$ as $p^{C}$ increases, implying that the simple product pure state transitions into the deconfinement phase.
We further analyze the system-size dependence of $\langle N_{cs}\rangle$ along the above-mentioned four lines in the parameter space.
From these data, we can obtain the phase transition points in the MoC.
We fit the data points of $\langle N_{cs}\rangle$ \cite{Ncs_fit} and, for each system size, deduce the location of the peak of the fitting curve and the corresponding probability.
An exponential fit \cite{exp_fit} to these peak probabilities for the different system sizes is performed on the $1/L$-axis, and the fitting curve is extrapolated to estimate the transition probability $p^{A}_c$ or $p^C_{c}$ in the limit $L\to \infty$.
These FSS data are displayed in the right panels of (a)-(d) in Fig.~\ref{Fig5}.
By using this method, we estimate the phase transition points: $p^A_c=0.485(8)$ for line (I), $p^A_c=0.494(7)$ for line (II), $p^C_c=0.491(1)$ for line (III), and $p^C_c=0.485(6)$ for line (IV).
The above values are fairly close to $0.5$, implying that $p^A/p^B\sim 1$ and $p^C/p^D\sim 1$ are phase boundaries in the MoC.
These estimates indicate the validity of the PRC for the phase boundaries, $p^A/p^B \longleftrightarrow J_1/J_2$ and $p^C/p^D \longleftrightarrow K_1/K_2$.
Note that the estimated values are slightly smaller than $0.5$; we expect this to originate from the difference in the total number of stabilizers corresponding to each phase.
In Appendix C, we further show the behavior of $\langle N_{cs}\rangle$ on other lines in the parameter space and determine a quadruple critical transition point.
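For illustration, the extrapolation step can be sketched in Python as follows. The peak locations below are hypothetical placeholder values, and the precise functional form used in \cite{exp_fit} may differ from the exponential form assumed here.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical peak locations of <N_cs> for increasing system sizes L.
L = np.array([16.0, 24, 32, 48, 64, 96])
p_peak = np.array([0.435, 0.452, 0.461, 0.471, 0.476, 0.481])

# Assumed exponential form p_c(L) = p_inf - a * exp(-b * L), so that the
# fit extrapolates to p_inf in the limit 1/L -> 0.
def model(L, p_inf, a, b):
    return p_inf - a * np.exp(-b * L)

popt, pcov = curve_fit(model, L, p_peak, p0=(0.5, 0.1, 0.01))
print(f"extrapolated p_c = {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f}")
\end{verbatim}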
In addition, we calculate the average values of the DWO and SGO along the lines (I) and (III) in Fig.~\ref{Fig4}, where we set $i_0=1$ and $j_0=L-2$.
The results are shown in Fig.~\ref{Fig6} (a) and \ref{Fig6} (b).
The DWO becomes finite in the Higgs=SPT regime ($p^A\gtrapprox 0.5$), and the SGO becomes finite in the SG regime ($p^C\lessapprox 0.5$).
These results support the existence of the bulk Higgs=SPT and SG phases produced by the MoC.
We also find that the behaviors of the DWO and SGO are insensitive to the system size.
Finally, we estimate the criticality of the transitions in the MoC simulation.
Before showing the numerical results, it should be remarked that the present mixed-state update simulation of the MoC differs in some respects from the usual ground-state simulation governed by a Hamiltonian.
That is, the critical exponents of the mixed-state transition in the MoC might differ from those of the genuine ground-state phase transition that emerges by varying the parameters in the Hamiltonian.
The criticality of the ground state phase transitions in the system $H_{Z_2}$ was investigated in \cite{Borla2021,Verresen2022} in terms of conformal field theory (CFT).
However, the phase transition we find is not necessarily governed by such a CFT.
Keeping this remark in mind,
we carry out the FSS analysis for lines (I) and (III) in Fig.~\ref{Fig4}, that is, we consider the deconfinement-Higgs=SPT and SG-Higgs=SPT phase transitions.
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm]{Fig6.pdf}
\end{center}
\caption{(a) Decorated domain-wall order (DWO) at fixed $p^C=0.9$ (strong Gauss' law enforced). The decorated domain wall condenses in the Higgs=SPT phase, indicating SSB of the matter parity symmetry. (b) Spin-glass order (SGO) at fixed $p^A=0.9$.
Both data sets show no system-size dependence.}
\label{Fig6}
\end{figure}
To estimate the criticality (critical exponents) of these transitions, we apply the FSS analysis to $\langle N_{cs}\rangle$.
Here, we employ the following scaling ansatz \cite{Szyniszewski2019,Takashima2005},
\begin{eqnarray}
\langle N_{cs}\rangle (p^\alpha, L)=L^{\frac{\gamma}{\nu}}\Psi((p^\alpha-p^{\alpha}_c)L^{1/\nu}),
\end{eqnarray}
where $\Psi$ is a scaling function, $\gamma$ and $\nu$ are critical exponents and $p^{\alpha=A,C}_c$ is a critical transition probability.
We use the extrapolated values of $p^{A(C)}_{c}$ for $L\to \infty$ shown in Figs.~\ref{Fig5}(a) and \ref{Fig5}(c), and determine the scaling function $\Psi$ by searching for the optimal values of $\gamma$ and $\nu$.
Concretely, using the data of $\langle N_{cs}\rangle$, the fitting curve for the scaling function is obtained via a 12th-order polynomial with optimal coefficients for various values of $\gamma$ and $\nu$; the coefficient of determination $R^2$ is then used to select the optimal $\gamma$ and $\nu$.
The scaling functions obtained by this FSS analysis are displayed in Figs.~\ref{Fig7}(a) and \ref{Fig7}(b), where we used the $L=48, 64, 96$ data points of Figs.~\ref{Fig5}(a) and \ref{Fig5}(c) and set $p^A_c=0.485(8)$ and $p^C_c=0.491(1)$ for the parameter sweeps (I) and (III), respectively.
For the deconfinement-Higgs=SPT phase transition, the optimal critical exponents are estimated as $\gamma=1.53(0)$ and $\nu=2.15(0)$.
The fitting line of the scaling function has $R^2=0.990(4)$.
For the SG phase-Higgs=SPT phase transition, the optimal critical exponents are estimated as $\gamma=1.85(0)$ and $\nu=2.6(0)$.
The fitting line of the scaling function has $R^2=0.995(8)$.
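A minimal sketch of this collapse procedure is given below (illustrative only; the grid resolution and other optimization details of our actual calculation are not reproduced here). Each data set is rescaled according to the ansatz, a 12th-order polynomial is fitted to the pooled points, and $R^2$ measures the quality of the collapse.
\begin{verbatim}
import numpy as np

def collapse_r2(data, p_c, gamma, nu, deg=12):
    """R^2 of a polynomial fit to the rescaled data; higher = better collapse.
    data: dict mapping system size L -> (p_values, Ncs_values)."""
    xs, ys = [], []
    for L, (p, ncs) in data.items():
        xs.append((np.asarray(p) - p_c) * L ** (1.0 / nu))  # scaling variable
        ys.append(np.asarray(ncs) / L ** (gamma / nu))      # rescaled <N_cs>
    x, y = np.concatenate(xs), np.concatenate(ys)
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    return 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

def best_exponents(data, p_c, gammas, nus):
    # Brute-force grid search for the exponent pair maximizing R^2.
    r2, g, n = max((collapse_r2(data, p_c, g, n), g, n)
                   for g in gammas for n in nus)
    return g, n, r2
\end{verbatim}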
We should not compare these values directly with those of CFTs, since our target phase transition concerns mixed states and occurs in the MoC, as explained above.
Furthermore, the criticality observed in the present MoC for mixed states may reveal some non-trivial aspects of symmetry-enriched topological phase transitions.
Clarifying the physical meaning of the obtained critical exponents is left as a future problem.
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.5cm]{Fig7.pdf}
\end{center}
\caption{Optimal scaling functions for the transition behavior of the sweeps (I) and (III) in Fig.~\ref{Fig4}.
The curves are obtained from the optimal fitting calculation using the estimated values $p^A_c=0.485(8)$ and $p^C_c=0.491(1)$ from the data in Figs.~\ref{Fig5}(a) and \ref{Fig5}(c).
In panel (a), the optimal critical exponents are $\gamma=1.53(0)$ and $\nu=2.15(0)$, and the fit of the scaling function has $R^2=0.990(4)$.
In panel (b), the optimal critical exponents are $\gamma=1.85(0)$ and $\nu=2.6(0)$, and the fit has $R^2=0.995(8)$.
For both panels, the fitting curve of the scaling function is a 12th-order polynomial with optimal coefficients.}
\label{Fig7}
\end{figure}
We summarize the results of our numerical MoC calculation, in which we used the mixed-state update method.
We numerically demonstrated that the PRC between the gauge-theory Hamiltonian $H_{Z_2}$ and the corresponding MoC holds at the level of the phase diagram, even though a mixed-state update simulation is employed.
Conversely, taking the PRC as a guiding principle, an MoC with a suitable set of stabilizer projective measurements can produce various stabilizer states corresponding to the interesting ground states of the gauge theory.
\section{Conclusion}
\uline{In the first half of this work}, we gave a qualitative argument for the PRC by comparing the propagators obtained from the imaginary-time path integral with the ensemble average of the MoC.
In particular, we showed two concrete examples supporting and strengthening the validity of the PRC.
Needless to say, the discussion of the PRC in this work is qualitative; a more rigorous mathematical proof of this conjecture is an important and welcome future problem.
\uline{In the second half of this work}, to examine the validity and utility of the PRC, we investigated the (1+1)-D $Z_2$ lattice gauge-Higgs model, which includes very rich physics, by the practical use of the MoC.
We showed that the MoC with suitable stabilizer projective measurements and suitable probability ratios produces a steady-state phase diagram that is quite similar to the ground-state phase diagram of the corresponding gauge-Higgs Hamiltonian previously studied in \cite{Borla2021,Verresen2022}.
Our numerical MoC result is a concrete example indicating that the PRC can serve as a good guiding principle for producing interesting and desired states (including mixed states) by MoCs with the suitable stabilizer projective measurements suggested by the PRC.
As a specific example, our MoC demonstrates the presence of the Higgs=SPT phase and of symmetry-breaking-type orders such as the SG phase, controlled by the strength of Gauss' law through the measurement probabilities.
Finally, even though this work mainly studied gauge theory in $(1+1)$D as a concrete example, it is straightforward to apply the present methods to other quantum systems in higher dimensions.
We hope to report studies on them in the future.
\section*{Acknowledgements}
This work is supported by JSPS KAKEN-HI Grant Number JP21K13849 (Y.K.).
|
{
"arxiv_id": "2302.13754",
"language": "en",
"timestamp": "2023-03-02T02:18:09",
"url": "https://arxiv.org/abs/2302.13754",
"yymm": "2302"
} | \section*{Acknowledgements} \label{sec:acknowledgements}
The authors thank Karim Barsim, Alexander Gr\"afe and Andreas Ren\'{e} Geist for valuable discussions and feedback.
Furthermore, we thank Pawe\l \,Olejnik for providing measurement data from the double-torsion pendulum for the systems (ii)-(iv) that we consider in our experiments.
\section{Additional experiments} \label{section:add_exp}
In this section, we provide additional experiments.
We demonstrate the flexibility of our method in the purely learning-based scenario by applying different cutoff frequencies and downsampling rates.
Furthermore, we provide additional plots for the results in Section 5.
\subsection{Varying cutoff frequencies and downsampling rates}
Here, we demonstrate the flexibility of our purely learning-based method.
In particular, we compute the results for the double mass-spring system (cf. Sec. 5) with different cutoff frequencies and up-/downsampling ratios.
Since we consider data from a simulated double mass-spring system, we can derive the frequencies of the corresponding sine waves.
Thus, the system is ideal for investigating the behavior of the method.
In particular, the signal consists of two superposed sine waves, one with frequency 0.57 and one with frequency 0.115 (cf. Figure \ref{subfig:mass_f}).
Thus, the cutoff frequency should be chosen between these two frequencies.
Otherwise, the predictions of one of the GRUs would be eliminated by the filter.
Here, we choose the cutoff frequencies $w=0.2$, $w=0.4$, and $w=0.5$ with downsampling ratio $k=2$.
The results in Table \ref{t:RMSE_cutoff} demonstrate that the method is flexible with regard to varying cutoff frequencies.
For the investigation of different downsampling rates, we choose $w=0.4$ as the cutoff frequency.
In order to respect the Nyquist-Shannon theorem, the sampling frequency $f_{\textrm{sample}}$ has to satisfy $2 w < f_{\textrm{sample}}$.
With $w=0.4$, this yields the lower bound $f_{\textrm{sample}} > 0.8$.
We choose $f_{\textrm{sample}}=1.0$, close to this bound, which corresponds to a downsampling ratio of $k=10$.
In addition to $k=10$, we investigate $k=2$ and $k=5$ (cf. Table \ref{t:RMSE_downsample}).
Again, the results in Table \ref{t:RMSE_downsample} demonstrate that the method is flexible with regard to different downsampling ratios.
\begin{table*}[ht]
\caption{Total RMSEs (mean (std)) over 5 independent runs with varying cutoff frequencies.}
\label{t:RMSE_cutoff}
\centering
\begin{tabular}{rcc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
cutoff frequency & split GRU (ours) & split GRU+HP (ours) \\
\hline
0.2 & 0.149 (0.022) & 0.227 (0.043) \\
0.4 &0.112 (0.025) & 0.116 (0.03)\\
0.5 & 0.204 (0.022) & 0.195 (0.028) \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\end{table*}
\begin{table*}[ht]
\caption{Total RMSEs (mean (std)) over 5 independent runs with varying downsampling rates.}
\label{t:RMSE_downsample}
\centering
\begin{tabular}{rcc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
downsampling rate $k$ & split GRU (ours) & split GRU+HP (ours) \\
\hline
2 & 0.12 (0.008) & 0.168 (0.03)\\
5 &0.112 (0.025) & 0.116 (0.03)\\
10 & 0.185 (0.023) & 0.187 (0.017) \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\end{table*}
\subsection{Additional plots}
We provide additional rollout plots for the results in Sec. 5 in Figure \ref{fig:add_plots}: the double-torsion pendulum with excitation (iii) in Figure \ref{subfig:friction_7}, the friction model (vi) with GRU in Figure \ref{subfig:friction_GRU}, and the Van-der-Pol oscillator (v) with GRU in Figure \ref{subfig:VDP_GRU}.
Further, we provide the RMSEs over time for all systems in Figure \ref{fig:lb}, Figure \ref{fig:hybridVDP} and Figure \ref{fig:hybridfrition}.
For the GRU results, we compute the RMSEs after the warmup phase, since the type of prediction differs between the warmup phase and the actual rollout.
For the MLP, we compute the RMSEs along the whole trajectory, since no warmup phase is performed; instead, the initial hidden state is estimated via the recognition model.
\begin{figure*}[tb]
\begin{subfigure}[htb]{0.33\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/friction_7.pdf}
\caption{system iii)}\label{subfig:friction_7}
\end{subfigure}
\begin{subfigure}[htb]{0.33\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/friction_GRU.pdf}
\caption{system vi) with GRU}\label{subfig:friction_GRU}
\end{subfigure}
\begin{subfigure}[htb]{0.33\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/VDP_GRU.pdf}
\caption{system v) with GRU}\label{subfig:VDP_GRU}
\end{subfigure}
\caption{Shown are rollouts over time for the purely learning-based method on System (iii) in Figure \ref{subfig:friction_7}, for the hybrid model with GRU on System (vi) in Figure \ref{subfig:friction_GRU}, and for the hybrid model with GRU on the Van-der-Pol oscillator (v) in Figure \ref{subfig:VDP_GRU}.}\label{fig:add_plots}
\end{figure*}
\begin{figure*}[tb]
\begin{subfigure}[htb]{0.33\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/loss_friction5.pdf}
\caption{system ii)}\label{subfig:loss_friction_5}
\end{subfigure}
\begin{subfigure}[htb]{0.33\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/loss_friction7.pdf}
\caption{system iii)}\label{subfig:loss_friction_7}
\end{subfigure}
\begin{subfigure}[htb]{0.33\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/loss_friction8.pdf}
\caption{system iv)}\label{subfig:loss_friction_8}
\end{subfigure}
\caption{Shown are absolute errors over time for the purely learning-based method on System (ii) in Figure \ref{subfig:loss_friction_5}, System (iii) in Figure \ref{subfig:loss_friction_7} and System (iv) in Figure \ref{subfig:loss_friction_8}. The results are computed after the warmup phase of the GRU.}\label{fig:lb}
\end{figure*}
\begin{figure*}[tb]
\begin{subfigure}[htb]{0.49\textwidth}
\centering
\includegraphics[width= 0.6\textwidth]{Plots/loss_VDPMLP.pdf}
\caption{system v) with MLP}\label{subfig:VDPMLP}
\end{subfigure}
\begin{subfigure}[htb]{0.49\textwidth}
\centering
\includegraphics[width= 0.6\textwidth]{Plots/loss_VDPGRU.pdf}
\caption{system v) with GRU}\label{subfig:VDPGRU}
\end{subfigure}
\caption{Shown are absolute errors over time for the hybrid model on the Van-der-Pol oscillator (v) with MLP in Figure \ref{subfig:VDPMLP} and with GRU in Figure \ref{subfig:VDPGRU}.
The GRU and the corresponding residual models have difficulties catching the oscillations in the beginning and therefore start with a high error.}\label{fig:hybridVDP}
\end{figure*}
\begin{figure*}[tb]
\begin{subfigure}[htb]{0.49\textwidth}
\centering
\includegraphics[width= 0.6\textwidth]{Plots/loss_frictionMLP.pdf}
\caption{system vi) with MLP}\label{subfig:frictionMLP}
\end{subfigure}
\begin{subfigure}[htb]{0.49\textwidth}
\centering
\includegraphics[width= 0.6\textwidth]{Plots/loss_frictionGRU.pdf}
\caption{system vi) with GRU}\label{subfig:frictionGRU}
\end{subfigure}
\caption{Shown are absolute errors over time for the hybrid model on the drill-string system (vi) with MLP in Figure \ref{subfig:frictionMLP} and with GRU in Figure \ref{subfig:frictionGRU}. The residual model with MLP needs a few steps to converge in the beginning.}\label{fig:hybridfrition}
\end{figure*}
\section{Background} \label{section:background}
In this section, we provide the necessary background on signal processing and filtering.
For a more detailed introduction, we refer the reader to \citet{oppenheim1999discrete}.
\subsection{Motivation}
Filters are linear time-invariant systems that aim to extract specific frequency components from a signal.
Standard types are high-pass and low-pass filters.
Low-pass filters extract low frequencies and attenuate high frequencies, whereas high-pass filters extract high frequencies and attenuate low frequencies.
Frequencies that are allowed to pass are determined by a desired cutoff frequency.
Further, additional specifications play a principal role in filter design, such as pass- and stop-band fluctuations and width of the transition band \citep{oppenheim1999discrete}.
Technically, a filter $F$ is a mapping in the time domain, $F: l^\infty \to l^\infty: y \mapsto \mathcal{Z}^{-1}(\mathcal F \cdot \mathcal{Z}(y))$, where $\mathcal F$ is the so-called transfer function in the frequency domain, $l^\infty$ is the signal space of bounded sequences, and $\mathcal{Z}$ denotes the well-known z-transform, which maps a signal to a complex function. Hence, a filter is obtained by designing a transfer function $\mathcal F$ in the frequency domain. For the type of filters considered here, the structure of $\mathcal F$ allows one to compute $F(y)$ directly via a recurrence equation in the time domain (see the appendix for more details).
A typical application of filters is, for example, the denoising of signals.
Noise adds a high-frequency component to the signal and can therefore be tackled by applying a low-pass filter.
\subsection{IIR-filter}
Typical filter types are finite-impulse response (FIR) and infinite-impulse response (IIR) \citep{oppenheim1999discrete}.
Here, we consider IIR filters.
In contrast to FIR filters, IIR filters possess internal feedback.
Filtering a signal $y$ via an IIR-filter yields a recurrence equation for the filtered signal $\tilde{y}=(\tilde{y}_n)_{n=0}^N$ given by
\begin{equation}\label{eq:rearange}
\tilde y_n=\frac{1}{a_0}\left(\sum_{k=0}^P b_k y_{n-k}-\sum_{k=1}^P a_k \tilde{y}_{n-k}\right),
\end{equation}
where $P$ describes the filter order.
The filter coefficients $a_k$ and $b_k$ are obtained from filter design with respect to the desired properties in the frequency domain.
A detailed derivation is given in the appendix.
There are different strategies to initialize the first $P$ values $\tilde{y}_0,\dots,\tilde{y}_{P-1}$ \citep{initialize, 492552}.
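As a minimal sketch (not our training code), the recurrence \eqref{eq:rearange} with zero initialization can be implemented directly and checked against \texttt{scipy.signal.lfilter}:
\begin{verbatim}
import numpy as np
from scipy.signal import butter, lfilter

def iir_apply(b, a, y):
    """IIR recurrence with zero initial conditions:
    y~_n = (sum_k b_k y_{n-k} - sum_{k>=1} a_k y~_{n-k}) / a_0."""
    P = len(a) - 1
    out = np.zeros(len(y))
    for n in range(len(y)):
        acc = sum(b[k] * y[n - k] for k in range(min(P, n) + 1))
        acc -= sum(a[k] * out[n - k] for k in range(1, min(P, n) + 1))
        out[n] = acc / a[0]
    return out

b, a = butter(1, 0.2)          # first-order Butterworth low-pass
y = np.random.randn(200)
assert np.allclose(iir_apply(b, a, y), lfilter(b, a, y))
\end{verbatim}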
\subsection{Complementary filter pairs}\label{sec:complementary}
A complementary filter pair consists of a high-pass filter transfer function $\mathcal{H}$ and a low-pass filter transfer function $\mathcal{L}$ \citep{4101411}, chosen in a way that they cover the whole frequency domain, thus
\begin{equation}\label{eq:decomposition}
y \approx L(y)+H(y)
\end{equation}
for any signal $y \in l^{\infty}$.
Applying the complementary filter pair to two different signals $y^{\text{h}}$ and $y^{\text{l}}$ via $\tilde{y}=L(y^{\text{l}})+H(y^{\text{h}})$ directly yields a recurrence equation for the complementary filtered signal $\tilde{y}$ given by
\begin{equation} \label{eq:IIR}
\tilde{y}_n = \frac{1}{a_0} \left(\sum_{k=0}^P b_k y^{\text{h}}_{n-k}+\sum_{k=0}^{P} \tilde{b}_k y^{\text{l}}_{n-k}-\sum_{k=1}^P a_k \tilde{y}_{n-k}\right),
\end{equation}
where $a_k,b_k$ describe the high-pass filter parameters and $a_k,\tilde{b}_k$ describe the low-pass filter parameters.
To obtain a joint recurrence equation, the filters are forced to share the parameters $a_k$.
However, this can be done without loss of generality.
\paragraph{Perfect complement} \label{section:perfect}
There are different strategies to express the decomposition \eqref{eq:decomposition} mathematically.
One way is to construct the perfect complement in the frequency domain such that $\mathcal{H}+\mathcal{L}=1$ \citep{s21061937}.
Applying the perfect complementary filter to two identical signals $y^{\text{h}}=y^{\text{l}}$ results in the same signal as output.
For the IIR complementary filter \eqref{eq:IIR} this holds if $\tilde{b}_k=a_k-b_k$.
A detailed derivation is deferred to the appendix.
However, depending on the desired behavior of the filters, perfectly complementary filters are not always favorable.
Different approaches have been investigated in \citet{VAIDYANATHAN, inproceedings}.
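The following sketch (illustrative, using \texttt{scipy.signal}) constructs a first-order Butterworth low-pass filter, forms its perfect high-pass complement via $\tilde b_k = a_k - b_k$, and verifies the decomposition \eqref{eq:decomposition} as well as the fusion of two different signals:
\begin{verbatim}
import numpy as np
from scipy.signal import butter, lfilter

b_lp, a = butter(1, 0.1)   # low-pass with normalized cutoff 0.1
b_hp = a - b_lp            # perfect complement: H + L = 1 in the z-domain

y = np.cumsum(np.random.randn(500))   # some test signal
assert np.allclose(lfilter(b_lp, a, y) + lfilter(b_hp, a, y), y)

# Fusion: low frequencies from y_l, high frequencies from y_h.
y_l, y_h = y, y + 0.1 * np.random.randn(500)
fused = lfilter(b_lp, a, y_l) + lfilter(b_hp, a, y_h)
\end{verbatim}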
\title{Combining Slow and Fast: \\Complementary Filtering for Dynamics Learning}
\author{Katharina Ensinger\textsuperscript{\rm 1,\rm 2}, Sebastian Ziesche\textsuperscript{\rm 1}, Barbara Rakitsch\textsuperscript{\rm 1}, \\
Michael Tiemann\textsuperscript{\rm 1}, Sebastian Trimpe\textsuperscript{\rm 2} }
\affiliations{
\textsuperscript{\rm 1} Bosch Center for Artificial Intelligence, Renningen, Germany\\
\textsuperscript{\rm 2} Institute for Data Science in Mechanical Engineering, RWTH Aachen University
[email protected], [email protected], [email protected]
}
\usepackage{bibentry}
\begin{document}
\maketitle
\begin{abstract}
Modeling an unknown dynamical system is crucial in order to predict the future behavior of the system.
A standard approach is training recurrent models on measurement data.
While these models typically provide accurate short-term predictions, accumulating errors yield deteriorated long-term behavior.
In contrast, models with reliable long-term predictions can often be obtained, either by training a robust but less detailed model, or by leveraging physics-based simulations.
In both cases, inaccuracies in the models yield a lack of short-time details.
Thus, different models with contrasting properties on different time horizons are available.
This observation immediately raises the question: \emph{Can we obtain predictions that combine the best of both worlds?}
Inspired by sensor fusion tasks, we interpret the problem in the frequency domain and leverage classical methods from signal processing, in particular complementary filters.
This filtering technique combines two signals by applying a high-pass filter to one signal and a low-pass filter to the other.
Essentially, the high-pass filter extracts high-frequencies, whereas the low-pass filter extracts low frequencies.
Applying this concept to dynamics model learning enables the construction of models that yield accurate long- and short-term predictions.
Here, we propose two methods, one being purely learning-based and the other being a hybrid model that requires an additional physics-based simulator.
\end{abstract}
\input{intro}
\input{rel}
\input{background}
\input{method}
\input{exp}
\input{conclusion}
\input{acknowledgements}
\section{Conclusion} \label{section:conclusion}
In this paper, we propose to combine complementary filtering with dynamics model learning.
In particular, we fuse the predictions of different models, where one model provides reliable long-term predictions and the other reliable short-term predictions.
Leveraging the concept of complementary filter pairs yields a model that combines the best of both worlds.
Based on this idea, we propose a purely learning-based model and a hybrid model.
In the hybrid scenario, the long-term predictions are addressed by a simulator, whereas in the purely learning-based scenario an additional model has to be trained.
The experimental results demonstrate that our approach yields predictions with accurate long- and short-term behavior.
An interesting topic for future research is an extension of the hybrid scenario that learns the relationship between simulator predictions and learning-based predictions.
\section{Experiments} \label{section:exp}
In this section, we demonstrate that our complementary filter-based methods yield accurate long and short-term predictions on simulated and real world data.
In the hybrid setting, we consider additional access to a physics-based simulation that is able to predict the long-term behavior of the system but is not capable of accommodating all short-term details due to, e.g., modeling simplifications.
\subsection{Baselines}
We consider four systems.
For each system, we have access to measurement data.
Either real measurements are available, or we simulate trajectories from the ground truth system and corrupt them with noise.
We consider the following baselines.
\textbf{RNN: }
an RNN architecture corresponding to an MLP that is propagated through time.
\textbf{GRU: }
state-of-the-art recurrent architecture for time-series learning \citep{cho-etal-2014-learning}.
\textbf{Simulator: }
in the hybrid setting, access to simulator predictions $y^\text{s}$ is required.
\textbf{Residual GRU/ RNN: }
in the hybrid case, we consider a residual model that combines RNN or GRU predictions $y^\text{r}$ with simulator predictions $y^\text{s}$ via $y=y^\text{r}+y^\text{s}$.
\subsection{Constructing the filters}
We use the tools for IIR filter design provided by \texttt{Scipy} \citep{2020SciPy-NMeth} and apply Butterworth filters.
We construct the coefficients $b_k$ and $a_k$ for the low-pass filter and coefficients $\tilde{b}_k$ and $\tilde{a}_k$ for the high-pass filter as described in Sec. \ref{section:background}, where both filters share the cutoff frequency.
An example of a frequency spectrum and choice of the cutoff frequency is shown in Figure \ref{fig:mass} (left).
In the appendix, we add information on the specific design of the complementary filter pairs for each experiment.
Further, we add frequency spectra for each system.
\subsection{Learning task and comparison}
For each system, we observe a single trajectory.
The models are trained on a fixed subtrajectory of the full trajectory.
Predictions are performed by computing a rollout of the model over the full trajectory.
We evaluate the model accuracy by computing the RMSE along the full trajectory.
On the simulated systems, the RMSE between predictions and ground truth is computed.
On real world data, the RMSE between predictions and measurements is computed.
Runtimes are reported in the appendix.
\subsection{Purely learning-based model: }
We apply the strategy derived in Sec. \ref{section:method} to GRU models (referred to as ``split GRU'') and compare to a single GRU model trained on the entire bandwidth.
In order to draw a fair comparison, we choose the number of hidden units of the baseline GRU equal to the total number of hidden units of the two GRUs in our approach.
We provide architecture details in the appendix.
Furthermore, we optionally wrap an additional high-pass filter around the predictions $F^h_{\theta}(\hat{y}_R^h,h_R^h,N-R)$ during training and inference (cf. Eq. \eqref{eq:rec_training} and \eqref{eq:rec_predictions}), and denote this by the suffix ``+HP''.
In order to demonstrate the flexibility of our method, we add results with varying cutoff frequencies and downsampling ratios in the appendix.
We train our model on the following systems:
\textbf{(i) Double-mass spring system: }
We simulate a double-mass spring system that consists of two sinusoidal waves with different frequencies and corrupt the simulation with additional observation noise.
Training is performed on an interval of 250 steps, while predictions are computed on 1000 steps (further details can be found in the appendix).
\textbf{(ii) - (iv) Double torsion pendulum: }
In the second set of experiments, we consider real measurements from the double-torsion pendulum system introduced in \citet{Lisowski}.
Data are obtained by exciting the system with different inputs.
In particular, we consider 4 different excitations with varying frequencies.
Training is performed on the first 600 measurements, while predictions are performed on a 2000-step interval.
\subsection{Hybrid model}
For the hybrid model, we train our complementary filtering method with GRU and RNN and compare against the corresponding non-hybrid models (GRU and RNN), the corresponding residual models (residual GRU/ RNN), and the simulator. We consider the following systems:
\textbf{(v) Van-der-Pol oscillator: }
Data from a Van-der-Pol oscillator with external force is simulated from the four-dimensional ground-truth system \citep{Cartwright}.
It is assumed that only the first dimension, corresponding to the position, is observed.
Simulator data are obtained from an unforced Van-der-Pol oscillator.
For the corresponding equations, we refer to the appendix.
\textbf{(vi) Drill-string: }
We consider measurement data from the drill-string experiment provided in Figure 14 of \citet{AARSNES2018712} as training data, and the corresponding simulated signal serves as the simulator.
\subsection{Results}
The results indicate the advantage of leveraging complementary filters for dynamics model learning.
In particular, the resulting predictions show stable short- and long-term behavior, while especially the GRU and RNN baselines tend to drift in the long term due to accumulating errors.
For both scenarios, we provide additional plots showing the accumulated RMSE over time for each system in the appendix.
\begin{table*}[!htpb]
\centering
\begin{tabular}{rccc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
System & GRU & split GRU (ours) & split GRU + HP (ours) \\
\hline
(i) & 0.587 (0.002) & \textbf{0.127} (0.008) & 0.168 (0.03)\\
(ii) & 1.124 (0.485) & 0.331 (0.065) & \textbf{0.318} (0.089) \\
(iii) & 0.287 (0.15) & 0.159 (0.051) & \textbf{0.13} (0.02)\\
(iv) & 0.262 (0.17) & \textbf{0.201} (0.07) & 0.18 (0.06) \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\caption{Total RMSEs (mean (std)) over 5 indep. runs with purely learning-based scheme.}
\label{t:errors}
\end{table*}
\paragraph{Purely learning-based}
The results in Table \ref{t:errors} indicate the advantage of our approach, as the baseline method suffers from accumulating errors.
Integrating a small model error at each time step leads to a long-term drift that can also be observed directly in the rollouts (cf. Figure \ref{fig:learning_based}).
Our approach, on the other hand, does not suffer from this drift due to its specific architecture and therefore outperforms the baseline method on every task.
The findings are also supported by the RMSE over time, $(e_n)_{n=0}^N$ with $e_n= \sqrt{\frac{1}{n+1}\sum_{k=0}^n \Vert y_k-\hat{y}_k\Vert^2}$, shown in Figure \ref{fig:mass} (right).
In some cases, our method yields faster convergence than the baseline method.
For System (i), we report the results after 300 training epochs for our method, while the GRU was trained for 2000 epochs.
To provide more insights, we demonstrate the functionality of our method with the double-mass spring system (i) (cf. Figure \ref{fig:mass}).
Designing the filters shown in Figure \ref{fig:mass} (left) yields separate predictions from the two GRUs in Figure \ref{fig:mass} (middle).
The similar results of our split GRU and our split GRU+HP indicate that the most effective part is already contained in the split GRU (cf. Table \ref{t:errors}).
Here, the high-frequency model already stays on the desired time scale, and the additional high-pass filter rather introduces a small distortion.
Further, our split GRU+HP shows a higher error in the beginning due to the transient behavior of the filter, as can be seen in Figure \ref{fig:mass} (right).
However, the additional high-pass filter guarantees that the high-frequency predictions indeed act on the correct time scale.
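For reference, the RMSE over time $(e_n)$ can be computed in a vectorized way; the small helper below is a sketch assuming observations of shape $(N, D)$:
\begin{verbatim}
import numpy as np

def rmse_over_time(y, y_hat):
    # e_n = sqrt( (1/(n+1)) * sum_{k<=n} ||y_k - y_hat_k||^2 )
    err = np.asarray(y) - np.asarray(y_hat)
    sq = (err**2).reshape(len(err), -1).sum(axis=1)
    return np.sqrt(np.cumsum(sq) / np.arange(1, len(sq) + 1))
\end{verbatim}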
\begin{table*}[h!]
\centering
\begin{tabular}{rcccc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
System & RNN & residual RNN & simulator & filtered RNN (ours) \\
\hline
(v) & 1.29 (0.63) & 0.417 (0.03) &0.418& \textbf{0.347} (0.041) \\
(vi) & 1.1 (1.26) & 3.60 (1.62) & 0.729 & \textbf{0.487} (0.381) \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\caption{Total RMSEs for the hybrid model with RNN (mean (std)) over 5 indep. runs.}
\label{t:MLP}
\end{table*}
\begin{table*}[h!]
\centering
\begin{tabular}{rcccc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
System & GRU & residual GRU & simulator & filtered GRU (ours) \\
\hline
(v) & 0.463 (0.305) & 0.476 (0.096) & 0.418 & \textbf{0.387} (0.026) \\
(vi) & 1.140 (0.258) & \textbf{0.681} (0.055) & 0.729 & 0.765 (0.008) \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\caption{Total RMSEs for the hybrid model with GRU (mean (std)) over 5 indep. runs.}
\label{t:GRU}
\end{table*}
\paragraph{Hybrid model}
We report the RMSEs for the hybrid setting with RNNs in Table \ref{t:MLP} and with GRUs in Table \ref{t:GRU}.
The results demonstrate that our method is beneficial for different types of models, here MLP-based RNNs and GRUs.
Again, the standard training with a single GRU or a single RNN shows some drift, causing poor long-term behavior.
The unstable long-term behavior is demonstrated particularly clearly by the RNN results shown in Figure \ref{fig:hybrid} (left and right).
While the residual RNN baseline does not suffer from the typical drift that is observed for the RNN baseline, it still shows instabilities in the long-term behavior.
In particular, the results for System (vi) in Figure \ref{fig:hybrid} (right) demonstrate that low-frequency errors occur for the residual model as well.
Our method, in contrast, eliminates these errors by design.
However, on System (vi), our filtered GRU is outperformed by the residual GRU since our predictions stay close to the simulator predictions.
We provide additional insights into our method by depicting the RNN and simulator predictions before combining them via the complementary filter for System (v) in Figure \ref{fig:hybrid} (middle).
Additional plots are provided in the appendix.
\section{Experiment setup} \label{section:experiment_setup}
In this section, we provide the experimental details of our methods.
In particular, we specify the hyperparameters and filter details.
The experiments were conducted on a GPU cluster.
\subsection{Purely learning-based method}
We supply algorithmic details for the purely learning-based model.
\paragraph{Filter design}
The filter coefficients are designed before training.
We use the tools for IIR filter design provided by \texttt{Scipy} \citep{2020SciPy-NMeth} and apply Butterworth filters with a specified cutoff frequency $\texttt{W\_n}$.
Precisely, we use the scipy function \texttt{scipy.signal.iirfilter}.
First, the frequency spectra are analyzed in order to find a suitable cutoff frequency.
The cutoff frequency is then used to obtain the complementary filter pair, where high-pass and low-pass filter share the cutoff frequency.
For each experiment, except the double mass-spring system (i), we consider the filter order $N=1$, which directly yields perfectly complementary filter pairs via the default parameters.
For System (i), we observed better reconstruction of the original training signal by choosing a higher filter order and a non-perfectly complementary filter pair.
The filters are applied by calling the function $\texttt{filtfilt}$ from $\texttt{torchaudio.functional.filtering}$ with the predefined parameters.
This function applies forward-backward filtering to the signals.
The exact filter parameters are shown in Table \ref{t:LB}.
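For concreteness, a minimal version of this filtering step could look as follows; here we use \texttt{scipy.signal.filtfilt} instead of the \texttt{torchaudio} call to keep the sketch self-contained, and the cutoff value is the one listed for system (ii) in Table \ref{t:LB}:
\begin{verbatim}
import numpy as np
from scipy.signal import iirfilter, filtfilt

fs, w = 10.0, 0.07   # sampling and cutoff frequency for system (ii)
b_lp, a = iirfilter(N=1, Wn=w, btype='lowpass', ftype='butter', fs=fs)
b_hp = a - b_lp      # perfectly complementary high-pass (filter order 1)

y = np.random.randn(600)       # stand-in for the training trajectory
y_low = filtfilt(b_lp, a, y)   # forward-backward filtering: zero phase delay
y_high = filtfilt(b_hp, a, y)
\end{verbatim}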
\paragraph{GRU training}
For all experiments, we construct a GRU via the pytorch function $\texttt{torch.nn.GRU}$ with default parameters.
The observation matrix $C_{\theta}$ is chosen as linear layer.
We set $\texttt{input\_size}=1$ and specify the $\texttt{hidden\_size}$ for each experiment.
For all experiments we choose Adam optimizer with learning rate $10^{-3}$.
In order to accelerate the training process and prevent the method from overfitting, we split the training trajectory into subtrajectories.
For all experiments we use batch sizes of $50$.
Downsampling with rate $k$ is obtained by considering every $k^{th}$ step of the signal.
Upsampling is obtained by calling the function \texttt{upsample(scale\_factor = k, mode = 'linear', align\_corners=False)} from \texttt{torch.nn.functional}.
This function provides upsampling without adding high-frequency artefacts.
The exact hyperparameters can be obtained from Table \ref{t:LB}.
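A compact sketch of the resampling step (illustrative; \texttt{interpolate} is the current PyTorch equivalent of the deprecated \texttt{upsample} call):
\begin{verbatim}
import torch

k = 2
y = torch.randn(1, 1, 250)   # (batch, channel, time), low-pass filtered
y_down = y[..., ::k]         # training signal: every k-th sample

# Linear upsampling back to the original rate for prediction.
y_up = torch.nn.functional.interpolate(
    y_down, scale_factor=k, mode='linear', align_corners=False)
\end{verbatim}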
\begin{table*}[ht]
\caption{Hyperparameters for learning-based method}
\centering
\label{t:LB}
\begin{tabular}{rcccc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
parameter & i) & ii) & iii) & iv) \\
\hline
\texttt{hidden\_size} (GRU 1) & 48 & 64 & 64 & 64 \\
\texttt{hidden\_size} (GRU 2) & 16 & 32 & 32 & 32\\
cutoff frequency w & 0.4 & 0.07 & 0.1 & 0.2 \\
filter order & 3 & 1 & 1 & 1 \\
sample frequency f & 10 & 10 & 10 & 10\\
subtrajectory length & 150 & 250 & 250 & 250 \\
warmup phase & 30 & 100 & 50 & 50 \\
training steps & 300 & 1000 & 501 & 1000 \\
sampling rate k &2 & 2 & 2 & 2 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\end{table*}
\subsection{Hybrid model}
We provide algorithmic details for the hybrid model.
\paragraph{Filter design: }
The filters are designed before training.
We use the tools for IIR filter design provided by Scipy \citep{2020SciPy-NMeth} and apply Butterworth filters.
First, the frequency spectra from training data and simulator are analyzed in order to find a suitable cutoff frequency.
Here we chose $P=1$ for the filter order.
We construct the coefficients $b_k$ and $a_k$ for a low-pass filter as described and compute the complementary high-pass filter coefficients $\tilde{b}_k =a_k-b_k$ and $\tilde{a}_k=a_k$.
The low filter order worked well in our experiments and showed good training results.
For all experiments, we initialize the first $P$ steps of the filtered signal with the first $P$ steps of the training signal.
\paragraph{MLP training: }
Our MLP dynamics $f_{\theta}$ (cf. Eq. \eqref{eq:RNN}) consists of an input layer, a hidden layer, and an output layer.
The observation matrix $C_{\theta}$ is chosen as $(1,0,\dots,0)$, such that it extracts the first component of the hidden state.
This can be done without loss of generality.
We apply $\texttt{tanh}$ activation functions.
Thus, we end up with an input layer $\texttt{tanh(Lin(input\_dim,hidden\_dim))}$, a hidden layer $\texttt{tanh(Lin(hidden\_dim,hidden\_dim))}$, and an output layer $\texttt{Lin(hidden\_dim,output\_dim)}$.
For all experiments, we choose the hidden dimension $\texttt{hidden\_dim}=500$.
For the MLP setup, we train a recognition MLP that takes the first $R$ steps of the trajectory as an input and provides the initial hidden state $h_R$ as an output. The recognition MLP consists of an input layer $\texttt{tanh(Lin(n,rec\_dim))}$, a hidden layer $\texttt{tanh(Lin(rec\_dim,rec\_dim))}$ and an output layer $\texttt{tanh(Lin(rec\_dim,input\_dim))}$.
For all experiments we choose the hidden dimension of the recognition model $\texttt{rec\_dim}=100$.
For all experiments, we choose Adam optimizer.
The exact hyperparameters can be obtained from Table \ref{t:MLP_hyper}.
For the Van-der-Pol oscillator (v), we choose an initial learning rate of $10^{-3}$.
For our method and the single MLP baseline, the learning rate is reduced to $10^{-4}$ after 20 steps and to $10^{-5}$ after 500 steps.
For the drill-string system (vi), we choose an initial learning rate of $10^{-3}$.
For our method and the basic hybrid model, the learning rate is reduced to $10^{-4}$ after 10 steps.
\begin{table*}[ht]
\caption{Hyperparameters for the hybrid model with MLP}
\centering
\label{t:MLP_hyper}
\begin{tabular}{rcc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
parameter & (v) & (vi) \\
\hline
cutoff frequency w & 0.25 & 0.1 \\
sample frequency f & 10 & 10 \\
subtrajectory length & 200 & 1500 \\
recognition steps $n$ & 10 & 50 \\
training steps & 2000 & 300\\
$\texttt{input\_dim}$ & 4 & 15 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\end{table*}
\paragraph{GRU training: }
For all experiments, we construct a GRU via the pytorch function $\texttt{torch.nn.GRU}$ with default parameters.
We set $\texttt{input\_size}=1$ and specify the $\texttt{hidden\_size}$ for each experiment.
The exact hyperparameters can be obtained from Table \ref{t:GRU_hyper}.
For all experiments, we choose the Adam optimizer with learning rate $10^{-3}$.
\begin{table*}[ht]
\caption{Hyperparameters for the hybrid model with GRU}
\centering
\label{t:GRU_hyper}
\begin{tabular}{rcc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
parameter & (v) & (vi) \\
\hline
\texttt{hidden\_size} GRU & 64 & 64 \\
cutoff frequency w & 0.25 & 0.1 \\
sample frequency f & 10 & 10 \\
subtrajectory length & 200 & 500 \\
warmup phase & 10 & 50 \\
training steps & 2000 & 500 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\end{table*}
\paragraph{Runtimes: }
We provide the runtimes (in seconds) for all experiments in Tables \ref{t:runtime_LB}, \ref{t:runtime_MLP_hybrid}, and \ref{t:runtime_GRU_hybrid}.
For our purely learning-based scheme the runtimes could be further improved by parallelizing the training of the two networks.
\begin{table*}[ht]
\centering
\caption{Runtimes for the purely learning-based scheme}
\label{t:runtime_LB}
\begin{tabular}{rccc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
task & GRU & split GRU (ours) & split GRU + HP (ours) \\
\hline
(i) & 535 & 233 & 234\\
(ii) & 1017 & 1436 & 1439\\
(iii) & 497 & 703 & 699\\
(iv) & 496 & 702 &705\\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\end{table*}
\begin{table*}[ht]
\centering
\caption{Runtimes for the hybrid model with MLP}
\label{t:runtime_MLP_hybrid}
\begin{tabular}{rcccc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
task & MLP & residual MLP & simulator & filtered MLP (ours) \\
\hline
(v) & 1414 & 963 & - & 1867 \\
(vi) & 2161 & 2913 & - & 4283 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\end{table*}
\begin{table*}[ht]
\centering
\caption{Runtimes for the hybrid model with GRU}
\label{t:runtime_GRU_hybrid}
\begin{tabular}{rcccc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
task & GRU & residual GRU & simulator & filtered GRU (ours) \\
\hline
(v) & 2071 & 2263 & - & 5105 \\
(vi) & 3831 & 3781 & - & 6797 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}\quad
\end{table*}
\section{Background} \label{section:filter_background}
In this section, we provide the derivation for the filtering theory used in Sec. 2 and Sec. 3.
In particular, we derive the recurrence equation from the corresponding transfer functions in the z-domain for the IIR filter and the complementary filter.
For a more detailed introduction to digital filter design, we refer the reader to \citet{oppenheim1999discrete}.
\subsection{Z-transform}
The z-transform \citep{ztransform} can be interpreted as discrete-time version of the Laplace transform.
It is designed in order to analyze and manipulate discrete signals in the frequency domain.
Let $(y_n)_{n=0}^N$ be a discrete-time signal. The z-transform $\mathcal{Z}(y): \mathbb{C} \to \mathbb{C}$ of $y$ is given by
\begin{equation}
\begin{aligned}
\mathcal{Z}(y)(z) = Y(z) = \sum_{n=0}^{\infty} y_n z^{-n},
\end{aligned}
\end{equation}
where the complex variable $z$ encodes the frequency components and their corresponding amplitudes. Similar to the Fourier and Laplace transforms, the z-transform has useful properties that are leveraged for filter design.
We specifically leverage the time delay property
\begin{equation}\label{eq:time_delay}
\mathcal{Z}(y_{n-k})=z^{-k}Y(z).
\end{equation}
Here, $y_{n-k}$ denotes the original signal $y$ shifted by $k$ steps.
\subsection{Filter design}
Theoretically, a filter $F:l^{\infty}\rightarrow l^{\infty}$ in the time domain is obtained by designing a transfer function $\mathcal{F}$ in the frequency domain.
In order to obtain the representation in the time domain, the following steps are performed:
\begin{itemize}
\item design a filter transfer function $\mathcal{F}$ in the z-domain according to the desired properties,
\item apply the z-transform to the signal, $Y = \mathcal{Z}(y)$,
\item multiply the transfer function with the z-transformed signal, $\tilde{Y} = \mathcal{F} Y$,
\item apply the inverse z-transform to obtain the filtered signal in the time domain, $\tilde{y} = \mathcal{Z}^{-1}(\tilde{Y})$.
\end{itemize}
\subsection{IIR-filter}
In this section, we derive the recurrence equation for the IIR filter $H$ with transfer function $\mathcal{H}$ given in the form
\begin{equation}\label{eq:coeff}
\mathcal{H}(z)=\frac{\sum_{k=0}^P b_k z^{-k}}{\sum_{k=0}^Q a_k z^{-k}}.
\end{equation}
Here, we will consider filters with $P=Q$. Clearly, such a formulation can be obtained without loss of generality by adding as many zero coefficients as necessary.
The coefficients in Eq. \eqref{eq:coeff} are obtained from the desired filter properties in the frequency spectrum.
There are different strategies for constructing the parameters with certain advantages and disadvantages.
By applying the above strategy to the signal $y$, a recurrence equation for the filtered signal $\tilde{y}$ can be obtained.
Multiplying the transfer function $\mathcal{H}(z)$ with the z-transformed signal $Y(z)$ results in the filtered signal in the frequency domain
\begin{equation}\label{eq:IIR_der}
\begin{aligned}
\tilde{Y}(z)=\frac{\sum_{k=0}^P b_k z^{-k}}{\sum_{k=0}^P a_k z^{-k}} Y(z).
\end{aligned}
\end{equation}
In order to derive the corresponding recurrence equation in the time-domain, Eq. \eqref{eq:IIR_der} is multiplied with the denominator resulting in
\begin{equation}\label{eq:mult}
\left(\sum_{k=0}^P a_k z^{-k}\right)\tilde{Y}(z)=\left(\sum_{k=0}^P b_k z^{-k}\right)Y(z).
\end{equation}
Transforming back to the time domain yields
\begin{equation}\label{eq:inv}
\mathcal{Z}^{-1}\left(\sum_{k=0}^P a_k z^{-k}\tilde{Y}(z)\right)=\mathcal{Z}^{-1}\left(\sum_{k=0}^P b_k z^{-k} Y(z)\right).
\end{equation}
Applying the time-delay rule \eqref{eq:time_delay} and the linearity of the z-transform yields
\begin{equation}\label{eq:time}
\sum_{k=0}^P a_k \tilde{y}_{n-k}=\sum_{k=0}^P b_k y_{n-k}.
\end{equation}
Solving for $\tilde y$ yields a recurrence equation for the signal
\begin{equation}\label{eq:rearange}
\tilde y_n=\frac{1}{a_0}\left(\sum_{k=0}^P b_k y_{n-k}-\sum_{k=1}^P a_k \tilde{y}_{n-k}\right),
\end{equation}
where $P$ describes the filter order.
Eq. \eqref{eq:rearange} can be applied for $n\geq P$. For $n < P$, an initialization technique has to be applied.
The filter can, for example, be initialized with zeros or the original signal.
\paragraph{Forward-backward filter: }
IIR filters suffer from a phase delay, yielding a slightly shifted signal.
This effect can be eliminated by applying the identical IIR filter twice, once in the forward and once in the backward direction; i.e., the final result is obtained by reversing the filtered signal $\tilde y$, filtering it again via Eq. \eqref{eq:rearange}, and reversing the result once more.
Technically, this corresponds to an IIR filter of higher order with a specific initialization strategy \citep{492552}.
\subsection{Complementary filter}
In this section, we derive the equation for the complementary filter.
Given are the two filters $H$ and $L$ with IIR transfer functions $\mathcal{H}$ and $\mathcal{L}$.
Consider a high-pass filter $\mathcal{H}$ with coefficients $a_k,b_k$ and a low-pass filter $\mathcal{L}$ with coefficients $a_k, \tilde{b}_k, k=1,\dots P$.
For notational simplicity, we choose the same denominator and the same order $P$ for both filters. This can be done without loss of generality.
Applying the transfer functions to the two transformed signals $Y^{\text{h}}$ and $Y^{\text{l}}$ yields
\begin{equation}
\begin{aligned}
\tilde{Y}(z) & = \mathcal{H}(z) Y^{\text{h}}(z) + \mathcal{L}(z)Y^{\text{l}}(z) \\
&= \frac{\sum_{k=0}^P b_k z^{-k}}{\sum_{k=0}^P a_k z^{-k}} Y^{\text{h}}(z)+ \frac{\sum_{k=0}^{P} \tilde{b}_k z^{-k}}{\sum_{k=0}^P a_k z^{-k}}Y^{\text{l}}(z).
\end{aligned}
\end{equation}
Multiplying with the denominator as in Eq. \eqref{eq:mult} yields
\begin{equation}
\begin{aligned}
\tilde{Y}(z) \sum_{k=0}^P a_k z^{-k} = Y^{\text{h}}(z) \sum_{k=0}^P b_k z^{-k} + Y^{\text{l}}(z) \sum_{k=0}^{P} \tilde{b}_k z^{-k}.
\end{aligned}
\end{equation}
Similar to Eq. \eqref{eq:inv}, we apply the time delay rule \eqref{eq:time_delay} and the inverse z-transform which yields
\begin{equation} \label{eq:IIR}
\sum_{k=0}^P a_k \tilde{y}_{n-k} = \left(\sum_{k=0}^P b_k y^{\text{h}}_{n-k}+\sum_{k=0}^{P} \tilde{b}_k y^{\text{l}}_{n-k}\right).
\end{equation}
Solving for $\tilde y_n$ yields
\begin{equation}\label{eq:rearangeComp}
\tilde y_n=\frac{1}{a_0}\left(\sum_{k=0}^P b_k y^{\text{h}}_{n-k}+\sum_{k=0}^{P} \tilde{b}_k y^{\text{l}}_{n-k}-\sum_{k=1}^P a_k \tilde{y}_{n-k}\right).
\end{equation}
Again, Eq. \eqref{eq:rearangeComp} can be applied for $n\geq P$. For $n < P$, an initialization technique has to be applied.
The filter can, for example, be initialized with zeros or one of the input signals.
In our case, the filter is initialized via the training signal.
\paragraph{Perfect complement} \label{section:perfect}
To construct a perfectly complementary filter pair $H$, $L$ for a given transfer function $\mathcal{H}$ with coefficients $a_k,b_k$, we can choose $\tilde{a}_k=a_k$ and $\tilde{b}_k=a_k-b_k$ as the coefficients of $\mathcal{L}$. In this case, it holds that $y=H(y)+L(y)$ since
\begin{equation}\label{eq:identity}
\begin{aligned}
\tilde{Y}(z)&= \frac{\sum_{k=0}^P b_k z^{-k}}{\sum_{k=0}^P a_k z^{-k}}Y(z)+\frac{\sum_{k=0}^P (a_k-b_k) z^{-k}}{\sum_{k=0}^P a_k z^{-k}}Y(z)=Y(z).
\end{aligned}
\end{equation}
\subsection{Nyquist frequency}
In the purely learning-based scenario, we train on the downsampled signal and apply an upsampling technique for predictions.
Low-pass filtering the training signal ensures that downsampling causes no loss of information.
Admissible sampling rates $f_{\textrm{sample}}$ can be obtained via the Nyquist-Shannon theorem (see \citet{Shannon1949}).
In general, the cutoff frequency $f_w$ has to satisfy
\begin{equation}\label{eq:nyquist}
f_w < f_{\textrm{nyquist}},
\end{equation}
with $f_{\textrm{nyquist}}=\frac{1}{2} f_{\textrm{sample}}$.
Thus, it has to be ensured that the sampling rate after downsampling fulfills Eq. \eqref{eq:nyquist}.
\subsection{Recurrent dynamics model learning}
We specify the learning details for the recurrent dynamics model learning in Sec. 4.1.
In particular, we present the transition functions for a GRU and an MLP and add details on obtaining the initial hidden state.
\paragraph{MLP training: }
An MLP with dynamics $f_{\theta}$ and linear observation model $C_{\theta}$ is trained via Euler steps on the hidden layers
\begin{equation}\label{eq:RNN}
\begin{aligned}
h_{n+1} & = h_n+\Delta_t f_{\theta}(h_n) \\
y_n & = C_{\theta} h_n,
\end{aligned}
\end{equation}
with initial state $h_0$ and step size $\Delta_t$.
For the MLP, we train a recognition model $r_{\theta}$ that is inspired by \citet{pmlr-v80-doerr18a}.
The recognition model consists of an additional MLP that estimates the latent state $h_R$ from the first $R$ noisy observations $\hat{y}_{0:R}$ via
\begin{equation}
h_R = r_{\theta}(\hat{y}_{0:R}).
\end{equation}
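A condensed sketch of this architecture is given below (the latent dimension, hidden dimension, and step size $\Delta_t$ are placeholders; the recognition MLP that produces $h_R$ from $\hat{y}_{0:R}$ is omitted for brevity):
\begin{verbatim}
import torch
import torch.nn as nn

class EulerMLP(nn.Module):
    """h_{n+1} = h_n + dt * f_theta(h_n),  y_n = C h_n with C = (1,0,...,0)."""
    def __init__(self, latent_dim=4, hidden_dim=500, dt=0.1):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, latent_dim))
        self.dt = dt

    def forward(self, h, steps):
        ys = []
        for _ in range(steps):
            ys.append(h[..., :1])        # observe the first latent component
            h = h + self.dt * self.f(h)  # explicit Euler step
        return torch.cat(ys, dim=-1)
\end{verbatim}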
\paragraph{GRU training: }
The GRU with transition dynamics $f_{\theta}$ is trained via
\begin{equation}\label{eq:GRU}
\begin{aligned}
h_{n+1} & = f_{\theta}(h_n,y_n) \\
y_n & = C_{\theta} h_n,
\end{aligned}
\end{equation}
The warmup phase over $R$ steps is performed by feeding back the observations $\hat{y}_n$ instead of the GRU outputs $y_n$ via $h_{n+1} = f_{\theta}(h_n,\hat{y}_n)$.
This yields an appropriate hidden state $h_R$.
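A minimal sketch of the warmup-then-rollout loop (hypothetical sizes; batch-first tensors):
\begin{verbatim}
import torch
import torch.nn as nn

gru = nn.GRU(input_size=1, hidden_size=64, batch_first=True)
C = nn.Linear(64, 1)   # linear observation model

def rollout(y_obs, R, N):
    """Warm up on R observed steps, then predict N - R steps autoregressively."""
    _, h = gru(y_obs[:, :R, :])   # feed back observations during warmup
    y = y_obs[:, R - 1:R, :]
    preds = []
    for _ in range(N - R):
        out, h = gru(y, h)        # feed back the model's own output afterwards
        y = C(out)
        preds.append(y)
    return torch.cat(preds, dim=1)
\end{verbatim}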
\section{Introduction} \label{section:intro}
Many physical processes $\left(x_n\right)_{n=0}^N$ with $x_n \in \mathbb{R}^{D_x}$ can be described via a discrete-time dynamical system
\begin{equation}\label{eq:dyn}
\begin{aligned}
x_{n+1}= f(x_n).
\end{aligned}
\end{equation}
Typically, it is not possible to measure the whole state space of the system \eqref{eq:dyn}; instead, a function of the states corrupted by noise, $\hat{y}_n$, can be measured, for example by sensors,
\begin{equation}\label{eq:obs}
\begin{aligned}
y_n &= g(x_n) = Cx_n, \\
\hat{y}_n &= y_n+\epsilon_n, \textrm{ with } \epsilon_n \sim \mathcal{N}(0,\sigma^2)
\end{aligned}
\end{equation}
and $C \in \mathbb{R}^{D_y \times D_x}$.
Our general interest is to make accurate predictions for the observable components $y_n$ in Eq. \eqref{eq:obs}.
One possible way to address this problem is training a recurrent model on the noisy measurements $\hat{y}_n$ in Eq. \eqref{eq:obs}.
Learning-based methods are often able to accurately reflect the system's behavior and therefore produce accurate short-term predictions.
However, the errors accumulate over time leading to deteriorated long-term behavior \citep{DBLP:conf/iclr/ZhouLXHH018}.
To obtain reliable prediction behavior on each time scale, we propose to decompose the problem into two components. In particular, we aim to combine two separate models, where one component reliably predicts the long-term behavior, while the other adds short-term details, thus combining the strengths of each component.
Interpreted in the frequency domain, one model tackles the low-frequency components while the other tackles the high-frequency parts.
Combining high and low-frequency information from different signals or models is well-known from control engineering or signal processing tasks.
One typical example is tilt estimation in robotics, where accelerometer and gyroscope data are often available simultaneously \citep{5509756, 9834094}.
On the one hand, the gyroscope provides position estimates that are precise in the short term, but due to integration at each time step, accumulating errors cause a drift in the long term.
On the other hand, the accelerometer-based position estimates are long-term stable, but considerably noisy and thus not reliable in the short term.
Interpreted in the frequency domain, the gyroscope is more reliable on high frequencies, whereas the accelerometer is more reliable on low frequencies.
Therefore, a high-pass filter is applied to the gyroscope measurements, whereas a low-pass filter is applied to the accelerometer measurements. Both filtered components are subsequently combined in a new complementary filtered signal that is able to approximate the actual position more accurately.
Here, we adopt the concept of complementary filter pairs for our task of fusing models with contrasting properties.
In general, a complementary filter pair consists of a high-pass filter $H$ and a low-pass filter $L$, where the filters map signals to signals.
Depending on the specific filter, certain frequencies are eliminated while others pass.
Intuitively, the joint information of both filters in a complementary filter pair covers the whole frequency domain.
Thus, the key concept that we leverage here is the decomposition of a signal $y=(y_n)_{n=0}^N$ into a high-pass filter component $H(y)$ and a low-pass filter component $L(y)$ via
\begin{equation}\label{eq:concept}
y=H(y)+L(y).
\end{equation}
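As a concrete illustration of this decomposition, any low-pass filter $L$ induces a complementary pair by setting $H(y)=y-L(y)$. A minimal Python sketch, where the two-tone test signal, sampling rate, and cutoff frequency are arbitrary choices for illustration:
\begin{verbatim}
import numpy as np
from scipy.signal import butter, lfilter

fs = 10.0                                   # assumed sampling rate [Hz]
t = np.arange(0.0, 100.0, 1.0 / fs)
y = np.cos(2*np.pi*0.115*t) + 0.5*np.cos(2*np.pi*0.57*t)

b, a = butter(1, 0.3, btype="low", fs=fs)   # low-pass L, 0.3 Hz cutoff
L_y = lfilter(b, a, y)                      # low-frequency component L(y)
H_y = y - L_y                               # complementary component H(y)
assert np.allclose(y, H_y + L_y)            # y = H(y) + L(y) by construction
\end{verbatim}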
Based on the decomposition, we propose to address $H(y)$ and $L(y)$ by different models that are reliable on their specific time scale.
\emph{In particular, we propose two methods, one being purely-learning based and one being a hybrid method that leverages an additional physics-based simulation.}
Both concepts are visualized in Figure \ref{fig:scheme}.
In the purely learning-based scenario, we train separate networks that represent $H(y)$ and $L(y)$ in Eq. \eqref{eq:concept}.
In order to obtain a low-frequency model that indeed provides accurate long-term predictions, we apply a downsampling technique to the training data, thus reducing the number of integration steps. During inference, the predictions are upsampled up to the original sampling rate.
Applying the low-pass filter allows lossless downsampling of the signal up to a ratio determined by the filter's cutoff frequency.
In the hybrid scenario, only a single model is trained.
Hybrid modeling addresses the problem of producing predictions by mixing different models that are either learning-based or obtained from first principles, e.g. physics \citep{yin2021augmenting,Suhartono_2017}.
Here, we consider the case where access to predictions $y^{\text s}$ for the system \eqref{eq:dyn} is provided by a physics-based simulator.
Additional insights, such as access to the simulator's latent space or differentiability are not given.
While physics-based approaches are typically robust and provide reliable long-term behavior, incomplete knowledge of the underlying physics leads to short-term errors in the model.
Hence, we consider the case where $L(y^{\text s}) \approx L(y)$ holds.
By training a model for $H(y)$, the decomposition \eqref{eq:concept} becomes a hybrid model that combines the strengths of both components.
The filter pair $(L,H)$ is integrated into the training process, ensuring that the long-term behavior is indeed solely addressed by the simulator.
In both scenarios, the learning-based and the hybrid, recurrent neural networks (RNNs) are trained on whole trajectories.
In summary, the main contributions of this paper are:
\begin{compactitem}
\item By leveraging complementary filters, we propose a new view on dynamics model learning;
\item we propose a purely learning-based and a hybrid method that decompose the learning problem into a long-term and a short-term component; and
\item we show that this decomposition allows for training models that provide accurate long and short-term predictions.
\end{compactitem}
\begin{figure*}[tb]
\includegraphics[width= \textwidth]{Plots/test_cmyk.jpg}
\caption{A high-level overview of our methods. Purely learning-based scheme (left): a training signal is filtered into complementary components. The low-pass filtered signal is downsampled. Two separate RNNs are trained on the decomposed signal. Hybrid model (right): the predictions of simulator and RNN are fed into the complementary filter. The combined signal is trained end-to-end against the noisy observations by minimizing the root mean-squared error (RMSE). This structure is also applied to obtain predictions from the model.}
\label{fig:scheme}
\end{figure*}
\section{Method} \label{section:method}
We present two methods that leverage the idea of complementary filters for dynamics model learning in order to produce accurate short- and long-term predictions.
Our first approach is applicable to general dynamics model learning, whereas our second approach is a hybrid modeling technique.
In the second case, access to trajectory data produced by a physics-based simulator is required.
The key ingredient of both models is a complementary filter pair $(H,L)$ with parameters $a_k,b_k,\tilde{a}_k$ and $\tilde b_k$ (cf. Sec. \ref{sec:complementary}).
While in the hybrid case reliable long-term predictions are already provided by the simulator, the long-term predictions have to be addressed by an additional model in the purely learning-based scenario.
\subsection{Recurrent dynamics model learning}
First, we give an overview of the recurrent dynamics model learning structure that serves as a backbone for our method.
Here, we consider a recurrent multilayer perceptron (MLP) and a gated recurrent unit (GRU) model \citep{cho-etal-2014-learning}.
However, the method is not restricted to that choice and could be combined with other recurrent architectures such as those of \citet{HochSchm97, pmlr-v80-doerr18a}.
Consider a trainable neural network transition function $f_{\theta}:\mathbb{R}^{D_h} \times \mathbb{R}^{D_y} \rightarrow \mathbb{R}^{D_h}$ and a linear observation model $C_{\theta} \in \mathbb{R}^{D_y \times D_h}$.
Here, $\theta$ defines the trainable parameters and $h$ the latent states with corresponding latent dimension $D_h$.
Predictions are computed via
\begin{equation}\label{eq:RNN}
\begin{aligned}
h_{n+1} & = f_{\theta}(h_n,y_n) \\
y_n & = C_{\theta}h_n,
\end{aligned}
\end{equation}
where the initial hidden state $h_0$ can be obtained from the past trajectory by training a recognition model similar to \citet{pmlr-v80-doerr18a} or by performing a warmup phase.
Details are provided in the appendix.
The mapping $F_{\theta}:\mathbb{R}^{D_y} \times \mathbb{R}^{D_h} \times \mathbb{N} \rightarrow \mathbb{R}^{D_y \times N}$ that computes an $N$-step rollout via Eq. \eqref{eq:RNN} reads
\begin{equation} \label{eq:trajectory}
F_{\theta}(y_0,h_0,N)=y_{0:N},
\end{equation}
where $y_{0:N} \in \mathbb{R}^{D_y \times N}$ defines a trajectory with $N$ steps.
\subsection{Purely learning-based model}
Next, we dive into the details of constructing complementary filter-based learning schemes and introduce our methods.
In the purely learning-based scenario, two different models are trained, wherein one model addresses the high-frequency parts and the other addresses the low-frequency parts (see Figure \ref{fig:scheme}, left).
To this end, the training signal $\hat{y}$ is decomposed into a high-frequency component $H(\hat{y})$ and a low-frequency component $L(\hat{y})$ via the complementary filter pair (cf. Sec. \ref{sec:complementary}). The models are trained separately on the decomposition.
In order to obtain a model that indeed provides stable long-term behavior, the low-frequency training data is downsampled.
During inference, the predicted signal is upsampled again.
Downsampling yields a model that performs fewer integration steps and thus accumulates less error.
As an additional advantage, backpropagation through fewer integration steps is computationally more efficient.
Applying the low-pass filter allows lossless downsampling up to a specific ratio that is determined by the Nyquist frequency.
Intuitively, only high-frequency information is removed that is addressed by the second network during training and inference.
Details are provided in the appendix.
Splitting the training signal and training the models separately ensures that one model indeed addresses the low-frequency part of the signal and thus, the long-term behavior.
End-to-end training, on the other hand, might yield deteriorated long-term behavior since it generally allows a single network to tackle both short- and long-term behavior.
\paragraph{Up and Downsampling: }
The downsampling operation $d_k:\mathbb{R}^{D_y \times N} \rightarrow \mathbb{R}^{D_y \times \lfloor N/k \rfloor}$
maps a signal to a lower resolution by considering every $k^{th}$ step of the signal via
\begin{equation} \label{eq:downsampling}
d_k(y_{0:N})=(y_0,y_k,\dots,y_{k \lfloor N/k \rfloor}).
\end{equation}
The reverse upsampling operation $u_k:\mathbb{R}^{D_y \times N} \rightarrow \mathbb{R}^{D_y \times kN}$ maps a signal to a higher resolution by filling in the missing data without adding high-frequency artifacts to the signal. Mathematically, this corresponds to an interpolation problem \citep{oppenheim1999discrete}. Here, we consider lossless downsampling, where tolerable downsampling ratios are determined by the cutoff frequency of the low-pass filter.
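A minimal sketch of both operations, using linear interpolation as one simple choice for $u_k$; band-limited sinc interpolation would avoid high-frequency artifacts entirely:
\begin{verbatim}
import numpy as np

def d_k(y, k):
    """Keep every k-th sample of a one-dimensional signal."""
    return y[::k]

def u_k(y, k):
    """Fill in the missing samples by linear interpolation,
    reconstructing up to the last retained sample."""
    n_low = np.arange(y.size) * k
    n_full = np.arange(n_low[-1] + 1)
    return np.interp(n_full, n_low, y)

y = np.cos(2 * np.pi * 0.05 * np.arange(200))   # slow test signal
y_rec = u_k(d_k(y, 4), 4)                       # close to lossless here
\end{verbatim}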
\paragraph{Training: }
Consider training data $\hat{y}_{0:N} \in \mathbb{R}^{D_y \times N}$ from which the first $R<N$ steps $\hat{y}_{0:R} \in \mathbb{R}^{D_y \times R}$ are used to obtain an appropriate initial hidden state. We consider trainable models $f_{\theta}^\text{h}, C_{\theta}^\text{h}, f_{\nu}^\text{l}, C_{\nu}^\text{l}$ with corresponding rollout mappings $F_{\theta}^\text{h}$ and $F_{\nu}^\text{l}$ (cf. Eq. \eqref{eq:trajectory}), an up/downsampling ratio $k$ and a complementary filter pair $L,H$ (cf. Sec. \ref{sec:complementary}). The weights $\theta$ and $\nu$ are trained by minimizing the root-mean-squared error (RMSE) $\Vert y-\hat{y} \Vert_2$ via
\begin{equation} \label{eq:rec_training}
\begin{aligned}
\hat{\theta} & = \arg\min_{\theta} \Vert H(\hat{y})_{R:N} - F^\text{h}_{\theta}(\hat{y}_R^\text{h},h_R^\text{h},N-R) \Vert_2\\
\hat{\nu} & = \arg\min_{\nu} \Vert d_k(L(\hat{y})_{R:N})-F^\text{l}_{\nu}(\hat{y}_R^\text{l},h_R^\text{l},\tilde{N})\Vert_2,
\end{aligned}
\end{equation}
with $\hat{y}_R^\text{h}=H(\hat{y})_R$, $\hat{y}_R^\text{l}=L(\hat{y})_R$, $N-R$ steps $H(\hat{y})_{R:N}$ and $L(\hat{y})_{R:N}$ from the filtered signals $H(\hat{y})$ and $L(\hat{y})$, and $\tilde{N}=\lfloor (N-R)/k \rfloor$.
The hidden states $h_R^\text{h}$ and $h_R^\text{l}$ are obtained from a warmup phase that we specify in the appendix.
\paragraph{Predictions: }
A prediction with $N^{\prime}-R$ steps $\tilde y_{R:N^{\prime}}$ is obtained by adding the high-frequency predictions and the upsampled low-frequency predictions
\begin{equation} \label{eq:rec_predictions}
\tilde{y}_{R:N^{\prime}}=F^\text{h}_{\theta}(\tilde{y}_R^\text{h},h_R^\text{h},N^{\prime}-R)+u_k(F_{\nu}^\text{l}(\tilde{y}_R^\text{l},h_R^\text{l},\tilde{N})),
\end{equation}
where $\tilde{y}_R^\text{h}=H(\tilde{y}_{0:R})$ and $\tilde{y}_R^\text{l}=L(\tilde{y}_{0:R})$ have to be provided and $\tilde{N}=\lfloor (N^{\prime}-R)/k \rfloor$. The hidden states $h_R^\text{h}$ and $h_R^\text{l}$ can, for example, be obtained from a short warmup phase.
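For concreteness, the following self-contained PyTorch sketch mimics the training objectives \eqref{eq:rec_training} and the prediction step \eqref{eq:rec_predictions} on a toy two-tone signal; the first-order IIR low-pass, the signal, and all hyperparameters are arbitrary stand-ins, and $u_k$ is realised as a crude zero-order hold rather than a band-limited interpolation:
\begin{verbatim}
import math
import torch

torch.manual_seed(0)
N, R, k, D_h = 400, 40, 4, 16
t = torch.arange(N, dtype=torch.float32) * 0.1
y_hat = (torch.cos(2 * math.pi * 0.115 * t)
         + 0.5 * torch.cos(2 * math.pi * 0.57 * t)
         + 0.05 * torch.randn(N))              # toy noisy training signal

def lowpass(y, alpha=0.15):                    # simple IIR low-pass as L;
    s, out = y[0], []                          # H(y) = y - L(y)
    for v in y:
        s = (1 - alpha) * s + alpha * v
        out.append(s)
    return torch.stack(out)

L_y = lowpass(y_hat)
H_y = y_hat - L_y
L_ds = L_y[::k]                                # d_k(L(y_hat))

class Model(torch.nn.Module):                  # recurrent model, D_y = 1
    def __init__(self):
        super().__init__()
        self.f = torch.nn.GRUCell(1, D_h)
        self.C = torch.nn.Linear(D_h, 1, bias=False)
    def forward(self, warm, steps):            # warmup, then rollout F
        h = torch.zeros(1, D_h)
        for v in warm:
            h = self.f(v.view(1, 1), h)
        out = []
        for _ in range(steps):
            y = self.C(h)                      # y_n = C h_n
            out.append(y.squeeze())
            h = self.f(y, h)                   # h_{n+1} = f(h_n, y_n)
        return torch.stack(out)

high, low = Model(), Model()
opt = torch.optim.Adam(
    list(high.parameters()) + list(low.parameters()), lr=1e-2)
for epoch in range(100):                       # the two separate losses
    opt.zero_grad()
    loss = (torch.norm(high(H_y[:R], N - R) - H_y[R:])
            + torch.norm(low(L_ds[:R // k], len(L_ds) - R // k)
                         - L_ds[R // k:]))
    loss.backward()
    opt.step()

with torch.no_grad():                          # combined prediction
    up = torch.repeat_interleave(
        low(L_ds[:R // k], len(L_ds) - R // k), k)
    pred = high(H_y[:R], N - R) + up[:N - R]
\end{verbatim}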
A slight modification of the method is obtained by wrapping an additional high-pass filter around the predictions via $H(F^\text{h}_{\theta}(\tilde{y}_R^\text{h},h_R^\text{h},N-R))$ in Eq. \eqref{eq:rec_training} during training and via $H(F^\text{h}_{\theta}(\tilde{y}_R^\text{h},h_R^\text{h},N^{\prime}-R))$ in Eq. \eqref{eq:rec_predictions} during predictions.
This adds an additional guarantee preventing the high-frequency model from producing low-frequency errors.
We provide a numerical comparison of both variants in our experiments.
\subsection{Hybrid modeling}
In the hybrid case, we assume access to a simulator that produces predictions $y^\text{s}_{0:N}$.
Thus, reliable long-term predictions are already available.
In this case, we can directly train a single recurrent model in an end-to-end fashion (see Figure \ref{fig:scheme} (right)).
In particular, the low-pass filter is applied to the simulator, whereas the high-pass filter is applied to the learning-based trajectory.
Training directly with the complementary filter ensures that each model indeed stays on its time scale.
By decoupling the propagation of latent states and the filtered simulator states, the method is technically applicable to a large class of simulators.
It is solely required that the simulator is able to produce time-series predictions of the system given initial conditions.
Differentiating through the simulator or any insight into the simulator's hidden states is not required.
\paragraph{Training and predictions: }
Consider training data $\hat{y}_{0:N} \in \mathbb{R}^{D_y \times N}$, a trainable model $f_{\theta}, C_{\theta}$ with corresponding rollout mapping $F_{\theta}$ (cf. Eq. \eqref{eq:trajectory}) and a complementary filter pair $(L,H)$. Again, the first $R$ steps $\hat y_{0:R}$ are used for providing the initial hidden state $h_R$. The weights $\theta$ are trained via
\begin{equation}\label{eq:loss}
\hat{\theta} = \arg \min_{\theta} \Vert H(y^{\text{r}}) + L(y^\text{s}_{R:N}) - \hat{y}_{R:N} \Vert_2,
\end{equation}
with $y^{\text{r}}=F_{\theta}(\hat{y}_R,h_R,N-R)$. The calculation of $H(y^{\text{r}}) + L(y^\text{s}_{R:N})$ can directly be obtained by Eq. \eqref{eq:IIR}.
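A minimal end-to-end sketch of the hybrid loss \eqref{eq:loss}, in PyTorch, with a toy signal, a stand-in simulator that only captures the low-frequency content, and a simple first-order IIR pair in place of the filters of Sec. \ref{sec:complementary}:
\begin{verbatim}
import math
import torch

torch.manual_seed(0)
N, R, D_h = 300, 30, 16
t = torch.arange(N, dtype=torch.float32) * 0.1
truth = (torch.cos(2 * math.pi * 0.115 * t)
         + 0.5 * torch.cos(2 * math.pi * 0.57 * t))
y_hat = truth + 0.05 * torch.randn(N)       # noisy observations
y_s = torch.cos(2 * math.pi * 0.115 * t)    # simulator: low-freq part only

def lowpass(y, alpha=0.15):                 # L; H(y) = y - L(y)
    s, out = y[0], []
    for v in y:
        s = (1 - alpha) * s + alpha * v
        out.append(s)
    return torch.stack(out)

f = torch.nn.GRUCell(1, D_h)                # transition f_theta
C = torch.nn.Linear(D_h, 1, bias=False)     # observation model C_theta
opt = torch.optim.Adam(list(f.parameters()) + list(C.parameters()),
                       lr=1e-2)

for epoch in range(100):
    opt.zero_grad()
    h = torch.zeros(1, D_h)
    for n in range(R):                      # warmup on measured data
        h = f(y_hat[n].view(1, 1), h)
    out = []
    for n in range(R, N):                   # free-running rollout y^r
        y = C(h)
        out.append(y.squeeze())
        h = f(y, h)
    y_r = torch.stack(out)
    combined = (y_r - lowpass(y_r)) + lowpass(y_s)[R:]  # H(y^r)+L(y^s)
    loss = torch.norm(combined - y_hat[R:])             # training loss
    loss.backward()
    opt.step()
\end{verbatim}
Note that the simulator trajectory enters the loss only through the fixed filter $L$, so no differentiation through the simulator itself is required.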
\subsection{Filter design}
We design filters $H$ and $L$ (cf. Eq. \eqref{eq:rec_training} and \eqref{eq:loss}) before training.
In the purely learning-based scenario, a broad range of cutoff frequencies is possible, which we demonstrate empirically in the appendix.
In the hybrid case, we aim to use as much correct long-term information as possible from the simulator without including short-term errors.
In general, suitable cutoff frequencies can often be derived from domain knowledge.
Here, we analyze the frequency spectra of ground truth and simulator in order to find a suitable cutoff frequency.
For a specific filter design, we test the plausibility of the complementary filter by applying the high-pass component to the measurements and the low-pass component to the simulator.
Calculating the RMSE between the combined signal and ground truth indicates whether the filters are appropriate.
For a more detailed introduction to general filter design, we refer the reader to \citet{oppenheim1999discrete}.
\section{Model analysis} \label{section:model}
In this section, we provide details on the generation of our data.
For the simulated data, we specify the chosen parameters.
Additionally, we provide Fourier spectra in order to give insight into the behavior of the systems and the choice of cutoff frequencies.
\begin{figure*}[tb]
\begin{subfigure}[htb]{0.32\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/mass_f.pdf}
\caption{Double-mass spring system (i)}\label{subfig:mass_f}
\end{subfigure}
\begin{subfigure}[htb]{0.32\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/VDP_freq.pdf}
\caption{Van der Pol system (v)}\label{subfig:VDP}
\end{subfigure}
\begin{subfigure}[htb]{0.32\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/friction_frequency.pdf}
\caption{Drill-string system (vi)}\label{subfig:friction}
\end{subfigure}
\caption{Fourier spectra for the double-mass spring system in Figure \ref{subfig:mass_f}, the Van-der-Pol system in Figure \ref{subfig:VDP}, and the drill-string system in Figure \ref{subfig:friction}.}
\label{fig:freq_1}
\end{figure*}
\begin{figure*}[tb]
\begin{subfigure}[htb]{0.32\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/friction_5_freq.pdf}
\caption{Double-torsion pendulum (ii)}\label{subfig:ii}
\end{subfigure}
\begin{subfigure}[htb]{0.32\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/friction_7_freq.pdf}
\caption{Double-torsion pendulum (iii)}\label{subfig:iii}
\end{subfigure}
\begin{subfigure}[htb]{0.32\textwidth}
\centering
\includegraphics[width= \textwidth]{Plots/friction_8_freq.pdf}
\caption{Double-torsion pendulum (iv)}\label{subfig:iv}
\end{subfigure}
\caption{Fourier spectra for the double-torsion pendulum with excitation (ii) in Figure \ref{subfig:ii}, excitation (iii) in Figure \ref{subfig:iii}, and excitation (iv) in Figure \ref{subfig:iv}.}
\label{fig:freq_2}
\end{figure*}
\subsection{Double-mass spring system} \label{section:mass_model}
The double mass-spring system consists of two superposed sine waves.
Here, we consider the following normalized system
\begin{equation}\label{eq:double-mass}
x(t)=1.28 \cos(2 \pi \cdot 0.115\, t-7.7)+0.677 \cos(2\pi \cdot 0.57\, t)-0.009.
\end{equation}
Training data is generated on an interval of 100 seconds with step size $dt=0.1$.
Further, observation noise with variance $\sigma^2 = 0.1$ is added.
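A short numpy sketch generating this data set, with step size, horizon, and noise level as stated above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dt, T, sigma2 = 0.1, 100.0, 0.1
t = np.arange(0.0, T, dt)
x = (1.28 * np.cos(2 * np.pi * 0.115 * t - 7.7)
     + 0.677 * np.cos(2 * np.pi * 0.57 * t) - 0.009)
y_hat = x + rng.normal(0.0, np.sqrt(sigma2), t.size)  # noisy observations
\end{verbatim}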
\subsection{Van-der-Pol oscillator} \label{section:VDP_model}
Ground-truth data are generated from the four-dimensional system
\begin{equation} \label{eq:VDP}
\begin{aligned}
\begin{pmatrix}
\dot{x} \\
\dot{y} \\
\dot{u} \\
\dot{v}
\end{pmatrix}=
\begin{pmatrix}
y \\
-x + a(1-x^2)y+b u \\
v \\
- \omega^2 u
\end{pmatrix}.
\end{aligned}
\end{equation}
We assume that the first dimension $x$ referring to the position of the oscillator is observed.
The simulator refers to a standard Van-der-Pol oscillator, ignoring the external force
\begin{equation} \label{eq:VDP_std}
\begin{aligned}
\begin{pmatrix}
\dot{x} \\
\dot{y} \\
\end{pmatrix}=
\begin{pmatrix}
y \\
-x + \tilde{a}(1-x^2)y \\
\end{pmatrix}.
\end{aligned}
\end{equation}
Here, we choose $a=5, b=80, \tilde{a}=3.81$.
Data is simulated with a step size $dt=0.05$.
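The system can be integrated with any standard ODE solver; a minimal scipy sketch, where the forcing frequency $\omega$, the initial conditions, and the horizon are not specified above and are chosen here purely for illustration:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a, b, omega = 5.0, 80.0, 1.0          # omega: illustrative choice only

def forced_vdp(t, s):                 # right-hand side of Eq. (eq:VDP)
    x, y, u, v = s
    return [y, -x + a * (1 - x**2) * y + b * u, v, -omega**2 * u]

dt, T = 0.05, 50.0
sol = solve_ivp(forced_vdp, (0.0, T), [1.0, 0.0, 1.0, 0.0],
                t_eval=np.arange(0.0, T, dt), rtol=1e-8, atol=1e-8)
x_obs = sol.y[0]                      # observed first dimension x
\end{verbatim}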
\subsection{Frequency domain}
In our experiments, a suitable cutoff frequency can be obtained by analyzing the frequency spectrum.
Here, we show the Fourier spectra of all systems and the chosen cutoff frequencies in order to gain insight into the system properties.
For the systems that are used to train a hybrid model, we additionally depict the Fourier spectra of the simulator.
Figure \ref{fig:freq_1} depicts the results for the double-mass spring system in Figure \ref{subfig:mass_f}, the Van-der-Pol oscillator in Figure \ref{subfig:VDP} and the drill-string system in Figure \ref{subfig:friction}. The results for the double-torsion pendulum with excitations (ii)-(iv) are shown in Figure \ref{fig:freq_2}.
\section{Related work} \label{section:rel}
In this section, we give an overview of related literature.
Several works point out parallels between classical signal-theoretic concepts and neural network architectures.
In particular, connections to finite-impulse response (FIR) and infinite-impulse response (IIR) filters have been drawn.
The relations between these filters and feedforward models have been investigated by \citet{6795541}.
Precisely, they construct different feedforward architectures by building synapses from different filters.
Depending on the specific type, locally recurrent but globally feedforward structures can be obtained.
These models are revisited in \citet{Campolucci} by introducing a novel backpropagation technique.
More recently, feedforward Sequential Memory Networks were introduced, which can be interpreted as FIR filters \citep{zhang2016feedforward}.
Relations between fully recurrent models and filters have been drawn as well.
The hidden structure of many recurrent networks can be identified with classical filters.
\citet{Kuznetsov2020DIFFERENTIABLEIF} point out the relation between Elman networks and filters and
introduce trainable IIR structures that are applied to sound signals in their experiments.
Precisely, an Elman network can be interpreted as a simple first-order IIR filter.
In \citet{oliva2017statistical}, long-term dependencies are modeled via a moving average in the hidden units.
Moving averages can again be interpreted as special FIR filters.
\citet{Stepleton2018LowpassRN} recover long-term dependencies via a hidden structure of memory pools that consist of first-order IIR filters.
However, none of these works leverages complementary filters in order to capture effects on multiple time scales.
Additionally, none of these approaches addresses hybrid dynamics models.
\citet{doi:10.1177/0142331218755234, CERTIC2011419, Milic} combine learning techniques and in particular gradient-descent with complementary filters.
However, they consider the automatic adaptation of the filter parameters.
In contrast, we leverage complementary filters for learning, in particular dynamics learning.
Filters manipulate signals in the frequency domain and thus address spectral properties.
In \citet{Kutz2016DynamicMD} and \citet{Lange2021FromFT}, a signal is identified via spectral methods and transformed into a linear model.
Koopman theory is then leveraged to lift the system to the nonlinear space again.
However, in our work, we use filters in order to separate the predictions on different time-horizons.
Thus, in contrast to these works, our methods can be combined with different (recurrent) architectures and therefore allow for computing predictions via state-of-the-art techniques.
Combining physics-based simulators with learning-based models is an emerging trend.
Hybrid models produce predictions by taking both models into account. Typically, the simulator is extended or parts of the simulator are replaced.
There is a vast literature that deals with hybrid models for dynamical systems or time-series data.
A traditional approach is learning the errors or residuals between simulator predictions and data \citep{Forssell97combiningsemi-physical, Suhartono_2017}.
Another common approach in hybrid modeling is extending a physics-based dynamics model with neural ODEs \citep{yin2021augmenting, qian2021integrating}.
However, in contrast to our approach, these hybrid architectures do not explicitly exploit characteristics of the simulator, in particular the long-term behavior.
\citet{10.1785/0120170293} construct a hybrid model for the prediction of seismic behavior.
Similar to our setting, they consider the case where a physics-based simulation provides reliable predictions for low frequencies, while a reliable model for high frequencies is lacking. However, the approach differs significantly from ours since a neural network is trained on a mapping from low to high frequencies. Furthermore, they do not consider dynamics models.
Therefore, it is unclear how to apply the approach to our problem setting.
\section{Introduction} \label{section:introduction}
\noindent Understanding the mass transport of colloidal and molecular species in non-equilibrium environments is crucial for various processes, ranging from active intracellular transport~\citep{Koslover:PhysBiol:2020} to the dispersion of nutrients in world oceans~\citep{Katija:ExpBiol:2012}. Apart from its practical importance, the transport properties of tracer particles in generic ``active baths'' have attracted much interest from a statistical physics perspective, where they can be viewed as a minimal example of particles driven by external, non-equilibrium noise~\citep{Volpe:PRE:2016,Park:SoftMatter:2020}. Beyond the level of tracer particles driven by generic non-equilibrium noise, the archetypical example of a tracer particle in an active bath is a collection of point-like tracers being advected by a set of microswimmers such as bacteria or algae~\citep{Lauga1}. When swimming through a viscous fluid, these swimmers create long-ranged flow fields that advect the tracers, leading to tracer dynamics that is ballistic at short times and diffusive over timescales longer than the autocorrelation time of the local flow field~\citep{Lin1}.

Realisations of this system have been extensively studied both experimentally, typically in suspensions of \emph{E. coli} bacteria~\citep{Wu_Libchaber:PRL:2000,Drescher1,Jepson1,Kim1,Koumakis1,Mino1,Mino2,Patteson1,Peng1,Semeraro1} or \emph{Chlamydomonas} algae~\citep{Leptos1,Ortlieb1,Yang1,Eremin:2021}, and theoretically, with microswimmers typically being modelled either as force dipoles acting on the surrounding fluid~\citep{Morozov1,Yeomans:JFM:2013,Pushkin1,Nordanger:PRFluids:2022}, as spherical ``squirmers'' with an imposed slip velocity along their body~\citep{Lin1,Thiffeault2,Thiffeault1}, or as needle-shaped ``slender swimmers'' with imposed stresses along their body lengths~\citep{Krishnamurthy1,Saintillan2}. While the details of these three microswimmer models differ, the results regarding enhanced tracer diffusion are largely generic and consistent with experimental results, which have shown the swimmer-induced, hydrodynamic diffusivity $D_A$ to scale linearly with microswimmer density $n$ in the dilute limit where swimmer-swimmer correlations can be neglected~\citep{Lin1,Thiffeault2,Mino1}.

In this limit, a fruitful way of calculating $D_A$ is to consider the net displacement due to binary swimmer-tracer scattering events~\citep{Pushkin1,Morozov1}; two examples of resulting (deterministic) tracer trajectories for scattering events are shown in Fig.~\ref{fig:trajectories}. For a non-tumbling swimmer, starting and ending at $x = \pm \infty$, the resulting loop is closed, leading to a vanishing tracer net displacement $\Delta$ (Fig.~\ref{fig:trajectories}a). For tumbling swimmers with a finite persistence length, the trajectory is however punctuated mid-way through the tracer loop, leading to significantly larger values of $\Delta$ (Fig.~\ref{fig:trajectories}b). The resulting value of $D_A$ due to a large set of such scattering events can then be obtained by explicitly summing over all possible sets of scattering parameters.
\begin{figure*}
\centering
\begin{minipage}[t]{0.4\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.3cm 0.3cm 0.3cm 0.3cm]{trajectory_a13.0_b2500.0_tmax50000.pdf}
\end{minipage}
\begin{minipage}[t]{0.4\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.3cm 0.3cm 0.3cm 0.3cm]{trajectory_a13.0_b25.0_tmax275.pdf}
\end{minipage}
\caption{\textbf{Tracer trajectories for infinite and finite swimmer paths.} Panel (a) shows a typical trajectory for a non-diffusing tracer advected by a non-tumbling, point-dipole swimmer following an effectively infinite, straight path, while (b) shows the corresponding trajectory terminated due to a tumbling event. Note that, per Eq.~\eqref{eq:DA_approx}, the effective tracer diffusion $D_A$ is independent of tumbling rate $\lambda$ for sufficiently high swimming speeds when averaged over all possible swimmer-tracer configurations even though the net displacement $\Delta$ is much larger for the tumbling swimmer. Tracer trajectories were obtained through direct numerical integration using a non-regularised dipolar flow field, as described by~\cite{Morozov1}.}
\label{fig:trajectories}
\end{figure*}
In spite of the dependence of $\Delta$ on the microswimmer tumbling rate $\lambda$ for single scattering events such as that in Fig.~\ref{fig:trajectories}, \cite{Pushkin1} showed that, in the limit of large swimming speeds $v_s$, $D_A$ is in fact \emph{independent} of $\lambda$ when summed over all possible swimmer trajectories. This result was later generalised by \cite{Morozov:PRX:2020} to arbitrary swimming speeds, leading to the following approximate expression for $D_A$:
\begin{equation}\label{eq:DA_approx}
D_A \approx \frac{7\kappa^2 n}{2048 \lambda \varepsilon + 336\pi v_s}.
\end{equation}
Here, $\kappa$ is the magnitude of the microswimmer dipole and $\varepsilon$ is a characteristic size of the microswimmer, which we take to be equal to the short-range regularisation length of the dipolar flow field. Rather than via the scattering approach outlined above, Eq.~\eqref{eq:DA_approx} was derived by formulating a kinetic theory for the spatiotemporal correlations of the disturbance velocity field ${\bm U}$ created by the swimmers. Due to the linearity of Stokes flow, we write ${\bm U}$ as the superposition of the individual swimmer flow fields ${\bm u}_s$:
\begin{equation}\label{eq:Pairwise_U}
{\bm U}({\bm r},t) = \sum_{i=1}^{N} {\bm u}_s({\bm r};{\bm r}_i,{\bm p}_i),
\end{equation}
where ${\bm r}_i$ and ${\bm p}_i$ are, respectively, the position and orientation of swimmer $i$. Knowing the statistical properties of ${\bm U}$, $D_A$ can readily be calculated \emph{via} the Green-Kubo relation
\begin{align}\label{eq:GreenKubo}
D_A = \frac{1}{3} \int_{0}^{\infty}\langle \dot{{\bm r}}_T(t) \cdot \dot{{\bm r}}_T(0)\rangle dt = \frac{1}{3} \int_{0}^{\infty}\langle {\bm U}({\bm r}_T,t) \cdot {\bm U}({\bm r}_T,0) \rangle dt \equiv \frac{1}{3} \int_{0}^{\infty}C_T(t) dt,
\end{align}
where, in the second equality, we have assumed point-like tracers advected by the disturbance flow, so that $\dot{{\bm r}}_T = {\bm U}({\bm r}_T)$, and the third equality defines the velocity autocorrelation function $C_T(t)$ in the co-moving tracer frame. While yielding identical results for dilute suspensions as the scattering approach discussed above, kinetic theories are however more readily extended to accommodate the effect of swimmer-swimmer correlations due to the mutual advection and reorientation of swimmers~\citep{Morozov:PRX:2020}. Importantly, these interactions break the symmetry between rear-actuated (pusher) swimmers, such as most bacteria, and front-actuated (puller) ones, such as \emph{Chlamydomonas}, leading to a super-linear scaling of $D_A$ with $n$ for pushers and a corresponding sub-linear scaling for pullers~\citep{Stenhammar1}.
Equation~\eqref{eq:DA_approx} shows two qualitatively different regimes for high and low $v_s$: For $v_s \rightarrow 0$ -- the so-called \emph{shaker} limit -- the dominant mechanism controlling the decorrelation of $C_T(t)$ is tumbling of the swimmer. For fast swimmers, with $v_s \gg \lambda \varepsilon$, the decorrelation of the fluid velocity is instead dominated by the swimmer self-propulsion and thus independent of $\lambda$, and Eq.~\eqref{eq:DA_approx} reduces to the expression derived by~\cite{Pushkin1}.

A third, somewhat less explored, mechanism affecting $C_T(t)$ is Brownian translational diffusion of the tracer: even though Brownian diffusion does not affect the statistics of the flow field ${\bm U}({\bm r},t)$ as measured in the lab frame, the positional noise of the tracer particle will cause it to cross the streamlines of the disturbance flow, thus perturbing the trajectory compared to the athermal case shown in Fig. \ref{fig:trajectories} and leading to a lower $D_A$. Since the Brownian diffusion constant $D_0$ depends inversely on the tracer radius per the Stokes-Einstein relation, the magnitude of this effect is expected to be significant primarily for small tracer particles, and it has been hypothesised to explain the non-monotonic size dependence of enhanced tracer diffusion seen in experiments with colloids in \emph{E. coli} suspensions, where \cite{Patteson1} observed a maximum in $D_A$ for a tracer radius of approximately 5 $\mu$m. The effect of Brownian motion on enhanced tracer diffusion was furthermore theoretically analysed by \cite{Kasyap2014} for a model of slender-body swimmers, showing that $D_A$ is a non-monotonic function of $D_0$, with $D_A$ first showing a small increase for intermediate $D_0$, before falling below the athermal value as $D_0$ grows larger. In a more recent study of finite-size spherical tracers in microswimmer suspensions, \cite{Dyer:PoF:2021} numerically analysed the combined effect of thermal fluctuations and near-field flows on the size-dependent tracer dynamics, finding a similar non-monotonic behaviour as observed experimentally.

In this paper, we will revisit the problem of the interplay between Brownian and hydrodynamic diffusion for the simple case of point-like tracers immersed in a dilute suspension of microswimmers described via a regularised dipolar flow field. Using kinetic theory and large-scale lattice Boltzmann simulations of \emph{E. coli}-like suspensions, we show that the effect of Brownian diffusion on active diffusion is only practically relevant whenever $v_s < \lambda \varepsilon$, which corresponds to extremely slow (or frequently tumbling) swimmers. For biologically relevant values of $v_s$ and $\lambda$, swimming is instead the dominant decorrelation mechanism, so that $D_A$ becomes independent of both $D_0$ and $\lambda$. In contrast to previous studies, our results thus indicate that the effect of Brownian motion on the enhanced diffusion is in fact negligible for most microswimmer realisations, and thus that the experimentally observed non-monotonic dependence of $D_A$ on tracer size reported by~\cite{Patteson1} has other explanations.
\section{Model and Method}\label{section:method}
\noindent We consider a collection of $N$ non-interacting microswimmers at number density $n = N/V$ moving through a three-dimensional viscous fluid of viscosity $\mu$. Each microswimmer is composed of two equal and opposite point forces of magnitude $F$ separated by a length $\ell$ and swims with a constant speed $v_s$. The resulting reduced hydrodynamic dipole strength is $\kappa = F\ell/\mu$. The swimming direction ${\bm p}_i$ furthermore relaxes through Poisson-distributed random tumbles occurring with average frequency $\lambda$.
The position ${\bm r}_T$ of a point-like tracer obeys the equation of motion
\begin{equation}\label{eq:rdot_tr}
\dot{{\bm r}}_T = {\bm U}({\bm r}_T) + \sqrt{2D_0}\boldsymbol{\eta},
\end{equation}
where $\boldsymbol{\eta}$ is a unit-variance white noise, $\delta$-correlated in space and time, and $D_0$ is the Brownian diffusion constant. Thus, the effect of Brownian motion is fully contained in the tracer dynamics, while we assume the effects of thermal fluctuations on the fluid and on the pairwise swimmer-tracer dynamics to be subdominant. The fluid disturbance velocity ${\bm U}({\bm r}_T)$ due to the presence of all swimmers is given by Eq.~\eqref{eq:Pairwise_U}, and is numerically solved for using an efficient point-force implementation of the lattice Boltzmann (LB) method described previously~\citep{Bardfalvy:PRL:2020,Nash1}, using a system with periodic boundaries and a size of $100^3$ lattice units. All results are presented in units set by the LB lattice spacing $\Delta l$ and time step $\Delta t$. In these units, the microswimmer density was kept constant at $n = 0.01$, unless otherwise stated. To obtain an analytical expression for $D_A$, in Section~\ref{sec:Theory} we furthermore extend the kinetic theory developed previously by~\cite{Morozov:PRX:2020} to the case of tracers with Brownian diffusion.
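To make the setup concrete, the following Python sketch integrates Eq.~\eqref{eq:rdot_tr} with a simple Euler-Maruyama scheme, using the regularised dipolar flow field of Eq.~\eqref{eq:u_s} below as the single-swimmer contribution to ${\bm U}$; all parameter values are illustrative rather than those of our LB simulations:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def u_s(r, r_i, p_i, kappa=1.0, eps=2.0):
    # Regularised dipolar flow field, cf. Eq. (eq:u_s)
    rp = r - r_i
    r2 = rp @ rp + eps**2
    pr = p_i @ rp
    return kappa / (8 * np.pi) * (
        3 * (pr**2 * rp + eps**2 * pr * p_i) / r2**2.5
        - rp / r2**1.5)

def em_step(r_T, swimmers, D0, dt):
    # One Euler-Maruyama step of dr_T = U(r_T) dt + sqrt(2 D0) dW
    U = sum(u_s(r_T, ri, pi) for ri, pi in swimmers)
    return r_T + U * dt + np.sqrt(2 * D0 * dt) * rng.standard_normal(3)

# Toy scattering event: one swimmer moving past a diffusing tracer.
v_s, dt, D0 = 0.05, 0.1, 1e-4
r_T = np.array([0.0, 2.0, 0.0])
r_i = np.array([-20.0, 0.0, 0.0])
p_i = np.array([1.0, 0.0, 0.0])
for _ in range(8000):
    r_T = em_step(r_T, [(r_i, p_i)], D0, dt)
    r_i = r_i + v_s * p_i * dt           # swimmer self-propulsion
\end{verbatim}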
Inserting Eq.~\eqref{eq:rdot_tr} into the Green-Kubo relation~\eqref{eq:GreenKubo} yields
\begin{equation}
D_{\mathrm{Tot}} = D_0 + \frac{1}{3} \int_{0}^{\infty} C_T(t) dt = D_0 + D_A,
\end{equation}
where $C_T$ was defined in Eq.~\eqref{eq:GreenKubo}. Thus, to obtain $D_A$, we numerically evaluate the time correlation of the disturbance velocity measured in the co-moving tracer frame. Since the tracer position ${\bm r}_T$ depends on $D_0$, $C_T(t)$, and thus $D_A$, will depend implicitly on $D_0$.
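In practice, $C_T(t)$ is estimated from the sampled velocity time series along each tracer trajectory and integrated numerically; a minimal estimator sketch, using rectangle-rule integration and truncating the lag at half the series length, could read:
\begin{verbatim}
import numpy as np

def active_diffusivity(U_T, dt):
    """D_A = (1/3) * int C_T(t) dt from a time series U_T of shape
    (steps, 3) holding the fluid velocity at the tracer position."""
    steps = U_T.shape[0]
    C_T = np.array([np.mean(np.sum(U_T[:steps - s] * U_T[s:], axis=1))
                    for s in range(steps // 2)])
    return np.sum(C_T) * dt / 3.0
\end{verbatim}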
To characterise the system, we will use three dimensionless quantities. Firstly, we define the P\'eclet number, which measures the relative importance of active and thermal forces, as
\begin{equation}\label{eq:Pe_Def}
\mathrm{Pe} \equiv \frac{D_A(D_0=0)}{D_0},
\end{equation}
where $D_A(D_0=0)$ is the active diffusivity of a tracer immersed in an equivalent microswimmer suspension but in the absence of Brownian tracer motion. It should be noted that our definition of Pe is qualitatively different from that of \cite{Kasyap2014}, who, instead of $D_A(D_0=0)$, use the swimming speed $v_s$ to characterise the active forces. We however argue that the activity experienced by the tracers depends on the magnitude of the velocity fields generated by the swimmers, and is thus dependent on $\kappa$ and encoded in $D_A(D_0 = 0)$, while $v_s$ is instead a measure of the swimmers' self-propulsion. For experimental realisations of microswimmers, $\kappa$ and $v_s$ are directly proportional to each other; however, the specific relation between them will nevertheless be specific to each type (or species) of swimmer, and decoupling them conveniently enables us to study separately the effects of self-propulsion and fluid forcing, as we demonstrate further below.
Secondly, we measure the change in active diffusion due to Brownian motion through the quantity
\begin{equation}
\xi \equiv \frac{D_A(D_0)}{D_A(D_0 = 0)}.
\end{equation}
In the limit $\mathrm{Pe} \rightarrow \infty$, we thus expect that $\xi \rightarrow 1$. Finally, in accordance with \cite{Morozov:PRX:2020}, we account for the effect of microswimmer self-propulsion using the reduced swimmer persistence length $L$, defined by
\begin{equation}\label{eq:L_def}
L \equiv \frac{v_s}{\varepsilon \lambda}.
\end{equation}
\section{Kinetic theory}\label{sec:Theory}
\noindent In this Section, we will outline the main steps in the derivation of $D_A$ for a suspension of Brownian tracer particles immersed in a dilute microswimmer suspension, whose dynamics are governed by Eq.~\eqref{eq:rdot_tr}. Just as in our previous works~\citep{Morozov:PRX:2020,Stenhammar1}, we describe the flow field measured at ${\bm r}$ due to a swimmer with position ${\bm r}_i$ and orientation ${\bm p}_i$ by a regularised dipolar flow field ${\bm u}_s({\bm r})$:
\begin{equation}\label{eq:u_s}
{\bm u}_s({\bm r};{\bm r}_i,{\bm p}_i) = \frac{\kappa}{8\pi} \left[ 3\frac{({\bm p}_i \cdot {\bm r}')^2 {\bm r}' + \varepsilon^2 ({\bm p}_i \cdot {\bm r}'){\bm p}_i }{(r'^2 + \varepsilon^2)^{5/2}} - \frac{{\bm r}'}{(r'^2 + \varepsilon^2)^{3/2}} \right],
\end{equation}
where ${\bm r}' = {\bm r}-{\bm r}_i$, $r' = |{\bm r}'|$, and $\varepsilon$ is the regularisation length. Our starting point is the derivation of \cite{Morozov:PRX:2020}, where we formulated and solved a kinetic theory describing the fluctuations of the velocity field ${\bm U}({\bm r},t)$ due to a superposition of single-swimmer flow fields. In the limit of non-interacting swimmers, which is the case that we consider here, the temporal correlations of the steady-state velocity field ${\bm U}$ measured in the lab frame, $C_U(t) \equiv \langle {\bm U}({\bm r},t) \cdot {\bm U}({\bm r},0) \rangle$, are given by
\begin{equation}\label{eq:CU_nonBrownian}
C_U(t) = \frac{\kappa^2 n}{15 \pi^2 \varepsilon} \int_0^{\infty} A^2(\zeta) e^{-\tau} \mathcal{F}(L\zeta\tau) d\zeta,
\end{equation}
where
\begin{equation}
A(x) = \frac{1}{2} x^2 K_2(x),
\end{equation}
with $K_2$ being the modified Bessel function of the second kind, and
\begin{equation}
\mathcal{F}(x) = 15\frac{(5x^2-12)\sin x - x(x^2-12)\cos x}{x^5},
\end{equation}
defined such that $\mathcal{F}(0) = 1$. We furthermore used the dimensionless variables $L = v_s/(\lambda \varepsilon)$, $\tau = t\lambda$, and $\zeta = k\varepsilon$, where $k = |{\bm k}|$ is the wavevector magnitude. Equation~\eqref{eq:CU_nonBrownian} can equivalently be expressed in closed form in terms of elliptic integrals, as given by Eq.~(72) of \cite{Morozov:PRX:2020}. Equation~\eqref{eq:CU_nonBrownian} contains decorrelation of the flow field due to two separate mechanisms: exponential decay of $C_U(t)$ due to tumbling, and a more complex, oscillatory behaviour due to swimming, encoded in the function $\mathcal{F}$.
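Equation~\eqref{eq:CU_nonBrownian} is straightforward to evaluate numerically; a sketch using standard quadrature, with placeholder parameter values:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def A(x):
    return 0.5 * x**2 * kv(2, x)        # A(zeta) with the Bessel K_2

def Fcal(x):
    if abs(x) < 1e-4:
        return 1.0                      # removable singularity, F(0) = 1
    return 15 * ((5*x**2 - 12) * np.sin(x)
                 - x * (x**2 - 12) * np.cos(x)) / x**5

def C_U(t, kappa=1.0, n=0.01, lam=1e-3, eps=2.0, v_s=1e-3):
    L, tau = v_s / (lam * eps), t * lam
    integral, _ = quad(lambda z: A(z)**2 * Fcal(L * z * tau),
                       0.0, np.inf, limit=200)
    return kappa**2 * n / (15 * np.pi**2 * eps) * np.exp(-tau) * integral
\end{verbatim}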
To obtain the hydrodynamic diffusivity of a passive tracer, \cite{Morozov:PRX:2020} used a \emph{stationary tracer} approximation, implying that the tracer advection by the swimmer flow field is negligible compared to the self-propulsion of the swimmer. This implies that ${\bm r}_T$ remains effectively constant over the time it takes for ${\bm U}$ to relax, such that $C_T(t) = \langle {\bm U}({\bm r}_T[t],t) \cdot {\bm U}({\bm r}_T[t=0],0) \rangle \approx \langle {\bm U}({\bm r}_T[t=0],t) \cdot {\bm U}({\bm r}_T[t=0],0) \rangle = C_U(t)$,
where, in the last equality, we have made the additional assumption that the tracers are isotropically distributed in space so that they sample an unweighted spatial average of the flow field. Thus, if we can replace the correlation function $C_T$ in the co-moving tracer frame with that in the stationary lab frame, $C_U$, we can insert Eq.~\eqref{eq:CU_nonBrownian} into the Green-Kubo relation~\eqref{eq:GreenKubo} and integrate over time to yield the following expression for $D_A$, identical to Eq. (85) of \cite{Morozov:PRX:2020}:
\begin{equation}\label{eq:DA_nonBrownian}
D_A = \frac{\kappa^2 n}{45 \pi^2 \lambda \varepsilon} \int_0^{\infty} A^2(\zeta) \mathcal{G} ( L\zeta ) d\zeta,
\end{equation}
where
\begin{equation}
\mathcal{G}(x) = \frac{5}{2} \frac{3x + 2x^3 -3(1+x^2)\arctan x}{x^5},
\end{equation}
defined such that $\mathcal{G}(0) = 1$. By matching the asymptotic behaviours for $L\rightarrow 0$ and $L \rightarrow \infty$, Eq.~\eqref{eq:DA_nonBrownian} can furthermore be approximated by the simple expression given in Eq.~\eqref{eq:DA_approx}.
The generalisation of Eq.~\eqref{eq:DA_nonBrownian} to the case of Brownian tracers might seem straightforward, but unfortunately is not: Since the approximation $C_T(t) = C_U(t)$ amounts to the tracer remaining effectively stationary during a swimmer-tracer scattering event, this approximation will, by construction, not capture any effects on $D_A$ coming from Brownian diffusion across streamlines. This can easily be realised by noticing that $C_U(t)$ is solely a property of the swimmer suspension, and will be strictly unaffected by the tracer dynamics; thus, $D_A$ in Eq.~\eqref{eq:DA_nonBrownian} remains unaffected by the inclusion of tracer diffusion. Instead of going beyond the stationary tracer approximation, we circumvent this problem by noticing that, in a suspension of non-interacting microswimmers, the dynamics of a Brownian tracer will be \emph{statistically identical} to that of a non-Brownian tracer in a suspension of Brownian \emph{swimmers} with the same translational diffusivity $D_0$. This is because the single-swimmer flow field in Eq.~\eqref{eq:u_s} solely depends on the separation vector ${\bm r}_T - {\bm r}_i$, implying that the flow field experienced by a diffusing tracer (\emph{i.e.}, noise acting on ${\bm r}_T$) is identical to that experienced by a non-Brownian tracer sampling the flow field from a swimmer with the same noise instead applied to ${\bm r}_i$. In a non-interacting microswimmer suspension this equivalence is exact, and we verify it numerically in Fig.~\ref{fig:xi_Pe}. It however breaks down as soon as swimmer-swimmer correlations become significant, since swimmer diffusion will then affect the magnitude of such correlations, which tracer diffusion will not.
Thus, as outlined in Appendix \ref{App:CT}, we instead calculate $C_U(t)$ for the case of a suspension of diffusing swimmers, described by the dynamics
\begin{equation}
\dot{{\bm r}}_i = v_s {\bm p}_i + \sqrt{2D_0}\boldsymbol{\eta},
\label{eq:eom}
\end{equation}
yielding the following generalisation of Eq.~\eqref{eq:CU_nonBrownian}:
\begin{equation}\label{eq:CU}
C_U(t;D_0) = \frac{\kappa^2 n}{15 \pi^2 \varepsilon} \int_0^{\infty} A^2(\zeta) e^{-(1+\tilde{D}\zeta^2) \tau} \mathcal{F}(L\zeta\tau) d\zeta,
\end{equation}
where we have additionally defined the non-dimensional diffusivity $\tilde{D} = D_0/(\lambda \varepsilon^2)$. Since the effect of Brownian diffusion is now fully incorporated into the properties of $C_U$, we again use the stationary-tracer approximation and insert this expression into the Green-Kubo relation~\eqref{eq:GreenKubo}, leading to the following expression for $D_A$ in the presence of Brownian diffusion:
\begin{equation}\label{eq:DA_exact}
D_A(D_0) = \frac{\kappa^2 n}{45 \pi^2 \lambda \varepsilon} \int_0^{\infty} \frac{A^2(\zeta)}{1 + \tilde{D}\zeta^2} \mathcal{G}\left( \frac{L\zeta}{1 + \tilde{D}\zeta^2} \right) d\zeta.
\end{equation}
In Section~\ref{sec:Results}, we numerically evaluate Eqs.~\eqref{eq:CU} and~\eqref{eq:DA_exact} and compare the results with direct numerical simulations of microswimmer suspensions.
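As a concrete illustration, Eq.~\eqref{eq:DA_exact} reduces to a one-dimensional quadrature; a minimal sketch with placeholder parameters, where setting $D_0 = 0$ recovers Eq.~\eqref{eq:DA_nonBrownian}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def A(x):
    return 0.5 * x**2 * kv(2, x)

def G(x):
    if x < 1e-4:
        return 1.0                      # removable singularity, G(0) = 1
    return 2.5 * (3*x + 2*x**3
                  - 3 * (1 + x**2) * np.arctan(x)) / x**5

def D_A(D0, kappa=1.0, n=0.01, lam=1e-3, eps=2.0, v_s=1e-3):
    L, Dt = v_s / (lam * eps), D0 / (lam * eps**2)
    f = lambda z: A(z)**2 / (1 + Dt * z**2) * G(L * z / (1 + Dt * z**2))
    integral, _ = quad(f, 0.0, np.inf)
    return kappa**2 * n / (45 * np.pi**2 * lam * eps) * integral

xi = D_A(1e-2) / D_A(0.0)               # suppression factor xi
\end{verbatim}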
\section{Results and Discussion}\label{sec:Results}
\begin{figure*}
\centering
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.0cm -0.1cm 0.0cm 0cm]{D_lambda.pdf}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.1cm 0cm -0.05cm 0cm]{D_vs.pdf}
\end{minipage}
\caption{\textbf{Effective tracer diffusion in the absence of Brownian motion}. Panel (a) shows the $\lambda^{-1}$-dependence of $D_A(D_0=0)$ for shakers with $v_s = 0$, and panel (b) its $v_s$-dependence at constant $\lambda = 10^{-4}$. Symbols denote simulation results and solid lines show results from Eq.~\eqref{eq:DA_nonBrownian} using $\varepsilon = 2.0$. All results are presented in LB units set by the lattice spacing $\Delta l$ and timestep $\Delta t$. Error bars represent one standard deviation as obtained from averaging over four separate runs.}
\label{fig:D_lambda_vs}
\end{figure*}
In Fig.~\ref{fig:D_lambda_vs} we begin by verifying Eq.~\eqref{eq:DA_nonBrownian} for the hydrodynamic diffusion coefficient $D_A$ in the limit $D_0 = 0$. In Fig.~\ref{fig:D_lambda_vs}a, we demonstrate the $\lambda^{-1}$ dependence of $D_A$ in the shaker limit $v_s = 0$, while Fig.~\ref{fig:D_lambda_vs}b shows its more complex dependence on $v_s$ for constant $\lambda$. Apart from numerically verifying the kinetic theory expression~\eqref{eq:DA_nonBrownian}, these results illustrate how $D_A$ decreases abruptly due to the temporal decorrelation of the flow field ${\bm U}({\bm r}_T,t)$ induced respectively by tumbling and swimmer self-propulsion. The slight deviation between the theoretical curve and simulation results at small $v_s$ in Fig.~\ref{fig:D_lambda_vs}b is likely due to the specific form for the short-range regularisation, which becomes important as $v_s \rightarrow 0$. In our derivation of Eq.~\eqref{eq:DA_nonBrownian} we use the regularised flow field~\eqref{eq:u_s}, based on the regularisation first introduced by~\cite{Cortez2005}. In the LB simulations we instead use a numerical interpolation scheme based on a regularisation of the $\delta$ function~\citep{Peskin1} acting separately on the two point forces that make up each microswimmer. Unlike the expression in~\eqref{eq:u_s}, this numerical regularisation does not allow a direct mapping (or adjustment) of the regularisation length $\varepsilon$. We thus do not expect perfect agreement between kinetic theory and simulation in the low-$v_s$ regime where the short-range regularisation becomes important, and therefore treat $\varepsilon$ as a fitting parameter when comparing data from LB simulations with kinetic theory predictions. However, we find that the fitted value of $\varepsilon$ only varies slightly ($\varepsilon \in [1.9 \Delta l, 2.5\Delta l]$) for the values of $v_s$ used throughout this work, in good accordance with the fact that the regularised $\delta$ function is interpolated over a support of $2 \Delta l$ in each Cartesian direction; for a more in-depth discussion of the effect of the interpolation scheme on the tracer dynamics, see~\cite{deGraaf1}.
\begin{figure}
\centering
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.0cm -0.1cm 0.0cm 0cm]{D_D0__n0.01.pdf}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.0cm -0.1cm 0.0cm 0cm]{xi_Pe__n0.01.pdf}
\end{minipage}
\caption{\textbf{Brownian motion suppresses active diffusion for slow swimming speeds.} Panel (a) shows values of $D_A$ measured from LB simulations (diamonds) and calculated from Eq.~\eqref{eq:DA_exact} (solid lines), both expressed in LB units. Panel (b) shows the same data but expressed in the dimensionless quantities $\xi$ and Pe. For very slow swimmers with $L \lesssim 1$, $D_A$ is reduced compared to the non-Brownian value ($\xi = 1$) whenever $\mathrm{Pe} < 1$, while for faster swimmers, significantly lower values of Pe are necessary to affect $D_A$. The circles for $L = 0$ and 2.4 correspond to the hydrodynamic diffusion of non-Brownian tracers measured in a suspension of Brownian \emph{swimmers} of the same $D_0$, verifying the statistical equivalence between tracer and swimmer diffusion in the non-interacting limit. Error bars represent one standard deviation as obtained from averaging over four separate runs.
}
\label{fig:xi_Pe}
\end{figure}
In Fig.~\ref{fig:xi_Pe}, we study the additional effect of varying the Brownian diffusion coefficient $D_0$, as encoded in Eq.~\eqref{eq:DA_exact}. From the data in Fig.~\ref{fig:xi_Pe}a, it is clear that, for large enough $D_0$, the active diffusivity $D_A$ decreases compared to its non-Brownian value. To enable an easier analysis of the effect of varying swimming speed, in panel (b) we present the same data instead plotted as a function of the reduced variables $\xi$ and Pe. For shakers with $L = 0$ (blue curve in Fig.~\ref{fig:xi_Pe}b), $D_A$ is reduced compared to its non-Brownian value ($\xi < 1$) as soon as $\mathrm{Pe} < 1$, reaching a value as low as $\xi = 0.2$ for $\mathrm{Pe} \approx 10^{-3}$. For finite values of $v_s$, this effect on $D_A$ however occurs for gradually lower values of Pe; for the fastest swimmers considered here, with $L = 4.0$, no significant reduction of $D_A$ is observed even for Pe as low as $10^{-3}$. Instead, we observe a small but significant \emph{increase} in the active diffusion compared to its non-Brownian value, in accordance with what was previously observed for slender swimmers by~\cite{Kasyap2014}; we discuss this effect further below. Crucially, a reduced persistence length $L = 4$ nevertheless corresponds to relatively slow swimming from a biological perspective: According to the approximate calculation in~\cite{Morozov:PRX:2020}, $L$ for \emph{E. coli} bacteria lies somewhere in the range between 5 and 20, indicating that the effect of Brownian motion on active diffusion is likely negligible in suspensions of swimming bacteria due to their fast self-propulsion. In our LB simulations, studying values higher than $L \approx 4$ is challenging, as these large swimming speeds both require very large systems to avoid significant finite-size effects and yield artifacts due to the effect of finite Reynolds number in the swimmer-tracer scattering dynamics~\citep{deGraaf1}. We nevertheless numerically studied $\xi (\mathrm{Pe})$ using Eq.~\eqref{eq:DA_exact} for larger values of $L$, verifying that both the peak and the subsequent decrease in $\xi$ continue to move to even lower values of Pe as $L$ is increased.
To further understand the mechanism behind the reduction in $D_A$ with $D_0$, we consider the two autocorrelation functions $C_T(t)$ and $C_U(t)$, which respectively measure the fluid autocorrelation in the co-moving tracer frame and in the lab frame. Figure~\ref{fig:U_correlation} shows these correlation functions for $L = 0$ and $L = 2.4$, with the top row corresponding to LB results for $C_T$ and the bottom row to kinetic theory results from Eq.~\eqref{eq:CU} for $C_U$. First, we notice that the two sets of curves are very similar, implying that the stationary tracer approximation $C_T \approx C_U$ is indeed accurate. Secondly, we notice that the decay of the correlation function is significantly faster for swimmers than for shakers, again illustrating that self-propulsion acts as an efficient decorrelation mechanism for ${\bm U}$. The effect of finite $D_0$ for shakers (left column) is simply to decrease the relaxation time of the exponential decay, in accordance with the $L = 0$ limit of Eq.~\eqref{eq:CU}. For swimmers, the situation is more complex: For short times, the flow field decays faster with decreasing Pe, while the long-time tail of $C_T$ and $C_U$ instead becomes somewhat more extended with decreasing Pe. For fast enough swimmers, the latter effect leads to the local maximum at $\xi > 1$ for intermediate Pe observed in Fig.~\ref{fig:xi_Pe}b for the two highest $L$. Finally, we note that the equal-time fluid velocity variance $\langle U^2({\bm r}_T) \rangle$, corresponding to the $t = 0$ values of $C_T$ and $C_U$, is independent of Pe. This means that, regardless of the ratio between diffusive and active motion, the tracer particles sample the overall flow field isotropically. This fact is non-trivial, since active particles in inhomogeneous motility landscapes are known to preferentially sample regions where they move slowly~\citep{Stenhammar:SciAdv:2016}. Furthermore, for ``entrained'' tracers that are captured by the near-field flows of passing swimmers~\citep{Polin:NatComm:2016}, we expect the fluid flow sampled by tracers to be significantly different from the average flow field in the system.
\begin{figure*}
\centering
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.0cm 0.0cm 0.0cm 0.0cm]{tcf__vs0.0_sim.pdf}
\includegraphics[width=\textwidth, clip, trim=0.0cm 0.0cm 0.0cm 0.0cm]{tcf__vs0.0_analytical.pdf}
\end{minipage}
\hfill
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.0cm 0.0cm 0.0cm 0.0cm]{tcf__vs0.001_sim.pdf}
\includegraphics[width=\textwidth, clip, trim=0.0cm 0.0cm 0.0cm 0.0cm]{tcf__vs0.001_analytical.pdf}
\end{minipage}
\caption{\textbf{Brownian motion decorrelates tracer trajectories.} Panels (a) and (b) show the fluid velocity autocorrelation $C_T(t)$ in the co-moving tracer frame measured from LB simulations for (a) shakers ($L=0$) and (b) swimmers with $L = 2.4$ at indicated values of Pe. The dotted line shows the correlation function $C_U(t)$ of the fluid velocity in the lab frame, demonstrating that the stationary-tracer approximation $C_T(t) \approx C_U(t)$ is excellent in the absence of Brownian tracer diffusion (Pe $\rightarrow \infty$). Panels (c) and (d) show the corresponding lab-frame correlation function $C_U(t)$, obtained from kinetic theory (Eq.~\eqref{eq:CU}) for a suspension of diffusing swimmers, as described in Section~\ref{sec:Theory}. All results are presented in LB units.}
\label{fig:U_correlation}
\end{figure*}
In Fig.~\ref{fig:xi_n}, we study the dependence of the suppression of active diffusion on the microswimmer density $n$ in the shaker limit $L=0$. At first sight, this dependence might appear trivial, since $D_A$ is well-known to be linearly dependent on $n$~\citep{Jepson1,Mino1} in the limit of non-interacting swimmers, a fact which is unaffected by Brownian motion as shown by Eq.~\eqref{eq:DA_exact}. Since $\xi$ measures the ratio between the Brownian and non-Brownian values of $D_A$, one would na\"ively expect $\xi$ to be independent of $n$. However, since the P\'eclet number itself, as defined in Eq.~\eqref{eq:Pe_Def}, increases with $n$ for constant $D_0$, the relative effect of Brownian motion on $D_A$ is in fact a complex function of $n$ even for noninteracting swimmers, as shown in Fig.~\ref{fig:xi_n}. More specifically, according to Fig.~\ref{fig:xi_n}b the suppression of active diffusion becomes more significant with increasing microswimmer density, until it saturates around $n = 0.01$, \emph{i.e.}, the density considered in Figs.~\ref{fig:D_lambda_vs},~\ref{fig:xi_Pe}, and~\ref{fig:U_correlation}. In physical units, the latter concentration approximately corresponds to a bacterial concentration of $10^9$ mL$^{-1}$, which is somewhat higher than the highest \emph{E. coli} concentration considered by \cite{Jepson1} but still within the range of concentrations where swimmer-swimmer correlations are reasonably small~\citep{Stenhammar1}. In summary, our results thus show that, for Brownian diffusion to have any measurable influence on the hydrodynamic diffusion, it is necessary to create a system with a relatively high density of very slow microswimmers; as we discuss in Section~\ref{sec:Conclusions}, this set of parameters is likely not achievable for suspensions of biological microswimmers.
\section{Summary and conclusions}\label{sec:Conclusions}
In this study we have demonstrated a number of theoretical and computational results regarding the effect of Brownian diffusion on the swimmer-induced hydrodynamic diffusion of tracer particles in a suspension of dipolar microswimmers. Our key finding is that the effect of Brownian diffusivity $D_0$ on the activity-induced, hydrodynamic diffusivity $D_A$ is only significant when the P\'eclet number as defined by Eq.~\eqref{eq:Pe_Def} is below unity, meaning that Brownian diffusivity needs to dominate over the hydrodynamic one. However, the necessary requirement $\mathrm{Pe} < 1$ is only sufficient in the shaker limit $v_s \rightarrow 0$: for swimmers with persistence lengths larger than the organism size, significantly lower values of Pe are required to perturb the tracer trajectories sufficiently to affect $D_A$. This conclusion is analogous to the independence of $D_A$ on the tumbling rate $\lambda$ for large $v_s$ illustrated in Eq.~\eqref{eq:DA_approx}: Whenever $v_s$ is large, the decorrelation of ${\bm U}$ by swimming will dominate over the decorrelation due to tumbling and translational diffusion, and the dependence on $\lambda$ and $D_0$ will thus vanish in the limit $v_s \rightarrow \infty$. While this effect is expected, what is perhaps surprising is the rather moderate values of $L = v_s / (\varepsilon \lambda)$ necessary to render the coupling between $D_0$ and $D_A$ negligible, as illustrated in Fig.~\ref{fig:xi_Pe}b.

To put these values into perspective, we use the conservative estimate $L = 5$ for \emph{E. coli}. By virtue of Fig.~\ref{fig:xi_Pe}b, we require that $\mathrm{Pe} \leq 10^{-3}$ for Brownian motion to have an effect of $\sim 5$ percent on $D_A$. Using as an example the minimum value $D_A \approx 10^{-2}$ $\mu$m$^2$s$^{-1}$ measured by \cite{Jepson1} in a 3-dimensional \emph{E. coli} suspension, this requirement thus implies that $D_0 \geq 10 \mu$m$^2$s$^{-1}$, which by virtue of the Stokes-Einstein relation corresponds to a tracer radius of $R_0 \sim 20$ nm. While this is significantly smaller than used in typical measurements on colloidal tracers~\citep{Mino1,Patteson1,Leptos1}, this diffusion coefficient is close to the value of $D_0$ measured for dextran in \emph{E. coli} suspensions by~\cite{Kim1}. It is also fully feasible to realise such low P\'eclet numbers for micron-sized spheres by instead decreasing the bacterial density to very low values; however, measuring the correction to $D_A$ for $\mathrm{Pe} \sim 10^{-3}$ represents a major difficulty, since it amounts to measuring a $\sim$ 5 percent deviation of an effective diffusivity which is itself a thousand times smaller than the Brownian diffusion. It would thus require an extremely accurate determination of $D_0$, which then needs to be subtracted from the total measured diffusion constant to determine $D_A$. Obtaining this accuracy in a colloidal suspension would be very challenging due to particle polydispersity, interactions with boundaries, temperature gradients, and other system-specific complications.

We thus conclude that, for typical $L$ values relevant for biological microswimmers, our results imply that the effect of Brownian motion on $D_A$ is likely negligible for all practical purposes. To experimentally observe the reduction in $D_A$, one would instead need to study a system of dipolar shakers, which stir up the surrounding fluid without self-propelling.
While this is a somewhat exotic type of system, it could potentially be realised by anchoring molecular motors or biological microswimmers to a surface. In a biological setting, the shaker limit furthermore resembles previously developed models of enzymes anchored to lipid bilayers that induce dipolar flows through cyclical conformation changes~\citep{Hosaka:SoftMatter:2020}.
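For concreteness, the tracer-size estimate above can be spelled out via the Stokes-Einstein relation. Assuming water-like conditions, $\eta \approx 10^{-3}$ Pa\,s at $T \approx 300$ K (representative values that we supply here for illustration), the threshold $D_0 = 10$ $\mu$m$^2$s$^{-1}$ corresponds to
$$
R_0 = \frac{k_B T}{6 \pi \eta D_0} = \frac{1.38\times 10^{-23}\,\mathrm{J\,K^{-1}} \times 300\,\mathrm{K}}{6\pi \times 10^{-3}\,\mathrm{Pa\,s} \times 10^{-11}\,\mathrm{m^2\,s^{-1}}} \approx 2.2\times 10^{-8}\,\mathrm{m} \approx 20\,\mathrm{nm},
$$
consistent with the figure quoted above.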
\begin{figure*}
\centering
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.1cm 0.1cm 0.1cm 0.1cm]{xi_Pe__vs0.0.pdf}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=\textwidth, clip, trim=0.1cm 0.1cm 0.1cm 0.1cm]{Da_DaNB_vs_n.pdf}
\end{minipage}
\caption{\textbf{Reduction of $D_A$ varies with microswimmer density.} Panel (a) shows $\xi$ as a function of Pe for shakers ($L=0$) at various densities $n$, as indicated, while panel (b) shows $\xi$ as a function of $n$, at a fixed value of $\mathrm{Pe} = 0.013$. Simulation data are given by the symbols, with error bars obtained from averaging over four separate runs, while solid lines are computed from Eq. \eqref{eq:DA_exact}.}
\label{fig:xi_n}
\end{figure*}
Our results are qualitatively consistent with the previous theoretical results obtained by~\cite{Kasyap2014}, including the non-monotonic behaviour of $D_A$ with respect to $D_0$ at high swimming speed. Their analysis however differs from ours in three important ways. First of all, they consider a ``slender swimmer'' model, where the fluid is forced via a stress applied along a line representing the swimmer body, rather than by two point forces. While this model should lead to a dipolar flow in the far field, the near-field differences are significant, as illustrated in Fig. 8 of~\cite{Kasyap2014}. Furthermore, they consider only the fast-swimming limit where, according to Eq.~\eqref{eq:DA_approx}, $D_A$ for dipolar microswimmers is independent of $\lambda$. Finally, they parametrise their model in a qualitatively different way than us: as discussed above, they adopt a definition of $\mathrm{Pe}$ based on $v_s$ rather than on the tracer diffusivity as in Eq.~\eqref{eq:Pe_Def}. In this description, $v_s$ is furthermore directly coupled to $\kappa$, so that changing Pe simultaneously changes the activity of the bath (via $\kappa$) and the swimming speed $v_s$. These differences make it difficult to compare directly with our results, as we consider the effects of fluid advection and self-propulsion separately via Pe and $L$.
Thus, even though Brownian motion is unlikely to provide a significant dependence of $D_A$ on $R_0$ for tracers immersed in suspensions of biological microswimmers, there are several other mechanisms that need to be studied to explain the non-monotonic dependence observed experimentally~\citep{Patteson1} and computationally~\citep{Dyer:PoF:2021}. First of all, the effect of tracer entrainment by the near-field flows of the swimmer is strongly dependent on the size ratio between the swimmer and the tracer~\citep{Polin:NatComm:2016}, although we expect this term to be small for micron-sized tracer particles in \emph{E. coli} suspensions. Secondly, the finite size of the tracer will change the equation of motion~\eqref{eq:rdot_tr} into the Fax\'en equation, which takes into account the non-linearity of the flow field~\citep{Kim-Karrila}; this effect was implicitly included in the wavelet Monte Carlo simulations by~\cite{Dyer:PoF:2021} and, together with tracer entrainment, likely accounts for much of the $R_0$ dependence of $D_A$ they observed. Finally, non-hydrodynamic interactions such as direct collisions, electrostatic interactions, and artifacts due to container walls are likely to depend in a non-trivial manner on the tracer size for each system in question. We thus conclude by noting that significant further experimental and theoretical work is necessary to disentangle the system-specific properties from the generic properties of tracer dynamics in microswimmer suspensions.
\section*{Acknowledgement}
Discussions with Cesare Nardini are kindly acknowledged. The computations were enabled by resources provided by LUNARC. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
\section*{Funding}
This work was financed through the Knut and Alice Wallenberg Foundation (project grant KAW 2014.0052). JS acknowledges financial support from the Swedish Research Council (Project No. 2019-03718).
\section*{Competing interests}
The authors report no competing interests.
\def\begin{gather*}{\begin{gather*}}
\def\end{gather*}{\end{gather*}}
\def\begin{question}{\begin{question}}
\def\end{question}{\end{question}}
\def\on{rank}{\on{rank}}
\newcommand{V_{-1}}{V_{-1}}
\newcommand{V_{-2}}{V_{-2}}
\newcommand{{\stackrel{\scriptscriptstyle{1}}{\rho}}{}}{{\stackrel{\scriptscriptstyle{1}}{\rho}}{}}
\newcommand{{\stackrel{\scriptscriptstyle{2}}{\rho}}{}}{{\stackrel{\scriptscriptstyle{2}}{\rho}}{}}
\newcommand{{\stackrel{\scriptscriptstyle{a}}{\rho}}{}}{{\stackrel{\scriptscriptstyle{a}}{\rho}}{}}
\newcommand{\mathbb{S}}{\mathbb{S}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbf{SO}}{\mathbf{SO}}
\newcommand{\mathbf{CO}}{\mathbf{CO}}
\newcommand{\mathbf{Spin}}{\mathbf{Spin}}
\newcommand{\mathbf{GL}}{\mathbf{GL}}
\newcommand{\mathbf{SL}}{\mathbf{SL}}
\newcommand{\mathbf{Sp}}{\mathbf{Sp}}
\newcommand{\mathbf{O}}{\mathbf{O}}
\newcommand{\mathfrak{so}}{\mathfrak{so}}
\newcommand{\mathfrak{sp}}{\mathfrak{sp}}
\newcommand{\mathfrak{spin}}{\mathfrak{spin}}
\newcommand{\mathfrak{gl}}{\mathfrak{gl}}
\newcommand{\mathfrak{sl}}{\mathfrak{sl}}
\newcommand{\mathfrak{su}}{\mathfrak{su}}
\newcommand{\mathrm{d}}{\mathrm{d}}
\newcommand{\mathbf{SU}}{\mathbf{SU}}
\newcommand{\mathbf{CSp}}{\mathbf{CSp}}
\begin{document}
\title[Exceptional geometries]{Exceptional real Lie algebras $\mathfrak{f}_4$ and $\mathfrak{e}_6$ via contactifications}
\vskip 1.truecm
\author{Pawe\l~ Nurowski} \address{Center for Theoretical Physics,
Polish Academy of Sciences, Al. Lotnik\'ow 32/46, 02-668 Warszawa, Poland}
\email{[email protected]}
\thanks{The research was funded from the Norwegian Financial Mechanism 2014-2021 with project registration number 2019/34/H/ST1/00636.}
\date{\today}
\begin{abstract}
In Cartan's PhD thesis, there is a formula defining a certain rank 8 vector distribution in dimension 15, whose algebra of automorphisms is the split real form of the simple exceptional complex Lie algebra $\mathfrak{f}_4$. Cartan's formula is written in the standard Cartesian coordinates in $\mathbb{R}^{15}$. In the present paper we explain how to find an analogous formula for the flat model of any bracket generating distribution $\mathcal D$ whose symbol algebra $\mathfrak{n}({\mathcal D})$ is constant and 2-step graded, $\mathfrak{n}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$.
The formula is given in terms of a solution to a certain system of linear algebraic equations determined by two representations $(\rho,\mathfrak{n}_{-1})$ and $(\tau,\mathfrak{n}_{-2})$ of a Lie algebra $\mathfrak{n}_{00}$ contained in the $0$th order Tanaka prolongation $\mathfrak{n}_0$ of $\mathfrak{n}({\mathcal D})$.
Numerous examples are provided, with particular emphasis put on the distributions with symmetries being real forms of simple exceptional Lie algebras $\mathfrak{f}_4$ and $\mathfrak{e}_6$.
\end{abstract}
\maketitle
\tableofcontents
\newcommand{\tilde{\gamma}}{\tilde{\gamma}}
\newcommand{\tilde{\Gamma}}{\tilde{\Gamma}}
\newcommand{\tilde{\theta}}{\tilde{\theta}}
\newcommand{\tilde{T}}{\tilde{T}}
\newcommand{\tilde{r}}{\tilde{r}}
\newcommand{\sqrt{3}}{\sqrt{3}}
\newcommand{\tilde{\kappa}}{\tilde{\kappa}}
\newcommand{{K^{{~}^{\hskip-3.1mm\circ}}}}{{K^{{~}^{\hskip-3.1mm\circ}}}}
\newcommand{{\rm div}}{{\rm div}}
\newcommand{{\rm curl}}{{\rm curl}}
\newcommand{{\mathcal N}}{{\mathcal N}}
\newcommand{{\Upsilon}}{{\Upsilon}}
\newcommand{\invol}[2]{\draw[latex-latex] (root #1) to
[out=-30,in=-150] (root #2);}
\newcommand{\invok}[2]{\draw[latex-latex] (root #1) to
[out=-90,in=-90] (root #2);}
\section{Introduction: the notion of a contactification}\label{intr}
A \emph{contact structure} $(M,{\mathcal D})$ on a $(2n+1)$ dimensional real manifold $M$ is usually defined in terms of a 1-form $\lambda$ on $M$ such that
$$\underbrace{{\rm d}\lambda\wedge{\rm d}\lambda\wedge\dots\wedge{\rm d}\lambda}_{n\,\,\mathrm{times}}\wedge\lambda\neq 0$$
at each point $x\in M$. Given such a 1-form, the contact structure $(M,{\mathcal D})$ on $M$ is the rank $s=2n$ \emph{vector distribution}
$${\mathcal D}=\{X\in \mathrm{T}M\,\,\mathrm{s.t.}\,\, X\hook\lambda=0\}.$$
Note that any $\lambda'=a\lambda$, with $a$ being a nonvanishing function on $M$, defines the same contact structure $(M,{\mathcal D})$. We also note that given a contact structure $(M,{\mathcal D})$, we additionally have a family of 2-forms on $M$
$$\omega'=a\omega +\mu\wedge\lambda,\quad\mathrm{with}\quad \omega={\rm d}\lambda,$$
where $a\neq 0$ is a function, and $\mu$ is a 1-form on $M$. This, in particular, means that given a contact structure $(M,{\mathcal D})$, we have a rank $s=2n$ (bracket generating) distribution ${\mathcal D}$, and a \emph{line} of a closed 2-form $\omega$ \emph{in the distribution} ${\mathcal D}$, with
$${\rm d}\omega=0\quad\&\quad\underbrace{\omega\wedge\omega\wedge\dots\wedge\omega}_{n\,\,\mathrm{times}}\neq 0.$$
This can be compared with the notion of a \emph{symplectic} structure $(N,[\omega])$ on an $s=2n$ dimensional real manifold $N$. Such a structure is defined in terms of a line $\omega'=h\omega$ of a nowhere vanishing 2-form $\omega$ on $N$, such that
$${\rm d}\omega=0\quad\&\quad\underbrace{\omega\wedge\omega\wedge\dots\wedge\omega}_{n\,\,\mathrm{times}}\neq 0.$$
Here, contrary to the contact case, we have a \emph{line} of a closed 2-form $\omega$ \emph{in the tangent space} $\mathrm{T}N$ rather than in a proper vector subbundle ${\mathcal D}\subsetneq\mathrm{T}N$.
By the \emph{Poincar\'e lemma}, locally, in an open set ${\mathcal O}\subset N$, the form $\omega$ defines a 1-form $\Lambda$ on $N$ such that ${\rm d}\Lambda=\omega$. Therefore, given a symplectic structure $(N,[\omega])$, we can locally \emph{contactify} it, by considering a $(2n+1)$ dimensional manifold $${\mathcal U}=\mathbb{R}\times{\mathcal O}\stackrel{\pi}{\to}\mathcal O,$$ with a 1-form $$\lambda={\rm d} u+\pi^*(\Lambda) $$
on $\mathcal U$; here the real variable $u$ is a coordinate along the $\mathbb{R}$ factor in $\mathcal U=\mathbb{R}\times\mathcal O$. As a result the structure $(M,{\mathcal D})=\big({\mathcal U},\ker(\lambda)\big)$ is a \emph{contact structure}, called a \emph{contact structure associated with the symplectic structure} $(N,[\omega])$.
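As a simple illustration (which we add for orientation), take ${\mathcal O}=N=\mathbb{R}^2$ with coordinates $(x,y)$ and the standard symplectic form: one may then choose
$$\omega={\rm d} x\wedge{\rm d} y,\qquad \Lambda=x\,{\rm d} y,\qquad \lambda={\rm d} u+x\,{\rm d} y,$$
and indeed ${\rm d}\lambda\wedge\lambda={\rm d} x\wedge{\rm d} y\wedge{\rm d} u\neq0$, so one recovers the standard contact structure on $\mathbb{R}^3$.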
\vspace{0.3cm}
We introduce the notion of a \emph{contactification} as a generalization of the above considerations.
\begin{definition}\label{def1}
Let $N$ be an $s$-dimensional manifold and let
${\rm d}{\mathcal D}^\perp:=\mathrm{Span}(\omega^1,\omega^2,$ $\dots,\omega^r)$ be a rank $r$ subbundle of $\bigwedge^2N$. Consider an $(s+r)$-dimensional fiber bundle $F\to M\stackrel{\pi}{\to}N$ over $N$. Let $(X_1,X_2,\dots, X_r)$ be a frame of \emph{vertical vectors} in $M$. In particular we have $\pi_*(X_i)=0$ for all $i=1,2,\dots,r$.
Let us assume that on $M$ there exist $r$ one-forms $\lambda^i$, $i=1,2,\dots,r$, such that
$\det (X_i\hook\lambda^j)\neq 0$ on $M$,
and that
${\rm d}\lambda^i=\sum_{j=1}^r a^i{}_j\pi^*(\omega^j)+\sum_{j=1}^r\mu^i{}_j\wedge\lambda^j$ for all $i=1,2,\dots r$,
with some 1-forms $\mu^i{}_j$ and some functions $a^i{}_j$ on $M$ satisfying $\mathrm{det}(a^i{}_j)\neq 0$. Consider the corresponding rank $s$ distribution
${\mathcal D}=\{TM\ni X~|~ X\hook\lambda^i=0, i=1,2,\dots r\}$
on $M$.
Then the pair $(M,{\mathcal D})$ is called a \emph{contactification} of the pair $(N,{\rm d} {\mathcal D}^\perp)$.
\end{definition}
\begin{definition}
A real Lie algebra $\mathfrak{g}$ spanned over $\mathbb{R}$ by the vector fields $Y$ on $M$ of the contactification $(M,{\mathcal D})$ satisfying
\begin{equation}{\mathcal L}_Y\lambda^i\wedge\lambda^1\wedge\dots\wedge\lambda^r=0, \quad\forall i=1,2,\dots,r\label{ssymm}\end{equation}
is called the Lie algebra of infinitesimal symmetries of the contactification $(M,{\mathcal D})$. By definition, it is the same as the Lie algebra of infinitesimal symmetries of the distribution ${\mathcal D}$ on $M$. The vector fields $Y$ on $(M,{\mathcal D})$ satisfying \eqref{ssymm} are called infinitesimal symmetries of $(M,{\mathcal D})$, or of $\mathcal D$, for short.\label{def2}
\end{definition}
Below, we give a nontrivial example of the notions included in Definitions \ref{def1} and \ref{def2}.
\begin{example}\label{exa4}
Consider $N=\mathbb{R}^8$ with Cartesian coordinates $(x^1,x^2,$ $x^{3},x^{4},x^{5},x^{6},$ $x^{7},x^{8})$, and a space ${\rm d}{\mathcal D}^\perp=\mathrm{Span}(\omega^1,\omega^2,\omega^3,\omega^4,\omega^5,\omega^6,\omega^7)\subset\bigwedge^2N$, which is spanned by the following seven 2-forms on $N$:
$$
\begin{aligned}
\omega^1=\,\,&{\rm d} x^1\wedge{\rm d} x^{8}+{\rm d} x^2\wedge{\rm d} x^{5}+{\rm d} x^{3}\wedge{\rm d} x^{7}+{\rm d} x^{4}\wedge{\rm d} x^{6}\\
\omega^2=\,\,&-{\rm d} x^1\wedge{\rm d} x^{5}+{\rm d} x^2\wedge{\rm d} x^{8}+{\rm d} x^{3}\wedge{\rm d} x^{6}-{\rm d} x^{4}\wedge{\rm d} x^{7}\\
\omega^3=\,\,&-{\rm d} x^1\wedge{\rm d} x^{7}-{\rm d} x^2\wedge{\rm d} x^{6}+{\rm d} x^{3}\wedge{\rm d} x^{8}+{\rm d} x^{4}\wedge{\rm d} x^{5}\\
\omega^4=\,\,&{\rm d} x^1\wedge{\rm d} x^{2}+{\rm d} x^{3}\wedge{\rm d} x^{4}+{\rm d} x^{5}\wedge{\rm d} x^{8}+{\rm d} x^{6}\wedge{\rm d} x^{7}\\
\omega^5=\,\,&-{\rm d} x^1\wedge{\rm d} x^{6}+{\rm d} x^2\wedge{\rm d} x^{7}-{\rm d} x^{3}\wedge{\rm d} x^{5}+{\rm d} x^{4}\wedge{\rm d} x^{8}\\
\omega^6=\,\,&{\rm d} x^1\wedge{\rm d} x^{4}+{\rm d} x^2\wedge{\rm d} x^{3}-{\rm d} x^{5}\wedge{\rm d} x^{7}+{\rm d} x^{6}\wedge{\rm d} x^{8}\\
\omega^7=\,\,&{\rm d} x^1\wedge{\rm d} x^{3}-{\rm d} x^2\wedge{\rm d} x^{4}+{\rm d} x^{5}\wedge{\rm d} x^{6}+{\rm d} x^{7}\wedge{\rm d} x^{8}.
\end{aligned}
$$
As the bundle over $N$ take $M=\mathbb{R}^{7}\times\mathbb{R}^8\to N$ with coordinates $(x^1,\dots,x^8,x^9,\dots,x^{15})$, and take the seven 1-forms
$$\begin{aligned}
\lambda^1=\,\,&{\rm d} x^9+ x^1{\rm d} x^{8}+ x^2{\rm d} x^{5}+ x^{3}{\rm d} x^{7}+ x^{4}{\rm d} x^{6}\\
\lambda^2=\,\,&{\rm d} x^{10} - x^1{\rm d} x^{5}+ x^2{\rm d} x^{8}+ x^{3}{\rm d} x^{6}- x^{4}{\rm d} x^{7}\\
\lambda^3=\,\,&{\rm d} x^{11} - x^1{\rm d} x^{7}- x^2{\rm d} x^{6}+ x^{3}{\rm d} x^{8}+ x^{4}{\rm d} x^{5}\\
\lambda^4=\,\,&{\rm d} x^{12} + x^1{\rm d} x^{2}+ x^{3}{\rm d} x^{4}+ x^{5}{\rm d} x^{8}+ x^{6}{\rm d} x^{7}\\
\lambda^5=\,\,&{\rm d} x^{13}- x^1{\rm d} x^{6}+ x^2{\rm d} x^{7}- x^{3}{\rm d} x^{5}+ x^{4}{\rm d} x^{8}\\
\lambda^6=\,\,&{\rm d} x^{14}+ x^1{\rm d} x^{4}+ x^2{\rm d} x^{3}- x^{5}{\rm d} x^{7}+ x^{6}{\rm d} x^{8}\\
\lambda^7=\,\,&{\rm d} x^{15}+ x^1{\rm d} x^{3}- x^2{\rm d} x^{4}+ x^{5}{\rm d} x^{6}+ x^{7}{\rm d} x^{8}.
\end{aligned}
$$
This defines a rank 8 distribution ${\mathcal D}=\{TM\ni X~|~ X\hook\lambda^i=0, i=1,2,\dots 7\},$ on $M$.
The pair $\big(M,{\mathcal D}\big)$ \emph{is a contactification of} $(N,{\rm d}{\mathcal D}^\perp)$, since
$X_i=\partial_{i+8}$, $\det(X_i\hook\lambda^j)=1$, and ${\rm d}\lambda^i=\omega^i$ for all $i=1,\dots,7$.
In particular, in this example the rank 8 distribution
${\mathcal D}$ gives a \emph{2-step filtration} ${\mathcal D}_{-1}\subset\mathcal{D}_{-2}=\mathrm{T}M$, where ${\mathcal D}_{-1}={\mathcal D}$ and $\mathcal{D}_{-2}=[{\mathcal D}_{-1},{\mathcal D}_{-1}]=\mathrm{T}M$.
\end{example}
This example is essentially taken from \`Elie Cartan's PhD thesis \cite{CartanPhd}, more precisely from its German version. We chose it as our example inspired by the following quote from Sigurdur Helgason \cite{He}:
\begin{quote}
Cartan represented [the simple exceptional Lie group] ${\bf F}_4$ (...) by the Pfaffian system in $\mathbb{R}^{15}$ (...). Similar results for ${\bf E}_6$ in $\mathbb{R}^{16}$, ${\bf E}_7$ in $\mathbb{R}^{27}$ and ${\bf E}_8$ in $\mathbb{R}^{29}$ are indicated in \cite{CartanPhd}. Unfortunately, detailed proofs of these remarkable representations of the exceptional groups do not seem to be available.
\end{quote}
The 15-dimensional contactification $(M,{\mathcal D})$ from our Example \ref{exa4} is obtained in terms of the seven 1-forms $\lambda^i$, which are equivalent to the seven forms from the Cartan Pfaffian system in dimension 15 mentioned by Helgason. In particular, it follows that the \emph{distribution structure} $(M,{\mathcal D})$ has the simple exceptional Lie group ${\bf F}_4$, actually its \emph{real form} $F_I$ in the terminology of \cite{CS}, as a \emph{group of automorphisms}.
In this paper we will explain how one gets this realization of the exceptional Lie group ${\bf F}_4$, a realization of its real form $F_{II}$, and realizations of two (out of 5) real forms, $E_I$ and $E_{IV}$, of the complex simple exceptional Lie group ${\bf E}_6$. For this explanation we need some preparations, consisting of recalling a few notions associated with vector distributions on manifolds and with spinorial representations of the orthogonal groups in spaces of \emph{real} spinors.
Finally we note that our approach in this paper is \emph{purely utilitarian}. We answer the question: \emph{How to get the explicit formulas in Cartesian coordinates for Pfaffian forms $(\lambda^1,\dots,\lambda^r)$, which have simple Lie algebras as symmetries?} One can study more general problems related to this on purely Lie-theoretic grounds. For example, one can ask when a 2-step graded nilpotent Lie algebra $\mathfrak{n}_{\minu}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$ has a given Lie algebra $\mathfrak{n}_{00}$ as a part of its Lie algebra $\mathfrak{n}_0$ of derivations preserving the strata, or when the Tanaka prolongation of such $\mathfrak{n}_{\minu}$ with $\mathfrak{n}_{00}\subset\mathfrak{n}_0$ is finite, or simple. This is beyond the scope of our paper. A reader interested in such problems may consult e.g. \cite{AC,Alt,Krug}.
\section{Magical equation for a contactification}
The purpose of this section is to prove the following crucial lemma about a certain algebraic equation, which we call a \emph{magical equation}. It is the boxed equation \eqref{maga} below.
\begin{lemma}\label{l21}
Let $(\mathfrak{n}_{00},[\cdot,\cdot]_0)$ be a finite dimensional Lie algebra, and let $\rho:\mathfrak{n}_{00}\stackrel{\mathrm{hom}}{\to} \mathrm{End}(S)$ be its finite dimensional representation in a real vector space $S$ of dimension $s$.
In addition, let $R$ be an $r$-dimensional real vector space, and let $\tau:\mathfrak{n}_{00}\to \mathrm{End}(R)$ be a linear map.
Finally let $\omega$ be a linear map $\omega:\bigwedge^2S\to R$, or what is the same, let $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$.
Suppose now that the triple $(\rho,\omega,\tau)$ satisfies the following equation:
\begin{equation}\boxed{
\omega\big(\rho(A)X,Y\big)+\omega\big(X,\rho(A)Y\big)=\tau(A)\,\omega(X,Y),}
\label{maga}
\end{equation}
for all $A\in\mathfrak{n}_{00}$ and all $X,Y\in S$.
Then we have:
\begin{enumerate}
\item The map $\tau$ satisfies $$\big(\,\,\tau([A,B]_0)-[\tau(A),\tau(B)]_{\mathrm{End}(R)}\,\,\big)\omega\,\,=\,\,0\quad\quad \forall\,\,A,B\in\mathfrak{n}_{00}.$$
\item If the map $\tau:\mathfrak{n}_{00}\to \mathrm{End}(R)$ is a representation of $\mathfrak{n}_{00}$, i.e. if $$\tau([A,B]_0)=[\tau(A),\tau(B)]_{\mathrm{End}(R)},$$ then the real vector space $\mathfrak{g}_0:=R\oplus S\oplus\mathfrak{n}_{00}$ is a \emph{graded} Lie algebra
$$\mathfrak{g}_0=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_{00},$$
with the graded components $$\mathfrak{n}_{-2}=R,\quad \mathfrak{n}_{-1}=S,\quad\mathrm{with}\,\, \mathfrak{n}_{00}\,\,\mathrm{as\,\,the}\,\,0\,\,\mathrm{grade},$$ and with the Lie bracket $[\cdot,\cdot]$ given by:\label{ca2}
\begin{enumerate}
\item if $X,Y\in \mathfrak{n}_{00}$ then $[X,Y]=[X,Y]_0$,
\item if $A\in \mathfrak{n}_{00}$, $X\in \mathfrak{n}_{-1}$ then $[A,X]=\rho(A)X$,
\item if $A\in \mathfrak{n}_{00}$, $X\in \mathfrak{n}_{-2}$ then $[A,X]=\tau(A)X$,
\item $[\mathfrak{n}_{-1},\mathfrak{n}_{-2}]=[\mathfrak{n}_{-2},\mathfrak{n}_{-2}]=\{0\}$,
\item and, if $X,Y\in \mathfrak{n}_{-1}$ then $[X,Y]=\omega(X,Y)$.
\end{enumerate}
\item Moreover, in the case {\rm \eqref{ca2}} the Lie subalgebra
$$\mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$$ of $\mathfrak{g}_0$
is a 2-step graded Lie algebra, and the algebra $\mathfrak{n}_{00}$ is a Lie subalgebra of the Lie algebra $$\mathfrak{n}_0=\big\{\,\mathrm{Der}(\mathfrak{n}_\minu)\ni D\,\,\mathrm{s.t.}\,\,D\mathfrak{n}_j\subset\mathfrak{n}_j\,\,\mathrm{for}\,\, j=-1,-2\,\big\}$$ of all derivations of $\mathfrak{n}_\minu$ preserving its strata $\mathfrak{n}_{-1}$ and $\mathfrak{n}_{-2}$.
\end{enumerate}
\end{lemma}
\begin{remark}
Note that, in the respective bases $\{ f_\mu\}_{\mu=1}^s$ in $S$ and $\{e_i\}_{i=1}^r$ in $R$, the equation \eqref{maga} is:
\begin{equation}\boxed{
\rho(A)^\alpha{}_\mu\,\,\omega^i{}_{\alpha\nu}+\rho(A)^\alpha{}_\nu\,\,\omega^i{}_{\mu\alpha}\,\,=\,\,\tau(A)^i{}_j\,\,\omega^j{}_{\mu\nu}}\label{magb}\end{equation}
for all $A\in\mathfrak{n}_{00}$, all $i=1,2,\dots,r$ and all $\mu,\nu=1,2,\dots,s$.
In this basis the condition (1) is $$\big(\,\,\tau([A,B]_0)-[\tau(A),\tau(B)]_{\mathrm{End}(R)}\,\,\big)^i{}_j\,\,\omega^j{}_{\mu\nu}\,\,=\,\,0$$ for all $i=1,2,\dots,r$, $\mu,\nu=1,2,\dots,s$, and $A,B\in\mathfrak{n}_{00}$.
\end{remark}
\noindent
\emph{Proof of the lemma.}
The proof of part (1) is a pure calculation using the equation \eqref{maga}. We first rewrite it in the shorthand notation as:
$$
\rho(A)\omega+\omega\rho(A)^T=\tau(A)\omega, \quad \forall A\in\mathfrak{n}_{00}.$$
Then we have:
$$\begin{aligned}
\tau([A,B]_0)\omega=&\rho([A,B]_0)\omega+\omega\rho([A,B]_0)^T=\\
&\rho(A)\rho(B)\omega-\rho(B)\rho(A)\omega+\omega\rho(B)^T\rho(A)^T-\omega\rho(A)^T\rho(B)^T=\\
&\rho(A)\Big(\tau(B)\omega-\omega\rho(B)^T\Big)-\rho(B)\Big(\tau(A)\omega-\omega\rho(A)^T\Big)+\\&\Big(\tau(B)\omega-\rho(B)\omega\Big)\rho(A)^T-\Big(\tau(A)\omega-\rho(A)\omega\Big)\rho(B)^T=\\
&\rho(A)\Big(\tau(B)\omega\Big)-\rho(B)\Big(\tau(A)\omega\Big)+\Big(\tau(B)\omega\Big)\rho(A)^T-\Big(\tau(A)\omega\Big)\rho(B)^T=\\
&\tau(A)\tau(B)\omega-\tau(B)\omega\rho(A)^T-\Big(\tau(B)\tau(A)\omega-\tau(A)\omega\rho(B)^T\Big)+\\
&\tau(B)\omega\rho(A)^T-\tau(A)\omega\rho(B)^T=\tau(A)\tau(B)\omega-\tau(B)\tau(A)\omega=\\
&\big([\tau(A),\tau(B)]_{\mathrm{End}(R)}\big)\omega,
\end{aligned}
$$
which proves part (1).
The proof of parts (2) and (3) is as follows:\\
We need to check the Jacobi identity for the bracket $[\cdot,\cdot]$.
We first consider the representation
$$\sigma=\tau\oplus\rho\quad\mathrm{ of} \quad\mathfrak{n}_{00}\quad \mathrm{in}\quad \mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1},$$
defined by
$$\sigma(A)(Y\oplus X)=\tau(A)Y\oplus\rho(A)X, \quad \forall A\in\mathfrak{n}_{00},\,\, X\in\mathfrak{n}_{-1},\,\, Y\in\mathfrak{n}_{-2}.$$
We then prove that the representation $\sigma$ is a \emph{strata preserving derivation} in $\mathfrak{n}_\minu$. This is implied by the definitions (a)-(e) of the bracket, and the fundamental equation \eqref{maga} as follows:
The strata preserving property of $\sigma$, $\sigma(\mathfrak{n}_{-i})\subset\mathfrak{n}_{-i}$, $i=1,2$, is obvious by the definitions of $\rho$ and $\tau$. However, we need to check that $\sigma$ is a derivation, i.e. that
\begin{equation}\sigma (A)[X,Y] =[\sigma(A)X,Y]+[X,\sigma(A)Y]\label{der0}\end{equation}
for all $A\in\mathfrak{n}_{00}$ and for all $X,Y\in \mathfrak{n}_\minu$.
Because of the strata preserving property of $\sigma$, which we have just established, and because of the point (d) of the definition of the bracket, the equation \eqref{der0} is satisfied when both $X$ and $Y$ are in $\mathfrak{n}_{-2}$, or when $X$ is in $\mathfrak{n}_{-1}$ and $Y$ is in $\mathfrak{n}_{-2}$. The only thing to be checked is whether \eqref{der0} is also valid when both $X$ and $Y$ belong to $\mathfrak{n}_{-1}$.
But this just follows directly from \eqref{maga}, since if $ X,Y\in\mathfrak{n}_{-1}$ then
$$\begin{aligned}\sigma(A)[X,Y]&=\sigma(A)\omega(X,Y)=\tau(A)\omega(X,Y)=\\
&\omega(\rho(A)X,Y)+\omega(X,\rho(A)Y)=[\rho(A)X,Y]+[X,\rho(A)Y]=\\
&[\sigma(A)X,Y]+[X,\sigma(A)Y],\quad \forall A\in\mathfrak{n}_{00}.\end{aligned}$$
Now we return to checking the Jacobi identity for the bracket $[\cdot,\cdot]$ in $\mathfrak{g}_0$:
On elements of the form $A,B\in \mathfrak{n}_{00}$, $Z\in \mathfrak{n}_\minu$, by (b)-(c), we have
$$[[A,B],Z]+[[Z,A],B]+[[B,Z],A]=\Big(\sigma([A,B])-[\sigma(A),\sigma(B)]\Big)Z,$$
which vanishes due to the representation property of $\sigma$. On the other hand, on elements $A\in \mathfrak{n}_{00}$ and $Z_1,Z_2\in \mathfrak{n}_\minu$ we have
$$[[A,Z_1],Z_2]+[[Z_2,A],Z_1]+[[Z_1,Z_2],A]=[\sigma(A)Z_1,Z_2]+[Z_1,\sigma(A)Z_2]-\sigma(A)[Z_1,Z_2],$$
which is again zero, on the ground of the derivation property \eqref{der0} of $\sigma$. Obviously the bracket satisfies the Jacobi identity when it is restricted to $\mathfrak{n}_{00}$; there it is the Lie bracket $[\cdot,\cdot]_0$ of the Lie algebra $\mathfrak{n}_{00}$. Finally, point (d) of the definition of the bracket implies that $[[Z_1,Z_2],Z_3]=0$ for all $Z_1,Z_2,Z_3$ in $\mathfrak{n}_\minu$, hence the Jacobi identity is trivially satisfied for $[\cdot,\cdot]$ when it is restricted to $\mathfrak{n}_\minu$. \hspace{9.cm}$\Box$
\vspace{0.5cm}
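Before applying the lemma, it may help to record its simplest instance (an illustration we add; it is not used later): take $\mathfrak{n}_{00}=\mathfrak{gl}(2,\mathbb{R})$ with $\rho$ its defining representation on $S=\mathbb{R}^2$, let $R=\mathbb{R}$ with $\tau(A)=\mathrm{tr}(A)$, and let $\omega$ be the standard area form on $\mathbb{R}^2$. Then the classical identity
$$\omega\big(\rho(A)X,Y\big)+\omega\big(X,\rho(A)Y\big)=\mathrm{tr}(A)\,\omega(X,Y)$$
is exactly \eqref{maga}, and the resulting $\mathfrak{n}_\minu=\mathbb{R}\oplus\mathbb{R}^2$ is the 3-dimensional Heisenberg algebra, which underlies the standard contact structure on $\mathbb{R}^3$.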
In the following we will use the map $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$ satisfying the \emph{magical equation} \eqref{maga} to construct contactifications with nontrivial symmetry algebras $\mathfrak{g}$. The setting will include Cartan's contactification with symmetry ${\bf F}_4$ mentioned in Helgason's quote. For this, however, we need a few preparations.
\section{Two-step filtered manifolds}
A \emph{2-step filtered structure} on an $(s+r)$-dimensional manifold $M$ is a pair $(M,{\mathcal D})$, in which $\mathcal D$ is a vector distribution of rank $s$ on $M$, such that it is \emph{bracket generating} in the quickest possible way. This means that its \emph{derived distribution} ${\mathcal D}_{-2}:=[{\mathcal D}_{-1},{\mathcal D}_{-1}]$, with ${\mathcal D}_{-1}={\mathcal D}$, is such that $${\mathcal D}_{-2}=\mathrm{T}M.$$
It provides the simplest nontrivial \emph{filtration}
$$\mathrm{T}M={\mathcal D}_{-2}\supset{\mathcal D}_{-1}$$
of the tangent bundle $\mathrm{T}M$.
A (local) \emph{automorphism} of a 2-step filtered manifold $(M,{\mathcal D})$ is a (local) diffeomorphism $\phi:M\to M$ such that $\phi_*{\mathcal D}\subset{\mathcal D}$. Since automorphisms can be composed and have inverses, they form a \emph{group} $G$ of (local) automorphisms of $(M,{\mathcal D})$, also called a \emph{group of (local) symmetries of} $\mathcal D$. Infinitesimally, the group of automorphisms defines the \emph{Lie algebra} $\mathfrak{aut}({\mathcal D})$ \emph{of symmetries}, which is the real span of all vector fields $X$ on $M$ such that $[X,Y]\in {\mathcal D}$ for all $Y\in{\mathcal D}$.
Among all the 2-step filtered manifolds $(M,{\mathcal D})$ particularly simple are those which can be realized on a group manifold of a \emph{2-step nilpotent} Lie group. These are related to the notion of the \emph{nilpotent approximation} of a pair $(M,{\mathcal D})$. This is defined as follows:
At every point $x\in M$ equipped with a 2-step filtration ${\mathcal D}_{-2}\supset{\mathcal D}_{-1}$ we have well defined vector spaces
$\mathfrak{n}_{-1}(x)={\mathcal D}_{-1}(x)$ and $\mathfrak{n}_{-2}(x)={\mathcal D}_{-2}(x)/{\mathcal D}_{-1}(x)$, which define a vector space
$$\mathfrak{n}(x)=\mathfrak{n}_{-2}(x)\oplus\mathfrak{n}_{-1}(x).$$
This vector space is naturally a \emph{Lie algebra}, with a \emph{Lie bracket} induced from the Lie bracket of vector fields in $\mathrm{T}M$. Due to the 2-step property of the filtration defined by $\mathcal D$ this Lie algebra is \emph{2-step nilpotent},
$$[\mathfrak{n}_{-1}(x),\mathfrak{n}_{-1}(x)]=\mathfrak{n}_{-2}(x)\quad\&\quad[\mathfrak{n}_{-1}(x),\mathfrak{n}_{-2}(x)]=\{0\}.$$
This 2-step nilpotent Lie algebra is a \emph{local invariant} of the structure $(M,{\mathcal D})$, and it is called a \emph{nilpotent approximation of} the structure $(M,{\mathcal D})$ at $x\in M$.
This enables us to define a class of particularly simple examples of 2-step filtered structures:
Consider a \emph{2-step nilpotent Lie algebra} $\mathfrak{n}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$, and let $M$ be a Lie \emph{group}, whose Lie algebra is $\mathfrak{n}$. The Lie algebra $\mathfrak{n}_M$ of left invariant vector fields on $M$ is isomorphic to $\mathfrak{n}$ and mirrors its gradation, $\mathfrak{n}_M={\mathfrak{n}_M}_{-2}\oplus{\mathfrak{n}_M}_{-1}$. Now, taking all linear combinations with \emph{smooth function} coefficients of all vector fields from the graded component ${\mathfrak{n}_M}_{-1}$ of $\mathfrak{n}_M$, one defines a \emph{vector distribution} ${\mathcal D}=\mathrm{Span}_{{\mathcal{F}}(M)}({\mathfrak{n}_M}_{-1})$ on $M$. The so-constructed filtered structure $(M,{\mathcal D})$ is obviously 2-step graded and is the \emph{simplest} filtered structure with nilpotent approximation being equal to $\mathfrak{n}$ everywhere. We call this $(M,{\mathcal D})$ structure the \emph{flat model} for all the 2-step filtered structures having the same constant nilpotent approximation $\mathfrak{n}$.
It is remarkable that the largest possible symmetry of all 2-step filtered structures $(M,{\mathcal D})$ is precisely the symmetry of the flat model. As such it is \emph{algebraically} determined by the nilpotent approximation $\mathfrak{n}$. This is the result of Noboru Tanaka \cite{tanaka}. To describe it we recall the notion of \emph{Tanaka prolongation}.
\begin{definition}
The \emph{Tanaka prolongation} of a 2-step nilpotent Lie algebra $\mathfrak{n}$ is a graded Lie algebra $\mathfrak{g}(\mathfrak{n})$ given by a direct sum
\begin{equation}
\mathfrak{g}(\mathfrak{n})=\mathfrak{n}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\dots\oplus\mathfrak{n}_j\oplus\cdots,\label{gt1}\end{equation} with
\begin{equation}\mathfrak{n}_k=\Big\{\bigoplus_{j<0}\mathfrak{n}_{k+j}\otimes\mathfrak{n}_j^*\ni A\,\,\mathrm{s.t.}\,\,A[X,Y]=[AX,Y]+[X,AY]\Big\}\label{gt}\end{equation}
for each $k\geq 0$.
Furthermore, for each $j\geq 0$, the Lie algebra $$\mathfrak{g}_j(\mathfrak{n})=\mathfrak{n}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\dots\oplus\mathfrak{n}_j$$
is called the Tanaka prolongation of $\mathfrak{n}$ up to $j^{th}$ order.
\end{definition}
Setting $[A,X]=AX$ for all $A\in \mathfrak{n}_k$ with $k\geq 0$ and for all $X\in\mathfrak{n}$ makes the condition in \eqref{gt} into the Jacobi identity. Moreover, if $A\in \mathfrak{n}_k$ and $B\in\mathfrak{n}_l$, $k,l\geq 0$, then their commutator $[A,B]\in\mathfrak{n}_{k+l}$ is defined on elements $X\in\mathfrak{n}$ inductively, according to the Jacobi identity. By this we mean that it should satisfy
$$[A,B]X=[A,BX]-[B,AX],$$
which suffices to define $[A,B]$.
\begin{remark}
Note, in particular, that $\mathfrak{n}_0$ is the Lie algebra of \emph{all derivations of} $\mathfrak{n}$ preserving the two strata $\mathfrak{n}_{-1}$ and $\mathfrak{n}_{-2}$ of the direct sum $\mathfrak{n}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$:
$$\mathfrak{n}_0=\big\{\,\mathrm{Der}(\mathfrak{n})\ni D\,\,\mathrm{s.t.}\,\,D\mathfrak{n}_j\subset\mathfrak{n}_j\,\,\mathrm{for}\,\, j=-1,-2\,\big\}.$$
\end{remark}
Although the Tanaka prolongation of a nilpotent Lie algebra $\mathfrak{n}$ is in general infinite, in this paper we will be interested in \emph{situations when the Tanaka prolongation}
$$\mathfrak{g}=\mathfrak{g}(\mathfrak{n})$$
of the $2$-step nilpotent part $$\mathfrak{n}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$$ is \emph{finite} and \emph{symmetric}, in the sense $$\mathfrak{g}(\mathfrak{n})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with
$$\dim(\mathfrak{n}_{-k})=\dim(\mathfrak{n}_k), \quad k=1,2.$$
Such situations \emph{are possible}, and in them the so-defined Lie algebra $\mathfrak{g}(\mathfrak{n})$ is \emph{simple}. In such cases the Tanaka prolongation $\mathfrak{g}(\mathfrak{n})$ is \emph{graded}, and the subalgebra $$\mathfrak{p}=\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$ in such $\mathfrak{g}(\mathfrak{n})$ is \emph{parabolic}. Moreover, the Lie algebra
$$\mathfrak{p}_{opp}=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0,$$
is also a parabolic subalgebra of this simple $\mathfrak{g}(\mathfrak{n})$. It is isomorphic to $\mathfrak{p}$, $\mathfrak{p}\simeq\mathfrak{p}_{opp}$.
Regardless of whether $\mathfrak{g}(\mathfrak{n})$ is finite or not, we have the following general theorem, which is a specialization of a remarkable theorem by Noboru Tanaka \cite{tanaka}:
\begin{theorem}\label{tansym}
Consider 2-step filtered structures $(M,{\mathcal D})$, with distributions ${\mathcal D}$ having the same constant nilpotent approximation $\mathfrak{n}$. Then
\begin{itemize}
\item The most symmetric of all of these distribution structures is the flat model $(M,{\mathcal D})$, with $M$ being a nilpotent Lie group associated with the nilpotent approximation algebra $\mathfrak{n}$, and with $\mathcal D$ being the first component ${\mathcal D}_{-1}$ of the natural filtration on $M$ associated to the $2$-step grading in $\mathfrak{n}$.
\item The Lie algebra of automorphisms $\mathfrak{aut}({\mathcal D})$ of the flat model structure is isomorphic to the Tanaka prolongation $\mathfrak{g}(\mathfrak{n})$ of the nilpotent approximation $\mathfrak{n}$,
$\mathfrak{aut}({\mathcal D})\simeq \mathfrak{g}(\mathfrak{n}).$
\end{itemize}
\end{theorem}
\begin{remark}
This theorem is of fundamental importance for the explanation of Cartan's result about a realization of ${\bf F}_4$ in $\mathbb{R}^{15}$. As we will see, Cartan's $\mathbb{R}^{15}$ is actually a \emph{domain of a chart} $({\mathcal U},\varphi)$ on a certain 2-step nilpotent Lie group $M$, with a 2-step nilpotent Lie algebra $\mathfrak{n}$, and the equivalent description of ${\bf F}_4$ in terms of a symmetry group of the contactification $(M,{\mathcal D})$ from our Example \ref{exa4} is valid because this contactification is just the flat model for the 2-step filtration $(M,{\mathcal D})$ with the nilpotent approximation $\mathfrak{n}$.
\end{remark}
Using the information about the Tanaka prolongation of a nilpotent Lie algebra $\mathfrak{n}$ we can enlarge our Lemma \ref{l21} by changing its point (3) into the following more complete form:
\begin{lemma}\label{l213}
With all the assumptions of Lemma \ref{l21}, and with points {\rm(1)} and {\rm (2)} as in Lemma \ref{l21}, its point {\rm(3)} is equivalent to
\begin{itemize}
\item[] {\rm(3)} Moreover, in the case {\rm \eqref{ca2}} the Lie subalgebra
$$\mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$$ of $$\mathfrak{g}_0=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus \mathfrak{n}_{00}$$
is a 2-step graded nilpotent Lie algebra, and the algebra $\mathfrak{n}_{00}$ is a Lie \emph{subalgebra} of the Tanaka prolongation up to $0^{th}$ order $\mathfrak{g}_0(\mathfrak{n}_\minu)$ of the Lie algebra $\mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$.
\end{itemize}
\end{lemma}
\begin{remark} The phrase `$\mathfrak{n}_{00}$ is a Lie \emph{subalgebra} of the Tanaka prolongation up to $0^{th}$ order $\mathfrak{g}_0(\mathfrak{n}_\minu)$ of the Lie algebra $\mathfrak{n}_\minu=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}$' in the above lemma means that $\mathfrak{n}_{00}$, although nontrivial, is in general only a subalgebra of $$\mathfrak{n}_0=\big\{\,\mathrm{Der}(\mathfrak{n}_\minu)\ni D\,\,\mathrm{s.t.}\,\,D \mathfrak{n}_j\subset \mathfrak{n}_j\,\,\mathrm{for}\,\, j=-1,-2\,\big\},\quad\quad \mathfrak{n}_{00}\subsetneq \mathfrak{n}_0,$$ which is the \emph{full} $0$ graded component of the Tanaka prolongation of $\mathfrak{n}_\minu$. So for applications it is reasonable to choose $\mathfrak{n}_{00}$ as large as possible.
\end{remark}
\section{Construction of contactifications with nice symmetries}
Consider a Lie algebra $(\mathfrak{n}_{00},[\cdot,\cdot]_0)$ and its two real representations $(\rho,S)$, $(\tau,R)$, in the respective real $s$- and $r$-dimensional vectors spaces $S$ and $R$.
Let $S=\mathbb{R}^s$, $R=\mathbb{R}^r$, and let $\{ f_\mu\}_{\mu=1}^s$ and $\{e_i\}_{i=1}^r$ be respective bases in $S$ and in $R$. Let $\{f^\mu\}_{\mu=1}^s$ be a basis in the vector space $S^*$ dual to the basis $\{ f_\mu\}_{\mu=1}^s$ , $f_\nu\hook f^\mu=\delta_\nu{}^\mu$.
To be in the situation of Lemma \ref{l21} we also assume that we have a homomorphism $\omega\in\mathrm{Hom}(\bigwedge^2 S,R)$ satisfying the magical equation \eqref{maga}.
Then the map $\omega$ is $$\omega=\tfrac12\omega^i_{\mu\nu}e_i\otimes f^\mu\wedge f^\nu,$$ and it defines the coefficients $\omega^i{}_{\mu\nu}$, $i=1,\dots,r$, $\mu,\nu=1,2,\dots s$, which satisfy $\omega^i{}_{\mu\nu}=-\omega^i{}_{\nu\mu}$.
Now, consider an $s$-dimensional manifold, which is an open set $N$ of $\mathbb{R}^s$, $N\subset\mathbb{R}^s$, with coordinates $(x^\mu)_{\mu=1}^s$. Then, we have $r$ two-forms $(\omega^i)_{i=1}^r$ on $N$ defined by
$$\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu.$$
This produces an $(N,{\rm d}{\mathcal D}^\perp)$ structure on $N$, with
$${\rm d}{\mathcal D}^\perp=\mathrm{Span}_\mathbb{R}(\omega^1,\dots,\omega^r).$$
We contactify it. For this we take a local $M=\mathbb{R}^r\times N$, with coordinates $\big(u^i,x^\mu\big)$, $i=1,\dots,r$, $\mu=1,\dots,s$, and define the `contact forms' on $M$ by
$$\lambda^i={\rm d} u^i+\omega^i{}_{\mu\nu}x^\mu{\rm d} x^\nu.$$
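A quick check (added to make the link with Definition \ref{def1} explicit): since the coefficients $\omega^i{}_{\mu\nu}$ are constant and antisymmetric,
$${\rm d}\lambda^i=\omega^i{}_{\mu\nu}\,{\rm d} x^\mu\wedge{\rm d} x^\nu=2\,\omega^i,$$
so the requirements of Definition \ref{def1} hold with $a^i{}_j=2\delta^i{}_j$ and $\mu^i{}_j=0$.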
Because of Lemmas \ref{l21} and \ref{l213}, the distribution ${\mathcal D}$ on $M$ defined by this contactification, as in Definition \ref{def1}, equips $M$ with a \emph{2-step} filtered structure having ${\mathcal D}_{-1}=\mathcal D$. This distribution has rank $s$. Now, using Lemmas \ref{l21} and \ref{l213} and Tanaka's Theorem \ref{tansym}, we get the following corollary.
\begin{corollary}
Let $M=\mathbb{R}^r\times\mathbb{R}^s$ and let $$\lambda^i={\rm d} u^i+\omega^i{}_{\mu\nu}x^\mu{\rm d} x^\nu, \quad i=1,\dots r,$$
with $\omega$ being a solution of the magical equation \eqref{maga} such that $\mathrm{Im}(\omega)=R$.
Consider the distribution structure $(M,{\mathcal D})$ with
the rank $s$ distribution $${\mathcal D}=\{\mathrm{T}M\ni X\,\,\mathrm{s.t.}\,\,X\hook\lambda^i=0,\,\,i=1,\dots,r\}$$
on $M$.
Then, the Lie algebra of automorphisms $\mathfrak{aut}({\mathcal D})$ of $(M,{\mathcal D})$ is isomorphic to the Tanaka prolongation of the 2-step nilpotent Lie algebra $\mathfrak{n}_\minu=R\oplus S$ defined in point {\rm (3)} of Lemma \ref{l21} or \ref{l213}. The Lie algebra $\mathfrak{g}_0=R\oplus S\oplus \mathfrak{n}_{00}$ is nontrivially contained in the Tanaka prolongation up to the $0^{th}$ order $\mathfrak{g}_0(\mathfrak{n}_\minu)$ of $\mathfrak{n}_\minu$, with $\{0\}\neq \mathfrak{n}_{00}\subset\mathfrak{n}_0$, and as such it is a subalgebra of the symmetry algebra $\mathfrak{aut}({\mathcal D})$. \label{cruco}
\end{corollary}
\section{Majorana spinor representations of $\mathfrak{so}(p,q)$}\label{spintraut}
In this section we will explain how to construct the real spin representations of the Lie algebras $\mathfrak{so}(p,q)$ in the cases when $p=n$, $q=n-1$, or $p=q=n$, $n=1,2,\dots$. We will also give a construction of these representations for $\mathfrak{so}(0,n)$. We emphasize that we are only interested in \emph{real} spin representations. They share the general name of \emph{Majorana representations}. Our presentation of this material is adapted from \cite{traut}.
We will need Pauli matrices
\begin{equation}
\sigma_x=\begin{pmatrix} 0&1\\1&0\end{pmatrix},\quad \epsilon=-i\sigma_y=\begin{pmatrix} 0&-1\\1&0\end{pmatrix},\quad \sigma_z=\begin{pmatrix} 1&0\\0&-1\end{pmatrix},\label{pauu}
\end{equation}
and the $2\times 2$ identity matrix
\begin{equation}
I=\begin{pmatrix} 1&0\\0&1\end{pmatrix}.\label{pauu1}\end{equation}
We have the following identities:
\begin{equation}\begin{aligned}
&\sigma_x^2=\sigma_z^2=-\epsilon^2=I\\
&\sigma_x\epsilon=-\epsilon\sigma_x=\sigma_z,\quad\sigma_z\sigma_x=-\sigma_x\sigma_z=-\epsilon,\quad \epsilon\sigma_z=-\sigma_z\epsilon=\sigma_x.
\end{aligned}\label{iden}
\end{equation}
Now we quote \cite{traut}:
\begin{quote}
With this notation, \emph{restricting to low dimensions} $p+q=4,5,6$ \emph{and} $7$, the real representations of the Clifford algebra ${\mathcal C}\ell(0,p+q)$ are all in dimension $s=8$, and are generated by the $p+q$ matrices $\rho_1,\dots,\rho_{(p+q)}$ given by:
\begin{equation}\begin{aligned}
\rho_1&=\sigma_z\otimes I\otimes \epsilon\\
\rho_2&=\sigma_z\otimes \epsilon\otimes \sigma_x\\
\rho_3&=\sigma_z\otimes \epsilon\otimes \sigma_z\\
\rho_4&=\sigma_x\otimes \epsilon \otimes I\\
\rho_5&=\sigma_x\otimes \sigma_x\otimes \epsilon\\
\rho_6&=\sigma_x\otimes \sigma_z\otimes \epsilon\\
\rho_7&=\epsilon\otimes I\otimes I.
\end{aligned}\label{cl07}\end{equation}
The 8 matrices $\theta_\mu=\sigma_x\otimes\rho_\mu$, $\mu=1,\dots, 7$ and $\theta_8=\epsilon\otimes I \otimes I\otimes I$ give the real representation of ${\mathcal C}\ell(0,8)$ in $S=\mathbb{R}^{16}$. Dropping the first factor in $\rho_1,\rho_2,\rho_3$ one obtains the matrices generating a representation of ${\mathcal C}\ell(0,3)$ in $S=\mathbb{R}^4$, etc.
\end{quote}
Majorana representations of $\mathfrak{so}(n-1,n)$ in dimension $s=2^{n-1}$ are called \emph{Pauli representations}, and Majorana representations of $\mathfrak{so}(n,n)$ in dimension $s=2^n$, are called \emph{Dirac representations}.
To construct them we need generalizations of the \emph{Pauli} $\sigma$ matrices and \emph{Dirac} $\gamma$ \emph{matrices}. The construction of those is \emph{inductive}.
It starts in dimension $p+q=1$ with the single matrix $\sigma_1=1$ and, for every $n=1,2,\dots$, alternates between the Pauli matrices $\sigma_\mu$, $\mu=1,\dots,2n-1$, in dimension $p+q=2n-1$, and the Dirac matrices $\gamma_\mu$, $\mu=1,\dots,2n$, in dimension $p+q=2n$.
Again quoting Trautman \cite{traut} we have:
\begin{enumerate}
\item In dimension $p+q=1$ put $\sigma_1=1$.
\item Given $2^{n-1}\times 2^{n-1}$ matrices $\sigma_\mu$, $\mu=1,\dots,2n-1$, define
$$\gamma_\mu=\begin{pmatrix} 0&\sigma_\mu\\\sigma_\mu&0\end{pmatrix}\,\,\mathrm{for}\,\,\mu=1,\dots,2n-1,$$
and
$$\gamma_{2n}=\begin{pmatrix} 0&-I\\I&0\end{pmatrix},$$
where $I$ is the identity $2^{n-1}\times 2^{n-1}$ matrix.
\item Given $2^n\times 2^n$ matrices $\gamma_\mu$, $\mu=1,\dots,2n$, define $\sigma_\mu=\gamma_\mu$ for $\mu=1,\dots, 2n$, and $\sigma_{2n+1}=\gamma_1\dots\gamma_{2n}$, so that for $n>0$,
$$\sigma_{2n+1}=\begin{pmatrix} I&0\\0&-I\end{pmatrix}.$$
\end{enumerate}
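For orientation, the first pass through this induction reproduces the $2\times2$ matrices \eqref{pauu}: starting from $\sigma_1=1$, step (2) gives
$$\gamma_1=\begin{pmatrix} 0&1\\1&0\end{pmatrix}=\sigma_x,\qquad \gamma_2=\begin{pmatrix} 0&-1\\1&0\end{pmatrix}=\epsilon,$$
and step (3) gives $\sigma_3=\gamma_1\gamma_2=\sigma_x\epsilon=\sigma_z$, in agreement with the identities \eqref{iden}.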
In every dimension $p+q=2n-1$, $n\geq 1$, the Pauli matrices $\sigma_\mu$, $\mu=1,\dots,2n-1$ satisfy
$$\sigma_\mu\sigma_\nu+\sigma_\nu\sigma_\mu=2g_{\mu\nu}\underbrace{\big(I\otimes\dots\otimes I\big)}_{n-1\,\, \mathrm{times}},$$
where the $(2n-1)\times (2n-1)$ symmetric matrix $(g_{\mu\nu})$ is \emph{diagonal}, and has the following diagonal elements:
$$(g_{\mu\nu})=\mathrm{diag}\underbrace{(1,-1,\dots,-1,1)}_{(2n-1)\,\, \mathrm{times}}.$$
Likewise, in every dimension $p+q=2n$, $n\geq 1$, the Dirac matrices $\gamma_\mu$, $\mu=1,\dots,2n$ satisfy
$$\gamma_\mu\gamma_\nu+\gamma_\nu\gamma_\mu=2g_{\mu\nu}\underbrace{\big(I\otimes\dots\otimes I\big)}_{n\,\, \mathrm{times}},$$
where the $(2n)\times (2n)$ symmetric matrix $(g_{\mu\nu})$ is \emph{diagonal}, and has the following diagonal elements:
$$(g_{\mu\nu})=\mathrm{diag}\underbrace{(1,-1,\dots,1,-1)}_{2n\,\, \mathrm{times}}.$$
Therefore, for each $n=1,2,\dots$ the set $\{\sigma_\mu\}_{\mu=1}^{2n-1}$ of Pauli matrices generates the elements of a real $2^{n-1}$-dimensional representation of the Clifford algebra ${\mathcal C}\ell(n-1,n)$, and the set $\{\gamma_\mu\}_{\mu=1}^{2n}$ of Dirac matrices generates the elements of a real $2^{n}$-dimensional representation of the Clifford algebra ${\mathcal C}\ell(n,n)$.
Then, in turn, these real Clifford algebra representations can be further used to define the real spin representations of the Lie algebras $\mathfrak{so}(p+q,0)$, $\mathfrak{so}(n-1,n)$ and $\mathfrak{so}(n,n)$ as follows. One obtains all the generators of the spin representation of $\mathfrak{so}(g)$ by spanning it with all the elements of the form
\begin{itemize}
\item $\tfrac12\rho_\mu\rho_\nu$, with $1\leq \mu<\nu\leq (p+q)$, in the case of $\mathfrak{so}(p+q,0)$, $p+q=3,5,6,7$;
\item $\tfrac12\theta_\mu\theta_\nu$, with $1\leq \mu<\nu\leq 8$, in the case of $\mathfrak{so}(8,0)$;
\item $\tfrac12\sigma_\mu\sigma_\nu$, with $1\leq \mu<\nu\leq (p+q)=2n-1$, in the case of $\mathfrak{so}(n-1,n)$;
\item $\tfrac12\gamma_\mu\gamma_\nu$, with $1\leq \mu<\nu\leq (p+q)=2n$, in the case of $\mathfrak{so}(n,n)$.
\end{itemize}
For further details consult \cite{traut}.
We will use all this information in the next sections, where we construct examples.
\section{Application: Obtaining the flat model for (3,6) distributions}
Let $(\rho,S)$ be the defining representation of $\mathfrak{so}(3)$ in $S=\mathbb{R}^3$. It can be generated by:
\begin{equation}\rho(A_1)=\begin{pmatrix} 0&0&-1\\0&0&0\\1&0&0\end{pmatrix},\quad\rho(A_2)=\begin{pmatrix} 0&1&0\\-1&0&0\\0&0&0\end{pmatrix},\quad\rho(A_3)=\begin{pmatrix} 0&0&0\\0&0&-1\\0&1&0\end{pmatrix}.\label{ro3a}\end{equation}
And let $(\tau,R)$ be an equivalent 3-dimensional representation of $\mathfrak{so}(3)$ given by
\begin{equation} \tau(A_1)=\begin{pmatrix} 0&0&-1\\0&0&0\\1&0&0\end{pmatrix},\quad\tau(A_2)=\begin{pmatrix} 0&-1&0\\1&0&0\\0&0&0\end{pmatrix},\quad\tau(A_3)=\begin{pmatrix} 0&0&0\\0&0&1\\0&-1&0\end{pmatrix}.\label{tau3a}\end{equation}
We claim that for these two representations of $\mathfrak{so}(3)$, in the standard bases in $S=\mathbb{R}^3$, $R=\mathbb{R}^3$, the magical equation \eqref{magb} has the following solution:
$$\omega^1_{\mu\nu}=\begin{pmatrix} 0&0&0\\0&0&-1\\0&1&0\end{pmatrix},\quad\omega^2_{\mu\nu}=\begin{pmatrix} 0&0&-1\\0&0&0\\1&0&0\end{pmatrix},\quad\omega^3_{\mu\nu}=\begin{pmatrix} 0&-1&0\\1&0&0\\0&0&0\end{pmatrix}.$$
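The claim can be checked by a direct computation. A minimal numerical sketch of this check (our own, using the convention that in matrix form \eqref{magb} reads $\rho(A)^T\omega^i+\omega^i\rho(A)=\sum_j\tau(A)^i{}_j\,\omega^j$) is:
\begin{verbatim}
import numpy as np

# Generators rho(A_k) of (ro3a), tau(A_k) of (tau3a), and the claimed
# solution omega^i of the magical equation, all in the standard bases.
rho = [np.array([[0,0,-1],[0,0,0],[1,0,0]]),
       np.array([[0,1,0],[-1,0,0],[0,0,0]]),
       np.array([[0,0,0],[0,0,-1],[0,1,0]])]
tau = [np.array([[0,0,-1],[0,0,0],[1,0,0]]),
       np.array([[0,-1,0],[1,0,0],[0,0,0]]),
       np.array([[0,0,0],[0,0,1],[0,-1,0]])]
omega = [np.array([[0,0,0],[0,0,-1],[0,1,0]]),
         np.array([[0,0,-1],[0,0,0],[1,0,0]]),
         np.array([[0,-1,0],[1,0,0],[0,0,0]])]

# Verify rho(A)^T omega^i + omega^i rho(A) = sum_j tau(A)^i_j omega^j
# for every generator A and every i = 1, 2, 3.
for A_rho, A_tau in zip(rho, tau):
    for i in range(3):
        lhs = A_rho.T @ omega[i] + omega[i] @ A_rho
        rhs = sum(A_tau[i, j] * omega[j] for j in range(3))
        assert np.allclose(lhs, rhs)
print("magical equation verified for all generators")
\end{verbatim}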
Using this solution $(\rho,\tau,\omega)$ of the magical equation \eqref{maga}, we apply Corollary \ref{cruco} with $\lambda^i={\rm d} u^i+\omega^i{}_{\mu\nu}x^\mu{\rm d} x^\nu$, and obtain the following theorem.
\begin{theorem}
Let $M=\mathbb{R}^6$ with coordinates $(u^1,u^2,u^3,x^1,x^2,x^3)$ and consider three 1-forms
$$\begin{aligned}
\lambda^1=&{\rm d} u^1+x^2{\rm d} x^3\\
\lambda^2=&{\rm d} u^2+x^1{\rm d} x^3\\
\lambda^3=&{\rm d} u^3+x^1{\rm d} x^2
\end{aligned}$$
on $M$.
Then the rank 3 distribution ${\mathcal D}$ on $M$ defined by ${\mathcal D}=\{\mathrm{T}\mathbb{R}^6\ni X\,\,|\,\, X\hook \lambda^i=0,\,\,i=1,2,3\}$ has its Lie algebra of infinitesimal symmetries $\mathfrak{aut}({\mathcal D})$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_\minu=R\oplus S$ where $(\rho,S=\mathbb{R}^3)$ and $(\tau,R=\mathbb{R}^3)$ are the respective representations \eqref{ro3a}, \eqref{tau3a} of $\mathfrak{n}_{00}=\mathfrak{so}(3)$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple graded Lie algebra $\mathfrak{so}(4,3)$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{so}(4,3),$$
with the following gradation:
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathfrak{gl}(3,\mathbb{R})\supset\mathfrak{n}_{00}$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
which is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{so}(4,3)$.
The contactification $(M,{\mathcal D})$ is locally a flat model for the parabolic geometry of type $\big({\bf Spin}(4,3),P\big)$ related to the following \emph{crossed} Satake diagram: \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{B}{oot}
\end{dynkinDiagram}.
\end{theorem}
\begin{proof}
The proof is by calculating the Tanaka prolongation of $\mathfrak{n}_\minu=R\oplus S$; it turns out to be $\mathfrak{so}(4,3)$, with $0^{th}$ graded component $\mathfrak{n}_0=\mathfrak{gl}(3,\mathbb{R})$, naturally graded by the Tanaka prolongation algebraic procedure precisely as $\mathfrak{aut}({\mathcal D})$ in the statement of the theorem.
\end{proof}
\section{Application: Obtaining Biquard's 7-dimensional flat quaternionic contact manifold via contactification using spin representations of $\mathfrak{so}(1,2)$ and $\mathfrak{so}(3,0)$}
According to Trautman's procedure \cite{traut} there is a real representation of ${\mathcal C}\ell(0,3)$ in $\mathbb{R}^4$. There is also an analogous representation of ${\mathcal C}\ell(2,1)$. Both of them are generated by the $\sigma$ matrices
$$\sigma_1=\begin{pmatrix} 0&-1&0&0\\1&0&0&0\\0&0&0&-1\\0&0&1&0\end{pmatrix},\quad\sigma_2=\begin{pmatrix} 0&0&0&-\varepsilon\\0&0&-\varepsilon&0\\0&1&0&0\\1&0&0&0\end{pmatrix},\quad\sigma_3=\begin{pmatrix} 0&0&-\varepsilon&0\\0&0&0&\varepsilon\\1&0&0&0\\0&-1&0&0\end{pmatrix},$$
where $$\varepsilon=1\quad \mathrm{for}\quad {\mathcal C}\ell(0,3),$$ and $$\varepsilon=-1\quad\mathrm{for}\quad{\mathcal C}\ell(2,1).$$ One can check that these matrices\footnote{In Trautman's quote in the previous section, these matrices were denoted by $\rho_1$, $\rho_2$, $\rho_3$, and they were only explicitly given for $\varepsilon=1$.} satisfy the (representation of) Clifford algebra relations:
$$\sigma_\mu\sigma_\nu+\sigma_\nu\sigma_\mu=2g_{\mu\nu}\big(I\otimes I)$$
with all $g_{\mu\nu}$ being zero, except $g_{11}=-1$, $g_{22}=g_{33}=-\varepsilon$.
This leads to the following spinorial
representation $\rho$ of $\mathfrak{so}(0,3)$ or $\mathfrak{so}(1,2)$
\begin{equation}\rho(A_1)=-\tfrac12\sigma_3,\quad \rho(A_2)=\tfrac12\sigma_2,\quad\rho(A_3)=-\tfrac12\varepsilon\sigma_1.\label{so31}\end{equation}
Here $(A_1,A_2,A_3)$ constitutes a basis for $\mathfrak{so}(0,3)$ when $\varepsilon=1$ and for $\mathfrak{so}(2,1)$ when $\varepsilon=-1$. This can be extended to the representation of
$$\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$$
in $S=\mathbb{R}^4$ by setting the value of $\rho$ on the generator $A_4=\mathrm{Id}$ as
\begin{equation}\rho(A_4)=\tfrac12(I\otimes I).\label{so32}\end{equation}
For this representation of $\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$, the magical equation \eqref{maga} has the following solution
$$\begin{aligned}
\omega^1=\big(\omega^1{}_{\mu\nu}\big)=&\begin{pmatrix} 0&0&1&0\\0&0&0&-1\\-1&0&0&0\\0&1&0&0\end{pmatrix},\quad \omega^2=\big(\omega^2{}_{\mu\nu}\big)=\begin{pmatrix} 0&0&0&-1\\0&0&-1&0\\0&1&0&0\\1&0&0&0\end{pmatrix},\\
&\\
&\omega^3=\big(\omega^3{}_{\mu\nu}\big)=\begin{pmatrix} 0&-\varepsilon&0&0\\\varepsilon&0&0&0\\0&0&0&-1\\0&0&1&0\end{pmatrix},
\end{aligned}$$
with
\begin{equation}\begin{aligned}
\tau(A_1)=\begin{pmatrix} 0&0&0\\0&0&\varepsilon\\0&-1&0\end{pmatrix},\quad &\tau(A_2)=\begin{pmatrix} 0&0&-\varepsilon\\0&0&0\\-1&0&0\end{pmatrix},\quad\tau(A_3)=\begin{pmatrix} 0&-\varepsilon&0\\\varepsilon&0&0\\0&0&0\end{pmatrix}\\
&\tau(A_4)=\begin{pmatrix} 1&0&0\\0&1&0\\0&0&1\end{pmatrix}.
\end{aligned}
\label{so33}\end{equation}
This in particular gives the vectorial representation $\tau$ of
$$\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$$
in $R=\mathbb{R}^3$.
Now, by using this solution for $(\rho,\omega,\tau)$ and applying our Corollary \ref{cruco}, we have an $(s=4)$-dimensional manifold $N=\mathbb{R}^4$, equipped with $r=3$ two-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$, $i=1,2,3$, which contactifies to an $(s+r)=7$-dimensional manifold $M=\mathbb{R}^7$ having a distribution structure $(M,{\mathcal D})$ defined as the annihilator of the $r=3$ one-forms $\lambda^i={\rm d} u^i+\omega^i{}_{\mu\nu}x^\mu{\rm d} x^\nu$, $i=1,2,3$.
We have the following theorem.
\begin{theorem}
Let $M=\mathbb{R}^7$ with coordinates $(u^1,u^2,u^3,x^1,x^2,x^3,x^4)$, and consider three 1-forms $\lambda^1,\lambda^2,\lambda^3$ on $M$ given by
$$\begin{aligned}
\lambda^1=&{\rm d} u^1+x^1{\rm d} x^3-x^2{\rm d} x^4,\\
\lambda^2=&{\rm d} u^2-x^1{\rm d} x^4-x^2{\rm d} x^3,\\
\lambda^3=&{\rm d} u^3-\varepsilon x^1{\rm d} x^2-x^3{\rm d} x^4,
\end{aligned}\quad\quad\mathrm{with}\quad\quad\varepsilon=\pm1.$$
The rank 4 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^7\ni X\,\,|\,\,X\hook\lambda^1=X\hook\lambda^2=X\hook\lambda^3=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^4)$ is the spinorial representation \eqref{so31}-\eqref{so32} of $\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$, and $(\tau,R=\mathbb{R}^3)$ is the vectorial representation \eqref{so33} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple Lie algebra $\mathfrak{sp}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{sp}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big),$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$\begin{aligned}
\mathfrak{n}_0=&\mathfrak{n}_{00}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)=\\&\mathbb{R}\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)\oplus \mathfrak{so}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big),\end{aligned}$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
which is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{sp}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big)$.
The contactification $(M,{\mathcal D})$ is locally a flat model for the parabolic geometry of type $\Big(\mathbf{Sp}\big(\tfrac{1-\varepsilon}{2},\tfrac{5+\varepsilon}{2}\big),P\Big)$ related to the following \emph{crossed} Satake diagrams:
\begin{enumerate}
\item
\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{C}{*t*}
\end{dynkinDiagram}
in the case of $\varepsilon=1$, and
\item \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{C}{oto}
\end{dynkinDiagram}
in the case of $\varepsilon=-1$.
\end{enumerate}
\end{theorem}
\begin{remark}
When $\varepsilon=1$ the flat parabolic geometry described in the above theorem is the lowest dimensional example of the \emph{quaternionic contact} geometry considered by Biquard \cite{bicquard}.
\end{remark}
\section{Application: Obtaining the exceptionals from contactifications of spin representations; the $\mathfrak{f}_4$ case}
We will now explain the Cartan realization of the simple exceptional Lie algebra $\mathfrak{f}_4$ in $\mathbb{R}^{15}$ mentioned in the introduction.
The Satake diagrams for the real forms of the complex simple exceptional Lie algebra $\mathfrak{f}_4$ are as follows:\\
\centerline{\begin{dynkinDiagram}[edge length=.4cm]{F}{****}
\end{dynkinDiagram},\hspace{0.5cm}
\begin{dynkinDiagram}[edge length=.4cm]{F}{***o}
\end{dynkinDiagram},\hspace{0.5cm}
\begin{dynkinDiagram}[edge length=.4cm]{F}{oooo}\end{dynkinDiagram}.}
The first diagram corresponds to the \emph{compact} real form of $\mathfrak{f}_4$ and is not interesting for us. The other two diagrams are interesting:
\begin{enumerate}
\item The last, \begin{dynkinDiagram}[edge length=.4cm]{F}{oooo}\end{dynkinDiagram}, corresponds to the \emph{split} real form $\mathfrak{f}_I$, and
\item the middle one, \begin{dynkinDiagram}[edge length=.4cm]{F}{***o}
\end{dynkinDiagram}, denoted by $\mathfrak{f}_{II}$ in \cite{CS}, is also interesting, since similarly to $\mathfrak{f}_I$, it defines a \emph{parabolic geometry} in dimension 15.
\end{enumerate}
Crossing the last node on the right in the diagrams for $\mathfrak{f}_I$ or $\mathfrak{f}_{II}$, as in\\
\centerline{\begin{dynkinDiagram}[edge length=.4cm]{F}{ooox}\end{dynkinDiagram} \hspace{0.5cm} or \hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{F}{***x}\end{dynkinDiagram},} we see that in both algebras there exist \emph{parabolic subalgebras} $\mathfrak{p}_I$ or $\mathfrak{p}_{II}$, respectively, of dimension 37, $\dim(\mathfrak{p}_I)=\dim(\mathfrak{p}_{II})=37$. In both respective cases, these choices of parabolics define similar gradations in the corresponding real forms $\mathfrak{f}_I$, $\mathfrak{f}_{II}$, of the simple exceptional Lie algebra $\mathfrak{f}_4$:
$$
\mathfrak{f}_A=\mathfrak{n}_{-2A}\oplus\mathfrak{n}_{-1A}\oplus\mathfrak{n}_{0A}\oplus\mathfrak{n}_{1A}\oplus\mathfrak{n}_{2A}\quad \mathrm{for}\quad A=I,II,
$$
with
$$\mathfrak{n}_{-A}=\mathfrak{n}_{-2A}\oplus\mathfrak{n}_{-1A}\quad \mathrm{for}\quad A=I,II,$$
being 2-step nilpotent and having grading components $\mathfrak{n}_{-2A}$ and $\mathfrak{n}_{-1A}$ of respective dimension $r_A=7$ and $s_A=8$,
$$r_A=\dim(\mathfrak{n}_{-2A})=7,\quad\quad s_A=\dim(\mathfrak{n}_{-1A})=8\quad \mathrm{for}\quad A=I,II.$$
The Lie algebra $\mathfrak{n}_{0A}$ in the Tanaka prolongation of $\mathfrak{n}_{-A}$ up to $0^{th}$ order is
\begin{enumerate}
\item $\mathfrak{n}_{0I}=\mathbb{R}\oplus\mathfrak{so}(4,3)$ in the case of $\mathfrak{f}_I$, and
\item $\mathfrak{n}_{0II}=\mathbb{R}\oplus\mathfrak{so}(0,7)$ in the case of $\mathfrak{f}_{II}$.
\end{enumerate}
Thus, from the analysis performed here, we see that there exist two different 2-step filtered structures $(M_I,{\mathcal D}_I)$ and $(M_{II},{\mathcal D}_{II})$, both in dimension 15, with the respective $F_I$-symmetric or $F_{II}$-symmetric flat models, realized on $M_I=F_I/P_I$ or $M_{II}=F_{II}/P_{II}$. Here $F_I$ and $F_{II}$ denote the real Lie groups whose Lie algebras are $\mathfrak{f}_I$ and $\mathfrak{f}_{II}$, respectively. Similarly, $P_I$ and $P_{II}$ are the parabolic subgroups of $F_I$ and $F_{II}$, respectively, whose Lie algebras are $\mathfrak{p}_I$ and $\mathfrak{p}_{II}$. Recalling that each of the real groups $\mathbf{SO}(4,3)$ and $\mathbf{SO}(0,7)$ has \emph{two} real irreducible representations $\rho$ in dimension $s=8$ and $\tau$ in dimension $r=7$, with the 8-dimensional representation $\rho$ being the \emph{spin} representation of either $\mathbf{SO}(4,3)$ or $\mathbf{SO}(0,7)$, we can now give the explicit realizations of the ${\bf F}_4$-symmetric structures $(M_A,{\mathcal D}_A)$ for $A=I,II$.
\subsection{Cartan's realization of $\mathfrak{f}_I$}\label{41}
The plan is to start with the Lie algebra $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3)$, as in the crossed Satake diagram \begin{dynkinDiagram}[edge length=.4cm]{F}{ooox}\end{dynkinDiagram} of $\mathfrak{f}_I$, and its two representations: \begin{itemize}
\item a representation $(\rho,S=\mathbb{R}^8)$, corresponding to the spin representation of $\mathbf{SO}(4,3)$ in the $(s=8)$-dimensional space $\mathfrak{n}_{-1}=S$ of real Pauli spinors, and
\item a representation $(\tau,R=\mathbb{R}^7)$, corresponding to the vectorial representation of $\mathbf{SO}(4,3)$ in $(r=7)$-dimensional space $\mathfrak{n}_{-2}=R$ of vectors in $\mathbb{R}^{(4,3)}$.
\end{itemize}
Having these two representations of $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3)$ in the same basis, we will then solve the equations \eqref{maga} for the map $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$, which will give us the commutators between elements in $\mathfrak{n}_{-1}$. This, via Corollary \ref{cruco}, will provide the explicit realization of the 15-dimensional contactification $(M,{\mathcal D})$ with the exceptional simple Lie algebra $\mathfrak{f}_I$ as its symmetry algebra.
Actually, the passage from $\rho$ to $\tau$ in the above plan is a bit tricky, since we need to have these representations expressed in the same basis. To handle this obstacle, we will start with the spin representation $\rho$, in the space of \emph{Pauli spinors} $S$, and then we will use the fact that the skew representation $\rho\wedge\rho$ in the space of the bispinors $\bigwedge^2S$ decomposes as
$$\textstyle \bigwedge^2S=\bigwedge_{21}\oplus\bigwedge_{7},$$
where $\bigwedge_{21}$ is the 21-dimensional \emph{adjoint representation} of $\mathbf{SO}(4,3)$ and $\bigwedge_7$ is its 7-dimensional \emph{vectorial representation} $\tau$. In this way we will have the two representations $(\rho,S)$ and $(\tau,R=\bigwedge_7)$ expressed in the same basis $\{A_I\}$ of $\mathbb{R}\oplus\mathfrak{so}(4,3)$, and will apply Corollary \ref{cruco} to get the desired $F_I$-symmetric contactification in dimension 15. In doing this we will use the notation from Section \ref{spintraut}.
According to \cite{traut}, the real 8-dimensional \emph{representation of the Clifford algebra} ${\mathcal C}\ell(4,3)$ is generated by the seven \emph{8-dimensional Pauli matrices}:
$$\begin{aligned}
&\sigma_1=\sigma_x\otimes\sigma_x\otimes\sigma_x\\
&\sigma_2=\sigma_x\otimes\sigma_x\otimes\epsilon\\
&\sigma_3=\sigma_x\otimes\sigma_x\otimes\sigma_z\\
&\sigma_4=\sigma_x\otimes\epsilon\otimes I\\
&\sigma_5=\sigma_x\otimes\sigma_z\otimes I\\
&\sigma_6=\epsilon\otimes I\otimes I\\
&\sigma_7=\sigma_z\otimes I\otimes I.
\end{aligned}
$$
Using the identities \eqref{iden}, especially the one saying that $\epsilon^2=-I$, one easily finds that the seven Pauli matrices $\sigma_i$, $i=1,2,\dots,7$, satisfy the \emph{Clifford algebra identity}
$$\sigma_i\sigma_j+\sigma_j\sigma_i\,\,=\,\,2g_{ij}\,\,(I\otimes I \otimes I)\,,\quad\quad i,j=1,2,\dots,7,$$
with the coefficients $g_{ij}$ forming a diagonal $7\times7$ matrix
$$\Big(\,\,g_{ij}\,\,\Big)\,\,=\,\,\mathrm{diag}\Big(1,-1,1,-1,1,-1,1\Big),$$
of signature $(4,3)$. Thus, the 8-dimensional Pauli matrices $\sigma_i$, $i=1,\dots,7$, generate the Clifford algebra ${\mathcal C}\ell(4,3)$, and in turn, \emph{by the general theory}, as described in Section \ref{spintraut}, they define the \emph{spin representation} $\rho$ of $\mathfrak{so}(4,3)$ in an 8-dimensional real vector space $S=\mathbb{R}^8$ of Pauli(-\emph{Majorana}) spinors.
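\begin{remark}
Identities of this kind are easily machine-checked. The following minimal numerical sketch (in Python/NumPy; it assumes the standard matrices $\sigma_x=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$, $\epsilon=\left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)$, $\sigma_z=\left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$ of Section \ref{spintraut}) verifies the Clifford algebra identity for the seven 8-dimensional Pauli matrices above.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])   # sigma_x
ep = np.array([[0., 1.], [-1., 0.]])  # epsilon, with epsilon^2 = -I
sz = np.array([[1., 0.], [0., -1.]])  # sigma_z
k3 = lambda a, b, c: np.kron(a, np.kron(b, c))

sigma = [k3(sx, sx, sx), k3(sx, sx, ep), k3(sx, sx, sz), k3(sx, ep, I2),
         k3(sx, sz, I2), k3(ep, I2, I2), k3(sz, I2, I2)]
g = np.diag([1., -1., 1., -1., 1., -1., 1.])

for i in range(7):
    for j in range(7):
        acomm = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        assert np.allclose(acomm, 2. * g[i, j] * np.eye(8))
print("Clifford identity for Cl(4,3) verified")
\end{verbatim}
\end{remark}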
\subsubsection{The spinorial representation of $\mathfrak{so}(4,3)$}
To be more explicit, let $(i,j)$ be such that $1\leq i<j\leq 7$, and let $I$ be a function
\begin{equation} I(i,j)=1+i+\tfrac12(j-3)j\label{Iij}\end{equation}
on such pairs. Note that the function $I$ is a bijection between the 21 pairs $(i,j)$ and the set of 21 natural numbers $I=1,2,\dots, 21$. Consider the twenty one $8\times 8$ real matrices $\sigma_i\sigma_j$ with $1\leq i<j\leq 7$, and a basis $\{A_I\}_{I=1}^{21}$ of the Lie algebra $\mathfrak{so}(4,3)$. Then the spin representation $\rho$ of $\mathfrak{so}(4,3)$ is given by
$$\rho(A_{I(i,j)})=\tfrac12\sigma_i\sigma_j\quad\mathrm{with}\quad 1\leq i<j\leq7.$$
Explicitly, we have:
\begin{equation}
\begin{array}{lll}
\rho(A_1)=\tfrac12 I\otimes I \otimes\sigma_z,&\quad \rho(A_8)=\tfrac12 I\otimes \epsilon\otimes\epsilon,&\quad \rho(A_{15})=\tfrac12 \sigma_z\otimes \sigma_z \otimes I,\\
\rho(A_2)=\tfrac12 I\otimes I \otimes\epsilon, &\quad\rho(A_9)=\tfrac12 I\otimes \epsilon \otimes\sigma_z, &\quad \rho(A_{16})=\tfrac12 \epsilon\otimes \sigma_x \otimes\sigma_x, \\
\rho(A_3)=\tfrac12 I\otimes I \otimes\sigma_x, &\quad\rho(A_{10})=\tfrac12 I\otimes \sigma_x \otimes I,&\quad \rho(A_{17})=\tfrac12 \epsilon\otimes \sigma_x \otimes\epsilon,\\
\rho(A_4)=\tfrac12 I\otimes \sigma_z \otimes\sigma_x,&\quad \rho(A_{11})=\tfrac12 \sigma_z\otimes \sigma_x \otimes\sigma_x,&\quad \rho(A_{18})=\tfrac12 \epsilon\otimes \sigma_x \otimes\sigma_z,\\
\rho(A_5)=\tfrac12 I\otimes \sigma_z \otimes\epsilon,&\quad \rho(A_{12})=\tfrac12 \sigma_z\otimes \sigma_x \otimes\epsilon,&\quad \rho(A_{19})=\tfrac12 \epsilon\otimes \epsilon \otimes I,\\
\rho(A_6)=\tfrac12 I\otimes \sigma_z \otimes\sigma_z,&\quad \rho(A_{13})=\tfrac12 \sigma_z\otimes \sigma_x\otimes\sigma_z,&\quad \rho(A_{20})=\tfrac12 \epsilon\otimes \sigma_z \otimes I,\\
\rho(A_7)=\tfrac12 I\otimes \epsilon \otimes\sigma_x, &\quad \rho(A_{14})=\tfrac12 \sigma_z\otimes \epsilon\otimes I, &\quad
\rho(A_{21})=\tfrac12 \sigma_x\otimes I \otimes I.
\end{array}
\label{f41}\end{equation}
The spin representation $\rho$ of $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3)$ needs one generator more. Let us call it $\rho(A_{22})$. We have
$$\rho(A_{22})=\tfrac12 I\otimes I\otimes I.$$
We determine the structure constants $c^K{}_{IJ}$ of $\mathbb{R}\oplus\mathfrak{so}(4,3)$ in the basis $A_I$ from
\begin{equation} [\,\,\rho(A_I),\,\,\rho(A_J)\,\,]\,\,=\,\,c^K{}_{IJ}\,\,\rho(A_K).\label{strco}\end{equation}
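\begin{remark}
In practice the structure constants $c^K{}_{IJ}$ are conveniently obtained by machine. A sketch (Python/NumPy, with the same conventions for the $2\times 2$ matrices as in the previous remark): it builds the generators $\rho(A_{I(i,j)})=\tfrac12\sigma_i\sigma_j$ ordered by the bijection \eqref{Iij}, expands each commutator in this basis by least squares, and computes the Killing form matrix $K_{IJ}=c^L{}_{IM}c^M{}_{JL}$; one finds the diagonal matrix displayed further below.
\begin{verbatim}
import numpy as np
from itertools import combinations

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]]); ep = np.array([[0., 1.], [-1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
k3 = lambda a, b, c: np.kron(a, np.kron(b, c))
sigma = [k3(sx, sx, sx), k3(sx, sx, ep), k3(sx, sx, sz), k3(sx, ep, I2),
         k3(sx, sz, I2), k3(ep, I2, I2), k3(sz, I2, I2)]

# order the pairs (i,j) by the bijection I(i,j) = 1 + i + (j-3)j/2
pairs = sorted(combinations(range(1, 8), 2),
               key=lambda p: 1 + p[0] + (p[1] - 3) * p[1] // 2)
rho = [0.5 * sigma[i - 1] @ sigma[j - 1] for (i, j) in pairs]  # 21 generators

# expand each commutator in the basis {rho(A_K)} by least squares
V = np.stack([m.ravel() for m in rho], axis=1)                 # 64 x 21
c = np.zeros((21, 21, 21))                                     # c[K, I, J]
for i in range(21):
    for j in range(21):
        comm = rho[i] @ rho[j] - rho[j] @ rho[i]
        sol = np.linalg.lstsq(V, comm.ravel(), rcond=None)[0]
        c[:, i, j] = np.round(sol, 10)

# Killing form K_{IJ} = c^L_{IM} c^M_{JL}; one finds 10*diag(1,-1,1,1,-1,...)
K = np.einsum('lim,mjl->ij', c, c)
print(np.round(np.diag(K)))
\end{verbatim}
\end{remark}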
\subsubsection{Obtaining the vectorial representation of $\mathfrak{so}(4,3)$}
Now, we take the space $\bigwedge^2S$ and consider the skew symmetric representation $${\stackrel{\scriptscriptstyle{a}}{\rho}}{}=\rho\wedge\rho$$ in it. We will write it in the standard basis $f_\mu$, $\mu=1,\dots,8$, of $S=\mathbb{R}^8$. We have $\rho(A_I)f_\mu=\rho_I{}^\nu{}_\mu f_\nu$. Now, the components of the 28-dimensional representation ${\stackrel{\scriptscriptstyle{a}}{\rho}}{}=\rho\wedge\rho$ are
$${\stackrel{\scriptscriptstyle{a}}{\rho}}{}_{I}{}^{\mu\nu}{}_{\alpha\beta}\,\,=\,\,\,\,\rho_I{}^{\mu}{}_\alpha\delta^{\nu}{}_\beta+
\delta^{\mu}{}_\alpha\rho_I{}^{\nu}{}_\beta-\rho_I{}^{\nu}{}_\alpha\delta^{\mu}{}_\beta-\delta^{\nu}{}_\alpha\rho_I{}^{\mu}{}_\beta\,\,,$$
and we have
$$\big({\stackrel{\scriptscriptstyle{a}}{\rho}}{}(A_I)w\big){}^{\mu\nu}={\stackrel{\scriptscriptstyle{a}}{\rho}}{}_{I}{}^{\mu\nu}{}_{\alpha\beta}w^{\alpha\beta}, \quad \forall w^{\alpha\beta}=w^{[\alpha\beta]}.$$
The Casimir operator for this representation is
$${\mathcal C}\,\,=\,\,10\,\,K^{IJ}\,\,{\stackrel{\scriptscriptstyle{a}}{\rho}}{}(A_I)\,{\stackrel{\scriptscriptstyle{a}}{\rho}}{}(A_J),$$
where $K^{IJ}$ is the inverse of the Killing form matrix $K_{IJ}=c^L{}_{IM}c^M{}_{JL}$ in the basis $A_I$. Since for the Killing form to be nondegenerate we must restrict to the semisimple part of $\mathbb{R}\oplus\mathfrak{so}(4,3)$, here the indices $I,J,K,L,M$ run over $1,2,\dots,21$, and, as always, repeated indices are summed over. One can check that in this basis of $\mathfrak{so}(4,3)$ the Killing form matrix is diagonal, and reads
$$
\big(\,\, K_{IJ}\,\,\big)=
10\,\mathrm{diag}\Big(\,1,-1,1,1,-1,1,-1,1,-1,1,1,-1,1,-1,1,-1,1,-1,1,-1,1\,\Big).
$$
The Casimir $\mathcal C$ defines the decomposition of the 28-dimensional reducible representation ${\stackrel{\scriptscriptstyle{a}}{\rho}}{}=\rho\wedge\rho$ into $$\textstyle \bigwedge^2S=\bigwedge_{21}\oplus\bigwedge_7,$$ where
the 7-dimensional irreducible representation \emph{space} $\bigwedge_7$ \emph{is the eigenspace of the Casimir operator consisting of eigen-bispinors with eigenvalue equal to 6},
$$\textstyle {\mathcal C}\,\,\big(\bigwedge_7\big)\,=6\,\bigwedge_7.$$
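\begin{remark}
This eigenvalue statement can be confirmed numerically. In the sketch below (Python/NumPy, conventions as in the previous remarks) we realize ${\stackrel{\scriptscriptstyle{a}}{\rho}}{}$ on bispinors as $w\mapsto \rho w+w\rho^T$, i.e. with the contraction in the component formula taken over $\alpha<\beta$, form the Casimir operator with the diagonal Killing matrix displayed above, restrict to $\bigwedge^2S$, and diagonalize; one finds the eigenvalue $6$ with multiplicity $7$, the complementary eigenvalue on $\bigwedge_{21}$ being $10$.
\begin{verbatim}
import numpy as np
from itertools import combinations

I2, I8 = np.eye(2), np.eye(8)
sx = np.array([[0., 1.], [1., 0.]]); ep = np.array([[0., 1.], [-1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
k3 = lambda a, b, c: np.kron(a, np.kron(b, c))
sigma = [k3(sx, sx, sx), k3(sx, sx, ep), k3(sx, sx, sz), k3(sx, ep, I2),
         k3(sx, sz, I2), k3(ep, I2, I2), k3(sz, I2, I2)]
pairs = sorted(combinations(range(1, 8), 2),
               key=lambda p: 1 + p[0] + (p[1] - 3) * p[1] // 2)
rho = [0.5 * sigma[i - 1] @ sigma[j - 1] for (i, j) in pairs]

# action w -> rho w + w rho^T on bispinors, vectorized via row-major ravel
ops = [np.kron(r, I8) + np.kron(I8, r) for r in rho]
signs = [1, -1, 1, 1, -1, 1, -1, 1, -1, 1, 1,
         -1, 1, -1, 1, -1, 1, -1, 1, -1, 1]       # 10*K^{II}, cf. (K_IJ) above
C = sum(s * op @ op for s, op in zip(signs, ops)) # Casimir 10 K^{IJ} rr_I rr_J

# restrict to the 28-dimensional antisymmetric subspace Lambda^2 S
basis = []
for a, b in combinations(range(8), 2):
    w = np.zeros((8, 8)); w[a, b], w[b, a] = 1., -1.
    basis.append(w.ravel() / np.sqrt(2.))
B = np.stack(basis, axis=1)                       # 64 x 28, orthonormal columns
evals = np.round(np.real(np.linalg.eigvals(B.T @ C @ B)), 8)
print({v: list(evals).count(v) for v in set(evals)})   # {6.0: 7, 10.0: 21}
\end{verbatim}
\end{remark}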
Explicitly, in the same basis $A_I$, $I=1,2,\dots,21$, as before, this 7-dimensional representation $(\tau,R=\bigwedge_7)$ of the $\mathfrak{so}(4,3)$ Lie algebra is given by:
\begin{equation}\begin{aligned}
&\tau(A_1)=E_{66}-E_{22},\\
&\tau(A_2)=\tfrac12(E_{23}-E_{32}+E_{25}-E_{52}+E_{36}-E_{63}+E_{56}-E_{65}),\\
&\tau(A_3)=\tfrac12(E_{23}+E_{32}+E_{25}+E_{52}+E_{36}+E_{63}+E_{56}+E_{65}),\\
&\tau(A_4)=\tfrac12(E_{23}+E_{32}-E_{25}-E_{52}-E_{36}-E_{63}+E_{56}+E_{65}),\\
&\tau(A_5)=\tfrac12(E_{23}-E_{32}-E_{25}+E_{52}-E_{36}+E_{63}+E_{56}-E_{65}),\\
&\tau(A_6)=E_{33}-E_{55},\\
&\tau(A_7)=\tfrac12(E_{12}-E_{21}-E_{16}+E_{61}-E_{27}+E_{72}+E_{67}-E_{76}),\\
&\tau(A_8)=\tfrac12(-E_{12}-E_{21}-E_{16}-E_{61}-E_{27}-E_{72}-E_{67}-E_{76}),\\
&\tau(A_9)=\tfrac12(E_{13}-E_{31}+E_{15}-E_{51}-E_{37}+E_{73}-E_{57}+E_{75}),\\
&\tau(A_{10})=\tfrac12(E_{13}+E_{31}-E_{15}-E_{51}+E_{37}+E_{73}-E_{57}-E_{75}),\\
&\tau(A_{11})=\tfrac12(-E_{12}-E_{21}+E_{16}+E_{61}+E_{27}+E_{72}-E_{67}-E_{76}),\\
&\tau(A_{12})=\tfrac12(E_{12}-E_{21}+E_{16}-E_{61}+E_{27}-E_{72}+E_{67}-E_{76}),\\
&\tau(A_{13})=\tfrac12(-E_{13}-E_{31}-E_{15}-E_{51}+E_{37}+E_{73}+E_{57}+E_{75}),\\
&\tau(A_{14})=\tfrac12(-E_{13}+E_{31}+E_{15}-E_{51}-E_{37}+E_{73}+E_{57}-E_{75}),\\
&\tau(A_{15})=E_{11}-E_{77},\\
&\tau(A_{16})=\tfrac12(2E_{24}-E_{42}+E_{46}-2E_{64}),\\
&\tau(A_{17})=\tfrac12(2E_{24}+E_{42}+E_{46}+2E_{64}),\\
&\tau(A_{18})=\tfrac12(2E_{34}-E_{43}-E_{45}+2E_{54}),\\
&\tau(A_{19})=\tfrac12(-2E_{34}-E_{43}+E_{45}+2E_{54}),\\
&\tau(A_{20})=\tfrac12(-2E_{14}+E_{41}+E_{47}-2E_{74}),\\
&\tau(A_{21})=\tfrac12(2E_{14}+E_{41}-E_{47}-2E_{74}),\\
&\tau(A_{22})=E_{11}+E_{22}+E_{33}+E_{44}+E_{55}+E_{66}+E_{77},
\end{aligned}\label{f411}\end{equation}
where $E_{ij}$, $i,j=1,2,\dots,7$, denote $7\times 7$ matrices with zeroes everywhere except the value 1 in the entry $(i,j)$, sitting at the intersection of the $i$th row and the $j$th column.
One can check that $$[\,\,\tau(A_I),\,\,\tau(A_J)\,\,]\,\,=\,\,c^K{}_{IJ}\,\,\tau(A_K),$$ with the same structure constants as in \eqref{strco}.
\subsubsection{A contactification with $\mathfrak{f}_I$ symmetry}
So now we are in the situation of having two representations $(\rho,S)$ and $(\tau,R=\bigwedge_7)$ of $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3)$, and we can try to solve the equation \eqref{maga} for the map $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$. Of course, if we started with some arbitrary $\rho$ and $\tau$, this equation would have no solutions other than 0, but here we expect a solution, since we know of its existence from Cartan's PhD thesis \cite{CartanPhd} and from the announcement in Helgason's paper \cite{He}. And indeed there is a nonzero solution $\omega$, which, when written in the basis $\{f_\mu\}$ in $S$ and $\{e_i\}$ in $R$, gives the \emph{seven} 2-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$, $i=1,\dots,7$, in $N=\mathbb{R}^8$ given by:
\begin{equation}\begin{aligned}
\omega^1=&{\rm d} x^1\wedge{\rm d} x^2-{\rm d} x^7\wedge{\rm d} x^8,\\
\omega^2=&{\rm d} x^2\wedge{\rm d} x^4-{\rm d} x^6\wedge{\rm d} x^8,\\
\omega^3=&{\rm d} x^1\wedge{\rm d} x^4-{\rm d} x^5\wedge{\rm d} x^8,\\
\omega^4=&\tfrac12\,\big(\,{\rm d} x^1\wedge{\rm d} x^6-{\rm d} x^2\wedge{\rm d} x^5-{\rm d} x^3\wedge{\rm d} x^8+{\rm d} x^4\wedge{\rm d} x^7\,\big),\\
\omega^5=&{\rm d} x^2\wedge{\rm d} x^3-{\rm d} x^6\wedge{\rm d} x^7,\\
\omega^6=&{\rm d} x^1\wedge{\rm d} x^3-{\rm d} x^5\wedge{\rm d} x^7,\\
\omega^7=&{\rm d} x^3\wedge{\rm d} x^4-{\rm d} x^5\wedge{\rm d} x^6.
\end{aligned}\label{2forms}\end{equation}
These, via the contactification and the theory summarized in Corollary \ref{cruco}, lead to the following theorem.
\begin{theorem}\label{distf1}
Let $M=\mathbb{R}^{15}$ with coordinates $(u^1,\dots,u^7,x^1,\dots ,x^8)$, and consider seven 1-forms $\lambda^1,\dots,\lambda^7$ on $M$ given by
$$\begin{aligned}
\lambda^1=&{\rm d} u^1+ x^1{\rm d} x^2- x^7{\rm d} x^8,\\
\lambda^2=&{\rm d} u^2+ x^2{\rm d} x^4- x^6{\rm d} x^8,\\
\lambda^3=&{\rm d} u^3+ x^1{\rm d} x^4- x^5{\rm d} x^8,\\
\lambda^4=&{\rm d} u^4+\tfrac12\,\big(\, x^1{\rm d} x^6- x^2{\rm d} x^5-x^3{\rm d} x^8+ x^4{\rm d} x^7\,\big),\\
\lambda^5=&{\rm d} u^5+x^2{\rm d} x^3- x^6{\rm d} x^7,\\
\lambda^6=&{\rm d} u^6+x^1{\rm d} x^3- x^5{\rm d} x^7,\\
\lambda^7=&{\rm d} u^7+x^3{\rm d} x^4- x^5{\rm d} x^6.
\end{aligned}$$
The rank 8 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{15}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^7=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^8)$ is the spinorial representation \eqref{f41} of $\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}(4,3)$, and $(\tau,R=\mathbb{R}^7)$ is the vectorial representation \eqref{f411} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple Lie algebra $\mathfrak{f}_I$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{f}_I,$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3),$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
which is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{f}_I$.
The contactification $(M,{\mathcal D})$ is locally a flat model for the parabolic geometry of type $(F_I,P_I)$ related to the following \emph{crossed} Satake diagram \begin{dynkinDiagram}[edge length=.4cm]{F}{ooox}\end{dynkinDiagram} of $\mathfrak{f}_I$.
\end{theorem}
\begin{remark}
Please note that this is an example of an application of the magical equation \eqref{maga} in which the starting algebra $\mathfrak{n}_{00}$ was big enough that the $0^{th}$-order component $\mathfrak{n}_0$ of its Tanaka prolongation is precisely equal to $\mathfrak{n}_{00}$. This was actually expected from the construction based on the crossed Satake diagram \begin{dynkinDiagram}[edge length=.4cm]{F}{ooox}\end{dynkinDiagram}, which shows that $\mathfrak{n}_0$ of this parabolic geometry is precisely our $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(4,3)$.
\end{remark}
\begin{remark}
One sees that the distribution $\mathcal D$ in $\mathbb{R}^{15}$ with $\mathfrak{f}_I$ symmetry presented in Theorem \ref{distf1} looks different from the distribution of our Example \ref{exa4}. It follows, however, that both these distributions are locally equivalent, and both have the same simple exceptional Lie algebra $\mathfrak{f}_I$ as the algebra of their automorphisms.
\end{remark}
\subsubsection{Contactification for $\mathfrak{f}_I$: more algebra about $\mathfrak{so}(4,3)$}\label{phiform}
In our construction of the $\mathfrak{f}_I$ symmetric distribution $\mathcal D$ in Theorem \ref{distf1} the crucial role was played by the 7-dimensional span of 2-forms $\omega^i$, $i=1,2,\dots,7$. If we were given these seven 2-forms, we would produce the $\mathfrak{f}_I$ symmetric distribution $\mathcal D$ by the procedure of contactification.
It turns out that in $S=\mathbb{R}^8$ there is a particular 4-form
$$\Phi=\tfrac{1}{4!}\Phi_{\mu\nu\rho\sigma}{\rm d} x^\mu\wedge{\rm d} x^\nu\wedge{\rm d} x^\rho\wedge {\rm d} x^\sigma$$
that is $\mathbb{R}\oplus \mathfrak{so}(4,3)$ invariant
$$\rho_I{}^\alpha{}_\mu \Phi_{\alpha\nu\rho\sigma}+\rho_I{}^\alpha{}_\nu \Phi_{\mu\alpha\rho\sigma}+\rho_I{}^\alpha{}_\rho \Phi_{\mu\nu\alpha\sigma}+\rho_I{}^\alpha{}_\sigma \Phi_{\mu\nu\rho\alpha}=S_I\Phi_{\mu\nu\rho\sigma}.$$
It may be represented by:
$$\Phi=h_{ij}\omega^i\wedge\omega^j,$$
where $\omega^i$ are given by \eqref{2forms} and
$$\big(\,\,h_{ij}\,\,\big)=
\begin{pmatrix}
0&0&0&0&0&0&1\\0&0&0&0&0&-1&0\\0&0&0&0&1&0&0\\0&0&0&1&0&0&0\\0&0&1&0&0&0&0\\0&-1&0&0&0&0&0\\1&0&0&0&0&0&0 \end{pmatrix},$$
or in words\footnote{Note that since $(h_{ij})$ is a symmetric matrix of signature $(4,3)$, this fact alone shows that the span of the seven 2-forms $\omega^i$ is a $7$-dimensional representation space of $\mathbf{SO}(4,3)$. Actually, this fact easily leads to the construction of the double cover ${\mathbb{Z}}_2\to{\bf Spin}(4,3)\to\mathbf{SO}(4,3)$. }: $h_{ij}$, $i,j=1,2,\dots,7$, \emph{are all zero except} $h_{17}=h_{71}=-h_{26}=-h_{62}=h_{35}=h_{53}=h_{44}=1$.
The form $\Phi$ in full beauty reads:
\begin{equation}\begin{aligned}
\tfrac23\Phi\,\,=\,\,2\,\,\big(\,\,&{\rm d} x^1\wedge{\rm d} x^2\wedge{\rm d} x^3\wedge{\rm d} x^4+{\rm d} x^5\wedge{\rm d} x^6\wedge{\rm d} x^7\wedge{\rm d} x^8\,\,\big)-\\
&{\rm d} x^1\wedge{\rm d} x^2\wedge{\rm d} x^5\wedge{\rm d} x^6+{\rm d} x^1\wedge{\rm d} x^3\wedge{\rm d} x^6\wedge{\rm d} x^8-\\&{\rm d} x^1\wedge{\rm d} x^4\wedge{\rm d} x^6\wedge{\rm d} x^7-{\rm d} x^2\wedge{\rm d} x^3\wedge{\rm d} x^5\wedge{\rm d} x^8+\\&{\rm d} x^2\wedge{\rm d} x^4\wedge{\rm d} x^5\wedge{\rm d} x^7-{\rm d} x^3\wedge{\rm d} x^4\wedge{\rm d} x^7\wedge{\rm d} x^8.
\end{aligned}
\label{4form}\end{equation}
It is remarkable that this 4-form alone encapsulates all the features of the $\mathfrak{f}_I$ symmetric contactification we discussed in the entire Section \ref{41}. By this we mean the following:
\begin{enumerate}
\item Consider $N=\mathbb{R}^8$ with coordinates $(x^\mu)$, $\mu=1,2,\dots,8$, and the 4-form $$\Phi=\tfrac{1}{4!}\Phi_{\mu\nu\rho\sigma}{\rm d} x^\mu\wedge{\rm d} x^\nu\wedge{\rm d} x^\rho\wedge {\rm d} x^\sigma$$ given by \eqref{4form}.
\item Consider an equation
$$A^\alpha{}_\mu \Phi_{\alpha\nu\rho\sigma}+A^\alpha{}_\nu \Phi_{\mu\alpha\rho\sigma}+A^\alpha{}_\rho \Phi_{\mu\nu\alpha\sigma}+A^\alpha{}_\sigma \Phi_{\mu\nu\rho\alpha}=S\Phi_{\mu\nu\rho\sigma}$$
for the real $8\times 8$ matrix $A=(A^\mu{}_\nu)$.
\item For simplicity solve it in two steps: \begin{itemize}
\item First with $S=0$. You obtain a 21-dimensional solution space, which is the \emph{spin representation} $\rho$ of $\mathfrak{so}(4,3)$; it is given by $\rho(A)=A$.
\item Then prove that the only solution with $S\neq 0$ corresponds to $S=4$, and that, modulo the addition of linear combinations of solutions with $S=0$, it is given by $A=\mathrm{Id}_{8\times 8}$. Extend the space of solutions $A$ with $S=0$ by this additional solution $A=\mathrm{Id}_{8\times 8}$.
\end{itemize}
\item In this way you will show that the stabilizer in $\mathfrak{gl}(8,\mathbb{R})$ of the 4-form $\Phi$ is the Lie algebra $\mathbb{R}\oplus\mathfrak{so}(4,3)$ in the \emph{spin representation} $\rho$ of Pauli spinors, $\rho(A)=A$; a numerical sketch of these steps is given after this list.
\item Then search for a 7-dimensional space of
2-forms, spanned say by the 2-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$ satisfying
$$A^\alpha{}_\mu\,\, \omega^i{}_{\alpha\nu}\,\,+\,\,A^\alpha{}_\nu\,\, \omega^i{}_{\mu\alpha}\,\,=\,\,s^i{}_j \,\,\omega^j{}_{\mu\nu}\,\,$$
for all $A$s from the spin representation $\rho(A)=A$ of $\mathbb{R}\oplus\mathfrak{so}(4,3)$. Here the $s^i{}_j$ are auxiliary constants
\footnote{Note, however, that although you look for $\omega^i{}_{\mu\nu}$ with \emph{some} constants $s^i{}_j$, these constants have a geometric meaning: comparing with our magical equation \eqref{magb} we see that the $7\times 7$ matrices $(s^i{}_j)$ constitute the matrices of the defining representation $\tau$ of $\mathbb{R}\oplus\mathfrak{so}(4,3)$.}.
\item This space is uniquely defined by these equations, and after solving them you will get 7 linearly independent 2-forms $(\omega^1,\dots,\omega^7)$ in $N=\mathbb{R}^8$.
\item Contactifying the resulting structure $\big(N,\mathrm{Span}(\omega^1,\dots,\omega^7)\big)$, as we did e.g.\ in Theorem \ref{distf1}, you will get the $\mathfrak{f}_I$ symmetric distribution ${\mathcal D}$ in $\mathbb{R}^7\to \big(\,M=\mathbb{R}^{15}\,\big)\to \big(\,N=\mathbb{R}^8\,\big)$.
\end{enumerate}
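\begin{remark}
Steps (1)--(4) of this recipe are easily carried out by machine. The sketch below (Python/NumPy) assembles the 4-form \eqref{4form} (using $\tfrac23\Phi$, since the overall scale of $\Phi$ is irrelevant for the stabilizer equation), sets up the linear system for the pair $(A,S)$, and computes the dimension of its solution space; one finds $22=\dim\big(\mathbb{R}\oplus\mathfrak{so}(4,3)\big)$, and the pair $(A,S)=(\mathrm{Id}_{8\times 8},4)$ is verified to be a solution.
\begin{verbatim}
import numpy as np
from itertools import permutations

# (2/3)*Phi from \eqref{4form}: (increasing indices, coefficient)
terms = [((1, 2, 3, 4), 2.), ((5, 6, 7, 8), 2.), ((1, 2, 5, 6), -1.),
         ((1, 3, 6, 8), 1.), ((1, 4, 6, 7), -1.), ((2, 3, 5, 8), -1.),
         ((2, 4, 5, 7), 1.), ((3, 4, 7, 8), -1.)]
Phi = np.zeros((8, 8, 8, 8))
for idx, coef in terms:
    for p in permutations(range(4)):
        sgn = np.linalg.det(np.eye(4)[list(p)])          # parity of p
        Phi[tuple(idx[q] - 1 for q in p)] = sgn * coef

# unknowns: the 64 entries A[alpha, mu] of A^alpha_mu, plus S (last slot)
rows = []
for mu in range(8):
    for nu in range(8):
        for ro in range(8):
            for si in range(8):
                row = np.zeros(65)
                row[np.arange(8) * 8 + mu] += Phi[:, nu, ro, si]
                row[np.arange(8) * 8 + nu] += Phi[mu, :, ro, si]
                row[np.arange(8) * 8 + ro] += Phi[mu, nu, :, si]
                row[np.arange(8) * 8 + si] += Phi[mu, nu, ro, :]
                row[64] = -Phi[mu, nu, ro, si]
                rows.append(row)
M = np.array(rows)                                       # 4096 x 65
svals = np.linalg.svd(M, compute_uv=False)
print(65 - np.sum(svals > 1e-9))                         # 22 = dim(R+so(4,3))
assert np.allclose(M @ np.concatenate([np.eye(8).ravel(), [4.]]), 0.)
\end{verbatim}
\end{remark}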
\subsection{Realization of $\mathfrak{f}_{II}$}\label{42}
It seems that Cartan was only interested in the explicit realization of $\mathfrak{f}_I$. The realization of $\mathfrak{f}_{II}$ can be obtained in the same spirit as described in Section \ref{41}. Since the arguments parallel those of Section \ref{41}, we omit most of the explanations and only display the main steps leading to this realization.
We start with the representation of the Clifford algebra ${\mathcal C}\ell(0,7)$ generated by the seven $\rho$-matrices from \eqref{cl07}. They satisfy
$$\rho_i\rho_j+\rho_j\rho_i=-2\delta_{ij}I_{8\times 8},\quad i,j=1,\dots,7.$$
They induce the 8-dimensional representation
$$\rho:\mathbb{R}\oplus\mathfrak{so}(0,7)\to \mathrm{End}(S)$$
of $\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(0,7)$ in the space $S=\mathbb{R}^8$ of real Pauli spinors, generated by the 22 real $8\times 8$ matrices:
$$\begin{aligned}
&\rho(A_{I(i,j)})=\tfrac12\rho_i\rho_j, \quad 1\leq i<j\leq 7,\\
&\rho(A_{22})=\tfrac12 (I\otimes I\otimes I),
\end{aligned}
$$
with the index $I=I(i,j)$ given by \eqref{Iij}, and with $I,\sigma_x,\epsilon,\sigma_z$ given by \eqref{pauu}-\eqref{pauu1}.
Explicitly, in terms of matrices $I,\sigma_x,\epsilon,\sigma_z$ the generators of this spinorial representation of $\mathfrak{so}(0,7)$ are:
\begin{equation}
\begin{array}{lll}
\rho(A_1)=-\tfrac12 I\otimes \epsilon \otimes\sigma_z,&\quad \rho(A_8)=\tfrac12 \epsilon\otimes \sigma_z\otimes\sigma_z,&\quad \rho(A_{15})=-\tfrac12 I\otimes \epsilon \otimes I,\\
\rho(A_2)=\tfrac12 I\otimes \epsilon \otimes\sigma_x, &\quad\rho(A_9)=-\tfrac12 \epsilon\otimes \sigma_z \otimes\sigma_x, &\quad \rho(A_{16})=-\tfrac12 \sigma_x\otimes I \otimes\epsilon, \\
\rho(A_3)=-\tfrac12 I\otimes I \otimes\epsilon, &\quad\rho(A_{10})=-\tfrac12 I\otimes \sigma_z \otimes \epsilon,&\quad \rho(A_{17})=-\tfrac12 \sigma_x\otimes\epsilon \otimes\sigma_x,\\
\rho(A_4)=-\tfrac12 \epsilon\otimes \epsilon\otimes\epsilon,&\quad \rho(A_{11})=\tfrac12 \epsilon\otimes \sigma_z \otimes I,&\quad \rho(A_{18})=-\tfrac12 \sigma_x\otimes \epsilon \otimes\sigma_z,\\
\rho(A_5)=\tfrac12 \epsilon\otimes I \otimes\sigma_x,&\quad \rho(A_{12})=-\tfrac12 \epsilon\otimes \sigma_x \otimes\sigma_z,&\quad \rho(A_{19})=\tfrac12 \sigma_z\otimes \epsilon \otimes I,\\
\rho(A_6)=\tfrac12 \epsilon\otimes I\otimes\sigma_z,&\quad \rho(A_{13})=\tfrac12 \epsilon\otimes \sigma_x\otimes\sigma_x,&\quad \rho(A_{20})=\tfrac12 \sigma_z\otimes \sigma_x \otimes \epsilon,\\
\rho(A_7)=\tfrac12 \epsilon\otimes \sigma_x \otimes I, &\quad \rho(A_{14})=\tfrac12 I \otimes \sigma_x\otimes \epsilon, &\quad
\rho(A_{21})=\tfrac12 \sigma_z\otimes \sigma_z \otimes \epsilon.
\end{array}
\label{f42}\end{equation}
We also write down the corresponding generators of the vectorial representation $\tau$, i.e. of the 7-dimensional irreducible component $\bigwedge_7$ in the decomposition $\bigwedge^2S=\bigwedge_{21}\oplus\bigwedge_7$ of the representation $\rho\wedge\rho$. These generators read:
\begin{equation}
\begin{array}{lll}
\tau(A_1)=E_{31}-E_{13},&\quad \tau(A_8)=E_{37}-E_{73},&\quad \tau(A_{15})=E_{75}-E_{57},\\
\tau(A_2)=E_{12}-E_{21}, &\quad\tau(A_9)=E_{72}-E_{27}, &\quad \tau(A_{16})=E_{14}-E_{41}, \\
\tau(A_3)=E_{32}-E_{23}, &\quad\tau(A_{10})=E_{76}-E_{67},&\quad \tau(A_{17})=E_{34}-E_{43},\\
\tau(A_4)=E_{61}-E_{16},&\quad \tau(A_{11})=E_{51}-E_{15},&\quad \tau(A_{18})=E_{42}-E_{24},\\
\tau(A_5)=E_{36}-E_{63},&\quad \tau(A_{12})=E_{53}-E_{35},&\quad \tau(A_{19})=E_{46}-E_{64},\\
\tau(A_6)=E_{62}-E_{26},&\quad \tau(A_{13})=E_{25}-E_{52},&\quad \tau(A_{20})=E_{47}-E_{74},\\
\tau(A_7)=E_{17}-E_{71}, &\quad \tau(A_{14})= E_{65}-E_{56},&\quad
\tau(A_{21})=E_{54}-E_{45},
\end{array}
\label{f421}\end{equation}
where $E_{ij}$ are $7\times 7$ matrices with all entries zero, except the $(i,j)$ entry, which equals 1.
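\begin{remark}
For bookkeeping, the generators \eqref{f421} can be entered compactly as index pairs $(a,b)$, with $\tau=E_{ab}-E_{ba}$. The following sketch (Python/NumPy) checks that they constitute a basis of $\mathfrak{so}(7)$ closed under commutators, and computes their structure constants, which can then be compared with those of the spin generators \eqref{f42}.
\begin{verbatim}
import numpy as np

# tau(A_I) = E_ab - E_ba of \eqref{f421}, entered as the pairs (a, b)
pairs = [(3, 1), (1, 2), (3, 2), (6, 1), (3, 6), (6, 2), (1, 7),
         (3, 7), (7, 2), (7, 6), (5, 1), (5, 3), (2, 5), (6, 5),
         (7, 5), (1, 4), (3, 4), (4, 2), (4, 6), (4, 7), (5, 4)]
tau = []
for a, b in pairs:
    m = np.zeros((7, 7)); m[a - 1, b - 1], m[b - 1, a - 1] = 1., -1.
    tau.append(m)

V = np.stack([m.ravel() for m in tau], axis=1)     # 49 x 21
assert np.linalg.matrix_rank(V) == 21              # a basis of so(7)

c = np.zeros((21, 21, 21))                         # [tau_I,tau_J] = c^K_IJ tau_K
for i in range(21):
    for j in range(21):
        comm = tau[i] @ tau[j] - tau[j] @ tau[i]
        assert np.allclose(comm, -comm.T)          # closure in so(7)
        c[:, i, j] = np.linalg.lstsq(V, comm.ravel(), rcond=None)[0]
print("so(7) basis and structure constants computed")
\end{verbatim}
\end{remark}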
We are again in a position to apply our Lemma \ref{l21}. Given the representations $(\rho,S=\mathbb{R}^8)$ and $(\tau,R=\bigwedge_7)$ of $\mathfrak{so}(0,7)$, we solve the magical equation \eqref{maga} for $\omega=\tfrac12\omega^i{}_{\mu\nu}e_i\otimes f^\mu\wedge f^\nu$. In this way we obtain the seven 2-forms $\omega^i=-\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$ on $N=\mathbb{R}^8$, with coordinates $(x^\mu)_{\mu=1}^8$, which read as follows:
\begin{equation}\begin{aligned}
\omega^1=&-{\rm d} x^1\wedge{\rm d} x^2-{\rm d} x^3\wedge{\rm d} x^4+{\rm d} x^5\wedge{\rm d} x^6+{\rm d} x^7\wedge{\rm d} x^8,\\
\omega^2=&{\rm d} x^1\wedge{\rm d} x^3-{\rm d} x^2\wedge{\rm d} x^4-{\rm d} x^5\wedge{\rm d} x^7+{\rm d} x^6\wedge{\rm d} x^8,\\
\omega^3=&-{\rm d} x^1\wedge{\rm d} x^4-{\rm d} x^2\wedge{\rm d} x^3+{\rm d} x^5\wedge{\rm d} x^8+{\rm d} x^6\wedge{\rm d} x^7,\\
\omega^4=&{\rm d} x^1\wedge{\rm d} x^5+{\rm d} x^2\wedge{\rm d} x^6+{\rm d} x^3\wedge{\rm d} x^7+{\rm d} x^4\wedge{\rm d} x^8,\\
\omega^5=&-{\rm d} x^1\wedge{\rm d} x^6+{\rm d} x^2\wedge{\rm d} x^5+{\rm d} x^3\wedge{\rm d} x^8-{\rm d} x^4\wedge{\rm d} x^7,\\
\omega^6=&{\rm d} x^1\wedge{\rm d} x^7+{\rm d} x^2\wedge{\rm d} x^8-{\rm d} x^3\wedge{\rm d} x^5-{\rm d} x^4\wedge{\rm d} x^6,\\
\omega^7=&{\rm d} x^1\wedge{\rm d} x^8-{\rm d} x^2\wedge{\rm d} x^7+{\rm d} x^3\wedge{\rm d} x^6-{\rm d} x^4\wedge{\rm d} x^5.
\end{aligned}\label{2forms2}\end{equation}
These, via the contactification, lead to the following theorem.
\begin{theorem}\label{distf2}
Let $M=\mathbb{R}^{15}$ with coordinates $(u^1,\dots,u^7,x^1,\dots ,x^8)$, and consider seven 1-forms $\lambda^1,\dots,\lambda^7$ on $M$ given by
$$\begin{aligned}
\lambda^1=&{\rm d} u^1- x^1{\rm d} x^2- x^3{\rm d} x^4+ x^5{\rm d} x^6+ x^7{\rm d} x^8,\\
\lambda^2=&{\rm d} u^2+ x^1{\rm d} x^3- x^2{\rm d} x^4-x^5{\rm d} x^7+x^6{\rm d} x^8,\\
\lambda^3=&{\rm d} u^3- x^1{\rm d} x^4- x^2{\rm d} x^3+ x^5{\rm d} x^8+ x^6{\rm d} x^7,\\
\lambda^4=&{\rm d} u^4+ x^1{\rm d} x^5+ x^2{\rm d} x^6+ x^3{\rm d} x^7+ x^4{\rm d} x^8,\\
\lambda^5=&{\rm d} u^5- x^1{\rm d} x^6+x^2{\rm d} x^5+x^3{\rm d} x^8-x^4{\rm d} x^7,\\
\lambda^6=&{\rm d} u^6+ x^1{\rm d} x^7+ x^2{\rm d} x^8- x^3{\rm d} x^5-x^4{\rm d} x^6,\\
\lambda^7=&{\rm d} u^7+ x^1{\rm d} x^8- x^2{\rm d} x^7+ x^3{\rm d} x^6 - x^4{\rm d} x^5.
\end{aligned}$$
The rank 8 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{15}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^7=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^8)$ is the spinorial representation \eqref{f42} of $\mathfrak{n}_{00}=\mathbb{R}\oplus \mathfrak{so}(0,7)$, and $(\tau,R=\mathbb{R}^7)$ is the vectorial representation \eqref{f421} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple Lie algebra $\mathfrak{f}_{II}$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{f}_{II},$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathfrak{n}_{00}=\mathbb{R}\oplus\mathfrak{so}(0,7),$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
which is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{f}_{II}$.
The contactification $(M,{\mathcal D})$ is locally a flat model for the parabolic geometry of type $(F_{II},P_{II})$ related to the following \emph{crossed} Satake diagram \begin{dynkinDiagram}[edge length=.4cm]{F}{***x}\end{dynkinDiagram} of $\mathfrak{f}_{II}$.
\end{theorem}
\begin{remark} In this way we realized the real form $\mathfrak{f}_{II}$ of the simple exceptional complex Lie algebra $\mathfrak{f}_4$ in $M=\mathbb{R}^{15}$ as a symmetry algebra of the Pfaffian system $(\lambda^1,\dots,\lambda^7)$.
This realization does not appear in Cartan's theses.
\end{remark}
\begin{remark}
Our present case of $\mathfrak{f}_{II}$ also admits a description in terms of a certain $\mathbb{R}\oplus\mathfrak{so}(0,7)$-invariant 4-form $\Phi$ in $S=\mathbb{R}^8$, analogous to the 4-form $\Phi$ introduced in Section \ref{phiform}, where we discussed $\mathfrak{f}_I$. We skip the details here.
\end{remark}
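\begin{remark}
As a sanity check on this contactification, note that the matrices $\Omega^i=\big(\omega^i{}_{\mu\nu}\big)$ of the seven 2-forms \eqref{2forms2} are linearly independent, so that the brackets of vector fields tangent to $\mathcal D$ reach all seven directions $\partial_{u^i}$, i.e. $[{\mathcal D},{\mathcal D}]=\mathrm{T}M$; moreover, as one can check numerically, they satisfy the Clifford-type relations $\Omega^i\Omega^j+\Omega^j\Omega^i=-2\delta^{ij}\,\mathrm{Id}_{8\times 8}$. A sketch (Python/NumPy):
\begin{verbatim}
import numpy as np

# the 2-forms \eqref{2forms2} as antisymmetric matrices Omega^i
data = [[(1, 2, -1), (3, 4, -1), (5, 6, 1), (7, 8, 1)],     # omega^1
        [(1, 3, 1), (2, 4, -1), (5, 7, -1), (6, 8, 1)],     # omega^2
        [(1, 4, -1), (2, 3, -1), (5, 8, 1), (6, 7, 1)],     # omega^3
        [(1, 5, 1), (2, 6, 1), (3, 7, 1), (4, 8, 1)],       # omega^4
        [(1, 6, -1), (2, 5, 1), (3, 8, 1), (4, 7, -1)],     # omega^5
        [(1, 7, 1), (2, 8, 1), (3, 5, -1), (4, 6, -1)],     # omega^6
        [(1, 8, 1), (2, 7, -1), (3, 6, 1), (4, 5, -1)]]     # omega^7
Om = []
for entries in data:
    m = np.zeros((8, 8))
    for a, b, coef in entries:
        m[a - 1, b - 1], m[b - 1, a - 1] = coef, -coef
    Om.append(m)

# linear independence: brackets of sections of D span all d/du^i directions
assert np.linalg.matrix_rank(np.stack([m.ravel() for m in Om])) == 7

# Clifford-type relations Om^i Om^j + Om^j Om^i = -2 delta^{ij} Id
for i in range(7):
    for j in range(7):
        acomm = Om[i] @ Om[j] + Om[j] @ Om[i]
        assert np.allclose(acomm, -2. * (i == j) * np.eye(8))
print("checks passed")
\end{verbatim}
\end{remark}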
\section{Spinorial representations in dimension 8}
Dimension \emph{eight} is quite exceptional, as for example, 8 is the highest possible dimension for the existence of Euclidean Hurwitz algebras, gifting us with the algebra of \emph{octonions}. From the perspective of our paper, which meanders through the realm of simple Lie algebras, eight is \emph{very} special: among all the complex simple Lie algebras, the Dynkin diagram of $\mathfrak{d}_4=\mathfrak{so}(8,\mathbb{C})$ which is \emph{defined} in dimension \emph{eight}, is the most symmetric:\\
\centerline{\begin{dynkinDiagram}[edge length=.4cm]{D}{oooo}
\end{dynkinDiagram}.}
\noindent
Visibly it has a threefold symmetry $S_3$.
The Lie algebra $\mathfrak{so}(8,\mathbb{C})$ has six real forms. These are: $\mathfrak{so}(8,0)$, $\mathfrak{so}(7,1)$, $\mathfrak{so}(6,2)$, $\mathfrak{so}^*(8)$, $\mathfrak{so}(5,3)$ and $\mathfrak{so}(4,4)$, with the following respective Satake diagrams:\\
\centerline{\begin{dynkinDiagram}[edge length=.4cm]{D}{****}
\end{dynkinDiagram},\hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{D}{o***}
\end{dynkinDiagram},\hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{D}{oo**}
\end{dynkinDiagram},\hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{D}{ooo*}
\end{dynkinDiagram},\hspace{0.5cm}\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.4cm]{D}{oooo}
\dynkinFold{3}{4}
\end{dynkinDiagram},\hspace{0.5cm}\begin{dynkinDiagram}[edge length=.4cm]{D}{oooo}\end{dynkinDiagram}.}
We see that among these Satake diagrams the only ones that share the $S_3$ symmetry of the Dynkin diagram of the complex algebra $\mathfrak{d}_4$ are those of the \emph{compact real form} $\mathfrak{so}(8,0)$ and of the \emph{split real form} $\mathfrak{so}(4,4)$.
The $S_3$ symmetry of these two diagrams indicates that the lowest dimensional real representations of $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$ may have additional features when compared with the spinorial representations of the other $\mathfrak{so}(p,q)$s. In particular, for \emph{both} $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$ we have:
\begin{itemize}
\item Their Dirac representation $(\rho,S)$ in the real vector space $S=\mathbb{R}^{16}$ is reducible over $\mathbb{R}$, and it splits into two real \emph{Weyl} representations $(\rho_+,S_+)$ and $(\rho_-,S_-)$ in the respective vector spaces of \emph{Weyl spinors} $S_+=\mathbb{R}^8$ and $S_-=\mathbb{R}^8$, which have the same real dimension \emph{eight},
$$\rho=\rho_+\oplus\rho_-\quad\mathrm{in}\quad S=S_+\oplus S_-, \quad \mathrm{dim}_\mathbb{R} S_\pm=8.$$
\item The real Weyl representations $(\rho_\pm,S_\pm)$ are \emph{faithful}, \emph{irreducible} and \emph{nonequivalent}.
\item The defining representations $(\tau,R)$ of $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$, realized by endomorphisms of the space $R=\mathbb{R}^8$ of vectors preserving the bilinear form of respective signature $(4,4)$ or $(8,0)$, have the same dimension \emph{eight} as the two Weyl representations $(\rho_\pm,S_\pm)$.
\item The real defining representations $(\tau,R)$ are \emph{irreducible} for both $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$.
\item All three real 8-dimensional irreducible representations $(\rho_+,S_+)$, $(\rho_-,S_-)$ and $(\tau,R)$ of, respectively both, $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$ are \emph{pairwise nonequivalent}.
\end{itemize}
Thus the Lie algebras $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$ have three real, irreducible and nonequivalent representations $(\rho_+,\rho_-,\tau)$ in the vector space $\mathbb{R}^8$ of the \emph{defining} dimension $p+q=8$. Among all the $\mathfrak{so}(p,q)$ Lie algebras, this is the only defining dimension $p+q$ in which such a situation occurs.
Below, we provide the explicit description of the \emph{triality representations} $(\rho_+,\rho_-,\tau)$ separately for $\mathfrak{so}(4,4)$ and $\mathfrak{so}(8,0)$.
\subsection{Triality representations of $\mathfrak{so}(4,4)$}
We recall from Section \ref{spintraut} that the Lie algebra $\mathfrak{so}(4,4)$ admits a representation $\rho$ in the 16-dimensional real vector space $S=\mathbb{R}^{16}$ of Dirac spinors. This is obtained by using the Dirac $\gamma$ matrices generating the representation of the Clifford algebra ${\mathcal C}\ell(4,4)$. In terms of the 2-dimensional Pauli matrices $(\sigma_x,\epsilon,\sigma_z,I)$ these look as follows:
\begin{equation} \begin{aligned}
&\gamma_1=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\sigma_x\\
&\gamma_2=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\epsilon\\
&\gamma_3=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\sigma_z\\
&\gamma_4=\sigma_x\otimes\sigma_x\otimes\epsilon\otimes I\\
&\gamma_5=\sigma_x\otimes\sigma_x\otimes\sigma_z\otimes I\\
&\gamma_6=\sigma_x\otimes\epsilon\otimes I\otimes I\\
&\gamma_7=\sigma_x\otimes\sigma_z\otimes I\otimes I\\
&\gamma_8=\epsilon\otimes I\otimes I\otimes I.
\end{aligned}\label{dirga}\end{equation}
They satisfy the \emph{Dirac identity}
\begin{equation} \gamma_i\gamma_j+\gamma_j\gamma_i=2g_{ij} (I\otimes I\otimes I\otimes I), \quad i,j=1,\dots,8,\label{clifi}\end{equation}
with
$$\big( g_{ij} \big)=\mathrm{diag}(1,-1,1,-1,1,-1,1,-1).$$
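\begin{remark}
Again, the Dirac identity is machine-checkable; a minimal sketch (Python/NumPy, with the same $2\times 2$ conventions as in the $\mathfrak{f}_4$ sections):
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]]); ep = np.array([[0., 1.], [-1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
kron = lambda *ms: reduce(np.kron, ms)

gamma = [kron(sx, sx, sx, sx), kron(sx, sx, sx, ep), kron(sx, sx, sx, sz),
         kron(sx, sx, ep, I2), kron(sx, sx, sz, I2), kron(sx, ep, I2, I2),
         kron(sx, sz, I2, I2), kron(ep, I2, I2, I2)]
g = np.diag([1., -1., 1., -1., 1., -1., 1., -1.])

for i in range(8):
    for j in range(8):
        acomm = gamma[i] @ gamma[j] + gamma[j] @ gamma[i]
        assert np.allclose(acomm, 2. * g[i, j] * np.eye(16))
print("Dirac identity for Cl(4,4) verified")
\end{verbatim}
\end{remark}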
The 28 generators of $\mathfrak{so}(4,4)$ in the Majorana-Dirac spinor representation $\rho$ in the space of Dirac spinors $S=\mathbb{R}^{16}$ are given by
$$\rho(A_{I(i,j)})=\tfrac12\gamma_i\gamma_j, \quad 1\leq i<j\leq 8,$$
where we again have used the function $I=I(i,j)$ defined in \eqref{Iij}. Note that since now $i<j$ can run from 1 to 8, the function has a range from 1 to 28.
We add to these generators the scaling generator, $\rho(A_{29})$,
$$\rho(A_{29})=\tfrac12 I\otimes I\otimes I\otimes I.$$
This extends the Dirac representation $\rho$ of the Lie algebra $\mathfrak{so}(4,4)$ to the representation of the \emph{homothety} Lie algebra $\mathfrak{coa}(4,4)=\mathbb{R}\oplus\mathfrak{so}(4,4)$.
In terms of the 2-dimensional Pauli matrices these generators look like:
\begin{equation}\label{dir44}
\begin{array}{ll}
\rho(A_1)= \tfrac12 I \otimes I\otimes I\otimes \sigma_z,&\rho(A_{15})=\tfrac12 I \otimes \sigma_z\otimes \sigma_z\otimes I,\\
\rho(A_2)= \tfrac12 I \otimes I\otimes I\otimes \epsilon ,&\rho(A_{16})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \sigma_x,\\
\rho(A_3)= \tfrac12 I \otimes I\otimes I\otimes \sigma_x ,&\rho(A_{17})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \epsilon ,\\
\rho(A_4)=\tfrac12 I \otimes I\otimes \sigma_z\otimes \sigma_x,&\rho(A_{18})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \sigma_z ,\\
\rho(A_5)= \tfrac12 I \otimes I\otimes \sigma_z\otimes \epsilon,&\rho(A_{19})=\tfrac12 I \otimes \epsilon\otimes \epsilon\otimes I ,\\
\rho(A_6)= \tfrac12 I \otimes I\otimes \sigma_z\otimes \sigma_z,&\rho(A_{20})=\tfrac12 I \otimes \epsilon\otimes \sigma_z\otimes I ,\\
\rho(A_7)= \tfrac12 I \otimes I\otimes \epsilon\otimes \sigma_x,&\rho(A_{21})=\tfrac12 I \otimes \sigma_x\otimes I\otimes I,\\
\rho(A_8)= \tfrac12 I \otimes I\otimes \epsilon\otimes \epsilon,&\rho(A_{22})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_x\otimes \sigma_x ,\\
\rho(A_9)= \tfrac12 I \otimes I\otimes \epsilon\otimes \sigma_z,&\rho(A_{23})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_x\otimes \epsilon ,\\
\rho(A_{10})=\tfrac12 I \otimes I\otimes \sigma_x\otimes I,&\rho(A_{24})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_x\otimes \sigma_z ,\\
\rho(A_{11})= \tfrac12 I \otimes \sigma_z\otimes \sigma_x\otimes \sigma_x ,&\rho(A_{25})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \epsilon\otimes I ,\\
\rho(A_{12})= \tfrac12 I \otimes \sigma_z\otimes \sigma_x\otimes \epsilon,&\rho(A_{26})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_z\otimes I ,\\
\rho(A_{13})= \tfrac12 I \otimes \sigma_z\otimes \sigma_x\otimes \sigma_z,&\rho(A_{27})=\tfrac12 \sigma_z \otimes \epsilon\otimes I\otimes I,\\
\rho(A_{14})= \tfrac12 I \otimes \sigma_z\otimes \epsilon\otimes I,&\rho(A_{28})=\tfrac12 \sigma_z \otimes \sigma_z\otimes I\otimes I.
\end{array}
\end{equation}
Looking at the first factor in \emph{all} of these generators, we observe that it is either $I$ or $\sigma_z$, i.e. it is \emph{diagonal}. This means that this 16-dimensional representation of $\mathbb{R}\oplus\mathfrak{so}(4,4)$ is \emph{reducible}. It \emph{splits} into two real $8$-dimensional \emph{Weyl representations}
$$\rho=\rho_+\oplus\rho_-\quad\mathrm{in}\quad S=S_+\oplus S_-, \quad \mathrm{dim}_\mathbb{R} S_\pm=8,$$
in the spaces $S_\pm$ of (Majorana)-Weyl spinors.
On generators of $\mathfrak{so}(4,4)$ these two 8-dimensional representations $\rho_\pm$ are given by:
\begin{equation}
\begin{array}{ll}
\rho_\pm(A_1)= \tfrac12 I\otimes I\otimes \sigma_z,&\rho_\pm(A_{15})=\tfrac12 \sigma_z\otimes \sigma_z\otimes I,\\
\rho_\pm(A_2)= \tfrac12 I\otimes I\otimes \epsilon ,&\rho_\pm(A_{16})=\tfrac12 \epsilon\otimes \sigma_x\otimes \sigma_x,\\
\rho_\pm(A_3)= \tfrac12 I\otimes I\otimes \sigma_x ,&\rho_\pm(A_{17})=\tfrac12 \epsilon\otimes \sigma_x\otimes \epsilon ,\\
\rho_\pm(A_4)=\tfrac12 I\otimes \sigma_z\otimes \sigma_x,&\rho_\pm(A_{18})=\tfrac12 \epsilon\otimes \sigma_x\otimes \sigma_z ,\\
\rho_\pm(A_5)= \tfrac12 I\otimes \sigma_z\otimes \epsilon,&\rho_\pm(A_{19})=\tfrac12 \epsilon\otimes \epsilon\otimes I ,\\
\rho_\pm(A_6)= \tfrac12 I\otimes \sigma_z\otimes \sigma_z,&\rho_\pm(A_{20})=\tfrac12 \epsilon\otimes \sigma_z\otimes I ,\\
\rho_\pm(A_7)= \tfrac12 I\otimes \epsilon\otimes \sigma_x,&\rho_\pm(A_{21})=\tfrac12 \sigma_x\otimes I\otimes I,\\
\rho_\pm(A_8)= \tfrac12 I\otimes \epsilon\otimes \epsilon,&\rho_\pm(A_{22})=\pm\tfrac12 \sigma_x\otimes \sigma_x\otimes \sigma_x ,\\
\rho_\pm(A_9)= \tfrac12 I\otimes \epsilon\otimes \sigma_z,&\rho_\pm(A_{23})=\pm\tfrac12 \sigma_x\otimes \sigma_x\otimes \epsilon ,\\
\rho_\pm(A_{10})=\tfrac12 I\otimes \sigma_x\otimes I,&\rho_\pm(A_{24})=\pm\tfrac12 \sigma_x\otimes \sigma_x\otimes \sigma_z ,\\
\rho_\pm(A_{11})= \tfrac12 \sigma_z\otimes \sigma_x\otimes \sigma_x ,&\rho_\pm(A_{25})=\pm\tfrac12 \sigma_x\otimes \epsilon\otimes I ,\\
\rho_\pm(A_{12})= \tfrac12 \sigma_z\otimes \sigma_x\otimes \epsilon,&\rho_\pm(A_{26})=\pm\tfrac12 \sigma_x\otimes \sigma_z\otimes I ,\\
\rho_\pm(A_{13})= \tfrac12 \sigma_z\otimes \sigma_x\otimes \sigma_z,&\rho_\pm(A_{27})=\pm\tfrac12 \epsilon\otimes I\otimes I,\\
\rho_\pm(A_{14})= \tfrac12 \sigma_z\otimes \epsilon\otimes I,&\rho_\pm(A_{28})=\pm\tfrac12 \sigma_z\otimes I\otimes I.
\end{array}\label{rhopm}
\end{equation}
We extend them to $\mathbb{R}\oplus\mathfrak{so}(4,4)$ by adding
$$\rho_\pm(A_{29})=\tfrac12 I\otimes I\otimes I.$$
It follows that the Weyl representations $(\rho_\pm,S_\pm)$ of $\mathfrak{so}(4,4)$ are \emph{irreducible} and \emph{nonequivalent}.
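\begin{remark}
Nonequivalence can be confirmed numerically: the Weyl blocks may be read off from the block-diagonal products $\tfrac12\gamma_i\gamma_j$ (the ordering of the pairs $(i,j)$ is immaterial here, and the blocks agree with \eqref{rhopm} up to conventions), and nonequivalence of $(\rho_+,S_+)$ and $(\rho_-,S_-)$ amounts, by the Schur lemma, to the nonexistence of a nonzero intertwiner $X$ with $\rho_+(A)X=X\rho_-(A)$. A sketch (Python/NumPy):
\begin{verbatim}
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]]); ep = np.array([[0., 1.], [-1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
kron = lambda *ms: reduce(np.kron, ms)
gamma = [kron(sx, sx, sx, sx), kron(sx, sx, sx, ep), kron(sx, sx, sx, sz),
         kron(sx, sx, ep, I2), kron(sx, sx, sz, I2), kron(sx, ep, I2, I2),
         kron(sx, sz, I2, I2), kron(ep, I2, I2, I2)]

# (1/2) gamma_i gamma_j is block-diagonal; extract the two Weyl blocks
rho_p, rho_m = [], []
for i, j in combinations(range(8), 2):
    m = 0.5 * gamma[i] @ gamma[j]
    assert np.allclose(m[:8, 8:], 0.) and np.allclose(m[8:, :8], 0.)
    rho_p.append(m[:8, :8]); rho_m.append(m[8:, 8:])

# an intertwiner X with rho_p X = X rho_m for all generators must vanish:
# stack the vectorized equations and check that the kernel is trivial
M = np.concatenate([np.kron(p, np.eye(8)) - np.kron(np.eye(8), m.T)
                    for p, m in zip(rho_p, rho_m)], axis=0)
print(np.linalg.matrix_rank(M))    # one finds 64, i.e. only X = 0
\end{verbatim}
\end{remark}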
They can be used to find yet another real 8-dimensional representation of $\mathfrak{so}(4,4)$. For this one considers the tensor product representation $$\rho_+\otimes\rho_-.$$ This 64-dimensional real representation of $\mathfrak{so}(4,4)$ is \emph{reducible}. It decomposes as:
$$\rho_+\otimes\rho_-=\alpha\oplus\tau\quad\mathrm{in}\quad S_+\otimes S_-=T_{56}\oplus R,\quad\mathrm{with}\quad \dim_\mathbb{R}(R)=8,\,\,\dim_\mathbb{R}(T_{56})=56,$$
having irreducible components $(\alpha,T_{56})$ and $(\tau,R)$ of respective dimensions 56 and 8. Explicitly, on generators of $\mathbb{R}\oplus\mathfrak{so}(4,4)$, the 8-dimensional representation $\tau$ reads:
\begin{equation}\begin{aligned}
&\tau(A_1)=E_{66}-E_{22},\\
&\tau(A_2)=\tfrac12(E_{23}-E_{32}+E_{25}-E_{52}+E_{36}-E_{63}+E_{56}-E_{65}),\\
&\tau(A_3)=\tfrac12(E_{23}+E_{32}+E_{25}+E_{52}+E_{36}+E_{63}+E_{56}+E_{65}),\\
&\tau(A_4)=\tfrac12(E_{23}+E_{32}-E_{25}-E_{52}-E_{36}-E_{63}+E_{56}+E_{65}),\\
&\tau(A_5)=\tfrac12(E_{23}-E_{32}-E_{25}+E_{52}-E_{36}+E_{63}+E_{56}-E_{65}),\\
&\tau(A_6)=E_{33}-E_{55},\\
&\tau(A_7)=\tfrac12(E_{12}-E_{21}-E_{16}+E_{61}-E_{27}+E_{72}+E_{67}-E_{76}),\\
&\tau(A_8)=\tfrac12(-E_{12}-E_{21}-E_{16}-E_{61}-E_{27}-E_{72}-E_{67}-E_{76}),\\
&\tau(A_9)=\tfrac12(E_{13}-E_{31}+E_{15}-E_{51}-E_{37}+E_{73}-E_{57}+E_{75}),\\
&\tau(A_{10})=\tfrac12(E_{13}+E_{31}-E_{15}-E_{51}+E_{37}+E_{73}-E_{57}-E_{75}),\\
&\tau(A_{11})=\tfrac12(-E_{12}-E_{21}+E_{16}+E_{61}+E_{27}+E_{72}-E_{67}-E_{76}),\\
&\tau(A_{12})=\tfrac12(E_{12}-E_{21}+E_{16}-E_{61}+E_{27}-E_{72}+E_{67}-E_{76}),\\
&\tau(A_{13})=\tfrac12(-E_{13}-E_{31}-E_{15}-E_{51}+E_{37}+E_{73}+E_{57}+E_{75}),\\
&\tau(A_{14})=\tfrac12(-E_{13}+E_{31}+E_{15}-E_{51}-E_{37}+E_{73}+E_{57}-E_{75}),\\
&\tau(A_{15})=E_{11}-E_{77},\\
&\tau(A_{16})=\tfrac12(E_{24}-E_{42}+E_{28}-E_{82}+E_{46}-E_{64}-E_{68}+E_{86}),\\
&\tau(A_{17})=\tfrac12(E_{24}+E_{42}+E_{28}+E_{82}+E_{46}+E_{64}+E_{68}+E_{86}),\\
&\tau(A_{18})=\tfrac12(E_{34}-E_{43}+E_{38}-E_{83}-E_{45}+E_{54}+E_{58}-E_{85}),\\
&\tau(A_{19})=\tfrac12(-E_{34}-E_{43}-E_{38}-E_{83}+E_{45}+E_{54}+E_{58}+E_{85}),\\
&\tau(A_{20})=\tfrac12(-E_{14}+E_{41}-E_{18}+E_{81}+E_{47}-E_{74}-E_{78}+E_{87}),\\
&\tau(A_{21})=\tfrac12(E_{14}+E_{41}+E_{18}+E_{81}-E_{47}-E_{74}-E_{78}-E_{87}),\\
&\tau(A_{22})=\tfrac12(-E_{24}-E_{42}+E_{28}+E_{82}+E_{46}+E_{64}-E_{68}-E_{86}),\\
&\tau(A_{23})=\tfrac12(-E_{24}+E_{42}+E_{28}-E_{82}+E_{46}-E_{64}+E_{68}-E_{86}),\\
&\tau(A_{24})=\tfrac12(-E_{34}-E_{43}+E_{38}+E_{83}-E_{45}-E_{54}+E_{58}+E_{85}),\\
&\tau(A_{25})=\tfrac12(E_{34}-E_{43}-E_{38}+E_{83}+E_{45}-E_{54}+E_{58}-E_{85}),\\
&\tau(A_{26})=\tfrac12(E_{14}+E_{41}-E_{18}-E_{81}+E_{47}+E_{74}-E_{78}-E_{87}),\\
&\tau(A_{27})=\tfrac12(-E_{14}+E_{41}+E_{18}-E_{81}-E_{47}+E_{74}-E_{78}+E_{87}),\\
&\tau(A_{28})=-E_{44}+E_{88},\\
&\tau(A_{29})=E_{11}+E_{22}+E_{33}+E_{44}+E_{55}+E_{66}+E_{77}+E_{88},
\end{aligned}\label{tauweyl}\end{equation}
where $E_{ij}$, $i,j=1,2,\dots,8$, denote $8\times 8$ matrices with zeroes everywhere except the value 1 in the entry $(i,j)$, sitting at the intersection of the $i$th row and the $j$th column.
The three real, irreducible, pairwise nonequivalent representations $(\rho_+,\rho_-,\tau)$ of $\mathfrak{so}(4,4)$, given by the formulas \eqref{rhopm} and \eqref{tauweyl} constitute the set of the \emph{triality representations} for $\mathfrak{so}(4,4)$.
\subsection{Triality representations of $\mathfrak{so}(8,0)$}
To get the real representation $(\rho,S)$ of $\mathfrak{so}(8,0)$ in the space $S=\mathbb{R}^{16}$ of Dirac spinors we need the real Dirac $\gamma$ matrices satisfying the \emph{Dirac identity} \eqref{clifi}, but now with $$g_{ij}=\delta_{ij},$$ where $\delta_{ij}$ is the \emph{Kronecker delta} in 8 dimensions.
Thus we need to modify the Dirac matrices $\gamma_i$ from \eqref{dirga} to get the proper signature of the metric. This is done in \emph{two} steps \cite{traut}. \emph{First}, one puts the \emph{imaginary unit} $i$ in front of some of the Dirac matrices $\gamma_i$ generating the Clifford algebra ${\mathcal C}\ell(4,4)$, to get the proper signature of $(g_{ij})$. Although this produces a few \emph{complex} generators, \emph{in step two} one modifies them, together with the others, in an algorithmic fashion, so that they all become real and still satisfy the \emph{Dirac identity} with the proper signature of $(g_{ij})$. Explicitly, this is done as follows:
By placing the \emph{imaginary unit} $i$ in front of $\gamma_2$, $\gamma_4$, $\gamma_6$ and $\gamma_8$ in \eqref{dirga} we obtain 8 matrices
$$\tilde{\gamma}_{2j-1}=\gamma_{2j-1}, \quad \tilde{\gamma}_{2j}=i\gamma_{2j}, \quad j=1,2,3,4,$$
with $\gamma_i$, $i=1,\dots,8$, as in \eqref{dirga}. These constitute generators of the \emph{complex} 16-dimensional representation of the Clifford algebra ${\mathcal C}\ell(8,0)$. We will also need the representation of this Clifford algebra which is the \emph{complex conjugate} of $\tilde{\gamma}$. This is generated by
$$\overline{\tilde{\gamma}}_{2j-1}=\gamma_{2j-1}, \quad \overline{\tilde{\gamma}}_{2j}=-\gamma_{2j}, \quad j=1,2,3,4.$$
The Clifford algebra representations generated by the Dirac matrices $\tilde{\gamma}$ and $\overline{\tilde{\gamma}}$ are \emph{real equivalent}, i.e. there exists a real $16\times 16$ matrix $B$ such that
$$B\tilde{\gamma}_i=\overline{\tilde{\gamma}}_iB,\quad \forall i=1,\dots,8.$$
It can be chosen so that
$$B^2=\mathrm{Id},$$
where $\mathrm{Id}=I\otimes I\otimes I\otimes I$.
Explicitly,
$$B=\sigma_z\otimes\epsilon\otimes\sigma_z\otimes \epsilon.$$
Using this matrix we define a \emph{new} set of eight $\gamma$ matrices\footnote{The $\gamma$-matrices used below should be considered as new symbols, and should not be confused with the ${\mathcal C}\ell(4,4)$ $\gamma$-matrices appearing in the formulas defining the $\tilde{\gamma}$-matrices at the beginning of this section; only the $\tilde{\gamma}$s, and not the old $\gamma$s, enter the formula below.} by:
$$\gamma_i\,=\,(i B+\mathrm{Id})\,\,\tilde{\gamma}_i\,\,(i B+\mathrm{Id})^{-1}, \quad\quad\forall i=1,\dots,8.$$
One can check that these 8 matrices are \emph{all real} and that they satisfy the desired Dirac identity:
$$\gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij} (I\otimes I\otimes I\otimes I), \quad i,j=1,\dots,8.$$
Explicitly we have:
$$\begin{aligned}
&\gamma_1=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\sigma_x\\
&\gamma_2=-\epsilon\otimes\sigma_z\otimes\epsilon\otimes I\\
&\gamma_3=\sigma_x\otimes\sigma_x\otimes\sigma_x\otimes\sigma_z\\
&\gamma_4=\epsilon\otimes\sigma_z\otimes\sigma_x\otimes \epsilon\\
&\gamma_5=\sigma_x\otimes\sigma_x\otimes\sigma_z\otimes I\\
&\gamma_6=-\epsilon\otimes I\otimes \sigma_z\otimes \epsilon\\
&\gamma_7=\sigma_x\otimes\sigma_z\otimes I\otimes I\\
&\gamma_8=\sigma_x\otimes \epsilon\otimes \sigma_z\otimes \epsilon.
\end{aligned}$$
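\begin{remark}
The two-step procedure is conveniently verified by machine: the sketch below (Python/NumPy) conjugates the $\tilde{\gamma}$s by $iB+\mathrm{Id}$ and checks that the resulting matrices are real and satisfy the Euclidean Dirac identity; they agree with the list above up to inessential sign conventions.
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]]); ep = np.array([[0., 1.], [-1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
kron = lambda *ms: reduce(np.kron, ms)

# Cl(4,4) Dirac matrices \eqref{dirga}, the tilded generators, and B
g44 = [kron(sx, sx, sx, sx), kron(sx, sx, sx, ep), kron(sx, sx, sx, sz),
       kron(sx, sx, ep, I2), kron(sx, sx, sz, I2), kron(sx, ep, I2, I2),
       kron(sx, sz, I2, I2), kron(ep, I2, I2, I2)]
gt = [(1j * m if k % 2 else m) for k, m in enumerate(g44)]  # i on gamma_{2j}
B = kron(sz, ep, sz, ep)
T = 1j * B + np.eye(16)

gam = [T @ m @ np.linalg.inv(T) for m in gt]
assert all(np.allclose(m.imag, 0.) for m in gam)            # all real
gam = [m.real for m in gam]
for i in range(8):
    for j in range(8):
        acomm = gam[i] @ gam[j] + gam[j] @ gam[i]
        assert np.allclose(acomm, 2. * (i == j) * np.eye(16))
print("real Cl(8,0) generators constructed")
\end{verbatim}
\end{remark}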
The 28 generators of $\mathfrak{so}(8,0)$ in the Majorana-Dirac spinor representation $\rho$ in the space of Dirac spinors $S=\mathbb{R}^{16}$ are given by
$$\rho(A_{I(i,j)})=\tfrac12\gamma_i\gamma_j, \quad 1\leq i<j\leq 8,$$
where again we have used the function $I=I(i,j)$ defined in \eqref{Iij}. Note that since now $i<j$ can run from 1 to 8, the function has a range from 1 to 28.
We add to this the scaling generator $\rho(A_{29})$, extending the Lie algebra $\mathfrak{so}(8,0)$ to $\mathfrak{coa}(8,0)=\mathbb{R}\oplus\mathfrak{so}(8,0)$, given by
$$\rho(A_{29})=\tfrac12 I\otimes I\otimes I\otimes I.$$
In terms of the 2-dimensional Pauli matrices these generators look like:
\begin{equation}\label{dir80}
\begin{array}{ll}
\rho(A_1)= -\tfrac12 \sigma_z \otimes \epsilon\otimes \sigma_z\otimes \sigma_x,&\rho(A_{15})=-\tfrac12 \sigma_z \otimes \sigma_x\otimes I\otimes \epsilon,\\
\rho(A_2)= \tfrac12 I \otimes I\otimes I\otimes \epsilon ,&\rho(A_{16})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \sigma_x,\\
\rho(A_3)= \tfrac12 \sigma_z \otimes \epsilon\otimes \sigma_z\otimes \sigma_z ,&\rho(A_{17})=\tfrac12 \sigma_z \otimes I \otimes \epsilon\otimes I ,\\
\rho(A_4)=\tfrac12 \sigma_z \otimes \epsilon\otimes I\otimes \sigma_z,&\rho(A_{18})=\tfrac12 I \otimes \epsilon\otimes \sigma_x\otimes \sigma_z ,\\
\rho(A_5)= -\tfrac12 I \otimes I\otimes \sigma_z\otimes \epsilon,&\rho(A_{19})=-\tfrac12 \sigma_z \otimes I\otimes \sigma_x\otimes \epsilon ,\\
\rho(A_6)= -\tfrac12 \sigma_z \otimes \epsilon\otimes I\otimes \sigma_x,&\rho(A_{20})=\tfrac12 I \otimes \epsilon\otimes \sigma_z\otimes I ,\\
\rho(A_7)= \tfrac12 I \otimes I\otimes \epsilon\otimes \sigma_x,&\rho(A_{21})=\tfrac12 \sigma_z \otimes \sigma_z\otimes \sigma_z\otimes \epsilon,\\
\rho(A_8)= -\tfrac12 \sigma_z \otimes \epsilon\otimes \sigma_x\otimes I,&\rho(A_{22})=\tfrac12 I \otimes \sigma_z\otimes \epsilon\otimes \sigma_z ,\\
\rho(A_9)= \tfrac12 I \otimes I\otimes \epsilon\otimes \sigma_z,&\rho(A_{23})=-\tfrac12 \sigma_z \otimes \sigma_x\otimes \sigma_x\otimes \epsilon ,\\
\rho(A_{10})=\tfrac12 \sigma_z \otimes \epsilon\otimes \epsilon\otimes \epsilon,&\rho(A_{24})=-\tfrac12 I \otimes \sigma_z\otimes \epsilon\otimes \sigma_x ,\\
\rho(A_{11})= -\tfrac12 \sigma_z \otimes \sigma_x\otimes \epsilon\otimes \sigma_z ,&\rho(A_{25})=-\tfrac12 \sigma_z \otimes \sigma_x\otimes \epsilon\otimes I ,\\
\rho(A_{12})= -\tfrac12 I \otimes \sigma_z\otimes \sigma_x\otimes \epsilon,&\rho(A_{26})=\tfrac12 I \otimes \sigma_z\otimes I\otimes \epsilon ,\\
\rho(A_{13})=\tfrac12 \sigma_z \otimes \sigma_x\otimes \epsilon\otimes \sigma_x,&\rho(A_{27})=-\tfrac12 \sigma_z \otimes \epsilon\otimes I\otimes I,\\
\rho(A_{14})= -\tfrac12 I \otimes \sigma_z\otimes \epsilon\otimes I,&\rho(A_{28})=-\tfrac12 I \otimes \sigma_x\otimes \sigma_z\otimes \epsilon.
\end{array}
\end{equation}
Similarly as in the case of $\mathfrak{so}(4,4)$, this 16-dimensional representation of $\mathbb{R}\oplus\mathfrak{so}(8,0)$ is \emph{reducible}, again due to the appearance of only $I$ and $\sigma_z$ as the first factors in the above formulas. It \emph{splits} into two real $8$-dimensional Weyl representations
$$\rho=\rho_+\oplus\rho_-\quad\mathrm{in}\quad S=S_+\oplus S_-, \quad \mathrm{dim}_\mathbb{R} S_\pm=8.$$
On generators of $\mathfrak{so}(8,0)$ these two 8-dimensional representations $\rho_\pm$ are given by:
\begin{equation}
\begin{array}{ll}
\rho_\pm(A_1)= \mp\tfrac12 \epsilon\otimes \sigma_z\otimes \sigma_x,&\rho_\pm(A_{15})=\mp\tfrac12 \sigma_x\otimes I\otimes \epsilon,\\
\rho_\pm(A_2)= \tfrac12 I\otimes I\otimes \epsilon ,&\rho_\pm(A_{16})=\tfrac12 \epsilon\otimes \sigma_x\otimes \sigma_x,\\
\rho_\pm(A_3)= \pm\tfrac12 \epsilon\otimes \sigma_z\otimes \sigma_z ,&\rho_\pm(A_{17})=\pm\tfrac12 I \otimes \epsilon\otimes I ,\\
\rho_\pm(A_4)=\pm\tfrac12 \epsilon\otimes I\otimes \sigma_z,&\rho_\pm(A_{18})=\tfrac12 \epsilon\otimes \sigma_x\otimes \sigma_z ,\\
\rho_\pm(A_5)= -\tfrac12 I\otimes \sigma_z\otimes \epsilon,&\rho_\pm(A_{19})=\mp\tfrac12 I\otimes \sigma_x\otimes \epsilon ,\\
\rho_\pm(A_6)= \mp\tfrac12 \epsilon\otimes I\otimes \sigma_x,&\rho_\pm(A_{20})=\tfrac12 \epsilon\otimes \sigma_z\otimes I ,\\
\rho_\pm(A_7)= \tfrac12 I\otimes \epsilon\otimes \sigma_x,&\rho_\pm(A_{21})=\pm\tfrac12 \sigma_z\otimes \sigma_z\otimes \epsilon,\\
\rho_\pm(A_8)= \mp\tfrac12 \epsilon\otimes \sigma_x\otimes I,&\rho_\pm(A_{22})=\tfrac12 \sigma_z\otimes \epsilon\otimes \sigma_z ,\\
\rho_\pm(A_9)= \tfrac12 I\otimes \epsilon\otimes \sigma_z,&\rho_\pm(A_{23})=\mp\tfrac12 \sigma_x\otimes \sigma_x\otimes \epsilon ,\\
\rho_\pm(A_{10})=\pm\tfrac12 \epsilon\otimes \epsilon\otimes \epsilon,&\rho_\pm(A_{24})=-\tfrac12 \sigma_z\otimes \epsilon\otimes \sigma_x ,\\
\rho_\pm(A_{11})= \mp\tfrac12 \sigma_x\otimes \epsilon\otimes \sigma_z ,&\rho_\pm(A_{25})=\mp\tfrac12 \sigma_x\otimes \epsilon\otimes I ,\\
\rho_\pm(A_{12})= -\tfrac12 \sigma_z\otimes \sigma_x\otimes \epsilon,&\rho_\pm(A_{26})=\tfrac12 \sigma_z\otimes I\otimes \epsilon ,\\
\rho_\pm(A_{13})=\pm\tfrac12 \sigma_x\otimes \epsilon\otimes \sigma_x,&\rho_\pm(A_{27})=\mp\tfrac12 \epsilon\otimes I\otimes I,\\
\rho_\pm(A_{14})= -\tfrac12 \sigma_z\otimes \epsilon\otimes I,&\rho_\pm(A_{28})=-\tfrac12 \sigma_x\otimes \sigma_z\otimes \epsilon.
\end{array}\label{weylso8}
\end{equation}
We extend them to $\mathbb{R}\oplus\mathfrak{so}(8,0)$ by adding
$$\rho_\pm(A_{29})=\tfrac12 I\otimes I\otimes I.$$
It follows that the Weyl representations $(\rho_\pm,S_\pm)$ of $\mathfrak{so}(8,0)$ are \emph{irreducible} and \emph{nonequivalent}.
We use them to find the defining representation $(\tau,R)$ of $\mathfrak{so}(8,0)$ in the vector space $R=\mathbb{R}^8$ of vectors. We again consider the tensor product representation $\rho_+\otimes\rho_-$. It decomposes as:
$$\rho_+\otimes\rho_-=\alpha\oplus\tau\quad\mathrm{in}\quad S_+\otimes S_-=T_{56}\oplus R,\quad\mathrm{with}\quad \dim_\mathbb{R}(R)=8,\,\,\dim_\mathbb{R}(T_{56})=56,$$
having irreducible components $(\alpha,T_{56})$ and $(\tau,R)$ of respective dimensions 56 and 8. Explicitly, on generators $A_I$ of $\mathbb{R}\oplus\mathfrak{so}(8,0)$, the 8-dimensional representation $\tau$ reads:
\begin{equation}
\begin{array}{llll}
\tau(A_1)=E_{38}-E_{83},&\quad \tau(A_8)=E_{35}-E_{53},&\quad \tau(A_{15})=E_{52}-E_{25},&\quad \tau(A_{22})=E_{68}-E_{86},\\
\tau(A_2)=E_{78}-E_{87}, &\quad\tau(A_9)=E_{75}-E_{57}, &\quad \tau(A_{16})=E_{18}-E_{81},&\quad \tau(A_{23})=E_{36}-E_{63}, \\
\tau(A_3)=E_{37}-E_{73}, &\quad\tau(A_{10})=E_{54}-E_{45},&\quad \tau(A_{17})=E_{31}-E_{13},&\quad \tau(A_{24})=E_{76}-E_{67},\\
\tau(A_4)=E_{84}-E_{48},&\quad \tau(A_{11})=E_{28}-E_{82},&\quad \tau(A_{18})=E_{71}-E_{17},&\quad \tau(A_{25})=E_{64}-E_{46},\\
\tau(A_5)=E_{43}-E_{34},&\quad \tau(A_{12})=E_{32}-E_{23},&\quad \tau(A_{19})=E_{14}-E_{41},&\quad \tau(A_{26})=E_{56}-E_{65},\\
\tau(A_6)=E_{47}-E_{74},&\quad \tau(A_{13})=E_{72}-E_{27},&\quad \tau(A_{20})=E_{51}-E_{15},&\quad \tau(A_{27})=E_{26}-E_{62},\\
\tau(A_7)=E_{58}-E_{85}, &\quad \tau(A_{14})= E_{24}-E_{42},&\quad
\tau(A_{21})=E_{21}-E_{12},&\quad \tau(A_{28})=E_{16}-E_{61},
\end{array}
\label{so8}\end{equation}
where $E_{ij}$, $i,j=1,2,\dots,8$, denote $8\times 8$ matrices with zeroes everywhere except the value 1 in the entry $(i,j)$, sitting at the intersection of the $i$th row and the $j$th column.
The three real, irreducible, pairwise nonequivalent representations $(\rho_+,\rho_-,\tau)$ of $\mathfrak{so}(8,0)$ given by the formulas \eqref{weylso8} and \eqref{so8} constitute the set of the \emph{triality representations} for $\mathfrak{so}(8,0)$.
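\begin{remark}
As with \eqref{f421}, the generators \eqref{so8} are conveniently handled by machine as index pairs; the sketch below (Python/NumPy) confirms that the 28 matrices $\tau(A_I)$ are antisymmetric and linearly independent, hence a basis of $\mathfrak{so}(8)$ closed under commutators.
\begin{verbatim}
import numpy as np

# tau(A_I) = E_ab - E_ba of \eqref{so8}, entered as the pairs (a, b)
pairs = [(3, 8), (7, 8), (3, 7), (8, 4), (4, 3), (4, 7), (5, 8),
         (3, 5), (7, 5), (5, 4), (2, 8), (3, 2), (7, 2), (2, 4),
         (5, 2), (1, 8), (3, 1), (7, 1), (1, 4), (5, 1), (2, 1),
         (6, 8), (3, 6), (7, 6), (6, 4), (5, 6), (2, 6), (1, 6)]
tau = []
for a, b in pairs:
    m = np.zeros((8, 8)); m[a - 1, b - 1], m[b - 1, a - 1] = 1., -1.
    tau.append(m)

V = np.stack([m.ravel() for m in tau], axis=1)
assert np.linalg.matrix_rank(V) == 28          # a basis of so(8)
for i in range(28):
    for j in range(28):
        comm = tau[i] @ tau[j] - tau[j] @ tau[i]
        assert np.allclose(comm, -comm.T)      # closure in so(8)
print("so(8) basis checks passed")
\end{verbatim}
\end{remark}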
\section{Application: 2-step graded realizations of real forms of the exceptional Lie algebra $\mathfrak{e}_6$ }
The simple exceptional complex Lie algebra $\mathfrak{e}_6$ has the following \emph{noncompact} real forms
\begin{enumerate}
\item $\mathfrak{e}_I$, with Satake diagram
\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{oooooo}
\end{dynkinDiagram},
\item $\mathfrak{e}_{II}$, with Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}
\begin{dynkinDiagram}[edge length=.5cm]{E}{oooooo}\invol{1}{6}\invol{3}{5}
\end{dynkinDiagram},
\item $\mathfrak{e}_{III}$, with Satake diagram
\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{oo***o}\invol{1}{6}
\end{dynkinDiagram}, and
\item $\mathfrak{e}_{IV}$, with Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{o****o}
\end{dynkinDiagram}.
\end{enumerate}
\'Elie Cartan in his theses \cite{CartanPhd, CartanPhdF} mentioned a realization of the real form $\mathfrak{e}_I$ in $N=\mathbb{R}^{16}$. In the modern language, Cartan's realization is such that $\mathfrak{e}_I$ is the \emph{algebra of automorphisms} of the flat model of a \emph{parabolic geometry} of type $(E_I,P)$, where the choice of parabolic subgroup in the real form $E_I$ of the exceptional Lie group ${\bf E}_6$ is indicated by the following decoration of the Satake diagram for $\mathfrak{e}_I$: \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{ooooot}
\end{dynkinDiagram}. The structure on the 16-dimensional manifold $N=E_I/P$, whose symmetry group is $E_I$, is a Majorana-Weyl $\mathbb{R}{\bf Spin}(5,5)$ structure, i.e. the reduction of the structure group of the tangent bundle $\mathrm{T}N$ to $\mathbb{R}{\bf Spin}(5,5)\subset\mathbf{GL}(16,\mathbb{R})$ in the irreducible 16-dimensional representation of Majorana-Weyl spinors \cite{traut}. \emph{This geometry, as 1-step graded, is quite different from the 2-step graded geometries considered in our paper}. We also mention that if we wanted to have a realization of, say, $\mathfrak{e}_{II}$ or $\mathfrak{e}_{III}$ in the spirit of Cartan's realization of $\mathfrak{e}_I$, i.e. if we crossed one lateral node in the Satake diagram of $\mathfrak{e}_{II}$ or $\mathfrak{e}_{III}$, we would be forced to cross also the conjugate lateral node, resulting in the Satake diagrams \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}
\begin{dynkinDiagram}[edge length=.5cm]{E}{toooot}\invol{1}{6}\invol{3}{5}
\end{dynkinDiagram} or \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{to***t}\invol{1}{6
\end{dynkinDiagram},
which would give realizations of the respective $\mathfrak{e}_{II}$ and $\mathfrak{e}_{III}$ in dimension \emph{twenty four}. This we did in \cite{DJPZ}, providing realizations of $\mathfrak{e}_{II}$ and $\mathfrak{e}_{III}$ as Lie algebras of CR-automorphisms of certain 24-dimensional CR manifolds of CR dimension 16 and CR (real) codimension 8. The important point about these realizations of these two real forms of $\mathfrak{e}_6$ was that the corresponding geometries were 2-step graded, as in the case of Cartan's realization of $\mathfrak{f}_I$, and they can also be thought of as realizations in terms of the symmetry algebras of the structure $(M,{\mathcal D})$, where $M$ is a certain 24-dimensional real manifold, and $\mathcal D$ is a real rank 16 distribution on $M$ with $[{\mathcal D},{\mathcal D}]=\mathrm{T}M$. Thus these two geometries described by us in \cite{DJPZ} are 2-step graded geometries of distributions, very much like Cartan's realization of $\mathfrak{f}_I$.
In this section we give the remaining similar realizations for the yet untreated cases of $\mathfrak{e}_{I}$ and $\mathfrak{e}_{IV}$.
\subsection{Realizations of $\mathfrak{e}_{I}$ and $\mathfrak{e}_{IV}$: generalities}
To get realizations of $\mathfrak{e}_{I}$ and $\mathfrak{e}_{IV}$ in dimension 24, we decorate the Satake diagrams of these two Lie algebras as follows: \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{toooot}
\end{dynkinDiagram} and \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{t****t}
\end{dynkinDiagram}.
These choices of a parabolic subalgebra in the respective $\mathfrak{e}_{I}$ and $\mathfrak{e}_{IV}$ produce the following gradation in these algebras:
$$\mathfrak{e}_A=\mathfrak{n}_{\minu2A}\oplus\mathfrak{n}_{\minu1A}\oplus\mathfrak{n}_{0A}\oplus\mathfrak{n}_{1A}\oplus\mathfrak{n}_{2A}\quad \mathrm{for}\quad A=I,IV,
$$
with
$$\mathfrak{n}_{\minu A}=\mathfrak{n}_{\minu2A}\oplus\mathfrak{n}_{\minu1A}\quad \mathrm{for}\quad A=I,IV,$$
being 2-step nilpotent and having grading components $\mathfrak{n}_{\minu2A}$ and $\mathfrak{n}_{\minu1A}$ of respective dimensions $r_A=8$ and $s_A=16$,
$$r_A=\dim(\mathfrak{n}_{\minu2A})=8,\quad\quad s_A=\dim(\mathfrak{n}_{\minu1A})=16\quad \mathrm{for}\quad A=I,IV.$$
The Lie algebra $\mathfrak{n}_{0A}$ in the Tanaka prolongation of $\mathfrak{n}_{\minu A}$ up to $0^{th}$ order is
\begin{enumerate}
\item $\mathfrak{n}_{0I}=2\mathbb{R}\oplus\mathfrak{so}(4,4)=\mathbb{R}\oplus\mathfrak{co}(4,4)$ in the case of $\mathfrak{e}_I$, and
\item $\mathfrak{n}_{0IV}=2\mathbb{R}\oplus\mathfrak{so}(8,0)=\mathbb{R}\oplus\mathfrak{co}(8,0)$ in the case of $\mathfrak{e}_{IV}$.
\end{enumerate}
The last two statements, (1) and (2), become clear when one looks at the Satake diagrams we have just decorated. Stripping the crossed nodes off these diagrams leaves \begin{dynkinDiagram}[edge length=.4cm]{D}{oooo}\end{dynkinDiagram} and \begin{dynkinDiagram}[edge length=.4cm]{D}{****}
\end{dynkinDiagram}, which are clearly the simple parts of the $\mathfrak{n}_{0A}$'s above.
Because of the grading property $[\mathfrak{n}_{iA},\mathfrak{n}_{jA}]\subset\mathfrak{n}_{(i+j)A}$ in the Lie algebras $\mathfrak{e}_A$, restricting to the subalgebras $\mathfrak{n}_{\minu A}$ we obtain representations $(\rho_A,\mathfrak{n}_{\minu1A})$ and $(\tau_A,\mathfrak{n}_{\minu 2A})$, given by the adjoint action of $\mathfrak{co}(4,4)$ or $\mathfrak{co}(8,0)$, which naturally sit in $\mathfrak{n}_{0A}$, respectively.
It is no surprise that the representations $(\rho_A,\mathfrak{n}_{\minu1A})$ are the Dirac spinor representations \eqref{dir44} and \eqref{dir80} of the respective $\mathfrak{co}(4,4)$ and $\mathfrak{co}(8,0)$ parts of the $\mathfrak{n}_{0A}$'s in the 16-dimensional real vector spaces $\mathfrak{n}_{\minu1A}$. As such, these representations are \emph{reducible}, and they split each $\mathfrak{n}_{\minu1A}$, $A=I,IV$, into two irreducible representations $(\rho_{A\pm},\mathfrak{n}_{\minu1A\pm})$ in real 8-dimensional spaces $\mathfrak{n}_{\minu1A\pm}$ of Weyl spinors. This shows that the \emph{2-step nilpotent Lie algebra} $\mathfrak{n}_{\minu A}$ \emph{is}, for each $A=I,IV$, \emph{a natural representation space for the action of the three triality representations} $(\rho_+,\rho_-,\tau)$. We have,
$$\begin{aligned}\mathfrak{n}_{\minu A}=&\mathfrak{n}_{\minu2A}\oplus\mathfrak{n}_{\minu1A}=\\
&\mathfrak{n}_{\minu2A}\oplus\mathfrak{n}_{\minu1A+}\oplus\mathfrak{n}_{\minu1A-},\end{aligned}$$
with the 8-dimensional real irreducible representations $(\tau_A,\rho_{A+},\rho_{A-})$ of $\mathfrak{co}(4,4)$ or $\mathfrak{co}(8,0)$ acting in the respective $\mathfrak{n}_{\minu2A}$, $\mathfrak{n}_{\minu1A+}$ and $\mathfrak{n}_{\minu1A-}$.
We summarize the considerations of this section in the following theorem.
\begin{theorem} (Natural realization of the triality representations)
\begin{enumerate} \item The $\mathfrak{so}(4,4)$ triality:\\
The real form $\mathfrak{e}_I$ of the simple exceptional Lie algebra $\mathfrak{e}_6$, when graded according to the following decoration of its Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{toooot}
\end{dynkinDiagram}, has the $\mathfrak{n}_{\minu}$ part as a real 24-dimensional vector space, naturally split onto the three real 8-dimensional components $\mathfrak{n}_{\minu2}$, $\mathfrak{n}_{\minu1+}$ and $\mathfrak{n}_{\minu1-}$,
$$\mathfrak{n}_{\minu}=\mathfrak{n}_{\minu2}\oplus\mathfrak{n}_{\minu1+}\oplus\mathfrak{n}_{\minu1-}.$$
This decomposition is $\mathfrak{so}(4,4)$ invariant and consists of components $\mathfrak{n}_{\minu2}$, $\mathfrak{n}_{\minu1+}$ and $\mathfrak{n}_{\minu1-}$, on which the triality representation
$$\tau\oplus\rho_+\oplus\rho_-$$
of $\mathfrak{so}(4,4)$ acts irreducibly.
\item The $\mathfrak{so}(8,0)$ triality:\\
Likewise, the real form $\mathfrak{e}_{IV}$ of the simple exceptional Lie algebra $\mathfrak{e}_6$, when graded according to the following decoration of its Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{t****t}
\end{dynkinDiagram}, has the $\mathfrak{n}_{\minu}$ part as a real 24-dimensional vector space, naturally split onto the three real 8-dimensional components $\mathfrak{n}_{\minu2}$, $\mathfrak{n}_{\minu1+}$ and $\mathfrak{n}_{\minu1-}$,
$$\mathfrak{n}_{\minu}=\mathfrak{n}_{\minu2}\oplus\mathfrak{n}_{\minu1+}\oplus\mathfrak{n}_{\minu1-}.$$
This decomposition is $\mathfrak{so}(8,0)$ invariant and consists of components $\mathfrak{n}_{\minu2}$, $\mathfrak{n}_{\minu1+}$ and $\mathfrak{n}_{\minu1-}$, on which the triality representation
$$\tau\oplus\rho_+\oplus\rho_-$$
of $\mathfrak{so}(8,0)$ acts irreducibly.
\end{enumerate}
\end{theorem}
\subsection{An explicit realization of $\mathfrak{e}_I$ in dimension 24}
Taking as $(\rho,S)$ the Dirac spinor representation \eqref{dir44} of $\mathfrak{co}(4,4)$ in dimension 16, and as $(\tau,R)$ the vectorial representation \eqref{tauweyl} of $\mathfrak{co}(4,4)$ in dimension 8, we are again in the situation where the map $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$ is missing from the triple $(\rho,\tau,\omega)$ described by the magical equation \eqref{maga}. Solving this equation for $\omega$ we obtain $\omega^i{}_{\mu\nu}$, $i=1,\dots,8$, $\mu,\nu=1,\dots,16$, which leads to the eight 2-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$ on the 16-dimensional manifold $N=\mathbb{R}^{16}$, which read
$$\begin{aligned}
\omega^1=\,\,&-{\rm d} x^1\wedge{\rm d} x^{10}+{\rm d} x^2\wedge{\rm d} x^{9}+{\rm d} x^7\wedge{\rm d} x^{16}-{\rm d} x^8\wedge{\rm d} x^{15}\\
\omega^2=\,\,&-{\rm d} x^2\wedge{\rm d} x^{12}+{\rm d} x^4\wedge{\rm d} x^{10}+{\rm d} x^6\wedge{\rm d} x^{16}-{\rm d} x^8\wedge{\rm d} x^{14}\\
\omega^3=\,\,&-{\rm d} x^1\wedge{\rm d} x^{12}+{\rm d} x^4\wedge{\rm d} x^{9}+{\rm d} x^5\wedge{\rm d} x^{16}-{\rm d} x^8\wedge{\rm d} x^{13}\\
\omega^4=\,\,&-{\rm d} x^5\wedge{\rm d} x^{10}+{\rm d} x^6\wedge{\rm d} x^{9}+{\rm d} x^7\wedge{\rm d} x^{12}-{\rm d} x^8\wedge{\rm d} x^{11}\\
\omega^5=\,\,&-{\rm d} x^2\wedge{\rm d} x^{11}+{\rm d} x^3\wedge{\rm d} x^{10}+{\rm d} x^6\wedge{\rm d} x^{15}-{\rm d} x^7\wedge{\rm d} x^{14}\\
\omega^6=\,\,&-{\rm d} x^1\wedge{\rm d} x^{11}+{\rm d} x^3\wedge{\rm d} x^{9}+{\rm d} x^5\wedge{\rm d} x^{15}-{\rm d} x^7\wedge{\rm d} x^{13}\\
\omega^7=\,\,&-{\rm d} x^3\wedge{\rm d} x^{12}+{\rm d} x^4\wedge{\rm d} x^{11}+{\rm d} x^5\wedge{\rm d} x^{14}-{\rm d} x^6\wedge{\rm d} x^{13}\\
\omega^8=\,\,&-{\rm d} x^1\wedge{\rm d} x^{14}+{\rm d} x^2\wedge{\rm d} x^{13}+{\rm d} x^3\wedge{\rm d} x^{16}-{\rm d} x^4\wedge{\rm d} x^{15}.
\end{aligned}$$
Contactifying the manifold $N=\mathbb{R}^{16}$ equipped with these 2-forms yields the following theorem; note that each 1-form $\lambda^i$ below is a potential for the corresponding 2-form, ${\rm d}\lambda^i=\omega^i$.
\begin{theorem}\label{diste6}
Let $M=\mathbb{R}^{24}$ with coordinates $(u^1,\dots,u^8,x^1,\dots ,x^{16})$, and consider eight 1-forms $\lambda^1,\dots,\lambda^8$ on $M$ given by
$$\begin{aligned}
\lambda^1=\,\,&{\rm d} u^1-x^1{\rm d} x^{10}+ x^2{\rm d} x^{9}+ x^7{\rm d} x^{16}- x^8{\rm d} x^{15}\\
\lambda^2=\,\,&{\rm d} u^2- x^2{\rm d} x^{12}+ x^4{\rm d} x^{10}+ x^6{\rm d} x^{16}- x^8{\rm d} x^{14}\\
\lambda^3=\,\,&{\rm d} u^3- x^1{\rm d} x^{12}+ x^4{\rm d} x^{9}+x^5{\rm d} x^{16}- x^8{\rm d} x^{13}\\
\lambda^4=\,\,&{\rm d} u^4- x^5{\rm d} x^{10}+ x^6{\rm d} x^{9}+ x^7{\rm d} x^{12}- x^8{\rm d} x^{11}\\
\lambda^5=\,\,&{\rm d} u^5-x^2{\rm d} x^{11}+ x^3{\rm d} x^{10}+ x^6{\rm d} x^{15}- x^7{\rm d} x^{14}\\
\lambda^6=\,\,&{\rm d} u^6- x^1{\rm d} x^{11}+ x^3{\rm d} x^{9}+ x^5{\rm d} x^{15}- x^7{\rm d} x^{13}\\
\lambda^7=\,\,&{\rm d} u^7- x^3{\rm d} x^{12}+ x^4{\rm d} x^{11}+ x^5{\rm d} x^{14}- x^6{\rm d} x^{13}\\
\lambda^8=\,\,&{\rm d} u^8- x^1{\rm d} x^{14}+ x^2{\rm d} x^{13}+ x^3{\rm d} x^{16}-x^4{\rm d} x^{15}.
\end{aligned}$$
The rank 16 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{24}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^8=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^{16})$ is the Dirac spinors representation \eqref{dir44} of $\mathfrak{n}_{00}=\mathfrak{co}(4,4)$, and $(\tau,R=\mathbb{R}^8)$ is the vectorial representation \eqref{tauweyl} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple exceptional Lie algebra $\mathfrak{e}_{I}$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{e}_{I},$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S=S_+\oplus S_-$,
$$
\mathfrak{n}_0=\mathbb{R}\oplus\mathfrak{co}(4,4)\supset \mathfrak{n}_{00},$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
and with the spaces $S_\pm$ being the carrier spaces of the Weyl spinor representations $\rho_\pm$ of $\mathfrak{co}(4,4)$.
The gradation in $\mathfrak{e}_I$ is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{e}_{I}$.
The contactification $(M,{\mathcal D})$ is locally the flat model for the parabolic geometry of type $(E_{I},P_{I})$ related to the following \emph{crossed} Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}} \begin{dynkinDiagram}[edge length=.5cm]{E}{toooot}
\end{dynkinDiagram}.
\end{theorem}
\subsection{An explicit realization of $\mathfrak{e}_{IV}$ in dimension 24}
Similarly to the previous section, we take as $(\rho,S)$ the Dirac spinor representation \eqref{dir80} of $\mathfrak{co}(8,0)$ in dimension 16, and as $(\tau,R)$ the vectorial representation \eqref{so8} of $\mathfrak{co}(8,0)$ in dimension 8, and we search for $\omega\in\mathrm{Hom}(\bigwedge^2S,R)$ solving the magical equation \eqref{maga}. We obtain $\omega^i{}_{\mu\nu}$, $i=1,\dots,8$, $\mu,\nu=1,\dots,16$, which provides us with the eight 2-forms $\omega^i=\tfrac12\omega^i{}_{\mu\nu}{\rm d} x^\mu\wedge{\rm d} x^\nu$ on the 16-dimensional manifold $N=\mathbb{R}^{16}$, which read
$$\begin{aligned}
\omega^1=\,\,&{\rm d} x^1\wedge{\rm d} x^{9}+{\rm d} x^2\wedge{\rm d} x^{10}+{\rm d} x^3\wedge{\rm d} x^{11}+{\rm d} x^4\wedge{\rm d} x^{12}-\\&{\rm d} x^5\wedge{\rm d} x^{13}-{\rm d} x^6\wedge{\rm d} x^{14}-{\rm d} x^7\wedge{\rm d} x^{15}-{\rm d} x^8\wedge{\rm d} x^{16}\\
\omega^2=\,\,&-{\rm d} x^1\wedge{\rm d} x^{10}+{\rm d} x^2\wedge{\rm d} x^{9}+{\rm d} x^3\wedge{\rm d} x^{12}-{\rm d} x^4\wedge{\rm d} x^{11}-\\&{\rm d} x^5\wedge{\rm d} x^{14}+{\rm d} x^6\wedge{\rm d} x^{13}+{\rm d} x^7\wedge{\rm d} x^{16}-{\rm d} x^8\wedge{\rm d} x^{15}\\
\omega^3=\,\,&-{\rm d} x^1\wedge{\rm d} x^{11}-{\rm d} x^2\wedge{\rm d} x^{12}+{\rm d} x^3\wedge{\rm d} x^{9}+{\rm d} x^4\wedge{\rm d} x^{10}+\\&{\rm d} x^5\wedge{\rm d} x^{15}+{\rm d} x^6\wedge{\rm d} x^{16}-{\rm d} x^7\wedge{\rm d} x^{13}-{\rm d} x^8\wedge{\rm d} x^{14}\\
\omega^4=\,\,&-{\rm d} x^1\wedge{\rm d} x^{12}+{\rm d} x^2\wedge{\rm d} x^{11}-{\rm d} x^3\wedge{\rm d} x^{10}+{\rm d} x^4\wedge{\rm d} x^{9}+\\&{\rm d} x^5\wedge{\rm d} x^{16}-{\rm d} x^6\wedge{\rm d} x^{15}+{\rm d} x^7\wedge{\rm d} x^{14}-{\rm d} x^8\wedge{\rm d} x^{13}\\
\omega^5=\,\,&{\rm d} x^1\wedge{\rm d} x^{13}+{\rm d} x^2\wedge{\rm d} x^{14}-{\rm d} x^3\wedge{\rm d} x^{15}-{\rm d} x^4\wedge{\rm d} x^{16}+\\&{\rm d} x^5\wedge{\rm d} x^{9}+{\rm d} x^6\wedge{\rm d} x^{10}-{\rm d} x^7\wedge{\rm d} x^{11}-{\rm d} x^8\wedge{\rm d} x^{12}\\
\omega^6=\,\,&{\rm d} x^1\wedge{\rm d} x^{14}-{\rm d} x^2\wedge{\rm d} x^{13}-{\rm d} x^3\wedge{\rm d} x^{16}+{\rm d} x^4\wedge{\rm d} x^{15}-\\&{\rm d} x^5\wedge{\rm d} x^{10}+{\rm d} x^6\wedge{\rm d} x^{9}+{\rm d} x^7\wedge{\rm d} x^{12}-{\rm d} x^8\wedge{\rm d} x^{11}\\
\omega^7=\,\,&{\rm d} x^1\wedge{\rm d} x^{15}-{\rm d} x^2\wedge{\rm d} x^{16}+{\rm d} x^3\wedge{\rm d} x^{13}-{\rm d} x^4\wedge{\rm d} x^{14}+\\&{\rm d} x^5\wedge{\rm d} x^{11}-{\rm d} x^6\wedge{\rm d} x^{12}+{\rm d} x^7\wedge{\rm d} x^{9}-{\rm d} x^8\wedge{\rm d} x^{10}\\
\omega^8=\,\,&-{\rm d} x^1\wedge{\rm d} x^{16}-{\rm d} x^2\wedge{\rm d} x^{15}-{\rm d} x^3\wedge{\rm d} x^{14}-{\rm d} x^4\wedge{\rm d} x^{13}-\\&{\rm d} x^5\wedge{\rm d} x^{12}-{\rm d} x^6\wedge{\rm d} x^{11}-{\rm d} x^7\wedge{\rm d} x^{10}-{\rm d} x^8\wedge{\rm d} x^{9}.
\end{aligned}$$
Contactifying, we obtain the following theorem.
\begin{theorem}
Let $M=\mathbb{R}^{24}$ with coordinates $(u^1,\dots,u^8,x^1,\dots ,x^{16})$, and consider eight 1-forms $\lambda^1,\dots,\lambda^8$ on $M$ given by
$$\begin{aligned}
\lambda^1=\,\,&{\rm d} u^1+ x^1{\rm d} x^{9}+ x^2{\rm d} x^{10}+ x^3 {\rm d} x^{11}+ x^4 {\rm d} x^{12}- x^5 {\rm d} x^{13}- x^6 {\rm d} x^{14}- x^7 {\rm d} x^{15}- x^8 {\rm d} x^{16}\\
\lambda^2=\,\,&{\rm d} u^2- x^1 {\rm d} x^{10}+ x^2 {\rm d} x^{9}+ x^3 {\rm d} x^{12}- x^4 {\rm d} x^{11}- x^5 {\rm d} x^{14}+ x^6 {\rm d} x^{13}+ x^7 {\rm d} x^{16}- x^8 {\rm d} x^{15}\\
\lambda^3=\,\,&{\rm d} u^3- x^1 {\rm d} x^{11}- x^2 {\rm d} x^{12}+ x^3 {\rm d} x^{9}+ x^4 {\rm d} x^{10}+ x^5 {\rm d} x^{15}+ x^6 {\rm d} x^{16}- x^7 {\rm d} x^{13}- x^8 {\rm d} x^{14}\\
\lambda^4=\,\,&{\rm d} u^4- x^1 {\rm d} x^{12}+ x^2 {\rm d} x^{11}- x^3 {\rm d} x^{10}+ x^4 {\rm d} x^{9}+ x^5 {\rm d} x^{16}- x^6 {\rm d} x^{15}+ x^7 {\rm d} x^{14}- x^8 {\rm d} x^{13}\\
\lambda^5=\,\,&{\rm d} u^5+ x^1 {\rm d} x^{13}+ x^2 {\rm d} x^{14}- x^3 {\rm d} x^{15}- x^4 {\rm d} x^{16}+ x^5 {\rm d} x^{9}+ x^6 {\rm d} x^{10}- x^7 {\rm d} x^{11}- x^8 {\rm d} x^{12}\\
\lambda^6=\,\,&{\rm d} u^6+ x^1 {\rm d} x^{14}- x^2 {\rm d} x^{13}- x^3 {\rm d} x^{16}+ x^4 {\rm d} x^{15}- x^5 {\rm d} x^{10}+ x^6 {\rm d} x^{9}+ x^7 {\rm d} x^{12}- x^8 {\rm d} x^{11}\\
\lambda^7=\,\,&{\rm d} u^7+ x^1 {\rm d} x^{15}- x^2 {\rm d} x^{16}+ x^3 {\rm d} x^{13}- x^4 {\rm d} x^{14}+ x^5 {\rm d} x^{11}- x^6 {\rm d} x^{12}+ x^7 {\rm d} x^{9}- x^8 {\rm d} x^{10}\\
\lambda^8=\,\,&{\rm d} u^8 - x^1 {\rm d} x^{16}- x^2 {\rm d} x^{15}- x^3 {\rm d} x^{14}- x^4 {\rm d} x^{13}- x^5 {\rm d} x^{12}- x^6 {\rm d} x^{11}- x^7 {\rm d} x^{10}- x^8 {\rm d} x^{9}.
\end{aligned}$$
The rank 16 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{24}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^8=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^{16})$ is the Dirac spinor representation \eqref{dir80} of $\mathfrak{n}_{00}=\mathfrak{co}(8,0)$, and $(\tau,R=\mathbb{R}^8)$ is the vectorial representation \eqref{so8} of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple exceptional Lie algebra $\mathfrak{e}_{IV}$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{e}_{IV},$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S=S_+\oplus S_-$,
$$
\mathfrak{n}_0=\mathbb{R}\oplus\mathfrak{co}(8,0)\supset \mathfrak{n}_{00},$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$,
and with the spaces $S_\pm$ being the carrier spaces of the Weyl spinor representations $\rho_\pm$ of $\mathfrak{co}(8,0)$.
The gradation in $\mathfrak{e}_{IV}$ is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{e}_{IV}$.
The contactification $(M,{\mathcal D})$ is locally the flat model for the parabolic geometry of type $(E_{IV},P_{IV})$ related to the following \emph{crossed} Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{t****t}
\end{dynkinDiagram}.
\end{theorem}
\section{Application: one more realization of $\mathfrak{e}_6$ and a warning}
Between the 24-dimensional realizations of $\mathfrak{e}_6$ mentioned in this paper and Cartan's 16-dimensional realization of $\mathfrak{e}_6$ associated with the grading \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{ooooot}\end{dynkinDiagram}, there are 21-dimensional realizations of this algebra $\mathfrak{e}_6$ associated with the following Dynkin diagram crossing
\tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{otoooo}\end{dynkinDiagram}. These define contact $\mathfrak{e}_6$ geometries and are described in \cite{CS}, pp. 425--426.
\subsection{Realization of $\mathfrak{e}_I$ in dimension 25}
Here we will briefly discuss yet another realization, now in dimension 25, corresponding to the following Dynkin diagram crossing: \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{ooooto}\end{dynkinDiagram} of $\mathfrak{e}_6$. This is, for example, mentioned in \cite{weyman}. Looking at the Satake diagrams of the real forms of $\mathfrak{e}_6$, we see that this realization is only possible for the real form $\mathfrak{e}_I$.
So we again use our Corollary \ref{cruco}, now with $\mathfrak{n}_{00}=\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})$ and with the representations $(\rho,S)$ and $(\tau,R)$ indicated in \cite{weyman}, Section 5.3:
$$\textstyle S=\mathbb{R}^2\otimes\bigwedge^2\mathbb{R}^5,\quad R=\bigwedge^2\mathbb{R}^2\otimes\bigwedge^4\mathbb{R}^5.$$
To be more explicit, we obtain these representations as follows:
\begin{itemize}
\item We start with the defining representations $\tau_2$ of $\mathfrak{sl}(2,\mathbb{R})$ in $\mathbb{R}^2$ and $\tau_5$ of $\mathfrak{sl}(5,\mathbb{R})$ in $\mathbb{R}^5$, and define the representation $$\rho=\tau_2\otimes\big(\tau_5\wedge\tau_5\big)\quad\mathrm{of}\quad \mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})\quad\mathrm{ in}\quad S=\mathbb{R}^2\otimes\bigwedge^2\mathbb{R}^5=\mathbb{R}^{20}.$$
The representation $(\rho,S)$ is an irreducible real 20-dimensional representation of
$$\mathfrak{n}_{00}=\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R}).$$
\item Then we decompose the $190$-dimensional representation $\rho\wedge\rho$ into irreducibles:
$$\textstyle \rho\wedge\rho=\alpha\oplus\tau\oplus\beta\quad\mathrm{in}\quad\bigwedge_{50}\oplus R\oplus\bigwedge_{135},$$
with $(\alpha,\bigwedge_{50})$ being 50-dimensional, $(\tau,R)$ being 5-dimensional, and $(\beta,\bigwedge_{135})$ being $135$-dimensional.
\item We take the 20-dimensional representation $(\rho,S)$ and the $5$-dimensional representation $(\tau,R)$ of $\mathfrak{n}_{00}=\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})$ as above, and apply our Corollary \ref{cruco}.
\end{itemize}
We obtain the following theorem.
\begin{theorem}
Let $M=\mathbb{R}^{25}$ with coordinates $(u^1,\dots,u^5,x^1,\dots ,x^{20})$, and consider five 1-forms $\lambda^1,\dots,\lambda^5$ on $M$ given by
$$\begin{aligned}
\lambda^1=\,\,&{\rm d} u^1- x^3{\rm d} x^{20}+ x^5{\rm d} x^{19}- x^6 {\rm d} x^{18}- x^8 {\rm d} x^{16}+ x^9 {\rm d} x^{15}- x^{10} {\rm d} x^{13}\\
\lambda^2=\,\,&{\rm d} u^2- x^2 {\rm d} x^{20}+ x^4 {\rm d} x^{19}- x^6 {\rm d} x^{17}- x^7 {\rm d} x^{16}+ x^9 {\rm d} x^{14}- x^{10} {\rm d} x^{12}\\
\lambda^3=\,\,&{\rm d} u^3- x^1 {\rm d} x^{20}+ x^4 {\rm d} x^{18}-x^5 {\rm d} x^{17}- x^7 {\rm d} x^{15}+ x^8 {\rm d} x^{14}- x^{10} {\rm d} x^{11}\\
\lambda^4=\,\,&{\rm d} u^4- x^1 {\rm d} x^{19}+ x^2 {\rm d} x^{18}- x^3 {\rm d} x^{17}- x^7 {\rm d} x^{13}+ x^8 {\rm d} x^{12}- x^9 {\rm d} x^{11}\\
\lambda^5=\,\,&{\rm d} u^5- x^1 {\rm d} x^{16}+ x^2 {\rm d} x^{15}- x^3 {\rm d} x^{14}- x^4 {\rm d} x^{13}+ x^5 {\rm d} x^{12}- x^6 {\rm d} x^{11}.
\end{aligned}$$
The rank 20 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{25}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^5=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^{20})$ is the 20-dimensional irreducible representation of $\mathfrak{n}_{00}=\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})$, and $(\tau,R=\mathbb{R}^5)$ is the 5-dimensional irreducible subrepresentation $\tau\in(\rho\wedge\rho)$ of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple exceptional Lie algebra $\mathfrak{e}_{I}$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{e}_{I},$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathbb{R}\oplus\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(5,\mathbb{R})\supset \mathfrak{n}_{00},$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$.
The gradation in $\mathfrak{e}_I$ is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{e}_{I}$.
The contactification $(M,{\mathcal D})$ is locally the flat model for the parabolic geometry of type $(E_{I},P_{I*})$ related to the following \emph{crossed} Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{ooooto}
\end{dynkinDiagram}.
\end{theorem}
\subsection{A realization of $\mathfrak{so}(7,6)$ in dimension 21} We know from \cite{CS} that the crossed Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{E}{otoooo}\end{dynkinDiagram} corresponds to the $\mathfrak{e}_I$-symmetric contact geometry in dimension 21. This corresponds to the grading
$$\mathfrak{e}_I=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\dim(\mathfrak{n}_{\pm 1})=20$, $\dim(\mathfrak{n}_{\pm2})=1$ and $\mathfrak{n}_0=\mathfrak{gl}(6,\mathbb{R})$.
Interestingly, 78 is the dimension not only of the exceptional simple Lie algebra $\mathfrak{e}_6$, but also of the \emph{simple} Lie algebras $\mathfrak{b}_6$ and $\mathfrak{c}_6$. For example, taking the crossed Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{B}{ooooot}\end{dynkinDiagram} we obtain the following gradation
$$\mathfrak{so}(7,6)=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\dim(\mathfrak{n}_{\pm 1})=6$, $\dim(\mathfrak{n}_{\pm2})=15$ and $\mathfrak{n}_0=\mathfrak{gl}(6,\mathbb{R})$, in the simple Lie algebra $\mathfrak{so}(7,6)$. Here, taking $(\rho,S)$ to be the defining representation $\rho(A)=A$ of $\mathfrak{gl}(6,\mathbb{R})$ in $S=\mathbb{R}^6$, and taking the representation $(\tau,R)$ to be $\tau=\rho\wedge\rho$ in $R=\bigwedge^2\mathbb{R}^6=\mathbb{R}^{15}$, and applying our Corollary \ref{cruco}, we get the following theorem\footnote{We invoke it just to show that we do not use only spin representations in this paper.}.
\begin{theorem}
Let $M=\mathbb{R}^{21}$ with coordinates $(u^1,\dots,u^{15},x^1,\dots ,x^{6})$, and consider fifteen 1-forms $\lambda^1,\dots,\lambda^{15}$ on $M$ given by
$$
\lambda^{I(i,j)}=\,\,{\rm d} u^{I(i,j)}- x^i{\rm d} x^{j},$$
with $$I(i,j)=1+i+\tfrac12(j-3)j,\quad 1\leq i<j\leq 6.$$
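For instance, $I(1,2)=1$, $I(2,3)=3$ and $I(5,6)=15$; one checks that $I$ enumerates the $\tbinom{6}{2}=15$ pairs with $1\leq i<j\leq 6$ by the numbers $1,\dots,15$.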
The rank 6 distribution ${\mathcal D}$ on $M$ defined as ${\mathcal D}=\{\mathrm{T}\mathbb{R}^{21}\ni X\,\,|\,\,X\hook\lambda^1=\dots=X\hook\lambda^{15}=0\}$ has its Lie algebra of infinitesimal automorphisms $\mathfrak{aut}(\mathcal D)$ isomorphic to the Tanaka prolongation of $\mathfrak{n}_{\minu}=R\oplus S$, where $(\rho,S=\mathbb{R}^{6})$ is the 6-dimensional defining representation of $\mathfrak{n}_{00}=\mathfrak{gl}(6,\mathbb{R})$, and $(\tau,R=\bigwedge^2\mathbb{R}^6)$ is the $15$-dimensional irreducible subrepresentation $\tau=\rho\wedge\rho$ of $\mathfrak{n}_{00}$.
The symmetry algebra $\mathfrak{aut}({\mathcal D})$ is isomorphic to the simple Lie algebra $\mathfrak{so}(7,6)$,
$$\mathfrak{aut}({\mathcal D})=\mathfrak{so}(7,6),$$
having the following natural gradation
$$\mathfrak{aut}({\mathcal D})=\mathfrak{n}_{-2}\oplus\mathfrak{n}_{-1}\oplus\mathfrak{n}_0\oplus\mathfrak{n}_1\oplus\mathfrak{n}_2,$$
with $\mathfrak{n}_{-2}=R$, $\mathfrak{n}_{-1}=S$,
$$
\mathfrak{n}_0=\mathfrak{gl}(6,\mathbb{R})= \mathfrak{n}_{00},$$
$\mathfrak{n}_{1}=S^*$, $\mathfrak{n}_{2}=R^*$.
The gradation in $\mathfrak{so}(7,6)$ is inherited from the distribution structure $(M,{\mathcal D})$. The duality signs $*$ at $R^*$ and $S^*$ above are with respect to the Killing form in $\mathfrak{so}(7,6)$.
The contactification $(M,{\mathcal D})$ is locally the flat model for the parabolic geometry of type $(\mathfrak{so}(7,6),P)$ related to the following \emph{crossed} Satake diagram \tikzset{/Dynkin diagram/fold style/.style={stealth-stealth,thin,
shorten <=1mm,shorten >=1mm}}\begin{dynkinDiagram}[edge length=.5cm]{B}{ooooot}
\end{dynkinDiagram}.
\end{theorem}
\begin{bibdiv}
\begin{biblist}
\bib{AC}{article}
{
author={Dmitrij V. Alekseevsky},
author={Vicente Cortes},
title={Classification of N-(super)-extended Poincaré algebras and bilinear invariants of the spinor representation of Spin(p, q)},
journal={Comm. Math. Phys.},
volume={183(3)},
pages={477–510},
year={1997}
}
\bib{Alt}{article}
{
author={Andrea Altomani},
author={Andrea Santi},
title={Tanaka structures modeled on extended Poincaré algebras},
journal={Indiana Univ. Math. Journ.},
volume={63(1)},
pages={91–117},
year={2014}
}
\bib{bicquard}{article}
{
author = { Olivier Biquard },
title = {Quaternionic contact structures},
booktitle = {Quaternionic Structures in Mathematics and Physics},
pages = {23-30},
doi = {10.1142/9789812810038\_0003},
URL = {https://www.worldscientific.com/doi/abs/10.1142/9789812810038_0003},
eprint = {https://www.worldscientific.com/doi/pdf/10.1142/9789812810038_0003},
}
\bib{CartanPhd}{article}{
author={Cartan, \'Elie},
title={\"Uber die einfachen Transformationsgruppen},
journal={Ber. Verh. k. Sachs. Ges. d. Wiss. Leipzig},
date={1893},
pages={395--420},
}
\bib{CartanPhdF}{article}
{
author={Cartan, \'Elie},
title={Sur la structure des groupes de transformations finis et continus},
journal={Oeuvres, 1},
date={1894},
pages={137-287},
}
\bib{CS}{book}{
author={\v{C}ap, Andreas},
author={Slov\'{a}k, Jan},
title={Parabolic geometries. I},
series={Mathematical Surveys and Monographs},
volume={154},
note={Background and general theory},
publisher={American Mathematical Society, Providence, RI},
date={2009},
pages={x+628},
isbn={978-0-8218-2681-2},
review={\MR{2532439}},
doi={10.1090/surv/154},
}
\bib{He}{article}{
author={Helgason, Sigurdur},
title={Invariant differential equations on homogeneous manifolds},
journal={BAMS},
volume={83},
date={1977},
pages={751-756},
}
\bib{DJPZ}{article}{
author={Hill, C. Denson},
author={Merker, Joël},
author={Nie, Zhaohu},
author={Nurowski, Paweł},
title = {Accidental CR structures},
publisher = {arXiv},
year = {2023},
doi = {10.48550/ARXIV.2302.03119},
url = {https://arxiv.org/abs/2302.03119},
}
\bib{Krug}{article}
{
author={M. G. Molina},
author={B. Kruglikov},
author={I. Markina},
author={A. Vasil'ev},
title={Rigidity of 2-Step Carnot Groups},
journal={Journ. Geom. Anal.},
volume={28},
pages={1477–1501},
year={2018}
}
\bib{weyman}{article}{
author = {Kraśkiewicz, Witold},
author={Weyman, Jerzy},
title = {Geometry of orbit closures for the representations associated to gradings of Lie algebras of types $E_6$, $F_4$ and $G_2$},
year = {2012},
publisher = {arXiv},
doi = {10.48550/ARXIV.1201.1102},
url = {https://arxiv.org/abs/1201.1102}
}
\bib{tanaka}{article}{
author={Tanaka, Noboru},
title={On differential systems, graded Lie algebras and pseudogroups},
journal={Journal of Mathematics of Kyoto University},
pages={1-82},
volume = {10},
year = {1970},
}
\bib{traut}{book}{
author = {Trautman, Andrzej},
title={Clifford algebras and their representations},
year={2006},
note={available at http://trautman.fuw.edu.pl},
publisher={Encyclopedia of Mathematical Physics, vol. 1, Elsevier},
editor= {J.-P. Françoise, G.L. Naber and Tsou S.T.},
address = {Oxford GB},
pages={518-530},
}
\end{biblist}
\end{bibdiv}
\end{document}
|
{
"arxiv_id": "2302.13724",
"language": "en",
"timestamp": "2023-02-28T02:28:25",
"url": "https://arxiv.org/abs/2302.13724",
"yymm": "2302"
} | \section{Introduction}
The rapid growth of the Internet of things (IoT) device population has created extensive demand for IoT security in recent years. Many security-critical IoT applications need stringent security support~\cite{hassija2019survey}. Device authentication is one of the most important categories, which includes rogue device detection and the classification of registered devices~\cite{yang2017survey}. Traditionally, device authentication is achieved by public-key cryptography (PKC). However, public-key algorithms are not optimal for IoT devices because they are computationally costly. Further, PKC generally requires a certification authority when sharing keys. The authority may not always be available, considering the large volume and wide-area deployment of IoT devices~\cite{xu2015device}.
A lightweight and reliable authentication technique is thus required for IoT security. Radio frequency fingerprint identification (RFFI) is a non-cryptographic authentication technique that has attracted much research interest~\cite{riyaz2018deep,zhang2019physical,sankhe2020no,xie2021generalizable}. It exploits the intrinsic features brought by various hardware impairments resulting from imperfect manufacturing processes. The features manifest as slight distortions of the transmitted signals. Like the biometric characteristics used for authentication, the subtle features are unique to each device and hard to duplicate. Therefore, receivers can extract the features from received signals and verify them against pre-shared feature information for device authentication. The process does not involve computationally costly algorithms; hence, it consumes less energy and is suitable for power-constrained IoT devices.
An RFFI classifier is a machine learning model trained on radio frequency fingerprints (RFFs) for multi-class classification. Specifically, deep learning is leveraged as it minimizes the process of locating transient signal segments~\cite{merchant2018deep,das2018deep,peng2020deep,roy2020rfal,he2020cooperative}. It automatically extracts RFFs from received signals, making it the technique that requires the least manual feature selection when training RFFI classifiers. As for the network architecture, the convolutional neural network (CNN), widely used for image recognition tasks, is especially suitable for device fingerprinting~\cite{ding2018specific,sankhe2019oracle,rajendran2020injecting,jian2021radio,qian2021specific,li2022radionet}. Among the candidate features, in-phase and quadrature (IQ) samples~\cite{zhou2021robust}, FFT results~\cite{robyns2017physical,guo2021specific}, and spectrograms~\cite{shen2021radioJEAC} are widely studied. In~\cite{shen2021radioJEAC}, the spectrogram CNN model was shown to achieve the highest classification accuracy. Therefore, we adopt deep learning and spectrogram-based classifiers to benchmark the proposed classifiers.
Due to the nature of wireless communications, RFFs are susceptible to environmental changes. Large-scale fading, multipath fading, and the Doppler effect affect wireless channels and modify received signal characteristics~\cite{shen2022towards,soltani2020rf,yang2022channel}. Traditional RFFs, e.g., spectrograms, extracted from the received signals are distorted and cannot be used for authentication~\cite{restuccia2019deepradioid,al2020exposing,al2021deeplora}. We propose using a power amplifier (PA) nonlinearity quotient to mitigate the wireless channel effects introduced by environmental changes. The PA nonlinearity quotient is generated by dividing, in the frequency domain, two consecutive signals transmitted with different power levels. The division mitigates the wireless channel effects, and RFFI classifiers are trained to exploit the resulting RFFs.
Implementing environment-robust RFFs is difficult when communication environments contain many fast-moving objects, because multipath fading and the Doppler effect then dominate the wireless channels. In particular, fast fading can occur when transmitters have low data rates, as is typical of IoT devices. In~\cite{soltani2020more,al2021deeplora,shen2022towards}, data augmentation is implemented to alleviate the impact of fast fading by training classifiers under channels with simulated multipath fading and the Doppler effect. However, the simulations had no prior knowledge of the real deployment environments and significantly increased the disk and memory storage required for training classifiers.
Transfer learning can be implemented to combine RFFs resulting from different wireless channels~\cite{sharaf2016authentication,wang2020radio,kuzdeba2021transfer}. Hence, distortions caused by multipath fading and the Doppler effect are accounted for in device authentication. The storage required for transfer learning is less than that for data augmentation. Therefore, we implement transfer learning to alleviate the impact of fast fading. Specifically, a base classifier is trained with the original RFFs of the devices under test (DUTs); then, the classifier is retrained with the RFFs collected in real deployment environments.
This paper aims to design and validate an environment-robust RFFI system for IoT device authentication. The approach trains a classifier using the PA nonlinearity quotient. Transfer learning is adopted to alleviate the impact of fast fading and reduce training costs. Extensive experiments, including indoor and outdoor settings, were carried out using LoRa devices. The results show that the proposed PA nonlinearity quotient and transfer learning classifier significantly outperformed conventional deep learning and spectrogram-based classifiers. Our contributions are summarized as follows.
\begin{itemize}
\item We formalized the PA nonlinearity quotient and demonstrated that it is independent of environmental changes. The improvements in rogue device detection and device classification are backed by experimental validation.
\item We carried out data collection in real deployment environments, both indoor and outdoor. Further, we implemented transfer learning using the data to alleviate the impact of fast fading. The approach reduced the disk and memory storage requirements for training. The resulting classifiers have prior knowledge of the real deployment environments, in contrast to the data augmentation approach.
\item We designed an RFFI system that involves the PA nonlinearity quotient and transfer learning. Samples resulting from natural multipath fading and the Doppler effect were implemented to validate the system.
\end{itemize}
\section{Power Amplifier Nonlinearity Quotient}
The PA is an indispensable component in any wireless device, used to amplify low-power signals to high-power ones. It is inherently nonlinear~\cite{zhu2013challenges}. For low-power and narrowband systems, i.e., IoT devices, the PA is regarded as memoryless, meaning the nonlinear output depends only on the input at a particular time. The nonlinearity can be characterized by an amplitude/amplitude (AM/AM) function and an amplitude/phase (AM/PM) function. Several models have been proposed to formulate these functions~\cite{zhu2013challenges}.
Implementing PA nonlinearity for RFFI is widely studied in the literature~\cite{gong2020unsupervised,polak2011identifying,zhang2016specific,hanna2019deep,satija2019specific,li2022radio}. However, the implementation is often limited to static or quasi-static channels. The RFFI performance drops significantly when communication environments change. We propose the PA nonlinearity quotient to design an environment-robust RFFI.
The signal of a narrowband system that reaches a receiver is given as
\begin{align*}
s(t)=h(\tau,t)\ast f\left [ x(t) \right ]+n(t),
\label{eq:signalModel}
\tag{1}
\end{align*}
where $x(t)$ is the baseband signal, $h(\tau,t)$ is the channel impulse response, $f[\cdot]$ denotes the nonlinear effect of hardware impairment at the given transmission power, and $n(t)$ is additive white Gaussian noise (AWGN). ``$\ast$'' denotes the convolution operation.
When generating the PA nonlinearity quotient, two consecutive signals, emitted with high and low transmission power respectively, are received and divided element-wise in the frequency domain. The signal representation in the frequency domain is obtained through the short-time Fourier transform (STFT). The result of the STFT of the received signal is a matrix expressed as
\begin{flalign}
\boldsymbol{S}_p=
\begin{bmatrix}
S^{1,1}_p & S^{1,2}_p & \cdots & S^{1,M}_p \\
S^{2,1}_p & S^{2,2}_p & \cdots & S^{2,M}_p \\
\vdots & \vdots & \ddots & \vdots \\
S^{W,1}_p & S^{W,2}_p & \cdots & S^{W,M}_p
\end{bmatrix},
\label{eq:receivedspectrogram}
\tag{2}
\end{flalign}
where $p\in\{h,l\}$ denotes high power and low power, respectively. The elements of the matrix are given as
\begin{align*}
S_{p}^{w,m}=\sum_{n=0}^{W-1}s_{p}\left [ n \right ]g\left [ n-mR \right ]e^{-j2\pi \frac{w}{W}n}\\
\text{for }w=1,2,...,W \text{ and }m=1,2,...,M,
\label{eq:STFT}
\tag{3}
\end{align*}
where $s_{p}\left [ n \right ]$ is the discrete signal at the receiver, $g\left [ n \right ]$ is the window function with length $W$, and $R$ is the hop size. The experiments implement LoRa; hence $M$ is given by the LoRa configuration as
\begin{align*}
M=\frac{K\cdot \frac{2^{SF}}{B}\cdot f_S-W}{R}+1,
\label{eq:Mvalue}
\tag{4}
\end{align*}
where $K$ is the number of LoRa symbols, $SF$ is the LoRa spreading factor, $B$ is the bandwidth, and $f_S$ is the sampling frequency. The configurations are discussed in Section~\ref{sec:ExperimentalSettings}. With $W=1024$ and $R=512$, $M$ is calculated to be 319.
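For concreteness, with the ten-symbol preamble used in our experiments (so $K=10$), $SF=10$, $B=62.5$~kHz, and $f_S=1$~MS/s, each LoRa symbol spans $\tfrac{2^{10}}{62500}\cdot 10^6=16384$ samples, so that
$$M=\frac{10\cdot 16384-1024}{512}+1=319.$$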
The STFT result of the high-power signal is expressed as~\eqref{eq:highpowerspectrogram}, where $X$ denotes the ideal spectrum of the transmitted signal, $H$ denotes the channel frequency response, and $F(\cdot)$ denotes the nonlinear hardware effect at the transmission power in the frequency domain. Only the preamble of the received signal is used to generate the PA nonlinearity quotient. The ideal spectrum of the low-power preamble is the same as the high-power one, i.e., $X^{w,m}=X^{w,M+m}$. Hence, the STFT result of the consecutive low-power signal is given as~\eqref{eq:lowpowerspectrogram}.
\begin{figure*}[b]
\begin{flalign}
\boldsymbol{S}_h=
\begin{bmatrix}
H^{1,1}F_{h}(X^{1,1}) & H^{1,2}F_{h}(X^{1,2}) & \cdots & H^{1,M}F_{h}(X^{1,M}) \\
H^{2,1}F_{h}(X^{2,1}) & H^{2,2}F_{h}(X^{2,2}) & \cdots & H^{2,M}F_{h}(X^{2,M}) \\
\vdots & \vdots & \ddots & \vdots \\
H^{W,1}F_{h}(X^{W,1}) & H^{W,2}F_{h}(X^{W,2}) & \cdots & H^{W,M}F_{h}(X^{W,M})
\end{bmatrix},
\label{eq:highpowerspectrogram}
\tag{5.1}
\end{flalign}
\end{figure*}
\begin{figure*}[b]
\begin{flalign}
\boldsymbol{S}_l=
\begin{bmatrix}
H^{1,M+1}F_{l}(X^{1,1}) & H^{1,M+2}F_{l}(X^{1,2}) & \cdots & H^{1,2M}F_{l}(X^{1,M}) \\
H^{2,M+1}F_{l}(X^{2,1}) & H^{2,M+2}F_{l}(X^{2,2}) & \cdots & H^{2,2M}F_{l}(X^{2,M}) \\
\vdots & \vdots & \ddots & \vdots \\
H^{W,M+1}F_{l}(X^{W,1}) & H^{W,M+2}F_{l}(X^{W,2}) & \cdots & H^{W,2M}F_{l}(X^{W,M})
\end{bmatrix}.
\label{eq:lowpowerspectrogram}
\tag{5.2}
\end{flalign}
\end{figure*}
After removing the significantly distorted preambles caused by nearby fast-moving objects and implementing transfer learning, we assume that intense multipath fading and the Doppler effect are mitigated, so that slow fading dominates the wireless channels. Therefore, the channel frequency response does not change significantly during one packet duration, i.e., $H^{w,m}\approx H^{w,M+m}$. The result of the element-wise division of the received signals in the frequency domain ($\boldsymbol{Q}$) is given as
\begin{flalign}
\boldsymbol{Q}=\boldsymbol{S}_h ./ \boldsymbol{S}_l=
\begin{bmatrix}
\frac{F_{h}(\boldsymbol{X^{1}})}{F_{l}(\boldsymbol{X^{1}})} & \frac{F_{h}(\boldsymbol{X^{2}})}{F_{l}(\boldsymbol{X^{2}})} & \boldsymbol\cdots & \frac{F_{h}(\boldsymbol{X^{M}})}{F_{l}(\boldsymbol{X^{M}})} \\
\end{bmatrix},
\label{eq:PAnonlinearityquotient}
\tag{6}
\end{flalign}
where ``$./$'' denotes the element-wise division operation and $\boldsymbol{X^m}=[X^{1,m} \quad X^{2,m} \quad \cdots \quad X^{W,m}]^T$. No channel frequency response ($H$) is present in $\boldsymbol{Q}$. The proposed environment-robust RFFI can be developed exploiting the PA nonlinearity quotient, which is $\boldsymbol{Q}$ in dB scale, expressed as
\begin{align*}
\widetilde{\boldsymbol{Q}}=10\log_{10}(|\boldsymbol{Q}|^2).
\label{eq:QdB}
\tag{7}
\end{align*}
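The computation of $\widetilde{\boldsymbol{Q}}$ from two received preambles is direct; below is a minimal Python sketch, assuming \texttt{s\_h} and \texttt{s\_l} are synchronized complex baseband arrays of the high-power and low-power preambles (the variable names and the Hann window are illustrative assumptions, not prescribed by the method):
\begin{verbatim}
import numpy as np
from scipy.signal import stft

W, R = 1024, 512                      # window length and hop size

def spectrogram(s):
    # STFT with a length-W window and hop R (noverlap = W - R)
    _, _, S = stft(s, fs=1e6, nperseg=W, noverlap=W - R)
    return S

# s_h, s_l: synchronized complex baseband preambles (high/low power)
S_h, S_l = spectrogram(s_h), spectrogram(s_l)

Q = S_h / S_l                         # element-wise division, eq. (6)
Q_dB = 10 * np.log10(np.abs(Q) ** 2)  # PA nonlinearity quotient, eq. (7)
\end{verbatim}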
\section{Experiments}
\subsection{Experimental Settings}\label{sec:ExperimentalSettings}
The experiments implemented 25 Arduino Nano-controlled LoRa SX1276 modules with the same circuit design and specifications as DUTs. 20 DUTs were randomly selected as legitimate devices (DUTs ``A" to ``T"), and 5 DUTs were selected as rogue devices (DUTs ``Attacker 1" to ``Attacker 5"). The device configurations are given in Table~\ref{tab:config}. The LoRaWAN protocol supports 125~kHz, 250~kHz, and 500~kHz bandwidths, while LoRa supports bandwidths ranging from 7.8~kHz to 500~kHz. The proposed RFFI system does not focus on specific protocols. Therefore, a bandwidth of 62.5~kHz was used to reduce packet loss and maintain high throughput. A universal software radio peripheral (USRP) platform with a 1~MS/s sampling frequency ($f_S$) was used to collect RF samples. Fig.~\ref{fig:devices} shows the devices used in the experiments.
\begin{table}[t]
\footnotesize
\centering
\caption{DUT Configurations}
\begin{tabular}{|c|c|c|c|c|}
\hline
\begin{tabular}[c]{@{}c@{}}Carrier\\Frequency\end{tabular} & \begin{tabular}[c]{@{}c@{}}Bandwidth\\($B$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Transmission\\Power (h/l)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Spreading\\Factor ($SF$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Coding\\Rate\end{tabular} \bigstrut\\
\hline
915~MHz & 62.5~kHz & 17/10~dBm & 10 & 4/5 \bigstrut\\
\hline
\end{tabular}
\label{tab:config}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.42\textwidth]{devices.pdf}
\caption{Devices used in the experiments. Left: a DUT in operation. Middle: 20 DUTs as legitimate devices and 5 DUTs as rogue devices. Right: a USRP-2922 platform as the receiver.}
\label{fig:devices}
\end{figure}
The data collection was developed in three environments.
\begin{itemize}
\item \textbf{Anechoic chamber:} the collection of channel effect-free RFFs for training the base classifier required by transfer learning was carried out in the anechoic chamber on the top floor of the QUT GP campus S-block building. DUTs were placed 3 meters away from the USRP platform. The anechoic chamber was designed to absorb multipath signals. Therefore, RF samples collected in the environment can generate the PA nonlinearity quotient without the impact of multipath fading and the Doppler effect.
\item \textbf{Indoor:} DUTs were placed in an office room for the indoor setting. The USRP platform was placed in the adjacent room, and DUT signals traveled through a wall. People were freely walking in the office during the data collection. The environment was considered to have moderate multipath fading and a slight Doppler effect.
\item \textbf{Outdoor:} in the outdoor setting, DUTs were placed 104.5 meters away from the USRP platform, as shown in Fig.~\ref{fig:outdoorEn}. Buildings blocked the line of sight, and people freely walked in the environment. The outdoor environment was considered to have more significant multipath fading and the Doppler effect than the indoor environment.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=0.42\textwidth]{outdoorEnvironment.pdf}
\caption{Outdoor experimental environment.}
\label{fig:outdoorEn}
\end{figure}
During data collection, the DUTs transmitted packets in alternating high-power and low-power modes, and the USRP platform passively received the packets. More than 2800 packets were collected for each DUT within one hour in each environment; hence, more than 8400 packets were collected for each DUT in total across the three experimental environments.
\subsection{Data Preprocessing}
The data preprocessing includes synchronization, preamble extraction, normalization, and the PA nonlinearity quotient generation. The packets collected indoors and outdoors are required to go through the distorted preamble removal process before generating the PA nonlinearity quotient.
\begin{enumerate}
\item Synchronization: transmission power does not impact the data rate. Hence, the time-on-air for the DUT packets stays unchanged for the high-power and low-power transmission. The starting points of the packets are marked and used for synchronization to avoid inaccurate preamble extraction.
\item Preamble extraction: preambles are payload-independent and have no software-defined features such as MAC addresses. Therefore, the intrinsic hardware features in the preamble symbols are the desirable source for RFFI. The preamble length is a flexible configuration for LoRa, with a minimum value of ten symbols. To study the worst-case scenario, we configured the preamble to ten symbols and extracted one preamble per packet in the experiments.
\item Normalization: the process normalizes the received signal magnitude to remove the device-specific DC offset by dividing by the root mean square. The PA nonlinearity feature is unaffected.
\item Distortion removal and PA nonlinearity quotient generation: we introduce Algorithm~\ref{algorithm:generation} to remove the severely distorted preambles caused by nearby fast-moving objects. The correlation between the high-power and low-power spectrograms should stay the same, since PA nonlinearity is only affected by the input power~\cite{zhang2021radio}. The distorted preambles can be found by comparing the correlation of the channel-affected spectrograms with the correlation of the anechoic chamber spectrograms. The distortion is considered severe, and the preamble is removed, if the difference exceeds a tolerance ($\theta=0.2$ in our experiments). After the distortion removal, an element-wise division in the frequency domain is developed to generate the PA nonlinearity quotient (a Python sketch of this step follows Algorithm~\ref{algorithm:generation} below). Fig.~\ref{fig:fingerprintIMG} shows the collected preamble spectrograms and the PA nonlinearity quotient generated by a DUT.
\end{enumerate}
\begin{algorithm}[t]
\footnotesize
\DontPrintSemicolon
\KwInput{$\boldsymbol{S}_{h,k},\boldsymbol{S}_{l,k}$ \quad $\%$STFT results of indoor or outdoor preambles ($k=indoor \: or \: outdoor$)}
\KwInput{$\boldsymbol{S}_{h,c},\boldsymbol{S}_{l,c}$ \quad $\%$STFT results of anechoic chamber preambles}
\KwInput{$\theta$ \quad $\%$ Tolerance}
\KwOutput{$\widetilde{\boldsymbol{Q}}$ \quad $\%$ PA nonlinearity quotient}
$\rho_k=corr\{max(\boldsymbol{S}_{h,k}),max(\boldsymbol{S}_{l,k})\}$\\
$\rho_c=corr\{max(\boldsymbol{S}_{h,c}),max(\boldsymbol{S}_{l,c})\}$\\
$\rho_d=\|\rho_c-\rho_k\|$\\
\eIf{$\rho_d\leq\theta$}
{$\left. \boldsymbol{Q}=\boldsymbol{S}_{h,k} .\middle/ \boldsymbol{S}_{l,k} \right.$ \quad $\%$ Element-wise division \\
$\widetilde{\boldsymbol{Q}}=10\log_{10}(|\boldsymbol{Q}|^2)$}
{$remove \: \boldsymbol{S}_{h,k},\boldsymbol{S}_{l,k}$}
\caption{Distortion Removal and PA Nonlinearity Quotient Generation.}
\label{algorithm:generation}
\end{algorithm}
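One possible reading of Algorithm~\ref{algorithm:generation} in Python is sketched below, where $max(\cdot)$ is interpreted as the per-frame maximum of the spectrogram magnitude; the function names and this interpretation are our illustrative assumptions:
\begin{verbatim}
import numpy as np

def corr_of_envelopes(S_h, S_l):
    # correlate the per-frame maxima of the two spectrogram magnitudes
    eh = np.abs(S_h).max(axis=0)
    el = np.abs(S_l).max(axis=0)
    return np.corrcoef(eh, el)[0, 1]

def quotient_or_none(S_h_k, S_l_k, S_h_c, S_l_c, theta=0.2):
    # compare the channel-affected correlation with the anechoic
    # chamber reference; discard severely distorted preambles
    rho_d = abs(corr_of_envelopes(S_h_c, S_l_c)
                - corr_of_envelopes(S_h_k, S_l_k))
    if rho_d > theta:
        return None                      # remove S_h_k, S_l_k
    Q = S_h_k / S_l_k                    # element-wise division
    return 10 * np.log10(np.abs(Q) ** 2)
\end{verbatim}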
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{fingerprintIMG.pdf}
\caption{Spectrogram and PA nonlinearity quotient of a DUT in the experiments. Top: high-power preamble spectrogram. Middle: low-power preamble spectrogram. Bottom: PA nonlinearity quotient, $\widetilde{\boldsymbol{Q}}$ in~\eqref{eq:QdB}.}
\label{fig:fingerprintIMG}
\end{figure}
\subsection{Analytical Metrics}
Device authentication exploiting RFFI involves two essential parts: device classification and rogue device detection. The classification accuracy and receiver operating characteristic (ROC) curve are implemented to evaluate the device classification and rogue device detection performance, respectively.
\subsubsection{Classification Accuracy}
The classification accuracy is defined as the number of correctly classified RFFs divided by the total number of tested RFFs. The results are obtained from the confusion matrix after developing classification tests.
\subsubsection{ROC Curve}
The rogue device detection was studied as binary classification in the experiments. The output values of the softmax function are compared to a threshold. The RFFs associated with the output values smaller than the threshold will be considered unauthorized. Since the threshold is configurable, it is hard to use a detection rate to analyze classifiers' performance. We adopted the ROC curve in the binary classifier study to overcome this. For each class of a classifier, ROC analysis applies threshold values in [0,1] to calculate the true-positive rate (TPR) and the false-positive rate (FPR) for the outputs generated by each threshold. The area under the ROC curve (AUC) is the integral of a ROC curve with respect to FPR. The value of AUC is in the range of 0 to 1. A larger AUC indicates better classifier performance. In our experiments, a larger AUC indicates that the classifier is more capable of detecting rogue devices. A micro-averaging method is applied to generate the averaged AUC and ROC curves to analyze the rogue device detection for all classes.
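The micro-averaged ROC curve can be computed by pooling all per-class binary decisions. A minimal sketch with scikit-learn, assuming \texttt{y\_true} is a one-hot label matrix and \texttt{y\_score} the corresponding softmax outputs (the variable names are illustrative):
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_curve, auc

# y_true: (n_samples, n_classes) one-hot labels
# y_score: (n_samples, n_classes) softmax outputs
fpr, tpr, _ = roc_curve(y_true.ravel(), y_score.ravel())
micro_auc = auc(fpr, tpr)
\end{verbatim}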
\section{Classifier Architecture}
The architecture of the PA nonlinearity quotient and transfer learning classifier is summarized in Table~\ref{table:ClassifierArchitecture}. It consists of three convolution layers with 8, 16, and 32 $3\times3$ filters, respectively. A batch normalization layer and the rectified linear unit (ReLU) activation follow each convolution layer. After the first two activations, a $2\times2$ max pooling layer with stride 2 is implemented. The output of the last ReLU activation is fed to a fully connected layer. An output layer with a softmax function is implemented last to produce vectors of output probabilities. The PA nonlinearity quotient is resized to $256\times256$ with 8-bit depth before entering the input layer. The Adam optimizer is implemented to reduce the losses. The mini-batch size is 32. The initial learning rate is 0.005 and remains unchanged.
Transfer learning retrains a pre-trained classifier on new datasets. In our experiments, the convolution layers of the pre-trained classifier recognize generic RFF patterns. We replaced the fully connected and output layers with new layers. For fine-tuning the transferred classifier, the learning rate was configured to 0.0001, and the learning rate factor of the new layers was configured to 20. A code sketch of the architecture and this fine-tuning setup follows Table~\ref{table:ClassifierArchitecture}.
\begin{table}[t]
\footnotesize
\centering
\caption{Layers, Parameters, and Activation of the Proposed Classifier}
\label{table:ClassifierArchitecture}
\begin{tabular}{|c|c|c|c|}
\hline
Layer & Dimension & Parameters & Activation\\\hline
Input & $256\times256$ & --- & ---\\ \hline
Convolution, BN & $8\times(3\times3)$ & 80, 16 & ReLU\\ \hline
MaxPooling & $2\times2$ & --- & ---\\ \hline
Convolution, BN & $16\times(3\times3)$ & 1168, 32 & ReLU\\ \hline
MaxPooling & $2\times2$ & --- & ---\\ \hline
Convolution, BN & $32\times(3\times3)$ & 4640, 64 & ReLU\\ \hline
FullyConnected & 20 & 2304020 & SoftMax\\ \hline
\end{tabular}
\end{table}
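As a concrete illustration, Table~\ref{table:ClassifierArchitecture} translates into the following PyTorch sketch (our experiments do not prescribe a framework, so this is an assumed re-implementation; unpadded $3\times3$ convolutions reproduce the parameter counts in the table, with a flattened feature map of size $32\times60\times60$ feeding the fully connected layer):
\begin{verbatim}
import torch
import torch.nn as nn

class QuotientCNN(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3), nn.BatchNorm2d(8), nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(8, 16, 3), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(16, 32, 3), nn.BatchNorm2d(32), nn.ReLU(),
        )
        # 256 -> 254 -> 127 -> 125 -> 62 -> 60 per spatial dimension
        self.classifier = nn.Linear(32 * 60 * 60, num_classes)

    def forward(self, x):             # x: (N, 1, 256, 256)
        return self.classifier(self.features(x).flatten(1))

model = QuotientCNN()

# Fine-tuning for transfer learning: a fresh output layer, with the
# new layer trained 20x faster than the transferred 0.0001 base rate.
model.classifier = nn.Linear(32 * 60 * 60, 20)
optimizer = torch.optim.Adam([
    {"params": model.features.parameters(),   "lr": 1e-4},
    {"params": model.classifier.parameters(), "lr": 20e-4},
])
\end{verbatim}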
\section{Results and Discussion}
\subsection{Device Classification}
\begin{figure}[t]
\centering
\includegraphics[width=0.44\textwidth]{indoorClassification.pdf}
\caption{Device classification results of the indoor experiments.}
\label{fig:indoorClassification}
\bigbreak
\includegraphics[width=0.44\textwidth]{outdoorClassification.pdf}
\caption{Device classification results of the outdoor experiments.}
\label{fig:outdoorClassification}
\end{figure}
The base classifier was first trained using the complete legitimate-device (DUTs ``A" to ``T") datasets from the anechoic chamber. Smaller training sets, including 50, 100, 150, and 200 packets, were randomly selected for each legitimate device from the indoor and outdoor datasets to implement the transfer learning. Conventional deep learning and spectrogram-based classifiers were trained for comparison. The same test sets, including more than 1000 packets per DUT, were used to validate both the proposed PA nonlinearity quotient and transfer learning classifier and the deep learning and spectrogram-based classifier. No training set packets were used in the test sets.
Fig.~\ref{fig:indoorClassification} shows the device classification results of indoor experiments. The proposed PA nonlinearity quotient and transfer learning classifier outperformed the conventional deep learning and spectrogram-based classifier with an improvement of $33.3\%$ average classification accuracy. More training packets lead to higher classification accuracy. The highest accuracy is $99.4\%$, with 200 packets retraining the base classifier. The PA nonlinearity quotient improved the average classification accuracy by $19.4\%$ compared to the spectrogram-based classifier.
Fig.~\ref{fig:outdoorClassification} shows the classification results of the outdoor experiments. The proposed PA nonlinearity quotient and transfer learning classifier outperformed the conventional deep learning and spectrogram-based classifier, improving average classification accuracy by $34.5\%$. The PA nonlinearity quotient improved the average classification accuracy by $10.9\%$ compared to the spectrogram-based classifier.
Table~\ref{table:DeviceClassificationComparison} compares the device classification performance of the proposed classifier with recent notable works in the literature. The PA nonlinearity quotient and transfer learning classifier achieved high device classification accuracy while requiring fewer training samples and reducing disk and memory storage requirements.
\begin{table}[t]
\footnotesize
\centering
\caption{Device Classification Comparison With Notable Works}
\label{table:DeviceClassificationComparison}
\begin{tabular}{|c|c|c|c|c|}
\hline
Work & \begin{tabular}[c]{@{}c@{}}Experimental\\Environment\end{tabular} & \begin{tabular}[c]{@{}c@{}}No. of\\Devices\end{tabular} &\begin{tabular}[c]{@{}c@{}}Training Samples\\(Per Device)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Classification\\Accuracy\end{tabular} \\\hline
Ours & \begin{tabular}[c]{@{}c@{}}Indoor\\Outdoor\end{tabular} & 20 & 200 & \begin{tabular}[c]{@{}c@{}}99.4\%\\98.2\%\end{tabular} \\\hline
\cite{shen2022towards} & Indoor & 30 & 100 & 98.4\% \\\hline
\cite{yu2019robust} & Indoor & 54 & 698 & 84.6\% \\\hline
\cite{xing2022design} & Indoor & 7 & 800 & 99.0\% \\\hline
\end{tabular}
\end{table}
\subsection{Rogue Device Detection}
To study rogue device detection with the proposed classifier, the base classifier was retrained with training sets of 100 randomly selected packets per DUT. Deep learning and spectrogram-based classifiers were trained for comparison. The test sets included more than 1000 packets per legitimate DUT and more than 1000 packets for each rogue device (DUT: ``Attacker~1'' to ``Attacker~5''). No training-set packets were included in the test sets.
Fig.~\ref{fig:indoorROC} shows the ROC curves for the indoor experiments. The proposed PA nonlinearity quotient and transfer learning classifier outperformed the deep learning and spectrogram-based classifier, with an AUC value of 0.992 compared to 0.939. Fig.~\ref{fig:outdoorROC} shows the outdoor experiment results. Similar to the indoor experiments, the proposed classifier improved the AUC significantly. The PA nonlinearity quotient was more robust to environmental changes than the spectrogram, with larger AUC values in the indoor and outdoor experiments.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{indoorROC.pdf}
\caption{ROC curves of rogue device detection in the indoor experiments.}
\label{fig:indoorROC}
\bigbreak
\includegraphics[width=0.4\textwidth]{outdoorROC.pdf}
\caption{ROC curves of rogue device detection in the outdoor experiments.}
\label{fig:outdoorROC}
\end{figure}
\section{Conclusion}
In this paper, we investigated techniques for making RFFI resilient to environmental changes. We proposed the PA nonlinearity quotient and transfer learning classifier, which mitigates channel effects to enhance RFFI for device classification and rogue device detection. Extensive experiments in both indoor and outdoor settings were conducted to evaluate the proposed classifier. The experimental results demonstrated that the proposed classifier significantly improves classification accuracy and rogue device detection for RFFI. The PA nonlinearity quotient outperformed the spectrogram in enhancing RFFI in both indoor and outdoor settings.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13646",
"language": "en",
"timestamp": "2023-02-28T02:26:23",
"url": "https://arxiv.org/abs/2302.13646",
"yymm": "2302"
} | \section{Introduction}
Diversification of portfolios in the presence of fat-tailed returns is an ongoing problem in finance, with many open questions still unresolved.
At the crux of the problem is the fact that our go-to measure of (in)dependence of returns, correlation, only actually measures independence if the underlying returns are Gaussian, and coupled via a Gaussian copula. If the returns distributions are not Gaussian, or if they are coupled through a non-Gaussian copula, then zero correlation does not imply statistical independence -- a fact which even has a Wikipedia page dedicated to it \cite{wiki}.
In the standard Gaussian framework, this is usually handled using some combination of local volatility, stochastic volatility, local correlation and stochastic correlation. But this approach often boils down to diversification through decorrelation, which delivers, at best, mixed results.
An additional difficulty lies in the fact that extreme returns, while being more frequent than the Gaussian distribution would imply, are still comparatively rare; and financial time series are non-stationary, so any relevance of historical observations decays reasonably quickly with time.
A number of approaches exist to handle these issues, reviewed at a high level in \cite{bouchaud}.
We take a somewhat different approach, directly based on the notion of statistical independence, specifically through mutual information and independent component analysis. This builds on earlier work that looked at similar approaches in the context of portfolio optimization and hedging \cite{FTF, PLP, LDP}, but not directly in the context of diversification.
The paper is organised as follows: Section 2 reviews the use of moments to isolate the tails of returns distributions; Section 3 sets out the mutual information approach and the ICA methods; Section 4 looks at tail-focused ICA using the moment contrast function; Section 5 integrates the moment contrast approach with the original entropy formulation; Section 6 describes an experiment on Russell 3000 stocks; and Section 7 summarizes the conclusions.
\section{Moments and Tails}
Useful information on the tails of a distribution can be gleaned from its moments of high order \cite{PLP}; for a finite sample $x_{1},x_{2},...,x_{m}$ of a centered random variable $x$, its moment of order $p>0$ is
\begin{equation}
M_{p}(x) = \frac{1}{m} \sum_{i=1}^{m} x_{i}^{p} = \frac{1}{m} x_{\infty}^{p}\sum_{i=1}^{m} \left( \frac{ x_{i}}{x_{\infty}} \right)^{p}
\label{moment}
\end{equation}
where
$$x_{\infty} = \max_{i} |x_{i}|.$$
\begin{figure}[!htb]
\includegraphics[width=0.48\textwidth]{2k.png}
\includegraphics[width=0.48\textwidth]{2k-1.png}
\caption{Moments of even and odd orders. }
\label{fig:moment}
\end{figure}
Since each ratio $x_{i}/x_{\infty}$ is within $-1 \le x_{i}/x_{\infty}\le 1$, in the limit of $p \nearrow \infty$, each $(x_{i}/x_{\infty})^{p}$ tends to either $\pm 1$, or zero. In other words, moments effectively act as high pass amplitude filters, in the sense that elements from the centre of the distribution are comparatively suppressed, and only elements from the tail of the distribution contribute to the moment. The effective width of the band of contributing elements decays to zero as $p \nearrow \infty$, and, in that limit, only the largest element by absolute value contributes to the moment.
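The filtering effect is easy to verify numerically. The following sketch (illustrative only, with a Student-$t$ sample standing in for returns) shows $M_{p}^{1/p}$ approaching $\max_{i}|x_{i}|$ as $p$ grows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=10_000)     # fat-tailed synthetic sample
x -= x.mean()                             # centre the sample
for p in (2, 4, 10, 40):                  # even orders, cf. equation (1)
    m_p = np.mean(x ** p) ** (1.0 / p)
    print(f"p={p:>3}  M_p^(1/p)={m_p:8.3f}  max|x|={np.abs(x).max():.3f}")
\end{verbatim}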
For moments of even order, $p=2k$, this reduces to the familiar result from calculus
$$\lim_{k\rightarrow \infty} M_{2k}^{1/2k} = x_{\infty},$$
and, in terms of
$$x_{max} = \max_{i} \ x_{i},\ \ x_{min} = \min_{i}\ x_{i},$$
we can more generally write
$$
\lim_{k\rightarrow \infty} M_{2k}^{1/2k} = \max( x_{max}, - x_{min}), \
$$
\begin{equation}
\lim_{k\rightarrow \infty} M_{2k-1}^{1/(2k-1)} = \left\{ \begin{array}{lcl}
x_{max} & | & x_{max} > -x_{min}\\
x_{min} & | & x_{max} < -x_{min}.
\end{array} \right.
\label{momentex}
\end{equation}
Note that there are multiple possibilities for $M_{2k-1}^{1/(2k-1)}$ when $x_{min}+x_{max}=0$, depending on how many points reach either of $x_{min}$ or $x_{max}$ -- the result could be $x_{min}$, $x_{max}$ or $0$. When the underlying distribution is continuous, however, the probability of repeated values is zero, and hence we can disregard this case.
The advantage of using moments to isolate tails of the distribution is that moments are smooth functions of sample elements, while $\max$ and $\min$ are not; hence, by using moments, we open the door to a number of numerical methods that would be unavailable when using $\max / \min$, ranging from simple Newton's iteration, to more complex methods.
\section{Entropy, Independence and Mutual Information}
By definition, a random vector ${\bf y}=(y_{i})$ with a probability density function $\rho({\bf y})$ is independent if the density is the product of its marginals,
\begin{equation}
\rho({\bf y}) = \prod_{i} \rho_{i}(y_{i}), \label{independence}
\end{equation}
hence any measure of functional distance between $\rho({\bf y})$ and $\prod_{i} \rho_{i}(y_{i})$ is a measure of mutual dependence between $y_{i}$.
Significant advances for continuous random variables have been achieved by using the concept of differential entropy, defined as
\begin{equation}
H({\bf y}) = - \int \rho({\bf y}) \ln \rho({\bf y}) d{\bf y},
\label{entropy}
\end{equation}
and the {\it Mutual Information} of ${\bf y}$, defined as the Kullback--Leibler divergence between $\rho$ and $\prod \rho_{i}$,
$$I({\bf y}) = \int \rho({\bf y}) \ln \frac{ \rho({\bf y})}{\prod_{i}\rho_{i}(y_{i})} d{\bf y} = \int \rho({\bf y}) \ln \rho({\bf y})d{\bf y} - \sum_{i=1}^{n} \int \rho({\bf y}) \ln \rho_{i}(y_{i})d{\bf y} =$$
\begin{equation}
=\sum_{i=1}^{n} H_{i}(y_{i}) - H({\bf y}), \label{mi}
\end{equation}
where
$$H_{i}(y_{i}) = - \int \rho({\bf y}) \ln \rho_{i}(y_{i})d{\bf y} = - \int \rho_{i}( y_{i}) \ln \rho_{i}(y_{i})dy_{i}$$
is the marginal entropy of $y_{i}$. This has the straightforward interpretation that the mutual information is the information contributed by each of the $y_{i}$ individually, minus the information contributed by ${\bf y}$ jointly, i.e. the redundancy among the components.
This has led to the development of a range of methods for decomposing random vectors into maximally mutually independent components by minimizing their mutual information, collectively referred to as Independent Component Analysis (ICA) \cite{ica}. Briefly, the key insight is that any orthogonal decomposition is just a change of coordinates, which does not affect the joint entropy term. Therefore, one only has to pick the orthogonal directions which minimize the marginal entropies. A further piece of insight is that, for given variance, entropy is maximized by the Gaussian distribution. Hence, directions of minimal marginal entropy are also the directions of maximal non-Gaussianity, using that particular measure of non-Gaussianity.
We can then approximate the marginal entropies in the form \cite{mutual}
$$H_{i}(y_{i}) \approx E (G(y_{i}))$$
for some smooth, non-quadratic scalar contrast function $G$, typically chosen to isolate a relevant measure of non-Gaussianity, and the maximal mutual independence reduces to solving $n$ parallel constrained maximization problems with coupled constraints,
\begin{equation}
E (G({\bf w}_{i}^{\tau} {\bf y})) \rightarrow \max, \ \ E({\bf W}^{\tau}{\bf y})^{2} = {\bf 1}.
\label{opt}
\end{equation}
where ${\bf W}$ is the unknown $n \times n$ weights matrix with columns ${\bf w}_{i}$, and ${\bf 1}$ is the $n \times n$ unit matrix. The Karush-Kuhn-Tucker conditions can be reduced to, in vector form,
\begin{equation}
E\left( {\bf W}^{\tau} {\bf y} g( {\bf W}^{\tau}{\bf y}) - diag( {\bf w}_{i}^{\tau}{\bf y} g( {\bf w}_{i}^{\tau}{\bf y}) ) \right) = {\bf0}, \ {\bf W}^{\tau} {\bf W} = {\bf 1}
\label{kkt}
\end{equation}
where $g = G'$, and it is applied component-wise to the vector ${\bf W}^{\tau}{\bf y} = ({\bf w}_{i}^{\tau}{\bf y})_{i}$ \cite{mutual}. This can then be solved numerically by using a fast semi-Newton iteration \cite{fastica}.
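To make the procedure concrete, the following sketch implements a symmetric fixed-point iteration of this type in NumPy; it assumes whitened (unit-variance, decorrelated) inputs, and plugs in the moment contrast $g(u)=u^{2k-1}$ introduced in Section 4 below:
\begin{verbatim}
import numpy as np

def ica_fixed_point(Y, g, dg, n_iter=200, seed=0):
    # Y: (n, m) array of n whitened series of length m.
    n, m = Y.shape
    W = np.linalg.qr(np.random.default_rng(seed).normal(size=(n, n)))[0]
    for _ in range(n_iter):
        U = W.T @ Y                          # candidate components w_i' y
        # fixed-point step: E[y g(w'y)] - E[g'(w'y)] w, per component
        W_new = (Y @ g(U).T) / m - W * dg(U).mean(axis=1)
        u, _, vt = np.linalg.svd(W_new, full_matrices=False)
        W = u @ vt                           # symmetric re-orthonormalization
    return W

k = 2                                        # moment contrast, g(u) = u^(2k-1)
Y = np.random.default_rng(1).normal(size=(5, 10_000))   # placeholder data
W = ica_fixed_point(Y, g=lambda u: u ** (2*k - 1),
                    dg=lambda u: (2*k - 1) * u ** (2*k - 2))
\end{verbatim}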
We note that the system (\ref{kkt}) is overdetermined, since it imposes more constraints than there are tunable parameters. The number of recoverable independent sources is therefore generally less than the size of the underlying universe available for portfolio construction.
\section{Isolating the Tails -- Moments as Contrast Functions}
In the context of tail diversification, the task is to pick a contrast function for equation (\ref{opt}) in such a way that the resulting components retain their diversification in the tails of the distribution. In view of the tail-focusing properties of moments as discussed in Section 2, we use
\begin{equation}
G(u) = \frac{1}{2k} u^{2k},\ g(u) = u^{2k-1}.
\label{contrast}
\end{equation}
Then, equation (\ref{kkt}) becomes
\begin{equation}
\begin{array}{llr}
E( {\bf w}_{i}^{\tau} {\bf y} ( {\bf w}_{j}^{\tau}{\bf y})^{2k-1}) = { 0}, & {\bf w}_{i}^{\tau} {\bf w}_{j} = 0 & | \ i\neq j\\
{\bf w}_{i}^{\tau} {\bf w}_{i} = 1.& &
\end{array}
\end{equation}
We call the matrix
\begin{equation}
{\bf T}^{(k)} = \left( E( {\bf w}_{i}^{\tau} {\bf y} ( {\bf w}_{j}^{\tau}{\bf y})^{2k-1}) \right)_{ij}
\end{equation}
the {\it tail covariance matrix} of order $k$ for the vector ${\bf W}^{\tau} {\bf y}$. It is a straightforward generalization of the standard covariance matrix, which we recover by setting $k=1$. The tail covariance matrix is, however, generally not symmetric for $k>1$.
The interpretation of tail covariances is straightforward in terms of Figure \ref{fig:moment}; order $k$ tail covariance of $x$ and $y$ is the covariance of $x$ with the tails of $y$, filtered using the moment of order $2k-1$. This also provides an intuitive explanation for why tail covariances are not symmetric; covariance of $x$ with the tails of $y$ is generally not the same as the covariance of $y$ with the tails of $x$.
In the limit of $k \nearrow \infty$, the filter retains only the element with the highest absolute value, and the tail covariance of $x$ and $y$ is simply the overlap of $x$ with the extreme tail event of $y$.
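In sample terms, the tail covariance matrix is cheap to compute. A sketch, with the components $U={\bf W}^{\tau}{\bf y}$ stored as rows as in the iteration above:
\begin{verbatim}
def tail_cov(U, k):
    # Sample tail covariance of order k: T_ij = E[u_i * u_j^(2k-1)].
    # Reduces to the ordinary covariance matrix for k = 1 and is
    # generally not symmetric for k > 1.
    return (U @ (U ** (2*k - 1)).T) / U.shape[1]
\end{verbatim}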
Tail covariances were observed in the finance context in \cite{LDP}, in connection with hedging tail risk in liability-driven portfolios. Geometrically, they can be interpreted as projections in ${\mathcal L}_{p}$ norm, i.e. the nearest point on a hyperplane of an $n$-dimensional space to the origin, in ${\mathcal L}_{p}$-norm \cite{lp}.
We can now safely interpret in what sense the independence is optimized by using (\ref{opt}) with the moment contrast function (\ref{contrast}); they generate components that have linear covariances equal to zero, and order-$k$ tail covariances equal to zero.
This still leaves a number of other covariances of the same order, namely covariances of the form $E(x^{j}y^{2k-j})$ for general values of $j$.
In general, independence of centered random variables $x$ and $y$ only implies that their moments are separable;
$$E\left( x^{j} y^{2k-j} \right)= E\left( x^{j} \right) E\left( y^{2k-j} \right).$$
These only automatically equal zero when either $j=1$ or $2k-j=1$ due to the centeredness of the distributions, $E(x)=E(y)=0$. It is therefore somewhat welcome that general mixed moments do not appear in our derivation of independence; since they do not vanish for independent random variables, it is less straightforward to use them as a measure of independence. Coming back to the earlier point about the system already being overdetermined just by using tail covariances, this is not a bad thing - any additional restrictions on covariances of arbitrary order would only make it even more overdetermined.
\section{What about Entropy?}
The choice of contrast function $G(u) = u^{2k}/2k$ was in many ways external to the task of maximizing the independence of the underlying variables; if we start from the assumption that there is no function $G$ that would lead to an acceptable approximation to the marginal entropy, we might as well use one that serves our specific purpose for other reasons.
We can, however, try to estimate marginal entropy directly from the data. We again start from a finite sample $x_{1},x_{2},...,x_{m}$ as in Section 2, and sort it in increasing order, so that $x_{i_{1}}\le x_{i_{2}}\le...\le x_{i_{m}}$. If the underlying distribution is continuous, the probability of repeated terms in a finite sample is zero, and the inequalities are all strict with probability 1. In statistical terms, $x_{i_{1}},...,x_{i_{m}}$ are order statistics for the sample $x_{1},x_{2},...,x_{m}$.
One approach would be to use a Vasicek-like entropy estimation \cite{vasicek}. Vasicek's formulation rests on the unknown cumulative distribution function $F(x)$ such that $F'(x) = \rho(x)$; the integration switches from integration over the domain of $x$ to integration over the probability $q$, and the entropy is expressed as
\begin{equation}
H(x) = \int_{0}^{1} \ln \frac{d}{dq}F^{-1}(q) dq.
\label{vasicek}
\end{equation}
Using the estimate
$$F\left(x_{i_{j}} \right)=\frac{j}{m+1},\ \ F^{-1}\left( \frac{j}{m+1}\right) = x_{i_{j}}$$
for $j=1,\dots,m$, we get the order-$n$ Vasicek estimate of entropy by taking the $n$-th order finite difference of $x_{i_{j}}$,
$$
H(x) \approx \frac{1}{m+1} \sum_{j=1}^{m} \ln \left[ \frac{m+1}{2n} \left( x_{i_{j+n}} - x_{i_{j-n}} \right) \right]
$$
with boundary conditions $x_{i_{j}}=x_{i_{1}}$ for $j<1$, $x_{i_{j}}=x_{i_{m}}$ for $j>m$. Asymptotically for large $x_{\max}$ and $x_{\min}$, this gives
\begin{equation}
H(x) \approx \ln (-x_{\min}) + \ln x_{\max} + \ln \frac{m+1}{2n} + O \left( 1 \right). \label{vasicektail}
\end{equation}
From equation (\ref{moment}), we have
\begin{equation}
\ln M_{2k} = \ln \frac{1}{m} + 2k\ \ln\ x_{\infty} + \ln \sum_{i=1}^{m} \left( \frac{x_{i}}{x_{\infty}} \right)^{2k}.
\label{logmoment}
\end{equation}
The sum in (\ref{logmoment}) tends to $1$ with probability $1$ for large $k$, hence (\ref{vasicektail}) and (\ref{logmoment}) combine to
\begin{equation}
H(x) \approx \frac{1}{2k} \ln\ M_{2k} +\frac{1}{2k} \ln\ m + \ln \frac{m+1}{2n} + O \left( 1 \right).
\label{logentropy1}
\end{equation}
The problem with the Vasicek entropy is that it effectively loads $x_{\min}$ and $x_{\max}$ $n$ times, getting $\delta$ mass and infinite slope of $F$ at both points, which is not consistent with a continuous density function $\rho$. Correcting the boundary conditions in the Vasicek formulation leads to the Ebrahimi estimator \cite{ebrahimi}, which eliminates the $n$-fold repetition of $x_{\max}$ and $x_{\min}$. Given the log dependence of the entropy estimate on the tails, that only results in the addition of a term $(\ln\ n) / (m+1) $, and it does not affect the asymptotic estimate at leading order.
In either case, on finite samples from continuous distributions, marginal entropy is, to leading order, logarithmic in the expectation of our moment contrast function $G(u) = u^{2k}/2k$.
The key here is that we are only dealing with finite samples of continuous distributions, so, with probability 1, no point occurs more than once in the sample. In a continuous limit or with discrete distributions, this is clearly not the case, and the approximation (\ref{logentropy1}) does not hold.
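The relationship is easy to probe numerically. The sketch below compares the order-$n$ Vasicek estimate with the log-moment proxy $\frac{1}{2k} \ln M_{2k}$ on a synthetic fat-tailed sample; it assumes a continuous distribution, so that ties (which would produce zero spacings) occur with probability zero:
\begin{verbatim}
import numpy as np

def vasicek_entropy(x, n=1):
    # Order-n Vasicek estimate with the stated boundary conditions.
    xs = np.sort(np.asarray(x))
    m = xs.size
    j = np.arange(m)
    spacing = xs[np.minimum(j + n, m - 1)] - xs[np.maximum(j - n, 0)]
    return np.sum(np.log((m + 1) / (2 * n) * spacing)) / (m + 1)

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=5_000)
x -= x.mean()
k = 5
print(vasicek_entropy(x, n=3), np.log(np.mean(x ** (2*k))) / (2*k))
\end{verbatim}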
\section{Diversifying Russell 3000 stocks}
For a practical example, we look at Russell 3000 stocks over a period of eight years, from October 2014 until September 2022. We used percentage daily returns, adjusted for dividends and splits, and filled zero daily returns on any stock-specific non-trading days.
The time series was divided into two non-overlapping buckets of equal length, covering October 2014 to September 2018, and October 2018 to September 2022.
\begin{figure}[!htb]
\includegraphics[width=0.8\textwidth]{output.png}
\caption{Russell 3000 stocks, $M_{10}^{1/10}$ of the daily returns on the $x$-axis, differential entropy on the $y$-axis. Each circle represents one symbol; blue circles for the 2014-2018 bucket, red circles for the 2018-2022 bucket. Entropy is on a linear scale, moment on a log scale.}
\label{fig:entropy}
\end{figure}
We first plotted the relationship between the moments and differential entropy. To avoid biasing the entropy estimators, we used Correa's method for the entropy calculation. The results are shown in Figure \ref{fig:entropy}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.95\textwidth]{histogramk2.png}
\centering
\includegraphics[width=0.95\textwidth]{histogramk10.png}
\caption{Russell 3000 stocks, equally weighted portfolios, histograms of in- and out-of-sample daily returns for $k=2$ and $k=10$.}
\label{fig:result}
\end{figure}
As seen from Figure \ref{fig:entropy}, the logarithmic scaling of entropy with the moments is confirmed by the data. The correspondence is not one-to-one, since there are significant contributions to entropy other than the moments; the association between entropy and log moment is nonetheless strongly positive, which further justifies the use of the moment contrast function as a stand-in for entropy.
We then proceeded to calibrate 500 independent components on the 2014-2018 bucket, and test their diversification properties on the 2018-2022 bucket.
As we can see from Figure \ref{fig:result}, the out-of-sample tails are fairly consistent and controlled with increasing $k$. The curvature of the centre of the distribution, which can be taken to represent the Gaussian component of the underlying process, barely changes as we change the value of $k$. The tails, however, are clearly compressed towards the centre as a consequence of increasing $k$.
The cost of such compression of the tails with increasing penalty order $k$ is a larger number of moderately large returns: the underlying risk does not go away, but it is distributed closer to the centre of the distribution.
It is of note that the in-sample period 2014-2018 was fairly quiet, hence the relative rarity of tail events is understandable; the out-of-sample period 2018-2022 comprised both the Covid-19 induced crash in 2020, and the geopolitical events of 2021-2022, hence the relative increased frequency of tail events. Tail ICA nonetheless keeps a good handle on the tails in the volatile 2018-2022 out-of-sample bucket, and it compresses the tails progressively with increasing $k$.
\section{Conclusions}
In this paper we presented a new approach to constructing diversified components that retain their diversification in the tails of the distribution.
Instead of contorting realistic distributions to fit into the mould of multivariate Gaussian distributions, we approach the problem from the angle of statistical independence, viewing the tail diversification problem as minimization of statistical dependence between diversified components.
Our approach identifies {\it tail covariances}, mixed moments of the form $E(x y^{2k-1})$ and $E(x^{2k-1}y)$, as the key measures of tail independence of random variables $x$ and $y$, together with orthogonality. Tail covariances naturally come out of the mutual information minimization formulation when using a moment contrast function to isolate the tails; they can also be linked to the original entropy formulation of the mutual information minimization problem.
In the example of Russell 3000 stocks, we have shown that, calibrating independent components on a relatively quiet period of 2014-2018, the resulting components were nonetheless able to retain diversification through the turbulent trading of 2020 and 2022.
|
{
"arxiv_id": "2302.13679",
"language": "en",
"timestamp": "2023-02-28T02:27:17",
"url": "https://arxiv.org/abs/2302.13679",
"yymm": "2302"
} | \section{Introduction}
Given a commutative ring $R,$ let ${\bf P}(R)$ denote the category of finitely generated projective $R$-modules. ${\bf Nil}(R)$ is the category consisting of all pairs $(P, \nu),$ where $P$ is a finitely generated projective $R$-module and $\nu$ is a nilpotent endomorphism of $P.$ Let ${\rm Nil}_{0}(R)$ denote the kernel of the forgetful map $K_{0}({\bf Nil}(R)) \to K_{0}({\bf P}(R)):= K_{0}(R).$ The group ${\rm Nil}_{0}(R)$ is generated by elements of the form $[(R^{n}, \nu)]- [(R^{n}, 0)]$ for some $n$ and some nilpotent endomorphism $\nu.$ Using these generators, we find a condition on $R$ under which ${\rm Nil}_{0}(R)=0.$ One of the goals of this article is to determine the generators of higher Nil $K$-groups.
Given an exact category $\mathcal{N},$ let $i$ denote the subcategory of $\mathcal{N}$ whose arrows are isomorphisms. Consider $i$ as a category of weak equivalences. Then the $K$-theory spectrum of $\mathcal{N}$ is defined as $K\mathcal{N}:= Ki\mathcal{N},$ where $i\mathcal{N}$ is regarded as an exact category with weak equivalences $i \subseteq \mathcal{N}.$ The $n$-th $K$-group is defined as $K_{n}\mathcal{N}:= \pi_{n}(Ki\mathcal{N})= K_{n}i\mathcal{N},$ where $n\geq 0$ (see Appendix A of \cite{Gray}). Let ${\rm Nil}(R)$ denote the fibre of the forgetful functor $K{\bf Nil}(R) \to K{\bf P}(R):= K(R).$ The $n$-th Nil group ${\rm Nil}_{n}(R)$ is $\pi_{n}{\rm Nil}(R).$ Since the forgetful functor splits, $K{\bf Nil}(R) \simeq {\rm Nil}(R) \times K(R).$ This implies that $K_{n}{\bf Nil}(R)\cong {\rm Nil}_{n}(R)\oplus K_{n}(R)$ for every ring $R.$ There is an isomorphism ${\rm Nil}_{n}(R)\cong NK_{n+1}(R)= \operatorname{ker} [K_{n+1}(R[t]) \stackrel{t\mapsto 0}\to K_{n+1}(R)]$ (see Theorem V.8.1 of \cite{wei 1}). The group ${\rm Nil}_{n}(R)$ is not finitely generated unless it is trivial (for instance, see Proposition IV.6.7.4 of \cite{wei 1}). Despite not being finitely generated, it is natural to wonder:
\begin{center}
{\it What do the generators of the group ${\rm Nil}_{n}(R)$ for $n>0$ look like}?
\end{center}
In this article, we describe explicit generators of ${\rm Nil}_{n}(R)$ for $n>0$ using Grayson's technique of binary complexes. We hope the explicit form of the generators of Nil $K$-groups obtained in this article will be useful for further research.
\section{Preliminaries and statement of main results}
We know ${\rm Nil}_{0}(R)\cong NK_{1}(R)=\operatorname{ker} [K_{1}(R[t]) \stackrel{t\mapsto 0}\to K_{1}(R)].$ The homotopy invariance of $K$-theory is known for regular noetherian rings and valuation rings (see \cite{KM}). Thus, ${\rm Nil}_{0}(R)=0$ provided $R$ is a regular noetherian ring or a valuation ring. As mentioned above, the generators of ${\rm Nil}_{0}(R)$ are well known. Using these generators, we prove (see Theorem \ref{nil zero vanish}):
\begin{theorem}
Let $R$ be a commutative ring with unity. Assume that every finitely generated torsion free $R$-module is projective. Then ${\rm Nil}_{0}(R)=0.$
\end{theorem}
The hypothesis on $R$ in the above theorem holds for many known classes of rings, e.g., PIDs, valuation rings, Dedekind domains, etc. In fact, for an integral domain, the hypothesis on $R$ is equivalent to $R$ being a Pr\"{u}fer domain (i.e., a ring which is locally a valuation ring).
To state our next result, we need a notion of binary complexes. Let us recall the notion from \cite{Gray} for exact categories.
\subsection*{Binary chain complexes}
Let $\mathcal{N}$ denote an exact category. A bounded chain complex $N$ in $\mathcal{N}$ is said to be an {\it acyclic chain complex} if each differential $d_{i}: N_{i} \to N_{i-1}$ can be factored as $N_{i} \to Z_{i-1} \to N_{i-1}$ such that each $0\to Z_{i} \to N_{i} \to Z_{i-1}\to 0$ is a short exact sequence of $\mathcal{N}.$ Let $C\mathcal{N}$ denote the category of bounded chain complexes in $\mathcal{N}.$ The full subcategory of $C\mathcal{N}$ consisting of bounded acyclic complexes in $C\mathcal{N}$ is denoted by $C^{q}\mathcal{N}.$ The category $C^{q}\mathcal{N}$ is an exact category.
A chain complex in $\mathcal{N}$ with two differentials (not necessarily distinct) is called a {\it binary chain complex} in $\mathcal{N}.$ In other words, it is a triple $(N_{*}, d, d^{'})$ such that $(N_{*}, d)$ and $(N_{*}, d^{'})$ are in $C\mathcal{N}.$ If $d=d^{'},$ we call it a diagonal binary complex. A morphism between two binary complexes $(N_{*}, d, d^{'})$ and $(\tilde{N}_{*}, \tilde{d}, \tilde{d}^{'})$ is a morphism between the underlying graded objects $N$ and $\tilde{N}$ that commutes with both differentials. The category of binary complexes in $\mathcal{N}$ is denoted by $B\mathcal{N}.$ There is always a diagonal functor $$ \Delta: C\mathcal{N} \to B\mathcal{N}, ~{\rm defined ~by}~ \Delta((N_{*}, d))= (N_{*}, d, d).$$ As before, let $B^{q}\mathcal{N}$ denote the full subcategory of $B\mathcal{N}$ whose objects are acyclic binary complexes. This is also an exact category.
By iterating, one can define the exact category $(B^{q})^{n}\mathcal{N}=B^{q}B^{q}\cdots B^{q}\mathcal{N}$ for each $n\geq 0.$ An object of the exact category $(B^{q})^{n}\mathcal{N}$ of bounded acyclic binary multicomplexes of dimension $n$ in $\mathcal{N}$ is a bounded $\mathbb{Z}^{n}$-graded collection of objects of $\mathcal{N},$ together with a pair of acyclic differentials $d^{i}$ and $\tilde{d^{i}}$ in each direction $1\leq i\leq n,$ where the differentials $d^{i}$ and $\tilde{d^{i}}$ commute with $d^{j}$ and $\tilde{d^{j}}$ whenever $i\neq j.$ Thus, a typical object looks like $(N_{*}, (d^{1}, \tilde{d^{1}}), (d^{2}, \tilde{d^{2}}), \dots, (d^{n}, \tilde{d^{n}})),$ where $N_{*}$ is a bounded $\mathbb{Z}^{n}$-graded collection of objects of $\mathcal{N}.$ We say that an acyclic binary multicomplex $(N_{*}, (d^{1}, \tilde{d^{1}}), (d^{2}, \tilde{d^{2}}), \dots, (d^{n}, \tilde{d^{n}}))$ is {\it diagonal} if $d^{i}=\tilde{d^{i}}$ for at least one $i.$
According to Grayson's description \cite{Gray}, for $n>0$ the $n$-th $K$-group is generated by $n$-dimensional acyclic binary multicomplexes modulo certain relations. In fact, $K_{n}\mathcal{N}$ can be seen as a quotient of $K_{0}(B^{q})^{n}\mathcal{N}$ (see section \ref{gray def} for details).
We now describe generators of ${\rm Nil}_{n}(R)$ for $n>0.$
Using the Grayson presentation of $K$-groups, we first describe generators for ${\rm Nil}_{1}(R)$ in terms of acyclic binary complexes. Let ${\bf Nil(Free}(R))$ denote the category whose objects are pairs $(F, \nu),$ where $F$ is a finitely generated free $R$-module and $\nu$ is a nilpotent endomorphism. Here is the precise result (see Theorem \ref{gen for Nil1}):
\begin{theorem}
The group ${\rm Nil}_{1}(R)$ is generated by elements of the form $$[(F_{*}, f, f^{'}, \nu)]- [(F_{*}, f, f^{'}, 0)],$$ where $(F_{*}, f, f^{'}, \nu), (F_{*}, f, f^{'}, 0) \in B^{q}{\bf Nil(Free}(R))$ and $f\neq f^{'}.$
\end{theorem}
We further extend the above result for $n>1$ and describe generators in terms of acyclic binary multicomplexes. More precisely, we prove (see Theorem \ref{gen for Nil_n}):
\begin{theorem}
The group ${\rm Nil}_{n}(R)$ is generated by elements of the form \small$$([(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \nu)]- [(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), 0)])(~{\rm mod} ~\widetilde{T}_{R}^{n}),$$ \normalsize where $(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \nu), (F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), 0)$ are objects of $(B^{q})^{n}{\bf Nil(Free}(R)).$ Here $\widetilde{T}_{R}^{n}$ is the subgroup generated by the classes of all diagonal acyclic binary multicomplexes in $K_{0}(B^{q})^{n}{\bf Nil(Free}(R))$ (see section \ref{about nil} for $\widetilde{T}_{R}^{n}$).
\end{theorem}
\section{Grayson's description of $K$-groups}\label{gray def}
In \cite{Nenasheb}, Nenashev gave a description of the $K_{1}$-group in terms of generators and relations using the notion of double exact sequences. Motivated by \cite{Nenasheb}, Grayson defined higher $K$-groups in terms of generators and relations using binary complexes (see \cite{Gray}). Moreover, Nenashev's $K_{1}$-group agrees with Grayson's $K_{1}$-group (see Corollary 4.2 of \cite{KKW}). In the rest of the paper, we take the following as the definition of higher $K$-groups.
\begin{definition}\label{main def}(see Corollary 7.4 of \cite{Gray}) Let $\mathcal{N}$ be an exact category. For $n\geq 1,$ $K_{n}\mathcal{N}$ is the abelian group having generators $[N],$ one for each object $N$ of $(B^{q})^{n}\mathcal{N}$ and the relations are:
\begin{enumerate}
\item $[N^{'}] + [N^{''}]=[N]$ for every short exact sequence $0 \to N^{'} \to N \to N^{''}\to 0$ in $(B^{q})^{n}\mathcal{N};$
\item $[T]=0$ if $T$ is a diagonal acyclic binary multicomplex.
\end{enumerate}
\end{definition}
\begin{remark}\label{k grp as a qt}{\rm
In Definition \ref{main def}, if we only impose relation (1), the resulting group is just $K_{0}(B^{q})^{n}\mathcal{N}.$ Let $T_{\mathcal{N}}^{n}$ denote the subgroup of $K_{0}(B^{q})^{n}\mathcal{N}$ generated by the classes of the diagonal acyclic binary multicomplexes in $K_{0}(B^{q})^{n}\mathcal{N}.$ Then $K_{n}\mathcal{N}\cong K_{0}(B^{q})^{n}\mathcal{N}/T_{\mathcal{N}}^{n}.$}
\end{remark}
\begin{lemma}\label{harris observation} For each $n\geq 1$, there is a split short exact sequence
$$0 \to K_{n-1}C^{q}\mathcal{N}\stackrel{\Delta}\to K_{n-1}B^{q}\mathcal{N} \to K_{n}\mathcal{N}\to 0,$$ which is functorial in $\mathcal{N}.$
\end{lemma}
\begin{proof}
See Lemma 2.7 of \cite{Harris}.
\end{proof}
\begin{remark}\label{further observ to harris}{\rm
For $n=1,$ $ K_{0}C^{q}\mathcal{N}\cong {\rm im}(\Delta)=T_{\mathcal{N}}^{1}.$ Thus, the above lemma implies that $K_{0}B^{q}\mathcal{N}\cong T_{\mathcal{N}}^{1} \oplus K_{1}\mathcal{N}.$ }
\end{remark}
By Remark \ref{k grp as a qt}, $K_{n}{\bf Nil}(R)\cong K_{0}(B^{q})^{n} {\bf Nil}(R)/T_{{\bf Nil}(R)}^{n}$ for $n\geq 0.$ Moreover, there is a split exact sequence
$$ 0\to {\rm Nil}_{n}(R) \to K_{0}(B^{q})^{n} {\bf Nil}(R)/T_{{\bf Nil}(R)}^{n} \to K_{0}(B^{q})^{n} {\bf P}(R)/T_{{\bf P}(R)}^{n} \to 0.$$
Let $\widetilde{{\rm Nil}}_{n}(R)$ (resp. $\widetilde{T}_{R}^{n}$) denote the kernel of the map $K_{0}(B^{q})^{n} {\bf Nil}(R) \to K_{0}(B^{q})^{n} {\bf P}(R)$ (resp. $T_{{\bf Nil}(R)}^{n} \to T_{{\bf P}(R)}^{n}$). Clearly, the map $T_{{\bf Nil}(R)}^{n} \to T_{{\bf P}(R)}^{n}$ is surjective. We now have the following result, which will be used in section \ref{Nil1}.
\begin{lemma}\label{useful for nil1}
For $n\geq 1,$ there is a canonical short exact sequence
\begin{equation}\label{seq for nil}
0 \to \widetilde{T}_{R}^{n} \to \widetilde{{\rm Nil}}_{n}(R) \to {\rm Nil}_{n}(R) \to 0.
\end{equation} If $n=1$ then this sequence splits, i.e., $\widetilde{{\rm Nil}}_{1}(R)\cong {\rm Nil}_{1}(R)\oplus \widetilde{T}_{R}^{1}.$
\end{lemma}
\begin{proof}
The assertion follows by chasing the following commutative diagram
$$ \begin{CD}
@. 0 @. 0 @.0 \\
@. @VVV @VVV @VVV \\
0 @>>> \widetilde{T}_{R}^{n} @>>> T_{{\bf Nil}(R)}^{n} @>>> T_{{\bf P}(R)}^{n}@>>> 0 \\
@. @VVV @VVV @VVV \\
0 @>>> \widetilde{{\rm Nil}}_{n}(R) @>>> K_{0}(B^{q})^{n} {\bf Nil}(R) @>>> K_{0}(B^{q})^{n} {\bf P}(R) @>>> 0 \\
@. @VVV @VVV @VVV \\
0 @>>> {\rm Nil}_{n}(R) @>>>K_{n}{\bf Nil}(R) @>>> K_{n}(R) @>>> 0 \\
@. @VVV @VVV @VVV \\
@. 0 @. 0 @. 0,
\end{CD}$$ where rows are split exact sequences, the second and third columns are exact sequences.
For $n=1,$ the second column is split exact (see Remark \ref{further observ to harris}). This forces the first column to be split exact when $n=1.$
\end{proof}
\section{Vanishing of zeroth Nil $K$-groups}\label{about nil}
We discuss a condition on $R$ under which ${\rm Nil}_{0}(R)$ is trivial. Recall that ${\rm Nil}_{0}(R)$ is generated by elements of the form $[(R^{n}, \nu)]- [(R^{n}, 0)]$ for some $n$ and some nilpotent endomorphism $\nu.$
For a fixed $n,$ $(R^{n}, \nu)$ is an object in ${\bf Nil}(R).$ Assume that $\nu$ is a non-zero nilpotent endomorphism. Since $\nu$ is nilpotent, there exists a least $m\in \mathbb{N}$ such that $\nu^{m}=0$ and $\nu^{r}\neq 0$ for $r< m.$ Then we have a chain of $R$-modules
$$0 \subseteq \operatorname{ker}(\nu)\subseteq \operatorname{ker}(\nu^{2}) \subseteq \dots \subseteq \operatorname{ker}(\nu^{m-1})\subseteq \operatorname{ker}(\nu^{m})=R^{n}.$$
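For a concrete illustration (not needed for the argument), take $n=2$ and the nilpotent Jordan block
$$\nu=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}: \quad \nu^{2}=0, \quad \operatorname{ker}(\nu)=R\oplus 0, \quad \frac{\operatorname{ker}(\nu^{2})}{\operatorname{ker}(\nu)}=\frac{R^{2}}{R\oplus 0}\cong R,$$
so the chain above reads $0 \subseteq R\oplus 0 \subseteq R^{2}$ and each successive quotient is free of rank one.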
\begin{lemma}\label{tor free}
$\frac{\operatorname{ker}(\nu^{i+1})}{\operatorname{ker}(\nu^{i})}$ is a torsion free $R$-module for $1\leq i \leq m-1.$
\end{lemma}
\begin{proof}
Let $x + \operatorname{ker}(\nu^{i})$ be a torsion element of $\frac{\operatorname{ker}(\nu^{i+1})}{\operatorname{ker}(\nu^{i})}.$ Then there exists a non-zero-divisor $r\in R$ such that $rx\in \operatorname{ker}(\nu^{i}).$ This implies that $r\nu^{i}(x)=0$ in $R^{n}.$ Since $R^{n}$ is torsion free and $r$ is a non-zero-divisor, $\nu^{i}(x)=0.$ So, $x\in \operatorname{ker}(\nu^{i}).$
\end{proof}
Assume that $R$ is a ring which satisfies the following: ``every finitely generated torsion free $R$-module is projective''.
We have an exact sequence of $R$-modules
$$0\to \operatorname{ker}(\nu^{m-1}) \to R^{n} \to \frac{R^{n}}{\operatorname{ker}(\nu^{m-1})} \to 0.$$ By Lemma \ref{tor free} and the hypothesis on $R,$ $\frac{R^n}{\operatorname{ker}(\nu^{m-1})}$ is a finitely generated projective $R$-module. Thus the sequence splits, and we get that $\operatorname{ker}(\nu^{m-1})$ is also a finitely generated projective $R$-module. By considering the exact sequence $ 0\to \operatorname{ker}(\nu^{m-2}) \to \operatorname{ker}(\nu^{m-1}) \to \frac{\operatorname{ker}(\nu^{m-1})}{\operatorname{ker}(\nu^{m-2})}\to 0$ and using Lemma \ref{tor free}, we obtain that $\operatorname{ker}(\nu^{m-2})$ is a finitely generated projective $R$-module. Continuing this way, each $\operatorname{ker}(\nu^{i})$ is a finitely generated projective $R$-module. Observe that the following diagram of exact sequences
$$\begin{CD}
0 @>>> \operatorname{ker}(\nu^{i}) @>>> \operatorname{ker}(\nu^{i+1}) @>>> \frac{\operatorname{ker}(\nu^{i+1})}{\operatorname{ker}(\nu^{i})}@>>> 0\\
@. @V \nu VV @V\nu VV @V \bar{\nu}=0 VV \\
0 @>>> \operatorname{ker}(\nu^{i}) @>>> \operatorname{ker}(\nu^{i+1}) @>>> \frac{\operatorname{ker}(\nu^{i+1})}{\operatorname{ker}(\nu^{i})}@>>> 0
\end{CD}$$
is commutative for each $i,$ $1\leq i \leq m-1.$ Thus, we have an exact sequence
$$ 0 \to (\operatorname{ker}(\nu^{i}), \nu) \to (\operatorname{ker}(\nu^{i+1}), \nu) \to (\frac{\operatorname{ker}(\nu^{i+1})}{\operatorname{ker}(\nu^{i})}, 0) \to 0$$ in ${\bf Nil}(R)$ for $1\leq i \leq m-1.$ In $ K_{0}({\bf Nil}(R))$, we get
\begin{align*}
[(R^{n}, \nu)] & = [(\operatorname{ker}(\nu^{m-1}), \nu)] + [(\frac{R^{n}}{\operatorname{ker}(\nu^{m-1})}, 0)]\\
& = [(\operatorname{ker}(\nu^{m-2}), \nu)] + [(\frac{\operatorname{ker}(\nu^{m-1})}{\operatorname{ker}(\nu^{m-2})}, 0)] + [(\frac{R^{n}}{\operatorname{ker}(\nu^{m-1})}, 0)]\\
&= \dots \dots \dots \\
&= [(\operatorname{ker}(\nu)\oplus \frac{\operatorname{ker}(\nu^{2})}{\operatorname{ker}(\nu)} \oplus \dots \oplus \frac{R^{n}}{\operatorname{ker}(\nu^{m-1})}, 0)]\\
& = [(R^{n}, 0)].
\end{align*}
The above discussion implies the following:
\begin{theorem}\label{nil zero vanish}
Let $R$ be a commutative ring with unity. Assume that every finitely generated torsion free $R$-module is projective. Then ${\rm Nil}_{0}(R)=0.$
\end{theorem}
\begin{remark}\label{int val pol}{\rm
The hypothesis on $R$ in the above theorem holds for {\it Pr\"{u}fer} domains. We say that a ring $R$ is a {\it Pr\"{u}fer} domain if $R_{\mathfrak{p}}$ is a valuation domain for all prime ideals $\mathfrak{p}$ of $R.$ Clearly, a valuation domain is {\it Pr\"{u}fer}. A domain $R$ is {\it Pr\"{u}fer} if and only if every finitely generated torsion free $R$-module is projective. The ring of integer-valued polynomials ${\rm Int}(\mathbb{Z})= \{f\in \mathbb{Q}[x] \mid f(\mathbb{Z})\subset \mathbb{Z}\}$ is a Pr\"{u}fer domain; in fact, it is a non-noetherian Pr\"{u}fer domain (see \cite{cahen}). See \cite{SS} for the $K$-theory of Pr\"{u}fer domains.
}
\end{remark}
\section{Cofinality Lemma}\label{key lemmas}
Let $\mathcal{N}$ be an exact category. An exact subcategory $\mathcal{M}$ of $\mathcal{N}$ is called {\it closed under extensions} if, whenever there is a short exact sequence $0 \to N_{1} \to N \to N_{2}\to 0$ in $\mathcal{N}$ with $N_{1}$ and $N_{2}$ in $\mathcal{M},$ the object $N$ is isomorphic to an object of $\mathcal{M}.$ We say that an exact subcategory $\mathcal{M}$ of $\mathcal{N}$ is {\it cofinal} in $\mathcal{N}$ if for every object $N_{1}\in \mathcal{N}$ there exists $N_{2}\in \mathcal{N}$ such that $N_{1}\oplus N_{2}$ is isomorphic to an object of $\mathcal{M}.$ Let ${\bf Free}(R)$ denote the category of finitely generated free $R$-modules. Clearly, ${\bf Free}(R)$ is an exact subcategory of ${\bf P}(R)$ which is cofinal and closed under extensions.
We can define a category ${\bf Nil(Free}(R))$ whose objects are pairs $(F, \nu),$ where $F$ is in ${\bf Free}(R)$ and $\nu$ is a nilpotent endomorphism. A morphism $f: (F_{1}, \nu_{1}) \to (F_{2}, \nu_{2})$ is an $R$-module map $f: F_{1} \to F_{2}$ such that $f\nu_{1}=\nu_{2}f.$ One can check that ${\bf Nil(Free}(R))$ is an exact category.
\begin{lemma}\label{nilfree cofinal}
The category ${\bf Nil(Free}(R))$ is an exact full subcategory of ${\bf Nil}(R)$ which is cofinal and closed under extensions.
\end{lemma}
\begin{proof}
Let $(P, \nu)\in {\bf Nil}(R).$ Then there exists a $Q$ in ${\bf P}(R)$ such that $\alpha: P\oplus Q\cong R^{n}$ for some $n>0.$ Note that $(Q, 0)\in {\bf Nil}(R).$ So, we get $ (P, \nu)\oplus (Q, 0)\cong (R^{n}, \nu^{'}),$ where $\nu^{'}= \alpha(\nu , 0)\alpha^{-1}.$ This implies that ${\bf Nil(Free}(R))\subseteq {\bf Nil}(R)$ is cofinal.
Suppose that the sequence
$$ 0 \to (P_{1}, \nu_{1}) \to (P, \nu) \to (P_{2}, \nu_{2}) \to 0$$ is exact with $(P_{1}, \nu_{1}), (P_{2}, \nu_{2}) \in {\bf Nil(Free}(R)),$ i.e., the following diagram
$$\begin{CD}
0 @>>> P_{1} @>>> P @>>> P_{2} @>>> 0\\
@. @V\nu_{1} VV @V \nu VV @V\nu_{2} VV \\
0 @>>> P_{1} @>>> P @>>> P_{2} @>>> 0
\end{CD}$$
is commutative with exact rows and $P_{1}, P_{2} \in {\bf Free}(R).$ Since $P_{2}$ is free, the top row splits; let $\beta$ denote the resulting isomorphism $P\cong P_{1}\oplus P_{2}.$ We define a nilpotent endomorphism of $P_{1}\oplus P_{2}$ as $\beta \nu \beta^{-1}.$ Therefore, $(P, \nu)\cong (P_{1}\oplus P_{2}, \beta \nu \beta^{-1})$ in ${\bf Nil(Free}(R)).$ Hence the lemma.
\end{proof}
We now consider categories $B^{q}{\bf Nil(Free}(R))$ and $B^{q}{\bf Nil}(R).$ Note that one can identify $B^{q}{\bf Nil(Free}(R))$ with ${\bf Nil}(B^{q}{\bf Free}(R))$ and $B^{q}{\bf Nil}(R)$ with ${\bf Nil}(B^{q}{\bf P}(R))$ because nilpotent endomorphism commutes with each differential.
Let us recall a useful lemma from \cite{Gray}.
\begin{lemma}\label{useful lemma}
Suppose $\mathcal{M}\subseteq \mathcal{N}$ is a cofinal full subcategory closed under extensions. Consider $\mathcal{M}$ to be an exact category by equipping it with the sequences of $\mathcal{M}$ that are short exact sequences of $\mathcal{N}.$ Then for any $(N_{*}, d, d^{'})\in B^{q}\mathcal{N}$ there is an object $(L_{*}, e)\in C^{q}\mathcal{N}$ such that $(N_{*}, d, d^{'})\oplus (L_{*}, e, e)\in B^{q}\mathcal{M}$ (up to isomorphism).
\end{lemma}
\begin{proof}
See Lemma 6.2 of \cite{Gray}.
\end{proof}
\begin{corollary}\label{gray cor}
Suppose $\mathcal{M}\subseteq \mathcal{N}$ is a cofinal full subcategory closed under extensions, regarded as an exact category as in Lemma \ref{useful lemma}. Then the same is true for $C^{q}\mathcal{M}\subseteq C^{q}\mathcal{N}$ and $B^{q}\mathcal{M}\subseteq B^{q}\mathcal{N}.$ In particular, $C^{q}{\bf Nil(Free}(R))\subseteq C^{q}{\bf Nil}(R)$ and $B^{q}{\bf Nil(Free}(R))\subseteq B^{q}{\bf Nil}(R)$ both are cofinal and closed under extensions.
\end{corollary}
\begin{proof}
See Corollary 6.3 of \cite{Gray} for the first assertion. The second assertion follows from Lemma \ref{nilfree cofinal}.
\end{proof}
\section{Generators of ${\rm Nil_{n>0}}(R)$}\label{Nil1}
We first describe generators for ${\rm Nil_{1}}(R) (\cong NK_{2}(R))$. Before that, we need some preparation. Let us begin with the following lemma.
\begin{lemma}\label{insert zero}
If $[(P_{*}, p, p^{'})]=[(Q_{*}, q, q^{'})]$ in $K_{0}B^{q}{\bf P}(R)$ then $[(P_{*}, p, p^{'}, 0)]=[(Q_{*}, q, q^{'}, 0)]$ in $K_{0}B^{q}{\bf Nil}(R).$
\end{lemma}
\begin{proof}
Since $[(P_{*}, p, p^{'})]=[(Q_{*}, q, q^{'})]$ in $K_{0}B^{q}{\bf P}(R),$ there are short exact sequences (see Exercise II.7.2 of \cite{wei 1}) $$ 0 \to (C_{*}, c, \tilde{c}) \to (A_{*}, a, \tilde{a}) \to (D_{*}, d, \tilde{d}) \to 0$$ and $$0 \to (C_{*}, c, \tilde{c}) \to (B_{*}, b, \tilde{b}) \to (D_{*}, d, \tilde{d}) \to 0$$ in $B^{q}{\bf P}(R)$ such that
\begin{equation}\label{iso}
(P_{*}, p, p^{'})\oplus (A_{*}, a, \tilde{a})\cong (Q_{*}, q, q^{'})\oplus (B_{*}, b, \tilde{b}).
\end{equation}
Note that $$ 0 \to (C_{*}, c, \tilde{c}, 0) \to (A_{*}, a, \tilde{a}, 0) \to (D_{*}, d, \tilde{d}, 0) \to 0$$ and $$0 \to (C_{*}, c, \tilde{c}, 0) \to (B_{*}, b, \tilde{b}, 0) \to (D_{*}, d, \tilde{d}, 0) \to 0$$ both are short exact sequences in $B^{q}{\bf Nil}(R).$ Thus, $[(A_{*}, a, \tilde{a}, 0)]=[(B_{*}, b, \tilde{b}, 0)]= [(C_{*}, c, \tilde{c}, 0)]+ [(D_{*}, d, \tilde{d}, 0)]$ in $K_{0}B^{q}{\bf Nil}(R).$ By using the isomorphism (\ref{iso}), we get $ (P_{*}, p, p^{'}, 0)\oplus (A_{*}, a, \tilde{a}, 0)\cong (Q_{*}, q, q^{'}, 0)\oplus (B_{*}, b, \tilde{b}, 0).$ Hence the assertion.
\end{proof}
\begin{lemma}\label{nil can be choosen zero}
For every $(P_{*}, p, p^{'}, \nu)$ in $B^{q}{\bf Nil}(R)$ there exists a $(Q_{*}, q, q, 0)$ in $B^{q}{\bf Nil}(R)$ such that $(P_{*}, p, p^{'}, \nu)\oplus (Q_{*}, q, q, 0)\cong (F_{*}, f, f^{'}, \nu^{'}),$ where $(F_{*}, f, f^{'}, \nu^{'})\in B^{q}{\bf Nil(Free}(R)).$
\end{lemma}
\begin{proof}
Note that $(P_{*}, p, p^{'})\in B^{q}{\bf P}(R).$ By Lemma \ref{useful lemma}, there exists a $(Q_{*}, q, q)$ such that $(P_{*}, p, p^{'})\oplus (Q_{*}, q, q)\cong (F_{*}, f, f^{'}),$ where $(F_{*}, f, f^{'})\in B^{q}{\bf Free}(R).$ Let $\alpha$ denote the above isomorphism. Define $\nu^{'}:= \alpha(\nu, 0) \alpha^{-1}.$ The isomorphisms $\alpha$ and $\alpha^{-1}$ commute with the differentials. We need to check that $\nu^{'}$ is a nilpotent endomorphism on $(F_{*}, f, f^{'})$. Let us consider just one case; the other case is similar. We have $\alpha(p, q)= f\alpha$ and $(p, q)\alpha^{-1}= \alpha^{-1}f.$ Moreover, $\nu$ commutes with $p$ and $p^{'}.$ Then $f\nu^{'}= f\alpha(\nu, 0) \alpha^{-1} = \alpha(p, q)(\nu, 0) \alpha^{-1}= \alpha(\nu, 0)(p, q) \alpha^{-1}=\alpha(\nu, 0)\alpha^{-1}f=\nu^{'}f.$ This shows that $\nu^{'}$ is a nilpotent endomorphism on $(F_{*}, f, f^{'}),$ and we get the desired isomorphism $(P_{*}, p, p^{'}, \nu)\oplus (Q_{*}, q, q, 0)\cong (F_{*}, f, f^{'}, \nu^{'}).$
\end{proof}
Recall that we have the forgetful map $$K_{0}B^{q} {\bf Nil}(R) \to K_{0}B^{q}{\bf P}(R), [(P_{*}, p, p^{'}, \nu)]\mapsto [(P_{*}, p, p^{'})]$$ and $\widetilde{{\rm Nil}}_{1}(R)$ denotes the kernel of the above map.
\begin{lemma}\label{gen of tilde nil}
The group $\widetilde{{\rm Nil}}_{1}(R)$ is generated by elements of the form $[(F_{*}, f, f^{'}, \nu)]- [(F_{*}, f, f^{'}, 0)],$ where $(F_{*}, f, f^{'}, \nu)$ and $(F_{*}, f, f^{'}, 0)$ both are in $B^{q}{\bf Nil(Free}(R)).$
\end{lemma}
\begin{proof}
By Corollary \ref{gray cor}, $B^{q}{\bf Nil(Free}(R))\subseteq B^{q}{\bf Nil}(R)$ is cofinal with closed under extensions. Let $x\in K_{0}(B^{q}) {\bf Nil}(R).$ Then $x$ is of the form $[(P_{*}, p, p^{'}, \nu)] - [(F_{*}, f, f^{'}, \tilde{\nu})],$ where $(P_{*}, p, p^{'}, \nu)\in B^{q}{\bf Nil}(R)$ and $(F_{*}, f, f^{'}, \tilde{\nu})\in B^{q}{\bf Nil(Free}(R))$ (see Remark II.7.2.1 of \cite{wei 1}). If $x \in \widetilde{{\rm Nil}}_{1}(R)$ then \begin{equation}\label{x in Nil}
[(P_{*}, p, p^{'})]= [(F_{*}, f, f^{'})].
\end{equation} Since $(P_{*}, p, p^{'}) \in B^{q}{\bf P}(R),$ there exists a $(Q_{*}, q, q)\in B^{q}{\bf P}(R)$ such that $[(P_{*}, p, p^{'})]+ [(Q_{*}, q, q)]= [(\tilde{F}_{*}, \tilde{f}, \tilde{f}^{'})]$ in $K_{0}B^{q}{\bf P}(R)$ (by Lemma \ref{useful lemma}). Here $(\tilde{F}_{*}, \tilde{f}, \tilde{f}^{'}) \in B^{q}{\bf Free}(R).$ We also have (see Lemma \ref{nil can be choosen zero})
\begin{equation}\label{zero can be put}
[(P_{*}, p, p^{'}, \nu)]+ [(Q_{*}, q, q, 0)]= [(\tilde{F}_{*}, \tilde{f}, \tilde{f}^{'}, \nu^{'})].
\end{equation} By (\ref{x in Nil}), $[(F_{*}, f, f^{'})]+ [(Q_{*}, q, q)]= [(\tilde{F}_{*}, \tilde{f}, \tilde{f}^{'})]$ in $K_{0}B^{q}{\bf P}(R).$ Using Lemma \ref{insert zero}, we get $[(F_{*}, f, f^{'}, 0)]+ [(Q_{*}, q, q, 0)]= [(\tilde{F}_{*}, \tilde{f}, \tilde{f}^{'}, 0)]$ in $K_{0}B^{q}{\bf Nil}(R).$ Now, (\ref{zero can be put}) implies that \begin{equation}\label{ before final expression}
[(P_{*}, p, p^{'}, \nu)]= [(\tilde{F}_{*}, \tilde{f}, \tilde{f}^{'}, \nu^{'})]- [(\tilde{F}_{*}, \tilde{f}, \tilde{f}^{'}, 0)]+ [(F_{*}, f, f^{'}, 0)].
\end{equation} Therefore,\begin{equation}\label{ final expression}
x= ([(\tilde{F}_{*}, \tilde{f}, \tilde{f}^{'}, \nu^{'})]- [(\tilde{F}_{*}, \tilde{f}, \tilde{f}^{'}, 0)])- ([(F_{*}, f, f^{'}, \tilde{\nu})]- [(F_{*}, f, f^{'}, 0)]).
\end{equation} This shows that $\widetilde{{\rm Nil}}_{1}(R)$ is generated by elements of the form $[(F_{*}, f, f^{'}, \nu)]- [(F_{*}, f, f^{'}, 0)],$ where $(F_{*}, f, f^{'}, \nu)$ and $(F_{*}, f, f^{'}, 0)$ both are in $B^{q}{\bf Nil(Free}(R)).$
\end{proof}
\begin{theorem}\label{gen for Nil1}
The group ${\rm Nil}_{1}(R)$ is generated by elements of the form $$[(F_{*}, f, f^{'}, \nu)]- [(F_{*}, f, f^{'}, 0)],$$ where $(F_{*}, f, f^{'}, \nu), (F_{*}, f, f^{'}, 0) \in B^{q}{\bf Nil(Free}(R))$ and $f\neq f^{'}.$
\end{theorem}
\begin{proof}
We know $\widetilde{{\rm Nil}}_{1}(R)\cong {\rm Nil}_{1}(R)\oplus \widetilde{T}_{R}^{1}$ (see Lemma \ref{useful for nil1}). Hence the assertion by Lemma \ref{gen of tilde nil}.
\end{proof}
\vspace{1cm}
\subsection*{Generators of ${\rm Nil}_{n}(R)$ for $n>1$}
We consider the case ${\rm Nil}_{n}(R) (\cong NK_{n+1}(R))$ for $n>1.$
\begin{lemma}\label{insert zero for n-complex}
If $[(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'})) ]=[(Q_{*}, (q_{1}, q_{1}^{'}), (q_{2}, q_{2}^{'}), \dots, (q_{n}, q_{n}^{'}))]$ in $K_{0}(B^{q})^{n}{\bf P}(R)$ then $$[(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'}),0)]=[(Q_{*}, (q_{1}, q_{1}^{'}), (q_{2}, q_{2}^{'}), \dots, (q_{n}, q_{n}^{'}), 0)]$$ in $K_{0}(B^{q})^{n}{\bf Nil}(R).$
\end{lemma}
\begin{proof}
The proof is similar to Lemma \ref{insert zero}. More precisely, just rewrite the proof of Lemma \ref{insert zero} for $(B^{q})^{n}{\bf P}(R)$.
\end{proof}
\begin{lemma}\label{nil can be choosen zero for n-complexes}
For every $(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'}), \nu)$ in $(B^{q})^{n}{\bf Nil}(R)$ there exists a $(Q_{*}, (q, q), (q_{2}, q_{2}^{'}), \dots, (q_{n}, q_{n}^{'}), 0)$ in $(B^{q})^{n}{\bf Nil}(R)$ such that \tiny $$(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'}), \nu)\oplus (Q_{*}, (q, q), (q_{2}, q_{2}^{'}), \dots, (q_{n}, q_{n}^{'}), 0)\cong (F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \nu^{'}),$$ \normalsize where $(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \nu^{'})$ is in $(B^{q})^{n}{\bf Nil(Free}(R)).$
\end{lemma}
\begin{proof}
By repeatedly using Corollary \ref{gray cor}, $(B^{q})^{n-1}{\bf Free}(R)\subseteq (B^{q})^{n-1}{\bf P}(R)$ is cofinal and closed under extensions. By Lemma \ref{useful lemma}, for $(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'}))\in (B^{q})^{n}{\bf P}(R)$ there exists a $(Q_{*}, (q, q), (q_{2}, q_{2}^{'}), \dots, (q_{n}, q_{n}^{'}))$ such that
\tiny $$(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'}))\oplus (Q_{*}, (q, q), (q_{2}, q_{2}^{'}), \dots, (q_{n}, q_{n}^{'}))\stackrel{\alpha}\cong (F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'})),$$ \normalsize where $(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}))\in (B^{q})^{n}{\bf Free}(R).$ Define $\nu^{'}:= \alpha(\nu, 0) \alpha^{-1}.$ Note that $\alpha$ and $\alpha^{-1}$ commute with the differentials in each direction. The rest of the argument is similar to Lemma \ref{nil can be choosen zero}. \end{proof}
Recall that $\widetilde{{\rm Nil}}_{n}(R)$ denotes the kernel of $$K_{0}(B^{q})^{n} {\bf Nil}(R) \to K_{0}(B^{q})^{n}{\bf P}(R),$$ $$[(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'}), \nu)]\mapsto [(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'})].$$
\begin{lemma}
The group $\widetilde{{\rm Nil}}_{n}(R)$ is generated by elements of the form
$$[(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \nu)]- [(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), 0)],$$ where $(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \nu)$ and $(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), 0)$ both are in $(B^{q})^{n}{\bf Nil(Free}(R)).$
\end{lemma}
\begin{proof} By Corollary \ref{gray cor}, $(B^{q})^{n}{\bf Nil(Free}(R))\subseteq (B^{q})^{n}{\bf Nil}(R)$ is cofinal with closed under extensions. Let $x\in K_{0}(B^{q})^{n} {\bf Nil}(R).$ By applying Remark II.7.2.1 of \cite{wei 1} for the categories $(B^{q})^{n}{\bf Nil(Free}(R))\subseteq (B^{q})^{n}{\bf Nil}(R),$ we get $$x= [(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'}), \nu)]- [(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \tilde{\nu})],~ {\rm where}~ $$ $(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'}), \nu) \in (B^{q})^{n}{\bf Nil}(R)$ and $(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \tilde{\nu})\\ \in (B^{q})^{n}{\bf Nil(Free}(R)).$ Suppose $x\in \widetilde{{\rm Nil}}_{n}(R)$. Thus, $$[(P_{*}, (p_{1}, p_{1}^{'}), (p_{2}, p_{2}^{'}), \dots, (p_{n}, p_{n}^{'}))]= [(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}))]$$ in $K_{0}(B^{q})^{n}{\bf P}(R).$ The rest of the argument is similar to the case of $\widetilde{{\rm Nil}}_{1}(R)$. By using Lemmas \ref{insert zero for n-complex} and \ref{nil can be choosen zero for n-complexes}, we get \begin{equation}\label{ n complexes expression}\begin{split}
x= ([(\tilde{F}_{*}, (\tilde{f_{1}}, \tilde{f_{1}}^{'}), (\tilde{f_{2}}, \tilde{f_{2}}^{'}), \dots, (\tilde{f_{n}}, \tilde{f_{n}}^{'}),\nu^{'})]- [(\tilde{F}_{*}, (\tilde{f_{1}}, \tilde{f_{1}}^{'}), (\tilde{f_{2}}, \tilde{f_{2}}^{'}), \dots, (\tilde{f_{n}}, \tilde{f_{n}}^{'}), 0)])\\- ([(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \tilde{\nu})]- [(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), 0)]),\end{split}
\end{equation} where all the entries are in $(B^{q})^{n}{\bf Nil(Free}(R)).$ Therefore, $\widetilde{{\rm Nil}}_{n}(R)$ is generated by elements of the form
$$[(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \nu)]- [(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), 0)].$$\end{proof}
By Lemma \ref{useful for nil1}, ${\rm Nil}_{n}(R)\cong \widetilde{{\rm Nil}}_{n}(R)/ \widetilde{T}_{R}^{n}.$ Hence we get,
\begin{theorem}\label{gen for Nil_n}
The group ${\rm Nil}_{n}(R)$ is generated by elements of the form \small$$([(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \nu)]- [(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), 0)])(~{\rm mod} ~\widetilde{T}_{R}^{n}),$$ \normalsize where $(F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), \nu), (F_{*}, (f_{1}, f_{1}^{'}), (f_{2}, f_{2}^{'}), \dots, (f_{n}, f_{n}^{'}), 0)$ are objects of $(B^{q})^{n}{\bf Nil(Free}(R)).$
\end{theorem}
\begin{remark}\label{length 2 enough}{\rm
D. Grayson remarked in \cite[Remark 8.1]{Gray} that acyclic binary multicomplexes supported on $[0, 2]^{n}$ suffice to generate the whole group $K_{n}\mathcal{N}.$ Recently, D. Kasprowski and C. Winges established Grayson's remark in \cite{KW} (more precisely, see Theorem 1.3 of \cite{KW}). In view of \cite{KW}, the generators of ${\rm Nil}_{n}(R)$ for $n>0$ obtained in Theorems \ref{gen for Nil1} and \ref{gen for Nil_n} can be restricted to acyclic binary multicomplexes supported on $[0, 2]^{n}.$}
\end{remark}
|
{
"arxiv_id": "2302.13741",
"language": "en",
"timestamp": "2023-02-28T02:28:52",
"url": "https://arxiv.org/abs/2302.13741",
"yymm": "2302"
} | \section{Introduction}\label{intro}
In recent years, there has been a trend of scaling up deep learning models, resulting in more robust performance in specific domains. For instance, in the field of natural language processing, large-scale text data has been used to train deep learning models such as GPT-3 (175B)~\cite{GPT3}, T5 (11B)~\cite{t5}, and Megatron-LM (8.3B)~\cite{shoeybi2019megatronlm}, which have demonstrated impressive performance. However, training these models can be quite challenging. To address the challenges posed by large-scale deep learning models, optimization of distributed computing is crucial.
Model parallelism (MP) is a technique used to solve the problem of a model being too large to fit into the memory of a single GPU or TPU by distributing the model across multiple GPUs or TPUs. However, this approach introduces communication challenges between GPUs or TPUs during training. On the other hand, data parallelism (DP) can improve time utilization by addressing the batch size issue during training, but it cannot resolve the problem of a model exceeding a single GPU or TPU's memory capacity.
While DP and MP have been effective in mitigating communication volume issues in recent years, as in large minibatch SGD~\cite{goyal2017accurate}, Megatron-LM~\cite{shoeybi2019megatronlm}, Gpipe~\cite{huang2019gpipe}, and Pathway~\cite{MLSYS202298dce83d}, the challenge of scheduling distributed training across machines in different regions remains unsolved. If a model like GPT-3 with hundreds of billions of parameters exceeds the memory capacity of the GPUs in the current region during training, it becomes necessary to schedule machines from other regions to complete the training. This poses several challenges:
\begin{itemize}
\item Communication latency can be very high when training is distributed across machines in different regions.
\item How can tasks be effectively allocated to different machines, such as assigning specific machines to maintain certain layers of the model's parameters (e.g., Machine 0 is responsible for Layer X) or designating machines to process specific data (e.g., Machine 2 handles Data Set Y)?
\item How can we address the issue of disaster recovery in training, such as handling scenarios where a machine fails during the process?
\item If multiple tasks must be trained simultaneously rather than a single one, such as training both a GPT-3 and a GPT-2 model, how can resources be provisioned for all of them?
\end{itemize}
\begin{table}[]
\caption{\label{Communication}
We measured the time it takes for our machines in three different regions to send and receive a 64-byte message to each of eight servers, averaged over ten trials.
}
\begin{tabular}{lcccccccc}
\hline
\textbf{Regions} & \multicolumn{8}{c}{\textbf{Communication time to send 64 bytes (ms)}} \\ \hline
& California & Tokyo & Berlin & London & New Delhi & Paris & Rome & Brasilia \\
Beijing, China & 89.1 & 74.3 & 250.5 & 229.8 & 341.9 & - & 296.0 & 341.8 \\
Nanjing, China & 97.9 & 173.8 & 213.7 & 176.7 & 236.3 & 265.1 & 741.3 & 351.3 \\
California, USA & 1 & 118.8 & 144.8 & 132.3 & 197.0 & 133.9 & 158.6 & 158.6 \\ \hline
\end{tabular}
\end{table}
To elaborate on the first point, we collected all communication logs between the three machines and the eight servers over a three-month period. Our statistics give the communication time per 64 bytes, as presented in Table \ref{Communication}. As the table shows, the communication latency between certain nodes is high, and some pairs cannot communicate at all (marked ``-''). Without optimization, such communication costs are difficult to overcome in a distributed system.
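As a concrete illustration, latency figures like those in Table \ref{Communication} can be gathered with a simple round-trip probe. The Python sketch below is illustrative only: it assumes an echo server listening on the given port, which is not part of our production tooling.
\begin{verbatim}
import socket
import time

def measure_latency(host: str, port: int, trials: int = 10) -> float:
    """Average round-trip time (ms) for echoing a 64-byte message."""
    payload = b"x" * 64
    samples = []
    for _ in range(trials):
        with socket.create_connection((host, port), timeout=5) as s:
            t0 = time.perf_counter()
            s.sendall(payload)
            s.recv(64)  # assumes the server echoes the payload back
            samples.append((time.perf_counter() - t0) * 1e3)
    return sum(samples) / len(samples)
\end{verbatim}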
\subsection{Contributions}
Graph data structures have been widely adopted since their introduction, as they can effectively represent interconnected structures such as social networks and knowledge graphs. Considering the tremendous success of graph neural networks~\cite{gilmer2017neural,kipf2017semi,wu2020comprehensive} in recent years, we aim to leverage this powerful capability in real-world industrial systems. With the powerful representational capability of graphs, it becomes easier to model the relevant optimization problems described in our paper. Our design choices were influenced by the types of workloads observed in actual systems. Hulk has the following features:
\paragraph{\textbf{Efficient Inter-node Communication}} Our system minimizes the impact of communication latency between machines, ensuring that each machine is assigned the appropriate task.
\paragraph{\textbf{Global Optimality}} Our model is built upon graph convolutional neural networks (GCNs)~\cite{kipf2017semi,wu2019comprehensive} to extract features from the entire graph, enabling the selection of a globally optimal solution.
\paragraph{\textbf{Disaster Recovery}} Since GCNs are utilized to assign tasks to different machines in the system, it becomes evident which tasks each machine is responsible for. Furthermore, in the event of a machine failure, the system can quickly recover the entire computation.
\paragraph{\textbf{Scalability}} If a particular machine or machines are no longer needed, you can simply remove the corresponding edge information from the graph structure.
The novelty of the proposed system lies in the utilization of graph neural networks for optimizing machine learning systems. By combining the neural network's outputs with simple scheduling algorithms, the scheduling problem of the entire system can be solved efficiently.
\subsection{Engineering Challenges}
Although graph neural networks are capable of addressing tasks such as node classification~\cite{kipf2017semi,velivckovic2018graph,wang2019heterogeneous}, link prediction~\cite{zhang2018link,li2018link,schlichtkrull2018modeling}, and graph classification~\cite{kipf2017semi,ying2018hierarchical}, none of these standard tasks maps directly onto our system. How to construct a suitable loss function is a crucial problem that cannot be overlooked. Representing the features to be optimized, such as computation time and communication time, within the graph data structure also poses challenges.
\section{Background}
This section provides a brief introduction to machine learning systems and graph neural networks.
\subsection{Machine Learning Systems}
This subsection provides a brief overview of the evolution of machine learning systems.
\subsubsection{Data Parallelism}
DP~\cite{dean2012large} is a commonly used technique in distributed training for deep neural networks, where the data is split into multiple copies and distributed to different machines for computation. Each machine calculates the loss and gradient of its assigned data and aggregates these gradients into a parameter server, which updates the model parameters. This method enables multiple machines to process large data sets in parallel, resulting in faster training speeds.
\subsubsection{Parameter Server}
The parameter server is a distributed deep learning training method proposed by Mu Li et al.~\cite{li2014scaling} that addresses the communication bottleneck problem in training large-scale deep learning models. It achieves this by placing the gradient aggregation and parameter updating process on the server side, and the computational nodes only need to send the locally computed gradient information to the server. This approach reduces communication overhead and improves training efficiency.
\subsubsection{Megatron-LM}
Megatron-LM~\cite{shoeybi2019megatronlm} combines model parallelism and data parallelism by dividing the model parameters into multiple parts, each trained on a different GPU. This allows for larger models to be used as each GPU only needs to focus on computing a part of the model using model parallelism. Data parallelism is used to assign different batches to different GPUs for processing, which improves training efficiency.
The training objective of Megatron-LM is to minimize the negative log-likelihood of the target sequence given the input sequence, which is expressed as:
\begin{eqnarray*}
L(\theta) = - \sum_{t=1}^{T} \log P(y_t | y_{<t}, x; \theta)
\end{eqnarray*}
where $T$ is the length of the sequence, $y_t$ is the target token at time step $t$, $y_{<t}$ are the tokens before time step $t$, $x$ is the input sequence, and $\theta$ represents the model parameters.
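For concreteness, this objective is the summed token-level cross-entropy; a minimal PyTorch sketch (the tensor shapes are assumptions) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def sequence_nll(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # logits: (T, V) next-token scores for each step; targets: (T,) token ids y_t
    # cross_entropy with reduction="sum" computes -sum_t log P(y_t | y_<t, x)
    return F.cross_entropy(logits, targets, reduction="sum")
\end{verbatim}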
\subsubsection{Gpipe}
In Gpipe~\cite{huang2019gpipe}, the model is split into sub-models, each assigned to a different GPU. Each mini-batch is further split into micro-batches that flow through the pipeline, passing activations and gradients between GPUs and enabling pipeline parallelism~\cite{dally1996pipeline}. The training process in Gpipe can be expressed as the following equation:
\begin{eqnarray*}
\Delta W_{i,j}=\eta\sum_{k=1}^{K}(\nabla_{W_{i,j}}L(f^{i,j}(x_{k}^{i,j}),y_{k}^{i,j})+\sum_{l=j+1}^{M}\nabla_{W_{i,l}}L(f^{i,l}(x_{k}^{i,l}),y_{k}^{i,l}))
\end{eqnarray*}
where $W_{i,j}$ denotes the weight parameter of the $j$th layer of the $i$th sub-model, $\Delta W_{i,j}$ denotes the corresponding parameter update, $\eta$ denotes the learning rate, $K$ denotes the number of micro-batches, $f^{i,j}$ denotes the forward propagation function of the $j$th layer of the $i$th sub-model, $x_{k}^{i,j}$ denotes the $k$th micro-batch of the $j$th layer in the $i$th sub-model, and $y_{k}^{i,j}$ denotes the label of the $k$th micro-batch.
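The gradient-accumulation part of this update, summing over the $K$ micro-batches before a single parameter step, can be sketched as follows; \texttt{model}, \texttt{criterion}, \texttt{optimizer}, and \texttt{micro\_batches} are assumed to be defined elsewhere.
\begin{verbatim}
# Accumulate gradients over the K micro-batches of one mini-batch, then apply
# a single update: the sum over k in the equation above, scaled by eta in step().
optimizer.zero_grad()
for x_k, y_k in micro_batches:
    loss = criterion(model(x_k), y_k)   # L(f(x_k), y_k)
    loss.backward()                     # gradients sum across micro-batches
optimizer.step()
\end{verbatim}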
\subsection{Graph Neural Networks}
Graph Neural Networks (GNNs)~\cite{scarselli2008graph,zhou2018graph,zhang2018scalable,bui2021structural,hamilton2017inductive} are a type of neural network designed to work on graph-structured data, where nodes represent entities and edges represent relationships between them. They have become popular in recent years due to their ability to capture complex relationships and patterns in data, making them useful for tasks such as node classification, link prediction, and graph classification.
\subsection{Graph Convolutional Networks}
Graph Convolutional Networks (GCNs)~\cite{kipf2017semi} are a type of deep learning model designed to work on graph-structured data. They use convolutional operations to aggregate information from neighboring nodes and update node representations. The key formulas for GCNs include the graph convolution operation, which calculates the node representation updates, and the graph pooling operation, which aggregates information across multiple nodes.
\begin{equation}
\mathbf{v}^{(l+1)} = \sigma\left(\sum_{u \in \mathcal{N}(v)}\frac{1}{c_{u,v}} W^{(l)}\mathbf{u}^{(l)}\right)
\label{eq:one}
\end{equation}
where $\mathbf{v}^{(l)}$ represents the feature representation of node $v$ at layer $l$, $\mathcal{N}(v)$ denotes the set of neighbors of node $v$, $W^{(l)}$ is the weight matrix at layer $l$, $\sigma$ is the activation function, and $c_{u,v}$ is a normalization factor that depends on the number of neighbors of node $u$ and $v$. This formula is used to iteratively compute the feature representations of nodes in a graph using neighborhood information.
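For reference, a minimal dense PyTorch implementation of this propagation rule might look as follows; this is an illustrative sketch using the common symmetric normalization $c_{u,v}=\sqrt{d_u d_v}$, not necessarily the exact layer used in our system.
\begin{verbatim}
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution as in Eq. (1), with c_{u,v} = sqrt(d_u * d_v)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # weight matrix W^{(l)}

    def forward(self, X: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # X: (num_nodes, in_dim) features; A: (num_nodes, num_nodes) adjacency
        deg = A.sum(dim=1).clamp(min=1)                   # node degrees
        d_inv_sqrt = deg.pow(-0.5)
        A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # normalized adjacency
        return torch.relu(A_hat @ self.W(X))              # sigma = ReLU
\end{verbatim}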
\section{Data Representation}
To better address the issues raised in Section~\ref{intro}, it is important to select an appropriate data structure to represent the system parameters. We adopt a graph-based data structure, with each node (denoted as $v$) representing a machine in a different region. Each node has unique features that include its geographic location, computational capacity, and GPU memory. The edges (denoted as $e$) between nodes denote the possibility of communication between the two connected machines, with the weight of each edge representing the time in milliseconds required to transmit a 64-byte message.
As depicted in Figure \ref{Data}, we randomly selected eight machines to construct a graph, where the edge weight represents the communication time, and the node features are embedded in the corresponding vector space.
\begin{figure}[h]
\centering
\includegraphics[scale=0.58]{DataP.png}
\caption{In this figure, the graph topology is visualized on the left, while the characteristics of each node are indicated on the right. Computing power is determined from Nvidia's official website\protect\footnotemark, and memory refers to the total memory across all GPUs on each machine.}\label{Data}
\end{figure}
\footnotetext{\url{https://developer.nvidia.com/cuda-gpus}}
For example, node 0 can be represented as $v_{0}=\left \{'Beijing',8.6,152\right \} $. Then we embed the node information using the following formula:
\begin{equation}
\mathbf{v}^{(0)}=\mathbf{x}_{v}
\label{eq:two}
\end{equation}
where $\mathbf{v}^{(0)}$ denotes the initial feature vector of node $v$ and $\mathbf{x}_{v}$ denotes the input feature vector of node $v$.
We represent the edges between nodes by an adjacency matrix, in which the weight of an edge equals the communication time between the two corresponding nodes. The entries for unconnected pairs are set to 0, and the diagonal values of this matrix are all 0. We then embed the edge information with the following equation:
\begin{equation}
e_{vu} = g\left(\mathbf{e}_{vu}, \mathbf{u}, \mathbf{v}, \mathbf{\Theta}_e\right)
\label{eq:edge}
\end{equation}
where $e_{vu}$ denotes the edge feature between node $v$ and node $u$, $\mathbf{e}_{vu}$ is the feature vector of edge $vu$, $\mathbf{u}$ and $\mathbf{v}$ are the feature vectors of node $u$ and node $v$, respectively, $g$ is a learnable function and $\mathbf{\Theta}_e$ is its argument. We then sparsely label this subgraph to enable the neural network to learn the contents of the graph in a supervised manner.
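Concretely, the graph, its weighted adjacency matrix, and the initial node embeddings of Eq.~(\ref{eq:two}) can be assembled with networkx; the second machine and its feature values below are illustrative assumptions, and the city is encoded as a hypothetical integer id.
\begin{verbatim}
import networkx as nx
import numpy as np

# Node features: (city id, compute capability, total GPU memory in GB)
G = nx.Graph()
G.add_node(0, features=[0, 8.6, 152])   # v_0 = {'Beijing', 8.6, 152}
G.add_node(1, features=[1, 7.0, 96])    # assumed second machine
G.add_edge(0, 1, weight=89.1)           # ms per 64-byte message

# Adjacency matrix: entry (u, v) is the communication time, 0 if unconnected;
# the diagonal is 0 by construction.
A = nx.to_numpy_array(G, weight="weight")
X = np.array([G.nodes[v]["features"] for v in G])  # initial embeddings v^{(0)} = x_v
\end{verbatim}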
\section{Methods}\label{Methods}
The typical tasks of graph neural networks, such as node classification, do not utilize edge information and only leverage the graph topology. In real-world cases, the information carried by edges is often crucial, such as edge weights and directed edges. To incorporate edge information into nodes, we aim to perform edge pooling, which involves aggregating or pooling edges of neighboring nodes at each node to create a unified node representation that contains edge information. This is expressed in the following equation:
\begin{equation}
\mathbf{v}^{(l+1)} = \sigma\left(\sum_{u \in \mathcal{N}(v)} f(\mathbf{v}^{(l)}, \mathbf{u}^{(l)}, e_{vu})\right)
\label{eq:pool}
\end{equation}
where $\mathbf{v}^{(l+1)}$ represents the feature vector of node $v$ in layer $l+1$, $\sigma$ is the activation function, $\mathcal{N}(v)$ denotes the set of neighboring nodes of node $v$, $\mathbf{u}^{(l)}$ represents the feature vector of node $u$ in layer $l$, and $f$ is a learnable function used to merge features of nodes and edges into new features of node $v$.
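A straightforward PyTorch sketch of this edge pooling, with $f$ realized as a single learnable linear map over the concatenated node and edge features (an assumption consistent with Figure \ref{Trans}), is:
\begin{verbatim}
import torch
import torch.nn as nn

class EdgePooling(nn.Module):
    """First layer (l = 0): fold edge weights into the node features."""
    def __init__(self, node_dim: int, out_dim: int):
        super().__init__()
        self.out_dim = out_dim
        # f acts on [v ; u ; e_vu]: two node vectors plus one scalar edge weight
        self.f = nn.Linear(2 * node_dim + 1, out_dim)

    def forward(self, X: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        rows = []
        for v in range(X.size(0)):
            nbrs = torch.nonzero(A[v]).flatten()   # neighbors N(v): nonzero weights
            msgs = [self.f(torch.cat([X[v], X[u], A[v, u].reshape(1)]))
                    for u in nbrs]
            rows.append(torch.stack(msgs).sum(0) if msgs
                        else X.new_zeros(self.out_dim))
        return torch.relu(torch.stack(rows))       # sigma = ReLU (an assumption)
\end{verbatim}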
\begin{figure}[]
\centering
\includegraphics[scale=0.35]{Trans.png}
\caption{The edge pooling operation applied to the graph of Figure~\ref{Data}, where $U$ represents the information of the whole graph and $f$ is the corresponding linear layer.}\label{Trans}
\end{figure}
As depicted in Figure \ref{Trans}, this is the first layer of the constructed network structure ($l=0$), which enables nodes to encode edge information.
\begin{figure}[h]
\centering
\includegraphics[scale=0.40]{GCN.png}
\caption{The transformed graph data are entered into GCNs for forward propagation. }\label{GCNs}
\end{figure}
After the edge features are embedded into node features, we can use the resulting transformed graph as input for a standard node classification task and train it with a graph convolutional network or graph attention network, as shown in Equation \ref{eq:one}. To build $N$-layer GCNs on top of the pooling layer, we stack layers $l=2,3,4,\dots,N+1$.
As shown in Figure \ref{GCNs}, $Y$ represents the predicted class of each node, i.e., which task the corresponding machine is suited for.
Then we calculate its loss using the cross-entropy loss function~\cite{goodfellow2016deep}:
\begin{equation}
\mathcal{L} = -\sum_{i=1}^{|\mathcal{Y}|} Y_{i}\log \hat{Y}_{i}
\label{eq:three}
\end{equation}
Here, $\mathcal{Y}$ denotes the set of all labels, $Y_{i}$ denotes the true label of node $i$, and $\hat{Y}_{i}$ denotes the predicted label of node $i$. Then back propagation is performed to update the network parameters.
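A minimal training loop tying these pieces together might read as follows; \texttt{model} (stacking the edge-pooling and GCN layers sketched above), the sparse \texttt{labels}, and the boolean \texttt{train\_mask} are assumed to be defined.
\begin{verbatim}
import torch
import torch.nn.functional as F

# Assumed setup: X and A from the Data Representation section; model(X, A)
# returns per-node logits; labels/train_mask encode the sparse supervision.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # lr as in Figure 4

for step in range(10):                                     # 10 steps, as in Figure 4
    logits = model(X, A)
    loss = F.cross_entropy(logits[train_mask], labels[train_mask])  # Eq. (5)
    optimizer.zero_grad()
    loss.backward()          # back-propagate and update the network parameters
    optimizer.step()
\end{verbatim}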
\begin{figure}[h]
\centering
\includegraphics[scale=0.50]{Train.png}
\caption{Loss and accuracy curves for 10 steps of training on this data. The GCN has 188k parameters and the learning rate is 0.01.}\label{Train}
\end{figure}
As depicted in Figure \ref{Train}, we observed that the accuracy peaked at 99\% during the sixth training step.
\section{Structure}
In this section, we build our system based on the GCNs trained in Section~\ref{Methods} and solve the problems presented in Section~\ref{intro}.
\subsection{Efficiency}
We now have two tasks to perform. The first involves training the BERT-large model~\cite{devlin2019bert}, while the second involves training the GPT-2 model~\cite{radford2019language}. As the largest GPT-2 model (1.5B parameters) is significantly larger than BERT-large (340M parameters), it is important to allocate tasks to each machine carefully. The ratio of the number of parameters in GPT-2's largest model (1.5B) to BERT-large (340M) is approximately 4.4:1. Based on this, we instruct the graph neural network to partition the nodes according to this ratio and to optimize the communication time within each group. We also need to take the memory and computing power of each machine into account.
\begin{algorithm}[H]
\caption{Task Assignments}
\label{alg1}
\begin{algorithmic}[1]
\Require Graph Data $G_{1}$, Trained Graph Neural Network $F$, Number of Tasks $N$, Minimum Memory Threshold $M_{n}$ for Each Task
\Ensure Task Assignments for Each Graph Data
\State $C \gets 0$
\If{$G_{1}$ does not meet the requirements of all tasks}
\State Abort and report an error.
\EndIf
\For {i in range(1, N)}
\State $G_{i}, G_{i+1} \gets F(G_{i})$
\State Assign the smaller graph $G_{i}$ to a task with the appropriate minimum memory threshold $M_{n}$
\If{$G_{i}$ does not meet the requirements of its assigned task}
\State $C \gets i$ and Continue
\If{$C>=1$}
\State $G_{i} \gets G_{i} + G_{C}$
\State Assign the smaller graph $G_{i}$ to a task with the appropriate minimum memory threshold $M_{n}$
\State $C \gets 0$
\EndIf
\EndIf
\If{$G_{i+1}$ does not meet the requirements of the remaining tasks}
\State Break, report the shortfall, and wait for other tasks to complete before proceeding with training.
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
We use Algorithm \ref{alg1} to schedule multiple tasks; when there is only one task, the same procedure can be used to judge whether an assignment is suitable. Because computational power, memory, and communication efficiency are all integrated into the node representations, we only need to check whether each group meets its task's requirements. A Python sketch of the procedure is given below.
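The sketch restates Algorithm \ref{alg1} under stated assumptions: \texttt{F} splits a graph into a group for the current task plus a remainder, \texttt{meets} checks a group against a task's memory threshold $M_{n}$, and \texttt{merge} joins two groups; all three are hypothetical helpers, and tasks are objects with a \texttt{nodes} slot initialized to \texttt{None}.
\begin{verbatim}
def assign_tasks(G1, F, tasks):
    """Pythonic sketch of Algorithm 1 (helpers are hypothetical)."""
    if not all(meets(G1, t) for t in tasks):
        raise RuntimeError("cluster cannot satisfy all tasks")  # abort with an error
    carry, G = None, G1                  # `carry` plays the role of C
    for task in tasks:                   # one split per task, as in the for-loop
        group, G = F(G)                  # GNN splits off the smaller graph G_i
        if carry is not None:            # C >= 1: merge the unassignable group
            group, carry = merge(group, carry), None
        if meets(group, task):
            task.nodes = group           # assign G_i to this task
        else:
            carry = group                # remember it (C <- i) and continue
        if not all(meets(G, t) for t in tasks if t.nodes is None):
            break                        # wait for running tasks before training more
    return tasks
\end{verbatim}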
\begin{figure}[h]
\centering
\includegraphics[scale=0.50]{Group.png}
\caption{The data in Figure \ref{Data} are grouped using Algorithm \ref{alg1}. The left panel shows the GPT-2 training group and the right panel the BERT-large training group.}\label{Group}
\end{figure}
Figure \ref{Group} demonstrates that even a basic graph neural network can carry out this grouping effectively, emulating the assignments a human operator would make.
\subsection{Scalability}
If we need to add one or more machines to this system, we can simply define their $\left \{City, Compute Capability, Memory\right \}$ features and connect them, with appropriate edge weights, to the existing nodes they can communicate with, as sketched below.
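In graph terms this amounts to one \texttt{add\_node} call and a few weighted \texttt{add\_edge} calls on the networkx graph representing the cluster; \texttt{CITY\_ID} is an assumed city-to-integer encoding, and the peers and latencies are illustrative assumptions.
\begin{verbatim}
# Hypothetical: register machine 45 = {Rome, 7, 384} and its measured links.
G.add_node(45, features=[CITY_ID["Rome"], 7, 384])
for peer, latency_ms in {27: 32.5, 31: 41.0}.items():  # example measurements
    G.add_edge(45, peer, weight=latency_ms)
A = nx.to_numpy_array(G, weight="weight")  # rebuild the adjacency for the GCN
\end{verbatim}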
\begin{figure}[h]
\centering
\includegraphics[scale=0.50]{Add.png}
\caption{Join the machine with id 45 and make assignments. }\label{Add}
\end{figure}
As shown in Figure \ref{Add}, the machine with id 45, $\left \{Rome, 7, 384\right \}$, was added to the Hulk system, which continues to work correctly.
\section{Experimentation and Evaluation}
In this section, we test the Hulk system using multiple deep learning tasks in real industries with 46 high-performance GPU servers.
\subsection{Experimental Setting}
We have a total of 46 servers distributed across different countries and regions, with a combined total of 368 GPUs of various models such as NVIDIA A100, NVIDIA A40, NVIDIA V100, RTX A5000, GeForce GTX 1080Ti, GeForce RTX 3090, and NVIDIA TITAN Xp. We calculated the average of 10 communication trials between these machines over a three-month period. Due to network policy restrictions in different countries, certain machines are unable to communicate with each other. For training, we adopt the hyperparameter settings provided in each model's original paper.
\subsection{Data Building}
We use the networkx~\cite{hagberg2008exploring} library to build our graph-structured data and visualize it, as shown in Figure \ref{dA}. Additionally, we need to read the adjacency matrix of this data and consider the corresponding feature embedding representation.
\begin{figure}[h]
\centering
\includegraphics[scale=0.30]{anedge.png}
\caption{The graph structure constructed from the 46 servers.}\label{dA}
\end{figure}
\subsection{Task Assignment}
The four tasks we aim to train in this system are GPT-3 (175B), T5 (11B), GPT-2 (1.5B), and BERT-large (340M).
We need to classify all nodes into four distinct classes based on their characteristics and then deploy distributed algorithms tailored to each class.
\begin{table}[h!]
\centering
\caption{Model Node Allocation}
\label{tab:model_node_allocation}
\begin{tabular}{cc}
\hline
\textbf{Model} & \textbf{Nodes} \\
\hline
GPT-3 & 0, 1, 2, 3, 4, 20, 21, 22, 23, 24, 27, 28, 29, 30, 31 \\
T5 & 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 \\
GPT-2 & 15, 16, 17, 18, 19, 25, 26, 32, 33, 34 \\
BERT-large & 35, 36, 37, 38 \\
\hline
\end{tabular}
\end{table}
As presented in Table \ref{tab:model_node_allocation}, we feed the graph data into the graph neural network trained in Section \ref{Methods} and apply Algorithm \ref{alg1} to derive the node assignments. To handle the nodes in each class, which differ in computational performance and memory, we utilize Gpipe to train each model in parallel. Depending on the computational power and memory of each node, we determine which part of the model it will handle.
\subsection{Evaluation}
To validate the performance of the Hulk system, we have chosen three commonly used distributed computing algorithms for evaluation.
\paragraph{\textbf{System A}} It utilizes all available machines for training while discarding any machine that does not have sufficient memory to accommodate the entire model. It utilizes data parallelism to distribute the batch size across multiple machines, thereby enabling simultaneous training of the model on each machine.
\paragraph{\textbf{System B}} It utilizes Gpipe for parallelism, assigning a certain layer of the model to a particular machine until the entire model is distributed across all machines.
\paragraph{\textbf{System C}} It employs tensor parallelism with Megatron-LM across the entire system, requiring all machines to be utilized for model training.
\begin{figure}[h]
\centering
\includegraphics[scale=0.47]{Result.png}
\caption{Communication time and computation time for the four models on the four systems.}\label{Result}
\end{figure}
\paragraph{Result} As shown in Figure \ref{Result}, the Hulk system greatly reduces communication time and thus the overall training time. This illustrates that Hulk effectively assigns nodes to the model they are best suited to train.
\begin{figure}[h]
\centering
\includegraphics[scale=0.60]{model.png}
\caption{Language Model Parameters}\label{model}
\end{figure}
If we need to train six models, the parameters of each model are shown in Figure \ref{model}. Among them, RoBERTa~\cite{robust} has 355M parameters and XLNet~\cite{Xlnet} has 340M.
\paragraph{Result} As illustrated in Figure \ref{Result2}, when the system needs to handle multiple tasks, the gap in communication time becomes more apparent. Our Hulk system is able to effectively reduce communication time.
\begin{figure}[]
\centering
\includegraphics[scale=0.47]{Result2.png}
\caption{Communication time and computation time for the six models on the four systems.}\label{Result2}
\end{figure}
\section{Conclusion}
In this article, we introduced Hulk, a solution that optimizes regionally distributed computer systems by tackling the challenges of scheduling distributed training across machines in different regions. Hulk leverages the representational power of graph neural networks to improve communication efficiency between GPUs or TPUs across different countries or regions during training. With its efficient communication, global optimality, fast recovery, and good scalability, Hulk is a practical tool for optimizing regionally distributed systems. The results demonstrate a significant increase in the efficiency of distributed training, which is crucial for the success of large-scale deep learning models. Overall, Hulk can streamline the model deployment process and benefit researchers and practitioners seeking to optimize communication efficiency.
\section{Introduction}
Quantum batteries (QBs)~\cite{Alicki13} ---genuinely non-classical devices capable of storing energy and performing useful work--- have been a very active topic of research in recent years~\cite{Campaioli18, Bhattacharjee21}. In this framework, theoretical studies have demonstrated that entangling operations can consistently accelerate the charging~\cite{Binder15}, with the Dicke QB standing out as a particularly promising candidate~\cite{Ferraro18, Crescente20, Dou22c, Dou22, Erdman22}. It is given by a system where the energy of photons trapped into a cavity (acting as the charger) is coherently transferred to a QB composed of $N$ quantum units, namely a collection of identical independent two-level systems (TLSs). This model describes, for example, an ensemble of real or artificial atoms embedded in a Fabry-Perot cavity, namely a resonator consisting of two parallel, highly reflective mirrors placed a certain distance apart~\cite{Dicke54}.
In this system, the energy is initially stored in the cavity and flows towards the TLSs due to a proper modulation of the matter-radiation coupling~\cite{Andolina18}. In order to favour this energy flow, the cavity and the TLSs are usually assumed to be resonant, namely characterized by the same typical energy. Once the charging process is complete, the coupling between the two systems is turned off, leaving the TLSs in a charged state whose stability crucially depends on their relaxation and dephasing times~\cite{Farina19, Santos19, Barra19, Carrega20, Morrone22, Gemme22}. This kind of device displays a collective speed-up of the charging process. This fact has attracted a great deal of interest and, very recently, a first step towards the realization of a Dicke QB has been experimentally implemented in an excitonic system made of fluorescent organic molecules dispersed in an inert matrix, where the Dicke superabsorption leads to a collective boost of the charging process~\cite{Quach22}. Despite this remarkable achievement, the investigation of this kind of device is still in its infancy and new theoretical and experimental studies could lead to promising routes towards the exploitation of purely quantum effects in miniaturized devices devoted to energy storage.
In this direction we want to investigate the role played by exchange of virtual photons between the TLSs composing the QB. A paradigmatic example of the dynamical effect of virtual photons is the Casimir force which arises from the quantum fluctuations of the electromagnetic field. It was first theoretically investigated by Hendrik Casimir in 1948~\cite{Casimir48} and experimentally proved almost fifty years later~\cite{Lamoureaux97}. This force is due to the presence of virtual photons in the vacuum and, in the case of a Fabry-Perot cavity, depends on the boundary conditions imposed by the mirrors. In particular, in the case of two parallel, uncharged, and perfect metallic plates, the virtual cavity photons are restricted in their fluctuation modes by the plates themselves. As a result, the plates experience a net attractive force.
Motivated by this, we introduce and characterize a new type of device for energy storage where the charging is mediated by virtual photons. In contrast to the conventional Dicke QB, the cavity in our proposed off-resonant Dicke QB can also be empty and the two systems (charger and QB) are far detuned in energy, preventing any direct energy flow between them. This makes the system easier to realize with respect to the conventional Dicke QB and allows us to describe it in the so-called dispersive regime~\cite{Schleich_Book}, typically used in quantum computing for qubit readout~\cite{Krantz19} and characterized by an effective infinite-range interaction among the TLSs of the Lipkin-Meshkov-Glick (LMG) kind~\cite{Lipkin65, Dou22b}. Our charging protocol works as follows: initially, both the cavity and the QB are in their ground states and decoupled. The cavity mirrors are then brought close together, leading to a finite matter-radiation coupling. Thus, the virtual detuned photons excite the TLSs and charge the QB. The mirrors are then moved away, switching off the matter-radiation coupling again and leaving the atoms in an excited state. The use of virtual photons in this process gives the QB a unique {\it quantum} behaviour. Remarkably enough, in the strong coupling regime, the performances of these off-resonant Dicke quantum batteries are comparable with the ones of the resonant case.
The article is organized as follows. In Section~\ref{Model} we consider the dispersive regime of the conventional Dicke QB with dipolar matter-radiation coupling. Under the assumption of highly energetic cavity photons, we derive an effective description of the off-resonant Dicke QB based on an LMG model characterized by an infinite range interaction among the TLSs and showing relevant simplifying conservation laws. The charging protocol based on switching on and off the matter-radiation coupling is also discussed.
The main figures of merit, namely the energy stored in the QB, the averaged charging power, and the times required to reach their maxima, are discussed in Section~\ref{Figures}. The ergotropy, namely the maximum energy which can be extracted by means of either collective or individual unitary operations, is also introduced. Section~\ref{Results} reports the results concerning the time evolution of the above quantities for different values of the matter-radiation coupling and various numbers $N$ of TLSs. The scaling of their maxima as a function of $N$ is also considered. A universal behaviour of the maximum stored energy is discussed in connection with the quantum phase transition predicted for the underlying effective model in Section~\ref{Universality}. Section~\ref{Conclusione} is devoted to the conclusions, while technical details of the calculations are reported in three Appendices.
\section{Model}
\label{Model}
Let us start by considering a conventional Dicke QB with a dipole matter-radiation coupling between $N$ identical TLSs and a resonant cavity~\cite{Ferraro18}. It is described by the Hamiltonian (from now on we will set $\hbar=1$)
\begin{eqnarray}
\hat{H}_{\rm {Dicke}}&=&\hat{H}_{B}+\hat{H}_{C}+\hat{H}_{B-C}\\
&=&\omega_z\hat{S}_z+\omega_c\hat{a}^{\dagger}\hat{a}+ 2\lambda\hat{S}_x\left(\hat{a}^{\dagger}+\hat{a}\right),
\label{Dicke}
\end{eqnarray}
with $\hat{H}_{B}$ the Hamiltonian of the QB, $\hat{H}_{C}$ the one of the cavity and $\hat{H}_{B-C}$ their coupling. In the above equation, we have introduced the collective pseudo-spin operator
\begin{equation}
\hat{S}_{\alpha}=\frac{1}{2}\sum^{N}_{i=1}\hat{\sigma}^{i}_{\alpha},
\end{equation}
with $\hat{\sigma}^{i}_{\alpha}$ ($\alpha=x,y,z$) the conventional Pauli matrices associated with the $i$-th TLS and $\hat{a}$ ($\hat{a}^{\dagger}$) the annihilation (creation) operator for the photons in the cavity. Here, $\omega_{z}$ is the level spacing of each TLS, $\omega_{c}$ is the energy of the photons, and $\lambda$ is the intensity of the matter-radiation coupling. Notice that this configuration can be effectively engineered in conventional cavity~\cite{Walther06} and circuit quantum electrodynamics~\cite{Blais04} set-ups. In the latter case, the TLSs can be realized by means of superconducting qubits coupled to an LC circuit playing the role of a resonant cavity~\cite{Krantz19}.
According to the analysis carried out in Appendix~\ref{AppA}, applying the Schrieffer–Wolff transformation~\cite{Schleich_Book} up to second order in $\lambda$, one obtains the effective Hamiltonian
\begin{equation}
\hat{H}_{\rm eff}=\omega_z \hat{S}_z+\omega_c \hat{a}^{\dagger}\hat{a}-\frac{2\lambda^2\omega_z}{\omega_c^2-\omega_z^2}\hat{S}_z\left(\hat{a}^{\dagger}+\hat{a}\right)^2-\frac{4\lambda^2\omega_c}{\omega_c^2-\omega_z^2}\hat{S}_x^2,
\end{equation}
which is a good approximation of the Dicke Hamiltonian in Eq.~(\ref{Dicke}) in the dispersive regime $\lambda\ll|\omega_{c}-\omega_{z}|$~\cite{Schleich_Book}. Notice that, in this regime, the possibility to increase the energy stored into a QB through virtual excitations mediated by a higher excited state has been recently discussed~\cite{Santos23}.
Further considering the condition of highly energetic photons, $\omega_{c}\gg \omega_{z}$, one finally obtains~\cite{Roche22}
\begin{equation}
\hat{H}'_{\rm eff}=\omega_z \hat{S}_z+\omega_c \hat{a}^{\dagger}\hat{a}-\frac{4\lambda^2}{\omega_c}\hat{S}_x^2.
\label{H_1ph_primed}
\end{equation}
Getting rid of the cavity contribution, it is possible to map Eq.~(\ref{H_1ph_primed}) into
\begin{eqnarray}
\hat{H}&=&\hat{H}_{B}+\hat{H}_{\rm int}\\
&=&\omega_{z} \hat{S}_z-g\hat{S}_x^2,
\label{H}
\end{eqnarray}
with
\begin{equation}
g=\frac{4\lambda^2}{\omega_c}.
\end{equation}
Notice that, according to this mapping, only positive values of the parameter $g$ are physically meaningful. Moreover, as shown in Appendix~\ref{AppA}, even a more exotic two-photon matter-radiation coupling~\cite{Felicetti15, Felicetti18} leads to an analogous effective Hamiltonian.
The Hamiltonian in Eq.~(\ref{H}) is characterized by an effective infinite range interaction between the TLSs of the LMG kind~\cite{Lipkin65, Dou22b, Abah22} mediated by virtual photons playing the role of a quantum bus~\cite{Larson10}. Such kind of infinite range interaction has been indicated as an ideal condition to achieve high power QBs~\cite{Campaioli17, Le18, Gyhm22}.
In the following, we will assume a charging protocol addressing a time-dependent version of Eq.~(\ref{H})
\begin{equation}
\hat{H}(t) = \hat{H}_{B} + f(t) \hat{H}_{\rm int} ~,
\label{H_t}
\end{equation}
where $f(t)$ is a classical external control parameter introduced to turn on and off the effective interaction among the TLSs in Eq.~(\ref{H}). This can ultimately be seen as a direct consequence of the modulation of the matter-radiation coupling in the original Dicke model in Eq.~(\ref{Dicke}), obtained by bringing the cavity mirrors close together. For the sake of simplicity, we will assume this control function to be step-wise: equal to $1$ for $t\in[0,\tau_{c}]$, with $\tau_{c}$ a given time which will be specified in the following, and zero elsewhere. It is worth noting that smoother charging protocols are expected to lead only to marginal changes in the behaviour discussed in the following~\cite{Crescente22}. Generally speaking, the possibility to optimize the time evolution of the charging protocol has recently been the object of intense research for both the Dicke QB~\cite{Erdman22} and other QB technologies~\cite{Hu21, Rodriguez23}. Moreover, we underline that, differently from what is usually considered~\cite{Campaioli18,Bhattacharjee21} and in the same spirit of Refs.~\cite{Rosa20, Rossini20}, here the charger is encoded in the effective interaction among the TLSs composing the QB and not in some external classical or quantum device.
\subsection{Conserved quantities}
In order to investigate the behaviour of the QB for a fixed number $N$ of TLSs, a full numerical exact diagonalization is needed. To considerably simplify this task, some symmetries can be exploited.
First of all, the operator
\begin{equation}
\hat{S}^2=\hat{S}^2_x+\hat{S}^2_y+\hat{S}^2_z
\end{equation}
is a conserved quantity, namely $\left[\hat{H}(t),\hat{S}^2\right]=0$. This allows us to work in the $(N+1)$-dimensional subspace with maximal total spin $s=N/2$, where all operators reduce to $(N+1)\times (N+1)$ matrices, and to characterize the states of the system in terms of a pseudo-spin basis $|s,m\rangle$ such that:
\begin{eqnarray}
\hat{S}^2|s,m\rangle&=&s(s+1)|s,m\rangle,\\
\hat{S}_z|s,m\rangle&=&m|s,m\rangle,
\end{eqnarray}
with $m=-s, -s+1, \dots, s-1, s$.
Moreover, the spin parity operator
\begin{equation}
\hat{\Pi}=e^{-i\pi\hat{S}_z}
\end{equation}
satisfies $\left[\hat{H},\hat{\Pi}\right]=0$, leading to a further reduction of the Hilbert space dimension for proper initial conditions of the QB. According to this, in the following, we will investigate the charging of the QB starting from the ground state of $\hat{H}_{B}$
\begin{equation}
|\psi(0)\rangle=|\frac{N}{2}, -\frac{N}{2}\rangle.
\end{equation}
It is worth noticing that, according to this parity constraint, the perfect charging of the QB, namely the complete transition to the maximally charged state
\begin{equation}
|\psi_{max}\rangle=|\frac{N}{2}, +\frac{N}{2}\rangle
\end{equation}
can only occur for an even number $N$ of TLSs. However, as will be clear in the following, the efficiency of this kind of QB is lower than $100\%$; therefore, even and odd numbers of TLSs can be investigated on an equal footing.
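For illustrative purposes, the construction of the pseudo-spin operators and the exact diagonalization of the effective Hamiltonian in Eq.~(\ref{H}) can be sketched numerically as follows (a minimal Python sketch under the conventions above; function names are ours, and this is not the code used to produce the figures):
\begin{verbatim}
import numpy as np

def collective_spin_ops(N):
    # S_z and S_x in the maximal-spin sector s = N/2,
    # basis |s,m> with m = -s, ..., +s (dimension N + 1)
    s = N / 2
    m = np.arange(-s, s + 1)
    Sz = np.diag(m)
    # S_+|s,m> = sqrt(s(s+1) - m(m+1)) |s,m+1> (subdiagonal here)
    cp = np.sqrt(s * (s + 1) - m[:-1] * (m[:-1] + 1))
    Sp = np.diag(cp, k=-1)
    Sx = (Sp + Sp.T) / 2
    return Sz, Sx

def lmg_hamiltonian(N, g, wz=1.0):
    # effective LMG Hamiltonian H = wz*S_z - g*S_x^2 of Eq. (H)
    Sz, Sx = collective_spin_ops(N)
    return wz * Sz - g * (Sx @ Sx), Sz

H, _ = lmg_hamiltonian(N=30, g=1.0)
print(np.linalg.eigvalsh(H)[:3])   # lowest part of the spectrum
\end{verbatim}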
\section{Figures of merit}
\label{Figures}
We now introduce relevant quantities that characterize the performance of the considered off-resonant Dicke QB (with $\omega_{c}\gg \omega_{z}$). In the following, we will focus on the energy stored in the QB itself, the time required to reach the optimal charging, the energy trapped in the interaction term, the averaged charging power, and the ergotropy, namely the maximum amount of work extractable from a charged QB using either collective or local unitary operations~\cite{Alicki13, Allahverdyan04}.
\subsection{Stored energy and charging time}
First of all, we consider the energy stored into the QB. At a given time $t$ it is given by~\cite{Ferraro18, Andolina18}
\begin{equation}
E_{B}(t)=\langle \psi(t) |\hat{H}_{B}| \psi(t) \rangle-\langle \psi(0) |\hat{H}_{B}| \psi(0) \rangle,
\label{E_B}
\end{equation}
with $|\psi(t) \rangle$ the quantum state of the system for $t\leq \tau_{c}$, evolved according to $\hat{H}(t)$.
The charging time, namely the time $t_{E}$ at which the quantity in Eq.~(\ref{E_B}) reaches its first maximum $E_{max}$, will also be investigated in the following. Of particular relevance will be the scaling of both
$E_{max}$ and $t_{E}$ as a function of the number $N$ of TLSs composing the QB for different values of the effective coupling $g$.
It is worth underlining that the system under investigation is considered closed. Under this approximation, which holds as long as the time evolution is shorter than the typical time scale associated with relaxation and dephasing of the TLSs~\cite{Dou22c, Carrega20}, one expects that a net amount of energy is injected into the QB via the modulation of the external control $f(t)$.
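As an illustration of how Eq.~(\ref{E_B}) can be evaluated under the step-wise protocol (for $t\le\tau_{c}$), a minimal sketch building on the operators constructed in the previous sketch could read:
\begin{verbatim}
def stored_energy(N, g, wz=1.0, times=np.linspace(0.0, 20.0, 4000)):
    # E_B(t) of Eq. (E_B), starting from the ground state |N/2, -N/2>
    H, Sz = lmg_hamiltonian(N, g, wz)
    evals, V = np.linalg.eigh(H)
    psi0 = np.zeros(N + 1)
    psi0[0] = 1.0                    # |N/2, -N/2> in our ordering
    c0 = V.T @ psi0                  # H is real symmetric
    E0 = -wz * N / 2                 # <psi(0)|H_B|psi(0)>
    EB = np.empty(times.size)
    for i, t in enumerate(times):
        psit = V @ (np.exp(-1j * evals * t) * c0)
        EB[i] = wz * np.real(psit.conj() @ Sz @ psit) - E0
    return times, EB

ts, EB = stored_energy(N=30, g=1e-3)
tE = ts[EB.argmax()]                 # charging time on this grid
\end{verbatim}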
\subsection{Averaged charging power}
Another relevant figure of merit is the averaged charging power, namely the energy stored per unit time, defined as~\cite{Binder15, Campaioli17}
\begin{equation}
P(t)=\frac{E_{B}(t)}{t}.
\end{equation}
In analogy with what was done above, we are interested in the scaling of the maximum charging power $P_{max}$ and of the time $t_{P}$ required to reach it as a function of $N$ and for various values of the effective coupling $g$.
\subsection{Ergotropy}
Here we introduce the ergotropy, namely the maximum work extractable from a QB~\cite{Allahverdyan04}. We will focus both on the total ergotropy and on the one associated with a single TLS composing the QB. On very general grounds, for a Hamiltonian $\hat{\mathcal H}$ and a time-dependent density matrix $\hat{\rho}(t)$ describing the quantum state at a given time $t$, it is possible to define the quantity
\begin{equation}\label{eq:ergotropy2}
\mathcal{E}\left(\hat{\rho}(t),\hat{\cal H}\right)={\rm tr}[\hat{\mathcal{H}} \hat{\rho}(t)]-{\rm tr}[\hat{\mathcal{H}} \hat{\pi}_{\rho}(t)]\;,
\end{equation}
with $\hat{\pi}_{\rho}(t)$ the passive state associated with $\hat{\rho}(t)$ at a given time. Given the diagonal representation of the density matrix
\begin{equation}
\hat{\rho}(t)=\sum_n r_n(t) |r_n(t)\rangle\langle r_n(t)|,
\end{equation}
with $r_0(t)\geq r_1(t) \geq \cdots$, the associated passive state is diagonal on the eigenbasis of $\hat{\mathcal{H}}$ ($|\epsilon_n\rangle$) and reads
\begin{equation}
\hat{\pi}_{\rho}(t)=\sum_n r_n(t) |\epsilon_n\rangle\langle\epsilon_n|,
\end{equation}
with the energy eigenstates ordered in such a way that $\epsilon_0\leq \epsilon_1 \leq \cdots$. This finally leads to the general definition
\begin{equation}
\mathcal{E}\left(\hat{\rho}(t),\hat{\cal H}\right)=\sum_{k,n} r_{k}(t)\epsilon_{n}\left(|\langle r_{k}(t)|\epsilon_{n}\rangle|^{2}-\delta_{k,n} \right),
\end{equation}
which can be directly applied to our particular case considering
\begin{equation}
\hat{\rho}(t)=|\psi(t)\rangle \langle \psi(t)|
\end{equation}
and $\hat{\mathcal{H}}=\hat{H}_{B}$.
Due to the fact that we are considering only unitary operations acting on a closed system, the ergotropy defined above coincides with the energy stored into the QB~\cite{Alicki13}.
Concerning the ergotropy of a single TLS of the QB in Eq.~(\ref{Dicke}), one can proceed as follows. The $2\times 2$ reduced density matrix describing the first TLS at a given instant is indicated by $\hat{\rho}_{{B},1}$, while the energy is measured with respect to the Hamiltonian
\begin{equation}
\hat{h}^1_{B}=\frac{\omega_{z}}{2}\hat{\sigma}^{1}_{z}.
\end{equation}
Notice that, due to the invariance under TLSs permutations of the Hamiltonian in Eq.~(\ref{H}), we can focus on the first TLS without any loss of generality. Thus, the maximum energy that can be extracted from a single battery unit in an $N$ TLSs device is given by
\begin{eqnarray}
\mathcal{E}_{1}^{(N)}(t) \equiv \mathcal{E}(\hat{\rho}_{{B},1}(t),\hat{h}^1_{B})~. \label{ERGO}
\end{eqnarray}
Consistently with what is done above, this expression can be further simplified by writing $\hat{\rho}_{{B},1}(t)$ in the diagonal basis,
%
\begin{eqnarray}
\hat{\rho}_{{B},1}(t)= r_{0}(t)|r_0(t)\rangle\langle r_0(t)|+r_{1}(t)|r_1(t)\rangle\langle r_1(t)|~, \label{rho_diag}
\end{eqnarray}
where the eigenvalues are again ordered in such a way that $r_0(t)\geq r_1(t)$. In this case, the ergotropy $\mathcal{E}_{1}^{(N)}(t)$ reduces to
\begin{eqnarray}
\mathcal{E}_{1}^{(N)}(t) =\frac{E^{(N)}(t)}{N}-r_{1}(t)\omega_{z}~, \label{ERGOS}
\end{eqnarray}
where we used the fact that ${\rm tr}[\hat{\rho}_{{B},1}(t)\hat{h}^{1}_{B}]={E^{(N)}(t)}/{N}$, with $E^{(N)}(t)={\rm tr}[\hat{H}_{B} \hat{\rho}(t)]$, due to permutation symmetry. We have evaluated the above expression numerically; further details about this analysis can be found in Ref.~\cite{Erdman22}.
We conclude this Section by pointing out that for a fixed number $N$ of TLSs, in general,
\begin{equation}
\mathcal{E}(t)\neq N\mathcal{E}^{(N)}_{1}(t).
\end{equation}
This is a consequence of the fact that local unitary operations acting on a single TLS are not enough to extract all the energy stored in the QB: non-local unitary operations collectively acting on the whole QB are needed in order to also extract the energy trapped in the correlations among the elementary building blocks composing the device~\cite{Alicki13}.
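A minimal numerical sketch of these definitions (assuming the operators of the previous sketches; the permutation-symmetry relation $\langle\hat{\sigma}^{1}_{\alpha}\rangle=2\langle\hat{S}_{\alpha}\rangle/N$ is used to build the reduced state) could read:
\begin{verbatim}
def ergotropy(rho, H):
    # E(rho, H) = tr[H rho] - tr[H pi_rho]: populations sorted
    # decreasingly against energies sorted increasingly
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]
    eps = np.sort(np.linalg.eigvalsh(H))
    return np.real(np.trace(H @ rho)) - r @ eps

def single_tls_ergotropy(psi, N, wz=1.0):
    # Eq. (ERGO): permutation symmetry gives <sigma_a> = 2<S_a>/N,
    # which fixes the 2x2 reduced state rho_{B,1}
    s = N / 2
    m = np.arange(-s, s + 1)
    cp = np.sqrt(s * (s + 1) - m[:-1] * (m[:-1] + 1))
    Sp = np.diag(cp, k=-1)
    Sx, Sy, Sz = (Sp + Sp.T) / 2, (Sp - Sp.T) / 2j, np.diag(m)
    nx, ny, nz = (2 * np.real(psi.conj() @ S @ psi) / N
                  for S in (Sx, Sy, Sz))
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    rho1 = 0.5 * (np.eye(2) + nx * sx + ny * sy + nz * sz)
    return ergotropy(rho1, wz * sz / 2)   # h^1_B = (wz/2) sigma_z
\end{verbatim}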
\section{Results and scaling laws}
\label{Results}
In the following, we will investigate in detail the behaviour of the previously discussed figures of merit for different values of the coupling $g$ and as a function of the number $N$ of TLSs composing the off-resonant Dicke QB.
\subsection{Weak coupling}
Let us start by considering the regime $g\ll\omega_{z}$. The time evolution of the energy stored in the QB and the averaged charging power are reported in Fig.~\ref{Fig1}.
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{figures/fig1.pdf}
\caption{(a). Energy stored into the QB $E_{B}(t)$ (in units of $N\omega_{z}$) and single TLS ergotropy $\mathcal{E}_{1}(t)$ (in units of $\omega_{z}$) as a function of time (in units of $\omega^{-1}_{z}$) for $N=30$. (b). Averaged charging power $P(t)$ (in units of $N\omega^{2}_{z}$) as a function of time (in units of $\omega^{-1}_{z}$) for $N=30$. Other parameters are $g=10^{-3}\omega_z$ and $\omega_{z}\tau_{c}>2\pi$. Black curves represent the theoretical curves obtained through the time-dependent perturbative expansions (indicated with the superscript $p$ in the labels) in Eqs.~(\ref{E_small}) and (\ref{P_small}).}
\label{Fig1}
\end{figure}
This behaviour can be understood by considering the analytical expressions for both the above quantities, obtained using a time-dependent perturbative approach (see Appendix \ref{AppB} for more details about the derivation). This leads to
\begin{equation}
E_{B}(t)\approx\frac{g^2}{8}\frac{1-\cos{2\omega_z t}}{\omega_z}\left(N^2-N\right),
\label{E_small}
\end{equation}
and
\begin{equation}
P(t)\approx\frac{g^2}{8}\frac{1-\cos{2\omega_z t}}{\omega_z t}\left(N^2-N\right).
\label{P_small}
\end{equation}
These approximated expressions correspond to the black dashed curves reported in Fig.~\ref{Fig1}. For the reported coupling $g=10^{-3}\omega_{z}$, they show an excellent agreement with the numerical curves up to $N\approx 30$. This further validates the correctness of the considered exact diagonalization and supports the observation that only one relevant frequency governs the state dynamics in this regime, leading to very smooth curves with no beats. It is worth noting that the amount of energy stored in this regime is extremely small (see the scale in Fig.~\ref{Fig1}). As a consequence, the ergotropy of a single TLS is zero (dashed orange curve in Fig.~\ref{Fig1}), namely no energy can be extracted from a single TLS using local unitary operations.
Starting from Eq.~(\ref{E_small}) one can obtain the values of the maximal stored energy and of the charging time associated with the first maximum, namely
\begin{eqnarray}
E_{max}&=&\frac{g^2}{4\omega_z}\left(N^2-N\right),\\
\omega_{z} t_{E}&=&\frac{\pi}{2},
\end{eqnarray}
respectively. Analogously for the averaged charging power in Eq.~(\ref{P_small}) one has
\begin{eqnarray}
P_{max}&\approx&\frac{g^2}{4.46}\left(N^2-N\right),\\
\omega_{z}t_{P}&\approx& 1.16,
\end{eqnarray}
where the values in this case need to be extracted numerically. The validity of the scalings of the stored energy and charging power as a function of the number of TLSs is checked in Fig.~\ref{Fig2}. In this framework, we underline that throughout the paper we consider the scaling at a finite number of TLSs, without addressing the thermodynamic limit, differently from what is discussed, for example, in Ref.~\cite{Julia-Farre20}. This is justified by the fact that, for example, circuit-quantum-electrodynamics devices are usually far from the thermodynamic limit~\cite{Fink09}.
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{figures/fig2.pdf}
\caption{(a). Maximum stored energy $E_{max}$ (in units of $N\omega_{z}$) and (b) maximum averaged charging power $P_{max}$ (in units of $N\omega^{2}_{z}$) as a function of the number of TLSs $N$. Other parameters are $g=10^{-3}\omega_z$, $\tau_{c}=t_{E}$ in (a) and $\tau_{c}=t_{P}$ in (b). Red curves are the fits of the numerical points according to: $E_{max}/N\omega_z=a N^{b}$ and $P_{max}/N\omega^{2}_z=aN^{b}$. The values of the fitting parameters are indicated in the labels of each panel.}
\label{Fig2}
\end{figure}
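A possible way to extract such power-law fits numerically (a sketch under our assumed grids, which assumes the {\tt stored\_energy} helper defined above and is not the fitting procedure actually used for Fig.~\ref{Fig2}) is:
\begin{verbatim}
# power-law fits E_max ~ a*N^b in the weak coupling regime (cf. Fig. 2)
Ns = np.arange(6, 31, 2)
Emax = np.array([stored_energy(N, g=1e-3)[1].max() for N in Ns])
b, log_a = np.polyfit(np.log(Ns), np.log(Emax), 1)
print(f"fitted exponent b = {b:.2f}")   # Eq. (E_small) predicts b -> 2
\end{verbatim}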
\subsection{Strong coupling}
We investigate here the strong coupling regime, namely a situation in which $g$ is of the same order as $\omega_{z}$ \cite{Giannelli22}. Even if difficult to realize in the dispersive Dicke model with dipolar coupling, this situation could be reached by exploiting more exotic matter-radiation couplings, such as the two-photon interaction, and by properly changing the initial photon distribution in the cavity (see Appendix \ref{AppA} for more details). The time evolution of the figures of merit, reported in Fig.~\ref{Fig3}, appears again quite regular at small $N$ but shows a complicated beating structure at greater values of $N$. Notice that, in this regime, the first maximum of the energy stored into the QB is only a local maximum, with higher values of the energy possibly appearing at later times. However, we focus on it due to the fact that it is characterized by a greater value of the power (see Fig.~\ref{Fig3} (d)). In this regime, the charging of the QB can exceed $50\%$ near the maxima. At these points, one observes the emergence of a non-zero single TLS ergotropy (see the dashed orange curves in Fig.~\ref{Fig3} (a) and (c)), the signature of the possibility to extract energy from the QB also by using local unitary operations.
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{figures/fig3.pdf}
\caption{Energy stored into the QB $E_{B}(t)$ (in units of $N\omega_{z}$) and single TLS ergotropy $\mathcal{E}_{1}(t)$ (in units of $\omega_{z}$) as a function of time (in units of $\omega^{-1}_{z}$) for $N=4$ (a) and $N=30$ (c). Averaged charging power $P(t)$ (in units of $N\omega^{2}_{z}$) as a function of time (in units of $\omega^{-1}_{z}$) for $N=4$ (b) and $N=30$ (d). Other parameters are $g=\omega_z$ and $\omega_{z}\tau_{c}>2\pi$.}
\label{Fig3}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{figures/fig4.pdf}
\caption{Maximum stored energy $E_{max}$ (in units of $N\omega_{z}$) (a) and corresponding charging time $\omega_{z}t_{E}$ (c) as a function of the number of TLSs $N$. Maximum averaged charging power $P_{\max}$ (in units of $N\omega^{2}_{z}$) (b) and corresponding charging time $\omega_{z}t_{P}$ (d) as a function of the number of TLSs $N$. Other parameters are $g=\omega_z$, $\tau_{c}=t_{E}$ for each $N$ in (a,c) and $\tau_{c}=t_{P}$ for each $N$ in (b,d). Red curves are the fits of the numerical points according to: $E_{max}/N\omega_z=aN^{b}$ (a),
$P_{max}/N\omega^{2}_z=aN^{b}$ (b), $\omega_{z}t_{E}=aN^{b}+c$ (c), and $\omega_{z}t_{P}=aN^{b}+c$ (d). The values of the fitting parameters are indicated in the labels of each panel.}
\label{Fig4}
\end{figure}
According to the full numerical analysis reported in Fig.~\ref{Fig4} it is possible to deduce the scaling behaviour of the various figures of merit in the strong coupling regime, namely
\begin{eqnarray}
E_{max}&\propto& N,\\
\omega_{z}t_{E}&\propto&N^{-\frac{1}{2}},\\
P_{max}&\propto& N^{\frac{3}{2}},\\
\omega_{z}t_{P}&\propto&N^{-\frac{1}{2}}.
\end{eqnarray}
Notice that these are reminiscent of the ones reported for the Dicke model with conventional dipole matter-radiation coupling~\cite{Ferraro18}. This can be justified by considering the fact that in this case the expectation value of the operator $\hat{S}_{x}$ over both the ground and the excited states of the system becomes extensively large, similarly to what happens for both $\hat{S}_{x}$ and the operator $(\hat{a}^{\dagger}+\hat{a})$ in the original Dicke model in Eq.~(\ref{Dicke}) after the superradiant quantum phase transition~\cite{Emary03}.
\section{Considerations about universality}
\label{Universality}
The previous analysis showed two drastically different scaling behaviours of the considered quantities in the two coupling regimes, with the weak coupling regime showing a better scaling with $N$ and the strong coupling regime being more promising in terms of the absolute values of the energy storage, charging power, and ergotropy. These are summarized in Table~\ref{Table1}. Notice that, for intermediate values of the coupling, a crossover between the two behaviours emerges (not shown).
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|c|c|c}
Coupling regime & $E_{max}$ & $P_{max}$ & $t_{E}, t_{P}$\\
\hline
Weak coupling & $N^{2}$ & $N^{2}$ & Constant\\
Strong coupling & $N$ & $N^{\frac{3}{2}}$& $N^{-\frac{1}{2}}$\\
\end{tabular}
\end{center}
\caption{Scaling behaviour for some figures of merit in the weak and strong coupling regimes.}
\label{Table1}
\end{table}
This change of behaviour can be seen as a consequence of the Excited State Quantum Phase Transition (ESQPT) shown by both the Dicke and the effective LMG model (see Ref.~\cite{Cejnar21} for a review on the subject). This represents the footprint of a quantum phase transition at the level of the excited states of the system. Its role in the present QB is justified by the fact that the device is initialized in the ground state and charged by means of a protocol which naturally involves the excited states. In order to better clarify this point, we can consider the evolution of $E_{max}$ as a function of the renormalized coupling $G=gN$ (see Fig.~\ref{Fig5}). The choice of this renormalized parameter to characterize the evolution of the system can be justified by considering the classical limit of the effective LMG model (see Appendix~\ref{AppC} for more details) and is reminiscent of the renormalization $\Lambda=\lambda \sqrt{N}$ observed for the Dicke model~\cite{Ferraro18}. Notice that the emergence of a crossover behaviour for the maximum stored energy is motivated by the fact that it is closely related to the order parameter of the system, namely the total magnetization, which at a given time $t$ can be written as
\begin{equation}
\mathcal{M}(t)=\langle \psi(t) | \hat{S}_{z} |\psi(t)\rangle= \frac{E_{B}(t)}{\omega_{z}}+\langle \psi(0) | \hat{S}_{z} |\psi(0)\rangle.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/fig5.pdf}
\caption{Behaviour of $E_{max}$ (in units of $N\omega_{z}$) as a function of the rescaled coupling $G=gN$ (in units of $\omega_{z}$) for various values of $g$ (a) and its zoom around the crossover at $G=\omega_z$ (b). Values of the coupling are reported in the color scale of the legend, while we have assumed $\tau_{c}=t_{E}$ for each value of $N$.}
\label{Fig5}
\end{figure}
Two different regimes can now be observed, for $G<\omega_z$ and $G>\omega_z$ respectively, with a crossover at the ESQPT of the model, occurring at $G=\omega_z$. Incidentally, by increasing the value of $G$ (Fig.~\ref{Fig5} (b)), the maximum energy per TLS reachable in the considered QB can exceed $60 \%$. Remarkably enough, this value is compatible with what is observed for the Dicke model in the strong coupling regime and under resonant conditions~\cite{Ferraro18}. This makes the present off-resonant Dicke QB competitive for practical experimental realizations, also taking into account the fact that it can be realized with an empty cavity, which is easier to engineer and more stable with respect to the resonant Dicke QB. Moreover, the dependence on the renormalized coupling $G$ allows one to extend the validity of the previous analysis, carried out for the strong coupling regime, to larger numbers of TLSs composing the QB.
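A Fig.~\ref{Fig5}-style scan can be sketched as follows (illustrative parameter values of ours; it assumes the {\tt stored\_energy} helper defined above):
\begin{verbatim}
# E_max/(N*wz) versus the rescaled coupling G = g*N (cf. Fig. 5);
# the crossover is expected around G = wz
wz = 1.0
for g in (0.02, 0.05, 0.10):
    for N in np.arange(4, 41, 4):
        G = g * N
        e = stored_energy(N, g, wz)[1].max() / (N * wz)
        print(f"g={g:.2f} N={N:2d} G={G:4.2f} E_max/(N wz)={e:.3f}")
\end{verbatim}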
\section{Conclusions}
\label{Conclusione}
We have investigated an off-resonant Dicke QB, namely a device composed of $N$ TLSs embedded in a highly detuned cavity. Under this condition, the TLSs feel an effective infinite-range interaction among them. By properly switching this coupling on and off, it is possible to promote at least a part of the TLSs from the ground to the excited states, thus charging the QB, even in the case of an empty cavity. This phenomenology can ultimately be related to the exchange of virtual photons, whose coupling with the TLSs is varied by bringing the mirrors of the cavity close together.
By looking at relevant figures of merit such as the stored energy, the averaged charging power, and the times required to reach their respective maxima, we have characterized the performance of this kind of system. We have identified a weak coupling regime, showing very poor charging and no possibility of single-TLS energy extraction (vanishing single TLS ergotropy) despite the favourable scaling of both the energy and the averaged charging power, and a strong coupling regime showing the same collective advantage in averaged charging power as the conventional Dicke QB~\cite{Ferraro18} with dipole matter-radiation interaction and in resonant conditions. The crossover between these two regimes is related to the quantum phase transition occurring in the effective model describing the QB~\cite{Cejnar21}.
Above the critical coupling, the present device shows performance comparable to that of the Dicke QB in the resonant regime. In addition, the observed physics can also be obtained working with an empty cavity, thus avoiding the need to control the initial quantum state of the radiation, the physically relevant parameters of the system, or exotic matter-radiation interactions~\cite{Crescente20}. These remarkable properties make it a very convincing candidate for future practical implementations of a functioning QB in various platforms already considered in the framework of the Dicke QB, such as superconducting qubits or arrays of semiconducting quantum dots coupled to an LC circuit through a tunable capacitance~\cite{Xiang13, Stockklauser17}.
\vspace{6pt}
\authorcontributions{Conceptualization, G. M. A and D. F.; methodology, G. M. A. and D. F.; software, G. G. and G. M. A.; validation, F. M. D. P. and G. M. A.; formal analysis, D. F.; investigation, G. M. A. and G. G.; data curation, G. G.; writing---original draft preparation, D. F. and G. G.; writing---review and editing, G. M. A., F. M. D. P. and M. S.; supervision, M. S. All authors have read and agreed to the published version of the manuscript.}
\funding{
D. F. would like to thank the funding of the
European Union-NextGenerationEU through the ``Quantum
Busses for Coherent Energy Transfer'' (QUBERT) project, in
the framework of Curiosity Driven 2021 initiative of the University of Genova.
F.M.D.P. was supported by
the Università degli Studi di Catania, Piano di Incentivi per la Ricerca di Ateneo 2020/2022 (progetto QUAPHENE), and
PNRR MUR project PE0000023-NQSTI.
}
\institutionalreview{Not applicable.}
\informedconsent{Not applicable.}
\dataavailability{Data are available from the authors upon request.}
\acknowledgments{The authors thank A. Crescente, G. Falci, L. Giannelli, and E. Paladino for illuminating discussions and fruitful comments.}
\conflictsofinterest{The authors declare no conflict of interest.}
\abbreviations{Abbreviations}{
The following abbreviations are used in this manuscript:\\
\vspace{-6pt}
\noindent
\begin{tabular}{@{}ll}
QB & Quantum Battery\\
TLS & Two-level system\\
LMG & Lipkin-Meshkov-Glick\\
ESQPT & Excited State Quantum Phase Transition
\end{tabular}}
\section{Introduction}
\allowdisplaybreaks
With the growing prevalence of networked systems in the era of the Internet of Things (IoT), event-triggered control has been well recognized as a compelling alternative to traditional time-triggered control, owing to its efficient utilization of shared, limited communication bandwidth and stored energy.
It offers an attractive way to transmit the measured signals efficiently \cite{astrom2008event}, as state sensing, data transmission and control updating occur only when necessary.
Early on, event-triggered control design was extensively explored for linear \cite{SEURET201647,heemels2012periodic} and nonlinear systems \cite{tabuada2007event,abdelrahim2015stabilization,abdelrahim2017robust}, among which the results in \cite{tabuada2007event,abdelrahim2015stabilization,abdelrahim2017robust} depend on the input-to-state stability (ISS) property. However, this property is not always guaranteed for nonlinear systems.
Subsequently, this limitation was successfully eliminated by designing event-triggered adaptive backstepping control algorithms in \cite{xing2016event,zhang2021adaptive}.
It is worth underscoring that the works in \cite{SEURET201647}-\cite{zhang2021adaptive} are only applicable to single nonlinear systems, and not to the numerous engineering systems that are actually networked \cite{CHEN2022100004,YANG2022100007}.
For networked multi-agent systems (MASs), the event-triggered control issues under unknown disturbances and actuator failures are investigated in \cite{8474310} and \cite{zhang2018cooperative}, respectively, while the resultant algorithms can only reduce the updating frequency of the actuation signal, that is, the information transmission from sensor to controller remains continuous.
In order to transmit the states under the event-triggering setting, considerable solutions have been presented recently. The work in \cite{seyboth2013event} develops a control scheme for multi-agent average consensus with state-triggering execution, in the sense that the continuous monitoring of the states of the neighbors involved in \cite{7006773,8319522,8910377} is no longer required.
The idea of developing a distributed adaptive backstepping controller with state-triggering communication is presented in \cite{wang2021adaptive} for norm-form nonlinear systems.
In \cite{sun2022Distributed}, the study on distributed adaptive control under state-triggering action is extended to more general strict-feedback nonlinear systems that are subject to mismatched and nonparametric uncertainties.
Nevertheless, the afore-mentioned methods are built upon the premise that the system states are fully and accurately measurable, which may not be the case in practice since sensor failures are inevitable.
Once a sensor failure occurs during the operation of the system, the performance of the considered systems under the event-triggering setting deteriorates, since the state information fed back to the controller is polluted.
It is essential to introduce a suitable compensation mechanism to counteract the effects of sensor failures, and to integrate it with the devised triggering mechanism to ensure that the triggering/sampling errors are handled effectively.
Along this direction, some works on nonlinear systems that suffer from sensor failures have emerged recently, see \cite{wang2019event,cao2019event,zhang2021event} for examples. Specifically, with the aid of auxiliary filters, the authors in \cite{wang2019event} propose an adaptive output-feedback control algorithm against sensor failures under the event-triggering setting. Based on an NN approximator, a distributed event-triggered control scheme is developed in \cite{cao2019event} in the presence of sensor failures. In \cite{zhang2021event}, an observer-based decentralized adaptive sensor-failure compensation control problem is investigated with event-triggering execution. However, those solutions only consider transmitting the control input over the network, while continuous feedback of the plant states still exists. Moreover, thus far, in the framework of state-triggered control, studies on non-affine uncertain pure-feedback nonlinear MASs that are affected by sensor failures remain open, and the relevant issues have not been well solved, although highly desirable.
Enlightened by the above observations, this paper establishes a distributed event-triggered adaptive control strategy for networked uncertain pure-feedback nonlinear MASs with NN approximators, where the state variables are not directly and continuously available, and may even be polluted by sensor failures. The main contributions are summarized as follows.
\begin{itemize}
\item [i)]{To overcome the non-differentiability obstacle stemming from the involvement of the intermittent variables, a novel replacement policy is employed in the developed scheme: a nominal control strategy is first developed via regular state feedback by using NN approximation and the dynamic filtering technique, and then an NN-based distributed adaptive control scheme with the event-triggering setting is constructed by replacing the states in the nominal control strategy with the intermittent ones. Moreover, such replacement ensures the semi-global uniform boundedness of all the internal signals with the aid of several vital lemmas.}
\item [ii)]{The main novelty in handling sensor failures is that the sensor ineffectiveness is accommodated automatically without the utilization of a fault detection and diagnosis unit or controller reconfiguration, as opposed to the related state-of-the-art \cite{zhang2021event,zhai2018output,zhang2018observer}.}
\item [iii)]{To the best of our knowledge, this is the first solution to the distributed event-triggered control problem for more general uncertain non-affine pure-feedback nonlinear MASs, which intermittently executes state sensing and actuation signal transmission in the presence of sensor failures, since almost all of the state-triggered control approaches in the literature are for special kinds of systems \cite{seyboth2013event,7006773,wang2021adaptive,ZHAN2019104531,wang2020adaptive,long2022output}, like low-order forms, normal forms and strict-feedback forms.}
\end{itemize}
\section{Preliminaries and Problem Formulation}
\subsection{Gaussian Radial Basis Function (RBF) Networks}
The nonlinear smooth function ${\Psi_{i,k}}\left({{\beta}_{i,k}}\right)\in{\mathcal{R}}$, with ${\beta}_{i,k}\in{\mathcal{R}^\iota}$, can be approximated by employing the RBFNN \cite{ge2002direct}:
\begin{flalign}
&\Psi_{i,k}(\beta_{i,k})=\Phi_{i,k}^{T}S_{i,k}(\beta_{i,k}) {\rm{+}}\epsilon_{i,k}{(\beta_{i,k})},\,k=1,\cdots,n& \label{eq:4}
\end{flalign}
where $\beta_{i,k}\in\Omega_{\beta_{i,k}}\subset{\mathcal{R}^\iota}$ represents the NN input, $\Omega_{\beta_{i,k}}$ is a compact set, $\Phi_{i,k}\in \mathcal{R}^p$ denotes the weight vector, which is assumed to belong to a compact set $\Omega_{\Phi_{i,k}}:=\{\left\|\Phi_{i,k}\right\|\le \phi_{i,k0}\}$, with $\phi_{i,k0}$ being some positive constant, $S_{i,k}(\cdot)=\left[S_{i,k1}(\cdot),\cdots,S_{i,kp}\left(\cdot\right)\right]^{T}\in \mathcal{R}^p$ denotes the basis function vector, and $\epsilon_{i,k}{\left(\beta_{i,k}\right)}\in{\mathcal{R}}$ denotes the approximation error. They satisfy $\left\| {S_{i,k}(\beta_{i,k})} \right\|\le {\bar{s}_{i,km}}$ and $\left| {\epsilon_{i,k} \left(\beta_{i,k} \right)} \right| \le {\bar{\epsilon}_{i,km}}$, where ${\bar{s}_{i,km}}$ and ${\bar{\epsilon}_{i,km}}$ are some unknown positive constants. The common choice of $S_{i,kb}\left(\beta_{i,k}\right)$, $b=1,\cdots,p$, is the following Gaussian function:
\begin{flalign}
&S_{i,kb}\left(\beta_{i,k}\right)= \exp{\left[-\frac{{\left(\beta_{i,k}-\mu_b\right)^{T} \left(\beta_{i,k}-\mu_b\right)}}{{\zeta_b^2}}\right]}& \label{eq:4c}
\end{flalign}
where $\mu_{b}= \left[\mu_{b,1}, \cdots, {\mu_{b,\iota}}\right]^{T}$ and $\zeta_b$ denote the center and the width of the basis function, respectively.
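For illustration, a minimal numerical sketch of the RBFNN output in Eqs.~(\ref{eq:4}) and (\ref{eq:4c}) (names and parameter values are ours) is:
\begin{verbatim}
import numpy as np

def rbf_basis(beta, centers, zeta):
    # Gaussian basis S(beta) of Eq. (4c); centers: (p, iota), zeta: (p,)
    d2 = np.sum((beta - centers) ** 2, axis=1)
    return np.exp(-d2 / zeta ** 2)

def rbf_network(beta, Phi, centers, zeta):
    # approximation Phi^T S(beta) of Eq. (4); the error eps is implicit
    return Phi @ rbf_basis(beta, centers, zeta)

# example: p = 25 centers on a grid, scalar input (iota = 1)
centers = np.linspace(-2.0, 2.0, 25).reshape(-1, 1)
zeta = np.full(25, 0.5)
Phi = np.random.default_rng(0).normal(size=25)
print(rbf_network(np.array([0.3]), Phi, centers, zeta))
\end{verbatim}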
\subsection{Graph Theory}
Consider a graph of $N$ agents represented by $\mathcal{G}=\{\mathcal{V}, \mathcal{E}\}$, where $\mathcal{V}= \{1,\cdots,N\}$ and $\mathcal{E}$ denote the sets of vertices and edges, respectively.
There exists an edge $(i,j)\in\mathcal{E}$ between vertexes $i$ and $j$ if they are able to exchange data.
${N}_i=\{j\in \mathcal{V}|{(i,j)}\in \mathcal{E}\}$ denotes the set of neighbors of vertex $i$ available for data exchange. The weighted adjacency matrix is given as ${\mathcal{A}}=[a_{ij}]\in \mathcal{R}^{N \times N}$: if $(j,i)\in \mathcal{E}$, then $a_{ij}>0$; otherwise, $a_{ij}=0$. If $a_{ij} = a_{ji}$ for all $i, j \in \mathcal{V}$, the graph $\mathcal{G}$ is undirected. The Laplacian matrix is $\mathcal{L}=[l_{ij}]\in \mathcal{R}^{N\times N}$, where $l_{ii}=\sum_{j=1, j \neq i}^{N} a_{i j}$ and $l_{i j}=-a_{i j}$, $i \neq j$. $\mathcal{D}={\rm{diag}}(d_1,\cdots,d_N)\in \mathcal{R}^{N\times N}$ represents the absolute in-degree matrix, with $d_{i}=\sum_{j\in N_i} a_{ij}$, satisfying $\mathcal{L}=\mathcal{D}-\mathcal{A}$.
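These objects are straightforward to build numerically; a minimal sketch (with an illustrative adjacency matrix of ours, continuing the Python conventions of the previous sketch) is:
\begin{verbatim}
def graph_matrices(A):
    # in-degree matrix D and Laplacian L = D - A from adjacency A
    D = np.diag(A.sum(axis=1))
    return D, D - A

# example: an undirected line graph of N = 4 agents
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
D, L = graph_matrices(A)
# connected + undirected: one zero eigenvalue, the rest positive
print(np.round(np.linalg.eigvalsh(L), 3))
\end{verbatim}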
\subsection{System Model}
Consider an uncertain pure-feedback nonlinear MAS composed of $N\,(N\ge{2})$ agents, with the $i$th ($i=1,\cdots,N$) agent modeled as:
\begin{flalign}
{{\dot x}_{i,k}} =\,& {f_{i,k}}\left( {{\check{x}_{i,k}},{x_{i,{k+1}}}} \right),\,k = 1,\cdots ,n - 1 & \nonumber \\
{{\dot x}_{i,n}} =\,& {f_{i,n}}\left( {{\check{x}_{i,n}},{u_i}}\right)&\nonumber \\
{y_i} =\,& {x_{i,1}} & \label{eq:1}
\end{flalign}
where ${x_{i,k}}\in \mathcal{R}$ is the state, $k=1,\cdots,n$, with ${\check{x}_{i,{k}}}=\left[x_{i,1},\cdots,x_{i,k}\right]^T$, $u_i\in \mathcal{R}$ is the control input, ${f_{i,k}}(\cdot)\in\mathcal{R}$ is the unknown nonaffine continuous function, $k=1,\cdots,n$, and $y_i\in\mathcal{R}$ is the output.
We give the definition of sensor failures as follows.
${\textbf{Definition 1}}$ \cite{WU20051925}.
Let $\chi\left(t\right)\in \mathcal{R}$ and $\chi^f\left(t\right)\in \mathcal{R}$ be the system variable and the output of the sensor, respectively. If $\chi^f\left(t\right)=\eta\left(t\right)\chi\left(t\right)$, $\forall t>\tau_f$, with $0<\eta\left(t\right)\le{1}$, then the sensor is said to have failed at time $\tau_f$.
In light of \emph{Definition} 1, the potential sensor failures in system (\ref{eq:1}) are modeled as:
\begin{flalign}
&x_{i,k}^f\left(t\right)=\eta_{i}\left(t\right)
x_{i,k}\left(t\right),\,\forall t>\tau_f,\, k=1,\cdots,n& \label{eq:6}
\end{flalign}
where $0<\eta_i\left(t\right)\le{1}$ denotes the fault factor.
${\textbf{Remark 1}}$.
The salient features of the sensor failure model under consideration are twofold.
Firstly, the sensor failure model allows for failures in all plant states, which is more general and challenging than related results in the literature where only the output measurement $y_i$ is destroyed \cite{zhang2021event,zhai2018output,zhang2018observer}.
Secondly, three scenarios are mainly considered for sensor failures so far, including partial failures (the effectiveness of the sensor is partially lost), outage, and stuck faults. In this work, the partial-failure model in (\ref{eq:6}) is utilized, that is, although the sensor loses part of its effectiveness, it is still functional, so that $x_{i,k}^{f}$ is always affected by the plant state $x_{i,k}$.
${\textbf{Remark 2}}$.
The system (\ref{eq:1}) is in a networked uncertain non-affine nonlinear pure-feedback form, which is frequently encountered in various nonlinear control problems for physical systems, such as single-link motor manipulators, hypersonic flight vehicles \cite{7182323}, inverted pendulums, chemical reactors \cite{li2020output}, and so on.
\subsection{Control Objectives}
The objective of this work is to propose a distributed event-triggered control scheme for system (\ref{eq:1}) with polluted feedback by using NN approximation, ensuring that all the internal signals are semi-globally uniformly ultimately bounded (SGUUB), with the outputs of all the subsystems reaching a consensus under sensor failures. Meanwhile, infinitely fast execution is precluded.
${\textbf{Assumption 1}}$.
The communication graph $\mathcal{G}$ is undirected and connected.
${\textbf{Assumption 2}}$.
The failure factor $\eta_i\left(t\right)$ in (\ref{eq:6}) satisfies $0<\underline{\eta}_i<\eta_i\left(t\right) \le{1}$, and its first derivative satisfies $0<\left|\dot{\eta}_i\left(t\right)\right|<\bar{\eta}_{i,d}$, where $\underline{\eta}_i$ and $\bar{\eta}_{i,d}>0$ are unknown constants.
${\textbf{Assumption 3}}$ \cite{liu2015adaptive}.
The unknown nonaffine function $f_{i,k}\left(\check{x}_{i,k}, x_{i,k+1}\right)$ satisfies the following inequalities:
\begin{flalign}
\underline{\ell}_{i,k}x_{i,k+1}+\varphi_{i,k1} \leq &\,f_{i,k}\left(\check{x}_{i,k}, x_{i,k+1}\right)-f_{i,k}\left(\check{x}_{i,k}, 0\right)&\nonumber\\
\leq&\, \bar{\ell}_{i,k} x_{i,k+1}+\varphi_{i,k2},\, x_{i,k+1}\ge{0} &\label{eq:v04_2}\\
\underline{\ell}_{i,k}'x_{i,k+1}+\varphi_{i,k1}' \leq &\,f_{i,k}\left(\check{x}_{i,k},x_{i,k+1}\right)
-f_{i,k}\left(\check{x}_{i,k}, 0\right)&\nonumber\\
\leq& \, \bar{\ell}_{i,k}' x_{i,k+1}+\varphi_{i,k2}',\, x_{i,k+1}<0 &\label{eq:v04_22}
\end{flalign}
for $k=1,\cdots,n$, with $x_{i,n+1}=u_i$, where $\underline{\ell}_{i,k}$, $\bar{\ell}_{i,k}$, $\underline{\ell}_{i,k}'$, $\bar{\ell}_{i,k}'$ are some unknown positive constants, and $\varphi_{i,k1}$, $\varphi_{i,k2}$, $\varphi_{i,k1}'$ and $\varphi_{i,k2}'$ are some unknown constants.
${\textbf{Remark 3}}$.
To avoid overshadowing the main idea of this work, we consider the case in which the agents communicate over undirected graphs $\mathcal{G}$, as stated in \emph{Assumption} 1. Nevertheless, it is worth underscoring that the developed algorithm can be extended to the case of directed topologies by employing the ideas in \cite{7273843,wang2022adaptive}.
In \emph{Assumption} 2, $\eta_i\left(t\right)$ is assumed to satisfy $0<\underline{\eta}_i<\eta_i\left(t\right) \le{1}$, which is quite common in the current literature, see \cite{cao2019event,bounemeur2018indirect} for examples, and the hypothesis imposed on $\dot{\eta}_i\left(t\right)$ indicates that the growth rate of $\eta_i\left(t\right)$ will not be infinite, which is reasonable in practice.
${\textbf{Remark 4}}$.
In the available methods for dealing with uncertain nonlinear pure-feedback systems \cite{wang2006iss,song2016dealing,wang2011adaptive}, the nonaffine function $f_{i,k}\left(\check{x}_{i,k}, x_{i,k+1}\right)$ is normally assumed to satisfy $0<\underline{\theta}_{i,k} \leq \frac{{\partial f_{i,k}\left(\check{x}_{i,k}, x_{i,k+1}\right)}}{{\partial x_{i,k+1}}}\leq \bar{\theta}_{i,k}$, with $\underline{\theta}_{i,k}$ and $\bar{\theta}_{i,k}$ being some unknown positive constants. However, once ${{\partial f_{i,k}\left(\cdot\right)}}/{{\partial x_{i,k+1}}}$ does not exist (or is not strictly positive), those methods are no longer valid. For instance, if there is a dead-zone nonlinearity involved in the model, $f_{i,k}\left(\cdot\right)$ is nondifferentiable with respect to $x_{i,k+1}$.
To eliminate such a limitation, in this work a more general assumption is imposed on the nonaffine function $f_{i,k}\left(\cdot\right)$, inspired partly by the ideas in \cite{liu2015adaptive}, as noted in \emph{Assumption} 3. Clearly, \emph{Assumption} 3 is implied by the conditions in existing results, while remaining valid even if ${{\partial f_{i,k}\left(\cdot\right)}}/{{\partial x_{i,k+1}}}$ does not exist (or is not strictly positive). Consequently, the proposed solution is more general than the existing ones.
For establishing the stability results, we introduce the following lemma.
${\textbf{Lemma 1}}$ \cite{horn2012matrix}.
Let the graph $\mathcal{G}$ be undirected and connected, it holds that $\mathcal{L}$ has a simple eigenvalue of 0, with all other eigenvalues being positive, that is, $\lambda_1 = 0$ and $\lambda_i>0$, $i=2,\cdots,N$.
\subsection{System Transformation}
For notation conciseness, we denote $h_{i,k}\left(\check{x}_{i,k}, x_{i,k+1}\right)=f_{i,k}\left(\check{x}_{i,k}, x_{i,k+1}\right)-f_{i,k}\left(\check{x}_{i,k}, 0\right)$, $k=1,\cdots,n$. It can be derived from (\ref{eq:v04_2}) and (\ref{eq:v04_22}) that there exist functions $\varpi_1\left(\check{x}_{i,k+1}\right)$ and $\varpi_2\left(\check{x}_{i,k+1}\right)$ that take values in [0,1], satisfying
\begin{flalign}
h_{i,k}&\left(\check{x}_{i,k}, x_{i,k+1}\right)=\left(1-\varpi_1
\left(\check{x}_{i,k+1}\right)\right)
\left(\underline{\ell}_{i,k}
x_{i,k+1}+\varphi_{i,k1}\right)&\nonumber\\
&+\varpi_1\left(\check{x}_{i,k+1}\right)\left(\bar{\ell}_{i,k}
x_{i,k+1}+\varphi_{i,k2}\right),x_{i,k+1}\ge{0}&\label{eq:v04_1}\\
h_{i,k}&\left(\check{x}_{i,k}, x_{i,k+1}\right)=\left(1-\varpi_2
\left(\check{x}_{i,k+1}\right)\right)\left(\underline{\ell}_{i,k}'
x_{i,k+1}+\varphi_{i,k1}'\right)&\nonumber\\
&+\varpi_2\left(\check{x}_{i,k+1}\right)\left(\bar{\ell}_{i,k}'
x_{i,k+1}+\varphi_{i,k2}'\right),x_{i,k+1}<{0}.&\label{eq:v04_1a}
\end{flalign}
To facilitate the analysis, the following functions are introduced
\begin{flalign}
g_{i,k}(\check{x}_{i,k+1})= \begin{cases}
\left(1-\varpi_1\left(\check{x}_{i,k+1}
\right)\right)\underline{\ell}_{i,k}\\
\qquad +\varpi_1
\left(\check{x}_{i,k+1}\right)\bar{\ell}_{i,k},x_{i,k+1} \geq 0 \\
\left(1-\varpi_{2}\left(\check{x}_{i,k+1}\right)\right) \underline{\ell}_{i,k}^{\prime}\\
\qquad +\varpi_{2}
\left(\check{x}_{i,k+1}\right)\bar{\ell}_{i,k}^{\prime},
x_{i,k+1}<0\end{cases} \label{eq:v04_1b}
\end{flalign}
\begin{flalign}
d_{i,k}\left(\check{x}_{i,k+1}\right)= \begin{cases}\left(1-\varpi_{1}\left(\check{x}_{i,k+1}
\right)\right)\varphi_{i,k1}\\
\quad +\varpi_{1}\left(\check{x}_{i,k+1}
\right) \varphi_{i,k2},x_{i,k+1} \geq 0 \\
\left(1-\varpi_{2}\left(\check{x}_{i,k+1}\right)\right) \varphi_{i,k1}'\\
\quad +\varpi_{2}\left(\check{x}_{i,k+1}\right) \varphi_{i,k2}', x_{i,k+1}<0.
\end{cases}\label{eq:v04_1c}
\end{flalign}
for $k=1,\cdots,n$. This, together with (\ref{eq:v04_1}), (\ref{eq:v04_1a}) and the definition of $h_{i,k}\left(\cdot\right)$, results in
\begin{flalign}
f_{i,k}\left(\check{x}_{i,k},x_{i,k+1}\right)
=&\,\,g_{i,k}\left(\check{x}_{i,k+1}\right)x_{i,k+1}
+f_{i,k}\left(\check{x}_{i,k},0\right)&\nonumber\\
&+d_{i,k}(\check{x}_{i,k+1})
&\label{eq:vv02}
\end{flalign}
with $0<\,\underline{g}_{i,k}\leq g_{i,k}\left(\check{x}_{i,k+1}\right)\leq \bar{g}_{i,k}$, and $0\leq\,\left|d_{i,k}\left
(\check{x}_{i,k+1}\right)\right| \leq \bar{d}_{i,k}$, $k=1,\cdots,n$, where
$\underline{g}_{i,k}=\min\{\underline{\ell}_{i,k}, \bar{\ell}_{i,k},\underline{\ell}_{i,k}', \bar{\ell}_{i,k}'\}$, $\bar{g}_{i,k}=\max\{\underline{\ell}_{i,k},
\bar{\ell}_{i,k},\underline{\ell}_{i,k}', \bar{\ell}_{i,k}'\}$ and $\bar{d}_{i,k}=\max\{\left|\varphi_{i,k1}\right|
+\left|\varphi_{i,k2}\right|, |\varphi_{i,k1}'|
+|\varphi_{i,k2}'|\}$. In light of the definition of $g_{i,k}(\cdot)$ in (\ref{eq:v04_1b}) and recalling (\ref{eq:1}), it is clear that $\dot{g}_{i,k}(\cdot)$ only depends on the states $x_{i,k}$. As $f_{i,k}(\cdot)$, ${\varpi_{1}(\cdot)}$ and ${\varpi_{2}(\cdot)}$ are smooth functions, $\dot{g}_{i,k}(\cdot)$, $k=1, \cdots, n$, is bounded, that is, $\left|\dot{g}_{i,k}(\cdot)\right| \leqslant \bar{g}_{i,d}$, where $\bar{g}_{i,d}>0$ is some unknown constant.
With the above analysis and (\ref{eq:vv02}), we can rewrite system (\ref{eq:1}) as
\begin{flalign}
{{\dot x}_{i,k}} =\,& {g_{i,k}}(\check{x}_{i,k+1})
{x_{i,{k+1}}}{\rm{+}}{f_{i,k}}({{\check{x}_{i,{k}}},0})
+d_{i,k}\left(\check{x}_{i,k+1}\right) & \nonumber \\
{{\dot x}_{i,n}} =\,& {g_{i,n}}\left(\check{x}_{i,n+1} \right){u_{i}}+{f_{i,n}}\left( {{\check{x}_{i,{n}}},0}
\right)+d_{i,n}\left(\check{x}_{i,n+1}\right)
&\nonumber \\
{y_i} =\,& {x_{i,1}} & \label{eq:1a}
\end{flalign}
for $k = 1,\cdots,n-1$.
\section{Nominal Control Design}
To facilitate the design of the distributed adaptive control with the event-triggered setting, a nominal control scheme is first developed in this section by combining NN approximation with the dynamic filtering technique.
To proceed, we define the following coordinate transformation $z_{i,k}$ ($k=1,\cdots,n$) and consensus error $e_i$:
\begin{flalign}
{z_{i,1}}=\,& x_{i,1}^f & \label{eq:7}\\
{z_{i,k}}=\,& {x_{i,k}^f} - {\alpha _{i,kf}}& \label{eq:8}\\
{e_i}=\,& \sum\limits_{j = 1}^N {{a_{ij}}} \left( {{x_{i,1}^f} - {x_{j,1}^f}} \right)& \label{eq:9}
\end{flalign}
for $k = 2, \cdots ,n$, where $\alpha_{i, kf}$ denotes the output of the following filter:
\begin{flalign}
&{\xi _{i,k}}{\dot \alpha _{i,kf}} +{\alpha _{i,kf}} ={\alpha _{i,k - 1}},\,{\alpha _{i,kf}}\left( 0 \right) = {\alpha _{i,k - 1}}\left( 0 \right)& \label{eq:10}
\end{flalign}
where ${\xi_{i,k}}$ is some positive design parameter, $\alpha_{i,k-1}$ denotes a virtual control that serves as the input of (\ref{eq:10}). For convenience of stability analysis, we further define, for $k=2,\cdots,n$
\begin{flalign}
&{\Theta_{i,k}} = {\alpha _{i,kf}} - {\alpha _{i,k - 1}}& \label{eq:11}
\end{flalign}
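For illustration, a simple Euler discretization of the filter (\ref{eq:10}) could read as follows (the step size and names are ours, and this is only a sketch of the continuous-time dynamics):
\begin{verbatim}
def filter_step(alpha_f, alpha_in, xi, dt):
    # one Euler step of xi * d(alpha_f)/dt + alpha_f = alpha_in,
    # i.e. the low-pass filter (10), with alpha_f(0) = alpha_in(0)
    return alpha_f + dt * (alpha_in - alpha_f) / xi

# usage: alpha_f = filter_step(alpha_f, alpha_km1, xi=0.01, dt=1e-3)
\end{verbatim}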
We design the following nominal control scheme through regular state feedback:
\begin{flalign}
{\alpha _{i,1}} =&\, - {\delta_1}{e_i} -\left
(\gamma_{i,1}+1\right){z_{i,1}}- \hat \Phi_{i,1}^T
{S _{i,1}}\left( {{\beta_{i,1}}}\right)&\label{eq:12}\\
{\alpha_{i,k}} =&\,-\left(\gamma_{i,k}+1\right)
{z_{i,k}} - {z_{i,k - 1}} - \hat \Phi_{i,k}^T{S_{i,k}}\left( {{\beta _{i,k}}} \right)&\label{eq:13}\\
{u_i}=&\,\,{\alpha_{i,n}}& \label{eq:14}
\end{flalign}
for $k=2,\cdots,n$, where $\delta_1>0$ and $\gamma_{i,k}>0$ are the design parameters, $k=1,\cdots,n$. The updating law of ${{\hat \Phi}_{i,k}}$ is as follows:
\begin{flalign}
&{{\dot {\hat \Phi}}_{i,k}}= Proj\left[{\Lambda _{i,k}}{S_{i,k}}\left({\beta }_{i,k} \right){z}_{i,k}\right], \, k=1,\cdots,n& \label{eq:15}
\end{flalign}
with ${\hat \Phi}_{i,k}\left(0\right)\in\Omega_{\Phi_{i,k}}$, where $\hat{\Phi}_{i,k}$ denotes the estimate of ${\Phi}_{i,k}$, with estimation error ${\tilde \Phi}_{i,k}={\Phi}_{i,k}- \hat{\Phi}_{i,k}$, $\Lambda_{i,k}$ denotes a positive definite design matrix, and $Proj\left(\cdot\right)$ denotes a projection operator ensuring that ${\hat \Phi}_{i,k}$ belongs to the compact set $\Omega_{\hat{\Phi}_{i,k}}:=\left\{\| {{{\hat \Phi}_{i,k}}} \| \le {\phi_{i,k0}} + \iota \buildrel \Delta \over = {{\bar \phi}_{i,k}}\right\}$, which further indicates that $\| {{{\tilde \Phi}_{i,k}}}\|\le 2{\phi_{i,k0}} + \iota \buildrel \Delta \over = {\phi_{i,km}}$, where $\iota>0$ is an arbitrary constant, and ${{\bar \phi}_{i,k}}, \phi_{i,km}>0$ are some unknown constants.
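The projection operator is not spelled out here; a standard radial projection consistent with the stated bound could be sketched as follows (an assumption of ours, not necessarily the operator used in the original analysis):
\begin{verbatim}
import numpy as np

def proj(Phi_hat, raw, bound):
    # pass the raw update through inside the ball ||Phi_hat|| <= bound;
    # on the boundary, remove the outward radial component so that
    # the estimate never leaves the ball
    nrm = np.linalg.norm(Phi_hat)
    if nrm < bound or Phi_hat @ raw <= 0:
        return raw
    return raw - (Phi_hat @ raw) / nrm ** 2 * Phi_hat
\end{verbatim}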
We now present the main results for the nominal control scheme for system (\ref{eq:1}) with polluted feedback.
${\textbf{Theorem 1}}$.
Consider an uncertain non-affine pure-feedback nonlinear MAS of $N$ agents (\ref{eq:1}) with undirected topology. If \emph{Assumptions} 1-3 hold, then applying the distributed controller (\ref{eq:14}) and the adaptive law (\ref{eq:15}) guarantees the following:
\begin{itemize}
\item [i)]{All the internal signals are SGUUB.}
\item [ii)]{The outputs of all the subsystems reach a consensus in the presence of sensor failures, and the upper bound of ${\left\|y\right\|_{[0, T]}}$ can be decreased by selecting the design parameters appropriately.}
\end{itemize}
\textbf{Proof}. See \emph{Appendix} A.
${\textbf{Remark 5}}$.
The major design difficulties in dealing with sensor failures are twofold: i) all state variables are not directly available for feedback design owing to sensor failures; and ii) the sensor failures under consideration (as seen in (\ref{eq:6})) are more general and challenging than in existing results, where only the output measurement $y_i$ is unavailable for controller design \cite{zhang2021event,zhai2018output,zhang2018observer}. To address such issues, a neuroadaptive fault-tolerant method is applied in this work, which allows the sensor ineffectiveness to be accommodated automatically without using a fault detection and diagnosis unit or controller reconfiguration.
\section{Event-triggered Control Design}
This section proposes a distributed adaptive control scheme with the event-triggering setting based on the nominal control scheme, which is capable of coping with the non-differentiability issue and sensor failures simultaneously by resorting to a novel replacement policy and NN approximation.
{\subsection{Design of Triggering Conditions}}
Let $t_{k,p}^i$, $t_{k,p}^j$ and $t_{u,p}^i$ be the sequences of transmissions of $x_{i,k}^f$, $x_{j,k}^f$ and $v_i$, respectively, $i,j=1, \ldots, N$, $k=1, \ldots, n$, $p = 0,1,2, \cdots$.
At each transmission instant $t_{k,p}^i$, the local state measurement $x_{i,k}^f$ is sent to the controller, $t_{k,p}^j$ is the transmission instant at which agent $j$ broadcasts its $k$th state information to agent $i$, and the input measurement $v_i$ is broadcast to the actuators at $t_{u,p}^i$. This indicates that the instantaneous information of the local state $x_{i,k}^f$, the neighboring subsystem state $x_{j,k}^f$ and the control input $v_i$ is updated only at the time instants $t_{k,p}^i$, $t_{k,p}^j$ and $t_{u,p}^i$, respectively; then the following equalities hold:
\begin{flalign}
{{\bar x}_{i,k}^f}\left( t \right) =\,&{x_{i,k}^f}\left( {t_{k,p}^i} \right),\,\forall t \in [t_{k,p}^i,t_{k,p + 1}^i)& \label{eq:37}\\
{{\bar x}_{j,k}^f}\left( t \right) =\,&{x_{j,k}^f}\left( {t_{k,p}^j} \right),\,\forall t \in [t_{k,p}^j,t_{k,p + 1}^j)& \label{eq:38}\\
{u_i}\left( t \right) =\,&{v_i}\left( {t_{u,p}^{i}} \right),\,\,\forall t \in [t_{u,p}^{i},t_{u,p + 1}^{i}).& \label{eq:39}
\end{flalign}
Going forward, we design the triggering conditions as follows:
\begin{flalign}
t_{k,p + 1}^i=& \inf \left\{ {t > t_{k,p}^i,\left| {{x_{i,k}^f}\left( t \right) - {{\bar x}_{i,k}^f}\left( t \right)} \right| > \Delta x_{i,k}} \right\} & \label{eq:40}\\
t_{k,p + 1}^j=& \inf \left\{ {t > t_{k,p}^j,\left| {{x_{j,k}^f}\left( t \right) - {{\bar x}_{j,k}^f}\left( t \right)} \right| > \Delta x_{j,k}} \right\} & \label{eq:41}\\
t_{u,p + 1}^i=& \inf \left\{ {t > t_{u,p}^i,\left| {{v_{i}}\left( t \right) - {{u}_{i}}\left( t \right)} \right| > \Delta u_{i}} \right\} & \label{eq:42}
\end{flalign}
where $t_{k,0}^i$, $t_{k,0}^j$ and $t_{u,0}^{i}$ represent the first instants at which (\ref{eq:40})-(\ref{eq:42}) are fulfilled, respectively, and $\Delta x_{i,k}>0$, $\Delta x_{j,k}>0$ and $\Delta u_{i}>0$ are the designed triggering thresholds.
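For illustration, the hold-and-trigger logic shared by (\ref{eq:40})-(\ref{eq:42}) can be sketched as follows (a minimal Python sketch; names are ours):
\begin{verbatim}
class TriggeredChannel:
    # hold-and-trigger transmission implementing (40)-(42): the held
    # value is refreshed only when the deviation exceeds Delta
    def __init__(self, delta, x0=0.0):
        self.delta = delta
        self.held = x0              # last transmitted value

    def sample(self, x):
        if abs(x - self.held) > self.delta:
            self.held = x           # a triggering instant t_{.,p}
        return self.held

# usage: channel = TriggeredChannel(delta=0.05, x0=x_initial)
\end{verbatim}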
{\subsection{Controller Design}}
Under intermittent state feedback, the coordinate transformation $z_{i,k}$ ($k=1,\cdots,n$) and the consensus error $e_i$ in (\ref{eq:7})-(\ref{eq:9}) are converted into:
\begin{flalign}
{\bar{z}_{i,1}}=\,& {\bar{x}}_{i,1}^f & \label{eq:43}\\
{\bar{z}_{i,k}}=\,& {\bar{x}_{i,k}^f} - {\bar{\alpha} _{i,kf}} & \label{eq:44}\\
{\bar{e}_i}=\,& \sum\limits_{j = 1}^N {{a_{ij}}} \left( {{\bar{x}_{i,1}^f} - {\bar{x}_{j,1}^f}} \right).& \label{eq:45}
\end{flalign}
for $k = 2, \cdots ,n$.
Next, we present the following distributed event-triggered control strategy via intermittent state feedback:
\begin{flalign}
{{\bar {\alpha}}_{i,1}} =&\, -{\delta _1}{{\bar e}_i}- \left({\gamma_{i,1}}+1\right){{\bar z}_{i,1}} - \hat \Phi_{i,1}^T{S_{i,1}}\left( {{{\bar \beta }_{i,1}}} \right) & \label{eq:46}\\
{{\bar \alpha }_{i,k}}=&\, -\left({\gamma_{i,k}}+1\right) {{\bar z}_{i,k}} - {{\bar z}_{i,k - 1}}- \hat \Phi_{i,k}^T{S_{i,k}}\left( {{{\bar \beta }_{i,k}}} \right)& \label{eq:47}\\
{v_i} =&\,\, {{\bar \alpha }_{i,n}}& \label{eq:48}
\end{flalign}
for $k=2,\cdots,n$, where $\delta_1>0$ and $\gamma_{i,k}>0$ are the design parameters, $k=1,\cdots,n$. The updating law of ${{\hat \Phi}_{i,k}}$ is as follows:
\begin{flalign}
&{{\dot {\hat \Phi}}_{i,k}} = Proj\left[{\Lambda _{i,k}}{S_{i,k}}\left({\bar \beta }_{i,k} \right){\bar z}_{i,k}\right],\,k=1,\cdots,n& \label{eq:49}
\end{flalign}
with ${\hat \Phi}_{i,k}\left(0\right)
\in\Omega_{\Phi_{i,k}}$, where $Proj\left(\cdot\right)$ is the projection operator defined previously, and $\Lambda_{i,k}$ denotes a positive definite design matrix.
To proceed, we establish several vital lemmas.
${\textbf{Lemma 2}}$ \cite{sun2022Distributed}.
Let ${S_{i,k}}\left( {{\beta _{i,k}}} \right)$ be the basis function vector defined in (\ref{eq:4}), $k=1,\cdots,n$, it holds that
\begin{flalign}
&\left\| {{S_{i,k}}\left( {{\beta _{i,k}}} \right) - {S_{i,k}}\left( {{{\bar \beta }_{i,k}}} \right)} \right\| \le \Delta {s_{i,k}}&\label{eq:52_a}
\end{flalign}
where $\Delta {s_{i,k}}>0$ is a constant that depends on the design parameters $\Delta x_{i,k}$ and $\zeta_b$, $b=1,\cdots,p$.
${\textbf{Lemma 3}}$. Let $z_{i,k}$ and $\alpha_{i,k}$ be defined in (\ref{eq:7}), (\ref{eq:8}), (\ref{eq:12}), (\ref{eq:13}), respectively, then the following inequalities hold
\begin{flalign}
\left|{{z_{i,k}}-{{\bar z}_{i,k}}}\right|\le &\, \Delta {z_{i,k}} & \label{eq:51} \\
\left| {{\alpha _{i,k}} {\rm{-}} {{\bar \alpha }_{i,k}}} \right| \le &\, \Delta {\alpha _{i,k}} &\label{eq:52}
\end{flalign}
for $k=1,\cdots,n$, where $\Delta {z_{i,k}}>0$ and $\Delta {\alpha_{i,k}}>0$ are constants that depend on the design parameters $\Delta x_{i,k}$, $\Delta x_{j,k}$, $\gamma_{i,k}$, $d_i$ and $\zeta_b$, $b=1,\cdots,p$.
{\textbf{Proof}}.
From (\ref{eq:9}) and (\ref{eq:45}), it follows that
\begin{flalign}
{\left|{{e_i}-{\bar e}_i}\right|}
\le \,& {d_i}\left(\Delta {x_{i,1}} + \Delta x_{j,1} \right)\, \buildrel \Delta \over =\Delta {e_i}.& \label{eq:a1}
\end{flalign}
As observed from (\ref{eq:7}) and (\ref{eq:43}), we can obtain
\begin{flalign}
&\left| {{z_{i,1}} - {{\bar z}_{i,1}}} \right|
\le\Delta {x_{i,1}}\, \buildrel \Delta \over = \Delta {z_{i,1}}. & \label{eq:a2}
\end{flalign}
In accordance with (\ref{eq:12}) and (\ref{eq:46}), one can derive
\begin{flalign}
\left| {{\alpha _{i,1}} - {{\bar \alpha }_{i,1}}} \right| \le \,&{\delta _1} \left|\bar{e}_i-e_i\right| + \left({\gamma_{i,1}}+1\right) \left|{\bar{z}_{i,1}}-z_{i,1}
\right|&\nonumber\\
&+ \left| {\hat \Phi_{i,1}^T\left( {{S_{i,1}}\left( {{{\bar \beta }_{i,1}}} \right) - {S_{i,1}}\left( {{\beta _{i,1}}} \right)} \right)} \right|& \nonumber\\
\le\, & {\delta _1}\Delta {e_i} + \gamma_{i,1} \Delta {z_{i,1}}+ \Delta {z_{i,1}}&\nonumber\\
&+\bar{\phi}_{i,1}\Delta{s_{i,1}}\buildrel \Delta \over = \Delta {\alpha_{i,1}} &\label{eq:a3}
\end{flalign}
For simplicity of presentation, we denote $\tilde{\alpha}_{i,kf}={\alpha}_{i,kf}
-{\bar{\alpha}}_{i,kf}$, $k=2,\cdots,n$.
By recalling (\ref{eq:10}), one can obtain
\begin{flalign}
\dot{\tilde{\alpha}}_{i,kf}
=& -\frac{1}{\xi_{i,k}}\tilde{\alpha}_{i,kf}
+\frac{1}{\xi_{i,k}} \left({\alpha _{i,k - 1}}-{\bar{\alpha}_{i,k - 1}}\right).
&\label{eq:a4}
\end{flalign}
By integrating both sides of (\ref{eq:a4}), it is not difficult to derive that
\begin{flalign}
\left|{\tilde{\alpha}}_{i,kf}\right| \le & \left|\tilde{\alpha}_{i,kf}\left(0\right)\right|
+e^{-\frac{1}{\xi_{i,k}}t}\Delta \alpha_{i,{k-1}}\left(e^{\frac{1}{\xi_{i,k}}t}-1 \right)&\nonumber\\
\le&\left|\tilde{\alpha}_{i,kf}\left(0\right)\right|+\Delta \alpha_{i,{k-1}} \buildrel \Delta \over =\Delta \alpha_{i,kf} &\label{eq:a5_a}
\end{flalign}
for $k=2,\cdots,n$. Using (\ref{eq:8}) and (\ref{eq:44}), we have
\begin{flalign}
&\left| {{z_{i,2}} - {{\bar z}_{i,2}}} \right|
\le\Delta {x_{i,2}} + \Delta {\alpha _{i,2f}}\buildrel \Delta \over = \Delta {z_{i,2}}.& \label{eq:a6}
\end{flalign}
This, together with (\ref{eq:13}) and (\ref{eq:47}), results in
\begin{flalign}
\left| {{\alpha _{i,2}}-{{\bar \alpha }_{i,2}}} \right| \le \,& {\gamma_{i,2}}\Delta {z_{i,2}}+\Delta {z_{i,2}}+ \Delta {z_{i,1}}&\nonumber\\
&+\bar{\phi}_{i,2}\Delta{s_{i,2}}
\, \buildrel \Delta \over =\Delta {\alpha_{i,2}} &\label{eq:a7}
\end{flalign}
In the same vein, it holds that
\begin{flalign}
&\left| {{z_{i,k}} - {{\bar z}_{i,k}}} \right| \le\Delta {x_{i,k}} {\rm{+}} \Delta {\alpha _{i,kf}}
\, \buildrel \Delta \over = \Delta {z_{i,k}},k=3,\cdots,n& \label{eq:a8}
\end{flalign}
\begin{flalign}
\left| {{\alpha _{i,k}} - {{\bar \alpha }_{i,k}}} \right| \le\,& {\gamma_{i,k}}\Delta {z_{i,k}}+ \Delta {z_{i,k}}+ \Delta {z_{i,{k-1}}}&\nonumber\\
&+ \bar{\phi}_{i,k}\Delta{s_{i,k}}\buildrel \Delta \over = \Delta {\alpha_{i,k}},k=3,\cdots,n&\label{eq:a9}
\end{flalign}
Thus we can derive from (\ref{eq:a2}), (\ref{eq:a3}), (\ref{eq:a6})-(\ref{eq:a9}) that \emph{Lemma} 3 holds. $\hfill{\blacksquare}$
With these lemmas in place, we can state the main results for distributed event-triggered control of system (\ref{eq:1}) with polluted feedback.
${\textbf{Theorem 2}}$.
Consider the uncertain non-affine pure-feedback nonlinear MAS of $N$ agents (\ref{eq:1}) with undirected topology. If \emph{Assumptions} 1-3 hold, then by applying the distributed controller (\ref{eq:48}), the adaptive law (\ref{eq:49}) and the event-triggering mechanism (\ref{eq:40})-(\ref{eq:42}):
\begin{itemize}
\item [i)]{All the internal signals are SGUUB.}
\item [ii)]{The outputs of all the subsystems reach a consensus in the presence of sensor failures, and the upper bound of ${\left\|y\right\|_{[0, T]}}$ can be decreased by selecting the design parameters appropriately.}
\item [iii)]{The Zeno solutions are ruled out.}
\end{itemize}
\textbf{Proof}. The following recursive control design is performed to derive the results in \emph{Theorem} 2.
$\textbf{Step 1}$:
Define a Lyapunov function ${V_1}= \sum\nolimits_{i = 1}^N\frac{1}{2g_{i,1}}z_{i,1}^2 + \sum\nolimits_{i = 1}^N \frac{1}{2}\tilde \Phi_{i,1}^T\Lambda_{i,1}^{ - 1}{{\tilde \Phi}_{i,1}} + \sum\nolimits_{i = 1}^N \frac{1}{2}\Theta_{i,2}^2$. In view of (\ref{eq:6}), (\ref{eq:1a}), (\ref{eq:7}), (\ref{eq:8}) and (\ref{eq:11}), $\dot{V}_1$ can be expressed as
\begin{flalign}
\dot{V}_{1} =&-\sum\limits_{i = 1}^N \frac{\dot{g}_{i,1}}{2g_{i,1}^2}z_{i,1}^2
-\sum\limits_{i = 1}^N\tilde \Phi_{i,1}^T\Lambda _{i,1}^{ - 1}{{\dot{\hat \Phi}}_{i,1}} + \sum\limits_{i = 1}^N\Theta_{i,2}\dot{\Theta}_{i,2} &\nonumber\\
&+\sum\limits_{i = 1}^N{z}_{i,1}\left(z_{i,2}+\Theta_{i,2}
+\alpha_{i,1}+\Psi_{i,1}\left( {{\beta _{i,1}}} \right)\right)&\label{eq:17a}
\end{flalign}
where $\Psi_{i,1}\left( {{\beta _{i,1}}} \right)=\frac{1}{g_{i,1}}\left({\dot{\eta}}_i
{x_{i,1}}+{\eta}_i f_{i,1}\left(x_{i,1},0\right)+{\eta}_i d_{i,1}\right)$ can be approximated by utilizing the RBFNN, namely, $\Psi_{i,1}(\beta_{i,1})=\Phi_{i,1}^{T}S_{i,1}(\beta_{i,1}) +\epsilon_{i,1}(\beta_{i,1})$, with $\beta _{i,1}=[x_{i,1}, \eta_i,\dot{\eta}_i]^{T} \in{\Omega_{\beta_{i,1}}}$.
Applying \emph{Lemma} 1, it is obtained that
\begin{flalign}
-\sum\limits_{i = 1}^N {\delta _1}{e_i}{z_{i,1}}
\le& -{\delta _1}\sum\limits_{i = 1}^N\sum\limits_{j = 1}^N {{a_{ij}}} \left( {{x_{i,1}^f} - {x_{j,1}^f}} \right)x_{i,1}^f&\nonumber\\
\le & -\delta_1M^{T}\mathcal{L}M
\le -\delta_1\underline {\lambda}
\left\|y\right\|^2&\label{eq:a17_a}
\end{flalign}
where $M=\eta y$, with $\eta={\rm{diag}}(\eta_i)$ and $y=\left[y_1,\cdots,y_N\right]^T$, and $\underline {\lambda}>0$ is a constant such that $M^{T}\mathcal{L}M\ge \underline {\lambda}\left\|y\right\|^2$. Synthesizing (\ref{eq:12}), (\ref{eq:17a}), (\ref{eq:a17_a}) and adopting Young's inequality, it can be determined that
\begin{flalign}
\dot{V}_{1} \le& -\delta_1\underline{\lambda}\left\|y\right\|^2
-\sum\limits_{i = 1}^N \jmath_{i,1}z_{i,1}^2- \sum\limits_{i =1}^N\xi_{i,2}^{*}\Theta_{i,2}^2&\nonumber\\
&+ \sum\limits_{i = 1}^N {\tilde \Phi_{i,1}^T {{S_{i,1}}\left( {{\beta _{i,1}}} \right){z_{i,1}} - \sum\limits_{i = 1}^N\tilde \Phi_{i,1}^T\Lambda_{i,1}^{-1}{{\dot {\hat \Phi}}_{i,1}}}}&\nonumber\\
&+\sum\limits_{i = 1}^N{z}_{i,1}z_{i,2}+\sum\limits_{i = 1}^N\varsigma_{i,1}^2+\Sigma_1&\label{eq:53}
\end{flalign}
where $\jmath_{i,1}=\gamma_{i,1}-\frac{\bar{g}_{i,d}}
{2\underline{g}_{i,1}^2}>0$ by choosing $\gamma_{i,1}$ large enough, $\xi_{i,2}^*>0$ is a design parameter with $\frac{1}{\xi_{i,2}}\ge \xi_{i,2}^*+ \frac{3}{4}$, ${\varsigma_{i,1}}=-{\dot \alpha _{i,1}}$, and ${\Sigma_1} =\sum\nolimits_{i =1}^N\frac{1}{2}\bar{\epsilon}_{i,1m}^2$; the fact that $-\left(\gamma_{i,1}+
\frac{\dot{g}_{i,1}}{2{g}_{i,1}^2}\right)\le -\left(\gamma_{i,1}-\frac{\bar{g}_{i,d}}
{2\underline{g}_{i,1}^2}\right)$ is also utilized.
$\textbf{Step \emph{k}}$ $(k=2,\cdots,n-1)$:
Define the Lyapunov function ${V_k} = {V_{k - 1}} + \sum\nolimits_{i = 1}^N \frac{1}{2g_{i,k}}z_{i,k}^2 + \sum\nolimits_{i = 1}^N \frac{1}{2}\tilde \Phi_{i,k}^T\Lambda_{i,k}^{-1}{{\tilde \Phi}_{i,k}}+ \sum\nolimits_{i = 1}^N \frac{1}{2}\Theta_{i,k + 1}^2$. Applying (\ref{eq:6}), (\ref{eq:1a}), (\ref{eq:8}) and (\ref{eq:11}), the derivative of ${V}_{k}$ satisfies
\begin{flalign}
\dot{V}_{k} \le& -\delta_1\underline{\lambda}
\left\|y\right\|^2-\sum\limits_{i = 1}^N\sum\limits_{q = 1}^{k-1} \jmath_{i,q}z_{i,q}^2- \sum\limits_{i = 1}^N\sum\limits_{q =1}^{k-1}\xi_{i,{q+1}}^{*}
\Theta_{i,{q+1}}^2&\nonumber\\
&-\sum\limits_{i = 1}^N \frac{\dot{g}_{i,k}}{2g_{i,k}^2}z_{i,k}^2-\sum\limits_{i = 1}^N\tilde \Phi_{i,k}^T\Lambda_{i,k}^{ - 1}{{\dot{\hat \Phi}}_{i,k}}+\sum\limits_{i = 1}^N{z}_{i,k}{z}_{i,k+1} &\nonumber\\
&+\sum\limits_{i =1}^N{z}_{i,k}
({\Theta}_{i,{k+1}}+\alpha_{i,k}+z_{i,k-1}+\Psi_{i,k}( {{\beta _{i,k}}}))&\nonumber\\
&+ \sum\limits_{i = 1}^N \sum\limits_{q = 1}^{k-1} {\tilde \Phi_{i,q}^T {{S_{i,q}}\left( {{\beta _{i,q}}} \right){z_{i,q}} -\sum\limits_{i = 1}^N\sum\limits_{q = 1}^{k-1} \tilde \Phi_{i,q}^T\Lambda _{i,q}^{-1}{{\dot {\hat \Phi}}_{i,q}}} }&\nonumber\\
&+\sum\limits_{i = 1}^N\Theta_{i,k+1}\dot{\Theta}_{i,k+1}
+\sum\limits_{i =1}^N\sum\limits_{q=1}
^{k-1}\varsigma_{i,q}^2+\Sigma_{k-1}
&\label{eq:27a}
\end{flalign}
where ${\Psi_{i,k}}\left( {{\beta _{i,k}}} \right)= \frac{1}{g_{i,k}}({\dot{\eta}}_i{x_{i,k}}+{\eta}_i f_{i,k}\left(\check{x}_{i,k},0\right)+{\eta}_i d_{i,k}-{\dot{\alpha} _{i,kf}})$ is approximated making use of the RBFNN, with $\beta _{i,k}=[\check{x}_{i,k}
,{x}_{j,1},\eta_i,\dot{\eta}_i]^{T} \in{\Omega_{{\beta _{i,k}}}}$. In view of (\ref{eq:13}), (\ref{eq:53}) and employing Young's inequality, it is obtained that
\begin{flalign}
\dot{V}_{k}
\le& -\delta_1\underline{\lambda}
\left\|y\right\|^2-\sum\limits_{i = 1}^N\sum\limits_{q = 1}^{k} \jmath_{i,q}z_{i,q}^2- \sum\limits_{i = 1}^N\sum\limits_{q =1}^{k}\xi_{i,{q+1}}^{*}
\Theta_{i,{q+1}}^2&\nonumber\\
&+ \sum\limits_{i = 1}^N \sum\limits_{q = 1}^{k-1} {\tilde \Phi_{i,q}^T {{S_{i,q}}\left( {{\beta _{i,q}}} \right){z_{i,q}} -\sum\limits_{i = 1}^N\sum\limits_{q = 1}^{k-1} \tilde \Phi_{i,q}^T\Lambda _{i,q}^{-1}{{\dot {\hat \Phi}}_{i,q}}}}&\nonumber\\
&+ \sum\limits_{i = 1}^N {{z_{i,k}}{z_{i,k + 1}}}+\sum\limits_{i = 1}^N\sum\limits_{q =1}^{k}\varsigma_{i,q}^2+\Sigma_{k}&\label{eq:54}
\end{flalign}
where the fact that $-\left(\gamma_{i,k}+\frac{\dot{g}_{i,k}}
{2g_{i,k}^2\eta_i}\right)\le -\left(\gamma_{i,k}-\frac{\bar{g}_{i,d}}
{2\underline{g}_{i,k}^2\underline{\eta}_i}\right)$ is exploited, $\jmath_{i,k}= \gamma_{i,k}-\frac{\bar{g}_{i,d}}
{2\underline{g}_{i,k}^2\underline{\eta}_i}>0$ by choosing $\gamma_{i,k}$ large enough, $\xi_{i,k+1}^*>0$ is the design parameter, satisfying $\frac{1}{\xi_{i,k+1}}\ge \xi_{i,k+1}^*+ \frac{3}{4}$, ${\varsigma_{i,k}}=- {\dot \alpha _{i,k}}$, and $\Sigma_{k}=\Sigma_{k-1}+\sum\nolimits_{i = 1}^N\frac{1}{2}\bar{\epsilon}_{i,km}^2$.
$\textbf{Step \emph{n}}$:
Define a Lyapunov function ${V_n} = {V_{n - 1}} + \sum\nolimits_{i = 1}^N\frac{1}{2g_{i,n}\eta_i}z_{i,n}^2 + \sum\nolimits_{i = 1}^N \frac{1}{2}\tilde \Phi_{i,n}^T\Lambda_{i,n}^{ - 1}{{\tilde \Phi}_{i,n}}$. Taking the derivative of $V_n$ and using (\ref{eq:6}), (\ref{eq:1a}) and (\ref{eq:8}), yields
\begin{flalign}
\dot{V}_{n} \le& -\delta_1\underline{\lambda}
\left\|y\right\|^2-\sum\limits_{i = 1}^N\sum\limits_{q = 1}^{n-1} \jmath_{i,q}z_{i,q}^2-\sum\limits_{i = 1}^N\frac{\dot{g}_{i,n}}{2g_{i,n}^2
\eta_i}z_{i,n}^2&\nonumber\\
&-\sum\limits_{i =1}^N\frac{\dot{\eta}_{i}}{2g_{i,n}\eta_i^2}z_{i,n}^2+ \sum\limits_{i = 1}^N \sum\limits_{q = 1}^{n - 1} \tilde \Phi_{i,q}^T {S_{i,q}}\left( {{\beta _{i,q}}} \right){z_{i,q}}&\nonumber\\
&- \sum\limits_{i = 1}^N \sum\limits_{q = 1}^{n - 1} \tilde \Phi_{i,q}^T\Lambda_{i,q}^{ - 1}{{\dot {\hat \Phi}}_{i,q}}- \sum\limits_{i = 1}^N\sum\limits_{q=1}^{n-1}\xi_{i,{q+1}}^{*}
\Theta_{i,{q+1}}^2&\nonumber\\
&+ \sum\limits_{i = 1}^N{z_{i,n}}{u_i} + \sum\limits_{i = 1}^N {{z_{i,n}}\left({z_{i,{n-1}}} + {\Psi_{i,n}}\left( {{\beta _{i,n}}} \right)\right)}& \nonumber\\
&-\sum\limits_{i = 1}^N\tilde \Phi_{i,n}^T\Lambda_{i,n}^{ - 1}{{\dot{\hat \Phi}}_{i,n}}+\sum\limits_{i =1}^N\sum\limits_{q=1}^{n-1}
\varsigma_{i,q}^2 +\Sigma_{n-1}&\label{eq:55}
\end{flalign}
where ${\Psi_{i,n}}\left( {{\beta _{i,n}}} \right)= \frac{1}{g_{i,n}\eta_i} ({\dot{\eta}}_i
{x_{i,n}}+{\eta}_if_{i,n}\left(\check{x}_{i,n},0\right)+
{\eta}_i d_{i,n}-{\dot{\alpha} _{i,nf}})$ is approximated by utilizing the RBFNN, with ${{\beta_{i,n}}}=[\check{x}_{i,n},{x}_{j,1},
\eta_i,\dot{\eta}_i]^{T} \in{\Omega_{{\beta _{i,n}}}}$.
The distributed controller $v_i$ in (\ref{eq:48}) can be written in the form of
\begin{flalign}
{v_i} = & -{\gamma_{i,n}}{z_{i,n}}-{z_{i,n}}- {z_{i,n - 1}} - \hat \Phi_{i,n}^T{S_{i,n}}\left( {{\beta _{i,n}}} \right)& \nonumber\\
& + \gamma_{i,n}\left( {{z_{i,n}} - {{\bar z}_{i,n}}} \right)+\left( {{z_{i,n}} - {{\bar z}_{i,n}}} \right)+ {z_{i,n - 1}}& \nonumber\\
&- {{\bar z}_{i,n - 1}}+ \hat \Phi_{i,n}^T \left({S_ {i,n}}\left( {{\beta _{i,n}}} \right) - {S_{i,n}}\left( {{{\bar \beta }_{i,n}}} \right)\right). &\label{eq:56}
\end{flalign}
Inserting (\ref{eq:49}) and (\ref{eq:56}) into (\ref{eq:55}), ${{\dot V}_n}$ becomes
\begin{flalign}
{{\dot V}_n} \le& -\delta_1\underline{\lambda}\left\|y\right\|^2- \sum\limits_{i = 1}^N {\sum\limits_{q = 1}^{n}{{\jmath_{i,q}}z_{i,q}^2}}- \sum\limits_{i = 1}^N {\sum\limits_{q = 1}^{n - 1} {\xi_{i,q + 1}^*\Theta_{i,q + 1}^2}}&\nonumber\\
&+\sum\limits_{i = 1}^N \sum\limits_{q = 1}^n\tilde \Phi_{i,q}^T \left( {{S_{i,q}}\left( {{\beta _{i,q}}} \right){z_{i,q}} - {S_{i,q}}\left( {{{\bar \beta }_{i,q}}} \right){{\bar z}_{i,q}}} \right)&\nonumber\\
&+\sum\limits_{i = 1}^N\left|z_{i,n}\right|
\left({\Delta{\alpha}_{i,n}}+\Delta{u_i}\right) + \sum\limits_{i = 1}^N {\sum\limits_{q = 1}^{n - 1} {\varsigma_{i,q}^2} }+\bar{H} & \label{eq:58}
\end{flalign}
with
\begin{flalign}
\Delta {\alpha_{i,n}} = &\,{{\gamma_{i,n}}\Delta {z_{i,n}} + \Delta {z_{i,n}}} + \Delta {z_{i,n - 1}}&\nonumber\\
&\,+\left| {\hat \Phi_{i,n}^T{S_{i,n}}\left( {{\beta _{i,n}}} \right) - \hat \Phi_{i,n}^T{S_{i,n}}
\left( {{{\bar \beta }_{i,n}}} \right)} \right|& \label{eq:58a}
\end{flalign}
where the fact that $-\left(\gamma_{i,n}+\frac{\dot{g}_{i,n}}
{2g_{i,n}^2\eta_i}+\frac{\dot{\eta}_{i}}{2g_{i,n}
\eta_i^2}\right)\le -\left(\gamma_{i,n}-\frac{\bar{g}_{i,d}}
{2\underline{g}_{i,n}^2\underline{\eta}_i}
-\frac{\bar{\eta}_{i,d}}{2\underline{g}_{i,n}
\underline{\eta}_i^2}\right)$ is utilized, $\jmath_{i,n}=\gamma_{i,n}-\frac{\bar{g}_{i,d}}{2\underline{g}_{i,n}^2
\underline{\eta}_i}-\frac{\bar{\eta}_{i,d}}
{2\underline{g}_{i,n}
\underline{\eta}_i^2}>0$ by choosing $\gamma_{i,n}$ large enough, and $\bar{H}=\sum\nolimits_{i =1}^N\frac{1}{4}\bar{\epsilon}_{i,nm}^2+\Sigma_{n-1}$.
${\textbf{Remark 6}}$.
Here, we pause to stress that several triggering error terms affecting the stability of the system emerge due to the introduction of the event-triggering mechanism, namely, $\left|z_{i,n}\right|{\Delta{\alpha}_{i,n}}$, $\left|z_{i,n}\right|\Delta{u_i}$ and ${\tilde \Phi_{i,q}^T\left( {{S_{i,q}}\left( {{\beta _{i,q}}} \right){z_{i,q}} -{S _{i,q}}\left( {{{\bar \beta }_{i,q}}} \right){{\bar z}_{i,q}}} \right)}$, $q=1,\cdots,n$. Tackling such adverse effects so as to ensure the stability of the system is one of the main challenges in achieving the control objectives of this paper.
In the following, we show that these terms are bounded, by the vital results exhibited in \emph{Lemmas} 2-3, and can be incorporated into $\dot{V}_n$; the impact of the triggering errors is thereby handled.
By employing \emph{Lemmas} 2-3, we can obtain
\begin{flalign}
&\left|z_{i,n}\right|({\Delta {\alpha}_{i,n}}+\Delta u_i)\le \frac{1}{2}z_{i,n}^2+{\Delta \alpha_{i,n}^2 }+{\Delta u_i^2 }&\label{eq:59}\\
&\sum\limits_{i = 1}^N \sum\limits_{q = 1}^n\tilde \Phi_{i,q}^T \left( {{S_{i,q}}\left( {{\beta _{i,q}}} \right){z_{i,q}} - {S_{i,q}}\left( {{{\bar \beta }_{i,q}}} \right){{\bar z}_{i,q}}} \right)&\nonumber\\
&\le\,\sum\limits_{i = 1}^N\sum\limits_{q = 1}^n {\frac{1}{4}z_{i,q}^2} + \sum\limits_{i = 1}^N\sum\limits_{q = 1}^n {{\Delta s_{i,q}^2}} \tilde \Phi_{i,q}^T{{\tilde \Phi}_{i,q}} & \nonumber\\
&+\sum\limits_{i = 1}^N\sum\limits_{q = 1}^n\frac{\sigma_{i,w}}{4}\tilde \Phi_{i,q}^T{{\tilde \Phi}_{i,q}}+ \sum\limits_{i = 1}^N\sum\limits_{q = 1}^n \frac{\bar{s}_{i,q}^2\Delta z_{i,q}^2}{\sigma_{i,w}} & \label{eq:61}
\end{flalign}
where $\sigma_{i,w}>0$ is the design parameter. According to (\ref{eq:59}) and (\ref{eq:61}), and exploiting the fact that $\left\|\tilde \Phi_{i,k}\right\|\le \phi_{i,km}$, $k=1,\cdots,n$, it is seen that $\dot{V}_{n}$ in (\ref{eq:58}) becomes
\begin{flalign}
{{\dot V}_n}\le & -\delta_1\underline{\lambda}\left\|y\right\|^2- \sum\limits_{i = 1}^N {\sum\limits_{q = 1}^n {\jmath_{i,q}^{*}z_{i,q}^2}} +\sum\limits_{i = 1}^N {\sum\limits_{q = 1}^{n - 1} {\varsigma_{i,q}^2} } &\nonumber\\
&- \sum\limits_{i = 1}^N \sum\limits_{q = 1}^n \frac{\sigma_{i,w}}{2}\tilde \Phi_{i,q}^T{{\tilde \Phi}_{i,q}} - \sum\limits_{i = 1}^N {\sum\limits_{q = 1}^{n - 1} {\xi_{i,q + 1}^{*}\Theta_{i,q + 1}^2}}& \nonumber\\
&+\sum\limits_{i = 1}^N \sum\limits_{q = 1}^n \left(\frac{{\Delta s_{i,q}^2}} {{{\gamma_{i,q}}}}+\frac{\sigma_{i,w}}{4} \right)\phi_{i,qm}^2+{\Xi_n}& \label{eq:a63}
\end{flalign}
where $\jmath_{i,q}^*=\jmath_{i,q}-\frac {1}{4}>0$, $\jmath_{i,n}^*=\jmath_{i,n}-\frac {3}{4}>0$ by choosing $\gamma_{i,k}$ large enough, for $q=1,\cdots,n-1$, $k=1,\cdots,n$, ${\Xi_n} =\bar{H}+\sum\nolimits_{i = 1}^N{\Delta \alpha_{i,n}^2 +\sum\nolimits_{i = 1}^N\Delta u_i^2}
+\sum\nolimits_{i = 1}^N\sum\nolimits_{q = 1}^n \frac{\bar{s}_{i,q}^2\Delta z_{i,q}^2}{\sigma_{i,w}}$.
Since the set ${\Omega _{v}}:=\{\sum\nolimits_{i = 1}^N {\sum\nolimits_{q =1}^{n-1}}\frac{1}
{g_{i,q}}{z_{i,q}^2}+\sum\nolimits_{i = 1}^N \frac{1}{g_{i,n}\eta_i}{z_{i,n}^2}
+\sum\nolimits_{i = 1}^N {\sum\nolimits_{q = 1}^{n}} {\tilde \Phi_{i,q}^T\Lambda_{i,q}^{ - 1}{{\tilde \Phi}_{i,q}}}+ \sum\nolimits_{i = 1}^N {\sum\nolimits_{q = 1}^{n-1}}{\Theta_{i,q + 1}^2}\le 2\varrho_0\}$ is compact for any $\varrho_0>0$, it follows that $\left| {{\varsigma_{i,q}}} \right| \le \bar{\varsigma}_{i,q}$ on $\Omega_v$ for all $q=1,\cdots,n-1$. This, together with (\ref{eq:a63}), results in
\begin{flalign}
{{\dot V}_n}\le & - \sum\limits_{i = 1}^N {\sum\limits_{q = 1}^n{\jmath_{i,q}^{*}z_{i,q}^2}}- \sum\limits_{i = 1}^N \sum\limits_{q = 1}^n \frac{\sigma_{i,w}}{2}\tilde \Phi_{i,q}^T{{\tilde \Phi}_{i,q}}\nonumber\\
& - \sum\limits_{i = 1}^N {\sum\limits_{q = 1}^{n-1}{\xi_{i,q + 1}^*\Theta_{i,q + 1}^2}} -\delta_1\underline{\lambda}\left\|y\right\|^2+ {\Sigma_n} & \nonumber\\
\le& -\delta_1\underline{\lambda}\left\|y\right\|^2
-\hbar_1{V_n}+{\Sigma_n}& \label{eq:63}
\end{flalign}
where $\hbar_1=\min\{2{\bar{g}_{i,1}
\jmath_{i,1}^{*}},\cdots,2{\bar{g}_{i,n}\jmath_{i,n}^{*}}$,
$\frac{{\sigma_{i,1}}}{{\lambda_{\max}}\left\{ {\Lambda_{i,1}^{-1}} \right\}},\cdots,\frac{{\sigma _{i,n}}}{{\lambda_{\max }}\left\{{\Lambda_{i,n}^{-1}} \right\}}, 2\xi_{i,2}^*,\cdots,2\xi_{i,n}^*\}$ and
${\Sigma_n}={\Xi_n}+\sum\nolimits_{i = 1}^N \sum\nolimits_{q = 1}^n \left(\frac{{\Delta s_{i,q}^2}}{{{\gamma_{i,q}}}}+\frac{\sigma_{i,w}}{4} \right)\phi_{i,qm}^2+ \sum\nolimits_{i = 1}^N {\sum\nolimits_{q = 1}^{n-1}} {\bar{\varsigma}_{i,q}^2}$.
Next, we show that the results in \emph{Theorem} 2 hold.
\begin{itemize}
\item [i)]{In view of (\ref{eq:63}), it is seen that ${V_{n}} \in {L_\infty}$, which indicates that ${z_{i,k}}$, ${\tilde \Phi_{i,k}}$ and ${\Theta_{i,q + 1}}$ are all bounded, $k = 1, \cdots ,n$, $q = 1, \cdots ,n - 1$. This, along with (\ref{eq:7})-(\ref{eq:9}) and (\ref{eq:11})-(\ref{eq:13}), results in $x_{i,k}\in {L_\infty }$. It can be further deduced from (\ref{eq:48}) that $v_i\in {L_\infty }$.}
\item [ii)]{Recalling (\ref{eq:7}) and the definitions of $V_1$ and $V_n$, one can obtain that ${\left\|y\right\|^2}= \sum\nolimits_{i = 1}^N{\frac{1}{\eta_i^2}z_{i,1}^2} \le b{V_n}$, which implies that ${\left\|y \right\|^2} \le b\left[ {{e^{ - \hbar_1 t}}{V_n}\left( 0 \right) + \frac{{{\Sigma_n}}}{\hbar_1}\left( {1 - {e^{ - \hbar_1 t}}} \right)} \right]$, where $b=\max\left\{1,\frac{2\bar{g}_{1,1}}
{\underline{\eta}_1^2},\cdots,\frac{2\bar{g}_{N,1}}
{\underline{\eta}_N^2}\right\}$ is some positive constant. Thus we can conclude that the outputs of all the subsystems reach a consensus against sensor failures. In addition, it is readily seen from (\ref{eq:63}) that $\|y(t)\|_{[0, T]}\leq \sqrt{\frac{1}{\delta_1\underline{\lambda} }\left[\frac{V_{n}(0)}{T}+\Sigma_n\right]}$,
which indicates that the upper bound for ${\left\|y\right\|_{[0, T]}}$ can be decreased by decreasing the triggering thresholds $\Delta x_{i,k}$, $\Delta x_{j,k}$ and $\Delta u_i$, and by increasing the design parameters $\gamma_{i,k}$, $\delta_1$, $\Lambda_{i,k}$ and $\xi_{i,q}$, $k=1,\cdots,n$, $q=2,\cdots,n$.}
\item [iii)]{For all $t \in\left[t_{k,p}^{i}, t_{k,p+1}^{i} \right)$, $k=1,\cdots,n$, $p=0,1,2,\cdots$, we define $\omega_{k,p}^{i}(t)=x_{i,k}(t) -\bar{x}_{i,k}(t)$, then it follows that
\begin{flalign}
\frac{d\left|\omega_{k,p}^{i}\right|}{d t}=&\, \frac{d\left(\omega_{k,p}^{i} {\rm{\times}} \omega_{k,p}^{i}\right)^{\frac{1}{2}}}{d t}&\nonumber\\
=&\, {\rm{sign}}\left(\omega_{k,p}^{i}\right) \dot{\omega}_{k,p}^{i} \leq
\left|\dot{\omega}_{k,p}^{i}\right|.
&\label{eq:65}
\end{flalign}
Since $\bar{x}_{i,k}(t)$ remains unchanged for $t \in\left[t_{k,p}^{i}, t_{k,p+1}^{i}\right)$, it is seen that, for $k=1,\cdots,n$
\begin{flalign}
\left|\dot{\omega}_{k,p}^{i} \right|=&\,\Big|{g_{i,k}}\left(\check{x}_{i,k+1}\right){x_{i,{k+1}}}
+{f_{i,k}}\left({{\check{x}_{i,{k}}},0}\right)&\nonumber\\
&+d_{i,k}\left(\check{x}_{i,k+1}\right)\Big|&\label{eq:a65}
\end{flalign}
As $x_{i,k+1}$, $g_{i,k}$, $f_{i,k}$ and $d_{i,k}$ in (\ref{eq:a65}) are all bounded, it holds that $\left|\dot{\omega}_{k,p}^{i}\right| \leq x_{0}^{i}$, $k=1,\cdots,n$, where $x_{0}^{i}>0$ is an unknown constant, which implies that
$t_{k,p+1}^{i}-t_{k,p}^{i} \geq \Delta x_{i,k}/ x_{0}^{i}=T_{x}^{i}>0$. By following a similar analysis, it is seen that $t_{k,p+1}^{j}-t_{k,p}^{j}\geq \Delta x_{j,k}/ x_{0}^{j}=T_{x}^{j}>0$, $k=1,\cdots,n$, and $t_{u,p+1}^{i}-t_{u,p}^{i}\geq \Delta u_{i}/ u_{0}^{i}=T_{u}^{i}>0$, where $x_{0}^{j}>0$ and $u_{0}^{i}>0$ are some unknown constants. Thus the Zeno solutions are ruled out. $\hfill\blacksquare$}
\end{itemize}
${\textbf{Remark 7}}$.
Recalling the definitions of ${\Psi_{i,k}}\left( {{\beta _{i,k}}} \right)$, $k=1,\cdots,n$, it is deduced that ${\beta _{i,1}}$ is a function of the variables ${x}_{i,1}$, $\eta_i$ and $\dot{\eta}_i$, and ${\beta _{i,k}}$ is a function of the variables $\check{x}_{i,k}$, ${x}_{j,1}$, $\hat {\Phi}_{i,{1}},\cdots, \hat {\Phi}_{i,{k-1}}$, $\eta_i$ and $\dot{\eta}_i$, $k=2,\cdots,n$. However, regarding the weights $\hat {\Phi}_{i,{1}},\cdots, \hat {\Phi}_{i,{k-1}}$ as NN inputs is inadvisable, since it would substantially increase the number of NN inputs and thus incur the curse of dimensionality of the RBFNN. To reduce the NN input dimension and thus the computational cost, in this work we choose $\beta _{i,1}=[x_{i,1}, \eta_i,\dot{\eta}_i]^{T} \in{\Omega_{\beta_{i,1}}}$ and ${\beta _{i,k}}=\left[\check{x}_{i,k},{x}_{j,1},
\eta_i,\dot{\eta}_i\right]^{T}
\in{\Omega_{\beta_{i,k}}}$, $k=2,\cdots,n$.
${\textbf{Remark 8}}$.
The problem of solving non-differentiable virtual control signals for nonlinear systems has been addressed in recent works \cite{zhang2021adaptive,sun2022Distributed,zhang2021event} in the framework of state-triggering control, wherein the repeated differentiation issue is avoided by employing a first-order filter. Nevertheless, the output of the filter is required to be triggered to derive the system stability, which inevitably increases the computational burden of the sensors.
Besides, these solutions concentrate on the case where only the plant states or the control input are transmitted over the network, leaving sensor failures unaccounted for.
Moreover, the systems considered in \cite{zhang2021adaptive,sun2022Distributed,zhang2021event} are limited to strict-feedback form. This paper develops a distributed adaptive control scheme under event-triggering setting for more general uncertain non-affine pure-feedback nonlinear MASs with sensor failures, in which the restriction on triggering the filter-output is lifted by applying the projection operator in the parameter update law (\ref{eq:49}).
Additionally, the sensor failures are effectively handled without using a fault detection and diagnosis unit or controller reconfiguration, in contrast to the related works \cite{zhang2021event,zhai2018output,zhang2018observer}.
\begin{figure}
\begin{center}
\includegraphics[width=0.34\textwidth,height=44mm]{topology_20220716.eps}
\caption{The communication graph.}
\end{center}
\end{figure}
\section{Simulation Studies}
Consider a MAS composed of four nonlinear subsystems, with the $i$th ($i=1,\cdots,4$) agent taking the form of:
\begin{flalign}
{{\dot x}_{i,1}}=\,&{x_{i,1}}+{x_{i,2}}
+\frac{1}{5}{x_{i,2}^3}& \nonumber\\
{{\dot x}_{i,2}}=\,&{x_{i,1}}{x_{i,2}}+{u_{i}} +\frac{1}{7}{u_{i}^3}& \nonumber\\
{y_i} =\,& {x_{i,1}}& \label{eq:66}
\end{flalign}
for $i=1,\cdots,4$, $k=1,2$.
The communication topology is presented in Fig. 1. The initial values are $\check{x}_1(0)=\left[0.1, 0.05, 0.1, 0.05\right]^T$ and $\check{x}_{2}(0)=\left[0.1, 0.1, 0.1, 0.1\right]^T$. The fault factor is $\eta_i=0.6$ with $\tau_f=1$, which indicates that the sensor loses $60\%$ of its effectiveness at $t= 1$ s. The design parameters are $\delta_1=0.5$, $\gamma_{i,1}=5$, $\gamma_{i,2}=10$, $\sigma_{i,1}=20$, $\sigma_{i,2}=20$, $\Lambda_{i,1}=0.005$, $\Lambda_{i,2}=0.005$ and ${\xi_{i,2}}=0.01$; the triggering thresholds are $\Delta x_{i,k}=0.001$, $\Delta x_{j,k}=0.001$ and $\Delta u_{i}=0.01$. The NN contains 25 nodes, and $\zeta_b=2$. Figs. 2-3 display the output trajectories of $x_{i,1}^f$ and $x_{i,2}^f$, respectively, from which it can be observed that the outputs of all the subsystems reach a consensus in the presence of sensor failures. From Fig. 4, we can see that the consensus error $e_i$ converges into a residual set near the origin. The boundedness of the distributed control $u_i$ is depicted in Fig. 5.
To test the impact of triggering thresholds on the performance of the system, another set of triggering thresholds $\Delta x_{i,k}^{\prime}=0.002$, $\Delta x_{j,k}^{\prime}=0.005$ and $\Delta u_{i}^{\prime}=0.05$ are chosen for comparison.
The evolution of the consensus error $e_i$ is shown in Fig. 6. The triggering times of $x_{i,1}^f$, $x_{i,2}^f$ and $u_i$ under different triggering thresholds are presented in Figs. 7-9, respectively.
From these figures we can conclude that the number of triggering events for communication decreases as the triggering thresholds increase, leading to savings in communication resources. However, increasing the triggering thresholds also leads to an increase in the consensus error $e_i$.
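To make the event-triggered sampling logic concrete, the following minimal Python sketch simulates the plant (\ref{eq:66}) under a crude linear surrogate of the controller (\ref{eq:46})-(\ref{eq:48}): the adaptive NN terms and the first-order filter are omitted, and the ring-like adjacency matrix standing in for Fig. 1 is our own assumption, so the script illustrates only the triggering rules, not the full adaptive scheme.
\begin{verbatim}
import numpy as np

# Assumed ring-like adjacency matrix over 4 agents (placeholder for Fig. 1).
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
deg = A.sum(axis=1)

N, dt, T = 4, 1e-3, 5.0
dx, du = 1e-3, 1e-2                     # triggering thresholds
delta1, g1, g2 = 0.5, 5.0, 10.0         # delta_1, gamma_{i,1}, gamma_{i,2}

x1 = np.array([0.1, 0.05, 0.1, 0.05])   # true plant states
x2 = np.array([0.1, 0.1, 0.1, 0.1])
x1b, x2b = x1.copy(), x2.copy()         # last transmitted (triggered) states
ub = np.zeros(N)                        # last transmitted control

for step in range(int(T / dt)):
    t = step * dt
    eta = 0.6 if t >= 1.0 else 1.0      # sensor fault at t = 1 s
    x1f, x2f = eta * x1, eta * x2       # polluted measurements

    # transmit a state only when its deviation exceeds the threshold
    x1b = np.where(np.abs(x1f - x1b) >= dx, x1f, x1b)
    x2b = np.where(np.abs(x2f - x2b) >= dx, x2f, x2b)

    # crude linear surrogate of (46)-(48): consensus plus damping terms,
    # without the NN compensation and the first-order filter
    e = deg * x1b - A @ x1b             # e_i = sum_j a_ij (x_i - x_j)
    alpha1 = -delta1 * e - (g1 + 1.0) * x1b
    v = -(g2 + 1.0) * (x2b - alpha1) - x1b
    ub = np.where(np.abs(v - ub) >= du, v, ub)

    # plant (66), forward-Euler step
    dx1 = x1 + x2 + x2**3 / 5.0
    dx2 = x1 * x2 + ub + ub**3 / 7.0
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2

print("final outputs y_i:", np.round(x1, 4))
\end{verbatim}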
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=53mm]{xi1.eps}
\caption{The output $x_{i,1}^f$.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=53mm]{xi2.eps}
\caption{The output $x_{i,2}^f$.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=53mm]{ei.eps}
\caption{Consensus error $e_i$.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=53mm]{ui.eps}
\caption{Distributed controller $u_i$.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=53mm]{ei-1.eps}
\caption{$e_i$ in the case of increasing triggering thresholds.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=53mm]{triggering_time_xi1_0809.eps}
\caption{Triggering times of $x_{i,1}$ for different triggering thresholds.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=53mm]{triggering_time_xi2_0809.eps}
\caption{Triggering times of $x_{i,2}$ for different triggering thresholds.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=53mm]{triggering_time_ui_0809.eps}
\caption{Triggering times of $u_{i}$ for different triggering thresholds.}
\end{center}
\end{figure}
\section{Conclusion}
This paper develops a distributed adaptive control algorithm for uncertain non-affine pure-feedback nonlinear MASs under an event-triggering setting, wherein the state variables are not directly and continuously available and may even be polluted due to sensor failures. By fusing a novel replacement policy, NN approximation and a dynamic filtering technique, it is shown that all the internal signals are SGUUB, with the outputs of all the subsystems reaching a consensus, while precluding infinitely fast execution. An attractive topic for future research is the tracking control problem for such systems.
\begin{appendices}
\section{}
$\textbf{Step 1}$: Define ${V_1}= \sum\nolimits_{i = 1}^N\frac{1}{2g_{i,1}}z_{i,1}^2 + \sum\nolimits_{i = 1}^N \frac{1}{2}\tilde \Phi_{i,1}^T\Lambda_{i,1}^{ - 1}{{\tilde \Phi}_{i,1}} + \sum\nolimits_{i = 1}^N \frac{1}{2}\Theta_{i,2}^2$. In accordance with (\ref{eq:6}), (\ref{eq:1a}), (\ref{eq:7}), (\ref{eq:8}), (\ref{eq:11}), (\ref{eq:12}) and (\ref{eq:15}), the time derivative of $V_1$ is obtained as
\begin{flalign}
\dot{V}_{1} \le & -\delta_1\underline{\lambda}
\left\|y\right\|^2-\sum\limits_{i = 1}^N \jmath_{i,1}z_{i,1}^2-\sum\limits_{i = 1}^N\frac{\sigma_{i,1}}{2}\tilde \Phi_{i,1}^T{{{\tilde \Phi}}_{i,1}}&\nonumber\\
& - \sum\limits_{i = 1}^N\xi_{i,2}^{*}\Theta_{i,2}^2 +\sum\limits_{i = 1}^N{z}_{i,1}z_{i,2}+\sum\limits_{i = 1}^N\varsigma_{i,1}^2+\Sigma_1&\label{eq:25}
\end{flalign}
where $\jmath_{i,1}=\gamma_{i,1} -\frac{\bar{g}_{i,d}}{2\underline{g}_{i,1}^2}>0$ by choosing $\gamma_{i,1}$ large enough, $\xi_{i,2}^*>0$ is the design parameter, satisfying $\frac{1}{\xi_{i,2}}\ge \xi_{i,2}^*+ \frac{3}{4}$, ${\varsigma_{i,1}}=-{\dot \alpha _{i,1}}$, ${\Sigma_1} =\sum\nolimits_{i = 1}^N{\frac{{{\sigma _{i,1}}}}{2}{\phi_{i,1m}^2}} + \sum\nolimits_{i = 1}^N\frac{1}{2}\bar{\epsilon}_{i,1m}^2$, and the fact that $-\left(\gamma_{i,1}+\frac{\dot{g}_{i,1}}
{2{g}_{i,1}^2}\right)\le -\left(\gamma_{i,1}
-\frac{\bar{g}_{i,d}}{2\underline{g}_{i,1}^2}
\right)$ is exploited.
$\textbf{Step \emph{k}}$ $(k=2,\cdots,n-1)$:
Define ${V_k} = {V_{k - 1}} + \sum\nolimits_{i = 1}^N \frac{1}{2g_{i,k}}z_{i,k}^2 + \sum\nolimits_{i = 1}^N \frac{1}{2}\tilde \Phi_{i,k}^T\Lambda_{i,k}^{-1}{{\tilde \Phi}_{i,k}}+ \sum\nolimits_{i = 1}^N \frac{1}{2}\Theta_{i,k + 1}^2$. In view of (\ref{eq:6}), (\ref{eq:1a}), (\ref{eq:8}), (\ref{eq:11}), (\ref{eq:13}), (\ref{eq:15}) and (\ref{eq:25}), we can obtain
\begin{flalign}
\dot{V}_{k}\le& -\delta_1\underline{\lambda}\left\|y\right\|^2
-\sum\limits_{i = 1}^N\sum\limits_{q = 1}^{k} \jmath_{i,q}z_{i,q}^2+ \sum\limits_{i = 1}^N {{z_{i,k}}{z_{i,k + 1}}}&\nonumber\\
&-\sum\limits_{i = 1}^N\sum\limits_{q =1}^{k}\frac{\sigma_{i,q}}{2}\tilde \Phi_{i,q}^T{{{\tilde \Phi}}_{i,q}}- \sum\limits_{i = 1}^N\sum\limits_{q =1}^{k}\xi_{i,{q+1}}^{*}
\Theta_{i,{q+1}}^2&\nonumber\\
&+\sum\limits_{i = 1}^N\sum\limits_{q =1}^{k}\varsigma_{i,q}^2+\Sigma_{k}&\label{eq:30}
\end{flalign}
where $\jmath_{i,k}=\gamma_{i,k}-\frac{\bar{g}_{i,d}}
{2\underline{g}_{i,k}^2\underline{\eta}_i}>0$ by choosing $\gamma_{i,k}$ large enough, $\xi_{i,k+1}^*>0$ is the design parameter, satisfying $\frac{1}{\xi_{i,k+1}}\ge \xi_{i,k+1}^*+ \frac{3}{4}$, ${\varsigma_{i,k}}=- {\dot \alpha _{i,k}}$, $\Sigma_{k}=\Sigma_{k-1}+\sum\nolimits_{i = 1}^N{\frac{{{\sigma _{i,k}}}}{2}{{\phi_{i,km}^2}}}+\sum\nolimits_{i = 1}^N\frac{1}{2}\bar{\epsilon}_{i,km}^2$, and the fact that $-\left(\gamma_{i,k}+\frac{\dot{g}_{i,k}}
{2g_{i,k}^2\eta_i}\right)\le -\left(\gamma_{i,k}-\frac{\bar{g}_{i,d}}
{2\underline{g}_{i,k}^2\underline{\eta}_i}\right)$ is used.
$\textbf{Step \emph{n}}$:
Define ${V_n} = {V_{n-1}} + \sum\limits_{i = 1}^N \frac{1}{2g_{i,n}\eta_i}z_{i,n}^2 + \sum\limits_{i = 1}^N \frac{1}{2}\tilde \Phi_{i,n}^T\Lambda_{i,n}^{ - 1}{{\tilde \Phi}_{i,n}}$.
Applying (\ref{eq:6}), (\ref{eq:1a}), (\ref{eq:8}), (\ref{eq:14}), (\ref{eq:15}) and (\ref{eq:30}), one can obtain
\begin{flalign}
\dot{V}_{n}\le&-\sum\limits_{i = 1}^N\sum\limits_{q = 1}^{n} \jmath_{i,q}z_{i,q}^2-\sum\limits_{i = 1}^N\sum\limits_{q =1}^{n}\frac{\sigma_{i,q}}{2}
\tilde \Phi_{i,q}^T{{{\tilde \Phi}}_{i,q}}&\nonumber\\
&- \sum\limits_{i = 1}^N\sum\limits_{q =1}^{n-1}\xi_{i,{q+1}}^{*}
\Theta_{i,{q+1}}^2 -\delta_1\underline{\lambda}
\left\|y\right\|^2+\Sigma_n& \nonumber\\
\le& -\delta_1\underline{\lambda}
\left\|y\right\|^2-\hbar_1 V_n+\Sigma_n& \label{eq:36}
\end{flalign}
where $\jmath_{i,n}= \gamma_{i,n}-\frac{\bar{g}_{i,d}}{2\underline{g}_{i,n}^2
\underline{\eta}_i}-\frac{\bar{\eta}_{i,d}}{2\underline{g}_{i,n}
\underline{\eta}_i^2}>0$ by choosing $\gamma_{i,n}$ large enough, $\hbar_1 =\min\{2{\bar{g}_{i,1}\jmath_{i,1}}
,\cdots,2{\bar{g}_{i,n}\jmath_{i,n}}$,
$\frac{{\sigma_{i,1}}}{{\lambda_{\max}}\left\{ {\Lambda_{i,1}^{-1}} \right\}},\cdots,\frac{{\sigma _{i,n}}}{{\lambda_{\max }}\left\{{\Lambda_{i,n}^{-1}} \right\}}, 2\xi_{i,2}^*,\cdots,2\xi_{i,n}^*\}$, ${\Sigma_n}=\Sigma_{n-1}+\sum\nolimits_{i = 1}^N\frac{1}{4}
\bar{\epsilon}_{i,nm}^2+\sum\nolimits_{i = 1}^N {\frac{{{\sigma _{i,n}}}}{2}{\phi_{i,nm}^2}} + \sum\nolimits_{i = 1}^N {\sum\nolimits_{q = 1}^{n-1}} {\bar{\varsigma}_{i,q}^2}$, and the fact that $-\left(\gamma_{i,n}+\frac{\dot{g}_{i,n}}
{2g_{i,n}^2\eta_i}+\frac{\dot{\eta}_{i}}{2g_{i,n}
\eta_i^2}\right)\le -\left(\gamma_{i,n}-\frac{\bar{g}_{i,d}}
{2\underline{g}_{i,n}^2\underline{\eta}_i}
-\frac{\bar{\eta}_{i,d}}{2\underline{g}_{i,n}
\underline{\eta}_i^2}\right)$ and $\left| {{\varsigma_{i,q}}} \right| \le \bar{\varsigma}_{i,q}$ over $\Omega_v$ are exploited, for $q=1,\cdots,n-1$.
By conducting a similar analysis to the proof of \emph{Theorem} 2, the results in \emph{Theorem} 1 can be easily demonstrated.
$\hfill{\blacksquare}$
\end{appendices}
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13661",
"language": "en",
"timestamp": "2023-02-28T02:26:53",
"url": "https://arxiv.org/abs/2302.13661",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Multimodal emotion recognition is a significant capability in human-machine interaction and has attracted widespread attention in industry and academia. As is well known, emotions are expressed in extremely complex and ambiguous ways, perhaps through linguistic content, speech intonation, facial expression and body actions. There have been many related studies on text emotion recognition \cite{adoma2020comparative, acheampong2021transformer}, as well as on audio emotion recognition \cite{shen2011automatic, kishore2013emotion, siriwardhana2020jointly}. However, these results suggest that single-modality research has reached a certain bottleneck, which has led to increasing attention on multimodal approaches. Some studies propose that the information from different modalities is often complementary and mutually corroborating, and that making full use of the information from different modalities can help the model better learn the key content \cite{soleymani2011multimodal, mittal2020m3er}.
In recent years, pretrained self-supervised learning has performed prominently in several research fields such as natural language processing (NLP) \cite{devlin2018bert} and automatic speech recognition (ASR) \cite{baevski2020wav2vec}. For the multimodal emotion recognition (MER) task, there are also studies that have done a lot of exploration on the basis of pretrained models.
Siriwardhana et al. \cite{siriwardhana2020jointly} were the first to jointly finetune modality-specific “BERT-like” pretrained Self-Supervised Learning (SSL) architectures to represent both the audio and text modalities for the MER task. Similarly, Yang et al. \cite{yang22q_interspeech} proposed to finetune two pretrained self-supervised learning models (Text-RoBERTa and Speech-RoBERTa) for MER. Building on pretrained models, Zhao et al. \cite{zhao22k_interspeech} explored multi-level fusion approaches, including co-attention-based early fusion and late fusion with models trained on both embeddings. Compared with MCSAN \cite{sun2021multimodal}, which uses traditional features (MFCC \& GloVe) for modal fusion, the works mentioned above have greatly improved performance. From the perspective of making full use of contextual data, Wu et al. \cite{wu2021emotion} took advantage of contextual information and proposed a two-branch neural network structure consisting of a time-synchronous branch and a time-asynchronous branch. By modifying the network structure, SMCN \cite{hou2022multi} realizes multimodal alignment, which captures global connections without interfering with unimodal learning.
However, these previous works focused more on sophisticated fusion structure design, on the use of larger and stronger pretrained models, or on the use of contextual information that goes beyond the single-utterance setting.
They did not start from the MER task itself to explore the bottleneck of insufficient fusion, nor to capture the features of emotion itself and the alignment of emotion across different modalities.
We believe that the network already has sufficient parameters and that complex fusion module designs have not brought commensurate benefits. Thus, we aim to guide the model to fully exploit the potential of the fusion module by designing just the right auxiliary tasks.
In this work, we propose a modular end-to-end approach for the MER task. The general framework is shown in Figure \ref{fig:figure-1}. First, we learn the semantic information of the respective modalities through the pretrained models, wav2vec 2.0 \cite{baevski2020wav2vec} for the audio modality and BERT \cite{devlin2018bert} for the text modality. Then, we map the text and audio feature information into a unified semantic vector space through a $K$-layer cross-attention mechanism for more adequate modal fusion.
Furthermore, we design two auxiliary tasks to help fully fuse the features of the two modalities and learn the alignment information of the emotion itself between different modalities.
In the first one, we randomly recombine the text and audio modalities and let the model predict the combination of the two modalities from the fused vector. This decoupling of multimodal data enables the model to see more complex input combinations, and the constraint of this auxiliary task forces the network not to ignore the role of any modality in the MER task.
In the second one, we randomly replace one of the modalities with other data of the same emotion category, hoping that the model can capture the features related to emotion and the alignment information beyond the content itself.
We comprehensively evaluate the performance of the proposed model on the IEMOCAP dataset in terms of weighted accuracy (WA) and unweighted accuracy (UA). In addition, we compare it with the SOTA methods and present ablation experiments that illustrate the effectiveness of each module.
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{framework.png}
\caption{Our framework}
\label{fig:figure-1}
\end{figure}
\section{Method}
\label{sec:format}
The framework of our proposed model is shown in Figure \ref{fig:figure-1}, which consists of three modules, i.e., a text encoder, an audio encoder, and a fusion module.
\subsection{Text Encoder}
The emergence of BERT has brought NLP into a new era and has gradually pushed the state of the art on multiple NLP tasks, with “Pretrain + Finetune” gradually becoming a new paradigm. Pretrained models such as BERT can be used to transform text into word vectors with contextual semantic information. In this paper, we choose bert-base-uncased\footnote{https://huggingface.com/bert-base-uncased} as the text-modality encoder, which consists of 12 transformer encoder layers. It converts the text into 768-dimensional vectors, which are fed into the fusion module. During training, we also finetune its weights to make it more suitable for our multimodal emotion recognition task.
\subsection{Audio Encoder}
We choose wav2vec2-base\footnote{https://huggingface.co/facebook/wav2vec2-base} as the audio-modality encoder, which consists of a feature encoder, contextualized representations with Transformers, and a quantization module. The base model contains 12 transformer blocks and is pretrained on the Librispeech corpus containing 960 hours of 16 kHz speech. It is able to learn a 768-dimensional latent representation directly from raw audio every 20 ms (16 kHz sampling rate). We also finetune its parameters during training, similar to BERT.
\subsection{Fusion Module}
The fusion module is based on the multi-head cross-attention mechanism \cite{vaswani2017attention}. In addition, two auxiliary tasks (Section \ref{sec:Auxiliary Tasks}) help the model better handle the feature relationship between the two modalities. Figure \ref{fig:figure-3} shows the specific details of the fusion module; each layer of the fusion module consists of two branches, which have the same structure but different Q, K, and V. In addition, we use residual connections to reduce the loss of information from the original modalities. The calculation process of the multi-head cross attention is as follows:
\begin{equation}
F_a = F_a + Attention_{at}(Q_a,K_t,V_t)
\label{eq3}
\end{equation}
\begin{equation}
Attention_{at}(Q_a,K_t,V_t) = softmax(\dfrac{Q_aK_t^T}{\sqrt{d_{K_t}}})V_t
\label{eq4}
\end{equation}
\begin{equation}
F_t = F_t + Attention_{ta}(Q_t,K_a,V_a)
\label{eq1}
\end{equation}
\begin{equation}
Attention_{ta}(Q_t,K_a,V_a) = softmax(\dfrac{Q_tK_a^T}{\sqrt{d_{K_a}}})V_a
\label{eq2}
\end{equation}
where the subscript $a$ represents the audio modality and the subscript $t$ represents the text modality. $d_{K_a}$ and $d_{K_t}$ represent the dimensions of the embeddings. $F_t$ : $(B, T_t, C)$ is the text feature output by BERT, and $F_a$ : $(B, T_a, C)$ is the audio feature output by wav2vec 2.0. $Q_a$, $K_a$, $V_a$ are given below (and similarly for $Q_t$, $K_t$, $V_t$):
\begin{equation}
Q_a = W_QF_a + b_a^Q
\label{eq5}
\end{equation}
\begin{equation}
K_a = W_KF_a + b_a^K
\label{eq6}
\end{equation}
\begin{equation}
V_a = W_VF_a + b_a^V
\label{eq7}
\end{equation}
Finally, we average-pool $F_a$ and $F_t$ along the time dimension and concatenate them along the feature dimension to obtain the fusion embedding $(B, 2C)$, which is fed into the classifier to obtain the emotion category.
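For concreteness, a minimal PyTorch sketch of one fusion layer and the pooling/classification head is given below. The per-branch projections of (\ref{eq5})-(\ref{eq7}) are folded into the internal projections of nn.MultiheadAttention, and details such as the absence of normalization and dropout are our own assumptions rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class CrossAttentionLayer(nn.Module):
    # One fusion layer: audio queries attend to text (Eqs. (1)-(2)) and
    # text queries attend to audio (Eqs. (3)-(4)), each with a residual.
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.att_at = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.att_ta = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fa, ft):          # fa: (B, Ta, C), ft: (B, Tt, C)
        fa_new = fa + self.att_at(fa, ft, ft)[0]
        ft_new = ft + self.att_ta(ft, fa, fa)[0]
        return fa_new, ft_new

class FusionHead(nn.Module):
    def __init__(self, dim=768, heads=8, k=2, num_classes=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [CrossAttentionLayer(dim, heads) for _ in range(k)])
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, fa, ft):
        for layer in self.layers:
            fa, ft = layer(fa, ft)
        # average-pool over time, then concatenate: (B, 2C)
        fused = torch.cat([fa.mean(dim=1), ft.mean(dim=1)], dim=-1)
        return self.classifier(fused)
\end{verbatim}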
\subsection{Auxiliary Tasks}
\label{sec:Auxiliary Tasks}
In order to help the model fully fuse the features of the two modalities and learn the alignment information of the emotion itself between different modalities, we design two auxiliary modal interaction tasks.
\subsubsection{Auxiliary Task1}
In MER tasks, the audio and text of a pair carry the same semantics. Analyzing the modal fusion in the downstream network, we attribute insufficient fusion to the fact that the overall emotional orientation can often be obtained from the information of one modality alone. In some cases, this shortcut leads to the right results, but for complex cases we want the network to be more ``humble", making full use of the information from both modalities.
As shown in Figure \ref{fig:figure-2}, we decouple the pairs of \{Audio, Text\} in a batch of data, and then randomly shuffle and recombine them to obtain Aux\_batch1. During the training process, we not only let the model predict the emotion category of the original data pair, but also predict the combined category of the reorganized data pair \{Audio, Text\} (a total of $emotion\_num \times emotion\_num$ kinds), whose label ($label_{new}$) is defined as follows:
\begin{equation}
label_{original} = label_a = label_t
\label{eq8}
\end{equation}
\begin{equation}
label_{new} = label_a \times emotion\_nums + label_t
\label{eq9}
\end{equation}
The main MER task requires the downstream network to receive the features from the two modalities and output the emotion category, while Auxiliary Task 1 requires the downstream network to predict, from the fusion embedding, not only the emotion but also the combination of the two modalities. It forces the downstream network not to ignore any modal information during the feature fusion process, that is, both modalities must contribute to the final fusion embedding; a sketch of this recombination is given below.
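The following minimal PyTorch-style sketch illustrates the recombination and the combined label of Eq. (\ref{eq9}); the helper name and the within-batch permutation strategy are our assumptions.
\begin{verbatim}
import torch

def make_aux_batch1(audio, text, labels, num_emotions=4):
    # Decouple the {Audio, Text} pairs and recombine them at random
    # (Aux_batch1); the combined label follows Eq. (9).
    perm = torch.randperm(audio.size(0))
    label_a = labels[perm]                       # emotion of the new audio
    label_new = label_a * num_emotions + labels  # label_t = labels (text kept)
    return audio[perm], text, label_new          # 4 x 4 = 16 combined classes
\end{verbatim}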
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{aux_batch.png}
\caption{Auxiliary Batch}
\label{fig:figure-2}
\end{figure}
\subsubsection{Auxiliary Task2}
In order to guide the fusion network to learn the alignment information of emotion itself between different modalities, we break the strong semantic correlation between modalities. As shown in Figure \ref{fig:figure-2}, for the pairs of \{Audio, Text\} in a batch of data, we randomly replace one of the modalities (Audio or Text) with other data of the same emotion category.
In Aux\_batch2, the different modalities have the same emotion label but different semantics. We hope that the fusion network can focus on the features of emotion itself in the different modalities and align them, while better learning the features common to each emotion category; a sketch of this replacement is given below.
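A minimal sketch of the replacement follows, assuming for brevity that the audio modality is always the one replaced and that replacements are drawn within the batch (the paper replaces either modality at random, possibly from the whole training set).
\begin{verbatim}
import torch

def make_aux_batch2(audio, text, labels):
    # Replace the audio of each pair with another utterance of the same
    # emotion (Aux_batch2); the emotion label is unchanged.
    audio_new = audio.clone()
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        audio_new[idx] = audio[idx[torch.randperm(idx.numel())]]
    return audio_new, text, labels
\end{verbatim}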
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Ourmodel.png}
\caption{Proposed model structure}
\label{fig:figure-3}
\end{figure}
\section{Experimental setup}
\label{sec:setup}
\subsection{Dataset}
The dataset used in the experiments is the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database \cite{busso2008iemocap}, a dyadic dialogue dataset containing improvised and scripted interactions performed by 10 actors. The 10 actors are divided into 5 sessions, each consisting of one male and one female actor. There are a total of 7,529 utterances in \textbf{IEMOCAP} (happy 595, excited 1,041, angry 1,103, sad 1,084, neutral 1,708, frustration 1,849, fear 40, surprise 107, disgust 2). To be consistent with and comparable to previous studies \cite{pepino21_interspeech}, only utterances with ground-truth labels belonging to “angry”, “happy”, “excited”, “sad”, and “neutral” were used. The “excited” class was merged with “happy” to better balance the size of each emotion class, resulting in a total of 5,531 utterances (happy 1,636, angry 1,103, sad 1,084, neutral 1,708).
\subsection{Implementation Details}
In order to fully evaluate our proposed model and maintain the same test conditions as previous studies \cite{wu2021emotion}, a leave-one-session-out 5-fold cross-validation (CV) configuration was implemented. We divide IEMOCAP into five folds according to sessions. At each fold we keep one session for testing, and the other sessions are used for training. Therefore, for each fold we obtain one result, and we take the average of the results as the final result of our experiments.
We implement our model within the PyTorch framework and select the AdamW \cite{loshchilov2017decoupled} optimizer for model optimization with a learning rate of $1 \times 10^{-5}$; the cross attention uses 8 heads.
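For clarity, the leave-one-session-out split can be sketched as follows, assuming each utterance record carries a session index in $\{1,\cdots,5\}$ (the field name is our assumption).
\begin{verbatim}
def session_folds(utterances):
    # Leave-one-session-out 5-fold CV: hold out one session per fold.
    for held_out in range(1, 6):
        train = [u for u in utterances if u["session"] != held_out]
        test = [u for u in utterances if u["session"] == held_out]
        yield train, test
\end{verbatim}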
\section{Results}
\label{sec:typestyle}
Table \ref{tab:example} shows the performance of our method on the audio-only, text-only, and multimodal (audio and text) emotion recognition tasks. Compared with a single modality, simply concatenating the features of the two modalities and feeding them into a downstream network built from a fully connected (FC) layer improves performance by about 6\%. Further, using the single-layer (K=1) multi-head cross-attention downstream network of Figure \ref{fig:figure-3} for modality fusion achieves WA: 77.19\%, UA: 78.47\%. In this setting, we also verify the gains of Auxiliary Task 1 and Auxiliary Task 2 separately, of which Auxiliary Task 2 performs better. Using both auxiliary tasks together yields WA: 78.34\%, UA: 79.59\%.
\begin{table}[t]
\caption{Weighted Accuracy (WA) and Unweighted Accuracy (UA) of the 5-fold CV results using single modality and multi modality.(FC - Fully Connected; CA - Multi-Head Cross Attention (K=1); Aux1 - Auxiliary Task1; Aux2 - Auxiliary Task2.)}
\label{tab:example}
\centering
\begin{tabular}{ lcc }
\toprule
\textbf{Methods} & \textbf{WA($\%$)} & \textbf{UA($\%$)} \\
\midrule
\textbf{Text-only} \\
BERT & $70.53$ & $71.79$ \\
\midrule
\textbf{Audio-only} \\
Wav2vec2 & $69.92$ & $70.68$ \\
\midrule
\textbf{Audio and Text} \\
BERT+Wav2vec2+FC & $76.24$ & $77.20$ \\
BERT+Wav2vec2+CA & $77.19$ & $78.47$ \\
BERT+Wav2vec2+CA+Aux1 & $77.67$ & $79.16$ \\
BERT+Wav2vec2+CA+Aux2 & $78.11$ & $79.47$ \\
\textbf{BERT+Wav2vec2+CA+Aux1\&2} & \bm{$78.34$} & \bm{$79.59$} \\
\bottomrule
\end{tabular}
\end{table}
Table \ref{tab:example2} shows the effect of the number of multi-head cross-attention layers $K$ on the performance of the emotion recognition task when both auxiliary tasks are used. When $K=2$, we obtain the best performance, WA: 78.42\%, UA: 79.71\%. We found that with the introduction of the auxiliary tasks, the overall training objective of the model becomes harder to achieve, and appropriately increasing the number of layers in the downstream network yields better performance. However, due to the limited size of the IEMOCAP dataset, continually increasing the number of network layers makes it difficult to fully train the network parameters, resulting in performance degradation. The performance of previous state-of-the-art multimodal models is listed in Table \ref{tab:example3}, and our proposed method performs better than these previous works.
\begin{table}[t]
\caption{Performance with different K (the number of layers of Multi-Head Cross Attention (CA)).}
\label{tab:example2}
\centering
\begin{tabular}{ lccc }
\toprule
\textbf{Methods} & \textbf{K} & \textbf{WA($\%$)} & \textbf{UA($\%$)} \\
BERT+Wav2vec2+CA+Aux1\&2 &$1$ & $78.34$ & $79.59$ \\
\textbf{BERT+Wav2vec2+CA+Aux1\&2} &\bm{$2$} & \bm{$78.42$} & \bm{$79.71$} \\
BERT+Wav2vec2+CA+Aux1\&2 &$3$ & $77.68$ & $79.41$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Comparison of the 5-fold CV results of previous state-of-the-art multimodal models and our model on the IEMOCAP.}
\label{tab:example3}
\centering
\begin{tabular}{ lcc }
\toprule
\textbf{Methods} & \textbf{WA($\%$)} & \textbf{UA($\%$)} \\
\midrule
BERT + Wav2vec2 \cite{zhao22k_interspeech} & $-$ & $76.31$ \\
RoBERTa-text\&audio \cite{yang22q_interspeech} & $77.70$ & $78.50$ \\
BERT + FBK \cite{wu2021emotion} & $77.57$ & $78.41$ \\
SMCN \cite{hou2022multi} & $75.60$ & $77.60$ \\
BERT + FBK \cite{morais2022speech} & $70.56$ & $71.46$ \\
MCSAN \cite{sun2021multimodal} & $61.20$ & $56.00$ \\
\textbf{Our proposed (best)} & \bm{$78.42$} & \bm{$79.71$} \\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:majhead}
In this paper, we propose to use wav2vec 2.0 and BERT as the upstream networks and a $K$-layer downstream network based on the multi-head cross-attention mechanism for the multimodal emotion recognition task. In addition, we design two auxiliary tasks to help the audio and text be fully integrated, and to capture and align the features of emotion itself across modalities. Finally, our method outperforms previous work on the 5-fold CV results on IEMOCAP, achieving state-of-the-art performance: WA: 78.42\%, UA: 79.71\%.
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.13716",
"language": "en",
"timestamp": "2023-02-28T02:28:13",
"url": "https://arxiv.org/abs/2302.13716",
"yymm": "2302"
} | \section{Introduction}
Consider a non-elementary hyperbolic group $\Gamma$ endowed with an invariant metric $d$, satisfying some regularity assumptions, acting by measure class preserving transformations on its Gromov boundary $(\partial \Gamma,\nu_{d})$ equipped with the so-called Patterson-Sullivan measure $\nu_{d}$ associated with $d$. This yields a one-parameter family of isometric representations on $L^{p}$-spaces, denoted by $(\pi_{t},L^{p}(\partial \Gamma,\nu_{d}))$ and defined for almost every $\xi\in \partial \Gamma$ as \begin{align}\label{representation}
[\pi_{t}(\gamma)v](\xi)=\bigg(\frac{d\gamma_{*}\nu_{d}}{d\nu_{d}}(\xi)\bigg)^{\frac{1}{2}+t}v(\gamma^{-1}\xi).
\end{align}
The above representation satisfies
$\|\pi_{t}(\gamma) v\|_{p}=\|v\|_{p}$ for all $v\in L^{p}(\partial \Gamma,\nu_{d})$ and for all $\gamma \in \Gamma$, where $p$ is such that $1/p=1/2+t$ with $-\frac{1}{2}<t<\frac{1}{2}$. We call these representations \emph{$L^{p}$-boundary representations} of hyperbolic groups.\\
\emph{The boundary representation} of hyperbolic groups is nothing but $\pi_{0}$; it has been intensively studied over the last decade and might be seen, from a dynamical point of view, as a generalization of ergodicity; see \cite{BM}, \cite{BM2}, \cite{Ga}, \cite{Boy2}, \cite{BoyMa},
\cite{BoyP}, \cite{BG}, \cite{BLP}, \cite{Fink}, \cite{KS}, \cite{KS2} and \cite{Ca}.
In the papers \cite{Bou}, \cite{Boy} and \cite{BPi} the representations (\ref{representation}) have already been studied, but rather as representations on Hilbert spaces. In this paper we focus on $L^{p}$-spaces. We characterize the irreducibility of the $L^{p}$-boundary representations $(\pi_{t},L^{p}(\partial \Gamma,\nu_{d}))$ with $1/p=1/2+t$ (where $-1/2<t<1/2$) thanks to an intertwining operator associated with the metric $d$, denoted by $\mathcal{I}_{t}$ and satisfying $\mathcal{I}_{t}\pi_{t}=\pi_{-t}\mathcal{I}_{t}$. We prove that this is a bounded operator $\mathcal{I}_{t}:L^{p}(\partial \Gamma,\nu)\rightarrow L^{q}(\partial \Gamma,\nu)$ with $1/q=1/2-t$, defined only for $0<t<1/2$ (see Subsection \ref{intertwiner}). It already appears in the context of hyperbolic groups and Hilbert spaces in \cite{BPi}, \cite{GAG}, and in CAT(-1) spaces in \cite{Bou}.\\
Our main result is the following:
\begin{theo} \label{mainT}
For all $-1/2<t<1/2$ and $1/p=1/2+t$, the $L^{p}$-boundary representations $(\pi_{t},L^{p}(\partial \Gamma,\nu_{d}))$ corresponding to $d$ are irreducible if and only if the intertwiner $\mathcal{I}_{|t|}$ is injective.
\end{theo}
\begin{remark}
In particular, we prove that the representation $(\pi_{t},L^{p}/\ker \mathcal{I}_{t})$ is irreducible for $1/p=1/2+t$ with $0<t<1/2.$
\end{remark}
We also deduce the following result in the context of rank one semisimple Lie groups. We do not know whether the following theorem already appears in the literature.
\begin{theo}\label{latt}
Let $G$ be a rank one semisimple Lie group of non compact type.
Let $\Gamma$ be a lattice in $G$. The $L^{p}$-boundary representations of $\Gamma$ corresponding to $\nu$ the unique $K$-invariant probability measure on its Poisson-Furstenberg boundary $G/P$ are irreducible for all $1<p<+\infty$, where $K$ is the maximal compact subgroup and $P$ the minimal parabolic subgroup of $G$.
\end{theo}
Indeed, we derive the above theorem from a generalization of an ergodic theorem \`a la Bader-Muchnik for $L^{p}$-boundary representations, see Theorem \ref{BML2} below.
\subsection*{Notation}
Endow $\Gamma$ with the length function corresponding to $d$, $|\cdot|: \Gamma \rightarrow \mathbb{R}^{+}$ defined as $|\gamma |=d(1,\gamma )$ where $1$
is the identity element and $\gamma \in \Gamma.$\\
Let $S^{\Gamma}_{n,R}:=\{ \gamma \in \Gamma| nR\leq |\gamma|<(n+1)R\}$ for $R>0$ and let $|S^{\Gamma}_{n,R}|$ be the cardinal of $S^{\Gamma}_{n,R}$.\\
As in \cite{Boy}, we recall the definition of a spherical function associated with $\pi_{t}$. This is the matrix coefficient: \begin{align}
\phi_{t}:\gamma \in \Gamma \mapsto \langle \pi_{t}(\gamma)\textbf{1}_{\partial \Gamma}, \textbf{1}_{\partial \Gamma}\rangle \in \mathbb{R}^{+},
\end{align}
where $
\textbf{1}_{\partial \Gamma}$ stands for the characteristic function of $\partial \Gamma$. \\
It will be convenient to also introduce the \emph{continuous} function (see \cite[Lemma 3.2]{BPi})
\begin{align}\label{lafonction}
\sigma_{t}:\xi\in \partial \Gamma \mapsto \mathcal{I}_{t}(\textbf{1}_{\partial \Gamma})(\xi)\in \mathbb{R}^{+}\;\;\;\; (t>0).\end{align} This is also a strictly positive function.
Let $M_{\sigma^{-1}_{t}}$ be the corresponding multiplication operator, which is a bounded operator on $L^{p}(\partial \Gamma,\nu_{d})$ for any $p>1$.\\
We denote by $\mathcal{R}_{t}=M_{\sigma^{-1}_{t}} \mathcal{I}_{t}$ the Riesz operator as in \cite{BPi}.
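Note that, since $\sigma_{t}=\mathcal{I}_{t}(\textbf{1}_{\partial \Gamma})$ by (\ref{lafonction}), the Riesz operator is normalized so that $\mathcal{R}_{t}(\textbf{1}_{\partial \Gamma})=\sigma_{t}^{-1}\,\mathcal{I}_{t}(\textbf{1}_{\partial \Gamma})=\textbf{1}_{\partial \Gamma}$.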
\subsection*{Convergence results}
We deduce the above theorems from a theorem {\it \`a la Bader-Muchnik} for $L^{p}$-boundary representations of hyperbolic groups. Surprisingly, this kind of theorem, known in the Hilbertian context, also holds for $L^{p}$-spaces.
\begin{theorem}\label{BML2}
For $R>0$ large enough, there exists a sequence of measures $\mu_{n,R}:\Gamma \rightarrow \mathbb{R}^{+}$, supported on $S^{\Gamma}_{n,R}$, satisfying $\mu_{n,R}(\gamma)\leq C /|S^{\Gamma}_{n,R}|$ for some $C>0$ independent of $n$ such that
for all $0<t<1/2 $, for all $f,g\in C(\Gamma \cup \partial \Gamma)$, for all $v\in L^{p}(\partial \Gamma,\nu_{o})$ and $w\in L^{q}(\partial \Gamma,\nu_{o})$:
$$\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) f(\gamma ) g(\gamma^{-1} ) \frac{\langle \pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}\to \langle g_{|_{\partial \Gamma}}\mathcal{R}_{t}(v),\textbf{1}_{\partial \Gamma}\rangle \langle f_{|_{\partial \Gamma}},w \rangle, $$
as $n\to +\infty$.
\end{theorem}
\begin{remark}
If $\mu$ denotes a finitely supported random walk on a non-elementary hyperbolic group $\Gamma$ and $d:=d_{\mu}$ denotes the corresponding Green metric, then $(\Gamma,d_{\mu})$ satisfies the assumptions of Theorem \ref{mainT} and \ref{BML2}.
\end{remark}
\subsection*{Acknowledgement}
The first author would like to thank Ian Tice for useful discussions about Proposition \ref{weakschur} and Nigel Higson for comments on Theorem \ref{latt}.
\subsection{Structure of the paper}
Section \ref{sec2} contains preliminaries on $\delta$-hyperbolic spaces and hyperbolic groups, Patterson-Sullivan measures and equidistribution results, $L^{p}$-boundary representations as well as the definition of the intertwiner $\mathcal{I}_{t}$ for $t>0.$\\
In Section \ref{interpolation}, we recall some basic facts in interpolation theory and some known results about spherical functions for hyperbolic groups. We prove that $\mathcal{I}_{t}$ is a bounded operator from $L^{p}(\partial \Gamma,\nu)$ to $L^{q}(\partial \Gamma,\nu)$ with $1/p=1/2+t$ and $1/q=1/2-t$ where $0<t<1/2$.\\
Section \ref{section4} is devoted to the proofs of Theorem \ref{BML2} and Theorem \ref{mainT}. In particular, a new tool we use is an $L^p$-version of the Radial Property RD for $L^p$-boundary representations.\\
Section \ref{section5} is a discussion about the case of rank one globally symmetric spaces of non compact type and we provide a proof of Theorem \ref{latt}.
\section {Preliminaries on geometrical setting}\label{sec2}
\subsection{The geometrical setting and the regularity assumptions of the metric} A nice reference is \cite{BH}.\\
A metric space \((X,d)\) is said to be \emph{hyperbolic} if there exists $\delta\geq 0$ and a\footnote{if the condition holds for some \(o\) and \(\delta\), then it holds for any \(o\) and \(2\delta\)} basepoint \(o\in X\) such that for any \(x,y,z\in X\) one has
\begin{equation}\label{hyp}
(x,y)_{o}\geq \min\{ (x,z)_{o},(z,y)_{o}\}-\delta,
\end{equation}
where \((x,y)_{o}\) stands for the \emph{Gromov product} of \(x\) and \(y\) from \(o\), that is
\begin{equation}
(x,y)_{o}=\frac{1}{2}(d(x,o)+d(y,o)-d(x,y)).
\end{equation}
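To fix ideas, recall the standard example of a simplicial tree $T$ with its graph metric: there, a direct computation shows that
$$(x,y)_{o}=d(o,[x,y]),$$
the distance from $o$ to the geodesic segment joining $x$ and $y$, so that (\ref{hyp}) holds with $\delta=0$.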
We consider \emph{proper} hyperbolic metric spaces (those whose closed balls are compact).
A sequence $(a_{n})_{n\in \mathbb{N}}$ in $X$ converges at infinity if $(a_{i},a_{j})_{o}\rightarrow +\infty$ as $i,j\to +\infty$. Set $(a_n)\sim(b_n)\Leftrightarrow (a_i,b_j)_o\to \infty$ as $i,j\to \infty$: this defines an equivalence relation, and the set of equivalence classes (which does not depend on the base point) is denoted by $\partial X$ and is called the Gromov boundary of $X$. The topology on $X$ naturally extends to $\overline{X}:=X\cup \partial X$, so that $\overline{X}$ and $\partial X$ are compact sets. The formula
\begin{equation}\label{gromovextended}
(\xi,\eta)_{o}:= \sup \liminf_{i,j}(a_{i},b_{j})_{o}
\end{equation}
(where the supremum is taken over all sequences $(a_n), (b_n)$ which represent $\xi$ and $\eta$ respectively)
allows one to extend the Gromov product to $\overline{ X}\times \overline{ X}$, but in a \emph{non continuous way} in general. Moreover, the boundary $\partial X$ carries a family of \emph{visual metrics}, depending on \(d\) and a real parameter \(\epsilon > 0\), denoted from now on by $d_{o,\epsilon}$. The metric space $(\partial X,d_{o,\epsilon})$ is a compact subspace of the bordification $\overline{X}:=X \cup \partial X$ (also compact), and the open ball centered at $\xi$ of radius $r$ with respect to $d_{o,\epsilon}$ will be denoted by $B(\xi,r)$.\\
It turns out that, in general, the Gromov product does not extend continuously to the bordification; see for example \cite[Example 3.16]{BH}. Following the authors of \cite{NS}, we say that a hyperbolic space $X$ is $\epsilon$-good, where $\epsilon>0$, if the following
two properties hold for each base point $o\in X$:
\begin{itemize}
\item The Gromov product $(\cdot,\cdot)_{o}$ on $X$ extends continuously to the bordification $X\cup \partial X$.
\item The map $d_{o,\epsilon}:(\xi,\eta)\in \partial X\times \partial X \mapsto e^{-\epsilon(\xi,\eta)_{o}}$ is a metric on $\partial X$.
\end{itemize}
The classical theory of $\delta$-hyperbolic spaces works under the assumption that the spaces are geodesic. To guarantee that the Gromov product extends continuously to the boundary (that is, if two sequences $a_{n},b_{m} \in X$ converge to $\xi,\eta \in \partial X$, then $(a_{n},b_{m})_{o}\to (\xi,\eta)_{o}$), we shall work under the assumption of roughly geodesic spaces. In particular, the conformal relation on the boundary holds: for all $x,y\in X$ and for all $\xi,\eta \in \partial X:$
\begin{equation}\label{conform}
d_{y,\epsilon}(\xi,\eta)=e^{\frac{\epsilon}{2} \big(\beta_{\xi}(x,y)+\beta_{\eta}(x,y)\big)}d_{x,\epsilon}(\xi,\eta),
\end{equation}
where the Busemann function $\beta_{\cdot}(\cdot,\cdot)$ is defined as $$(\xi,x,y)\in\partial X \times X \times X \mapsto \beta_{\xi}(x,y):= \lim_{n\to +\infty}d(x,a_{n}) -d(y,a_{n}),$$ where $(a_{n})_{n\in \mathbb{N}}$ represents $\xi.$
Recall that for all $\xi\in \partial X$ and $x,y\in X$
\begin{equation}\label{buse}
\beta_{\xi}(x,y)=-d(x,y)+2(\xi,y)_{x}.
\end{equation}
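Indeed, if $(a_{n})_{n\in \mathbb{N}}$ represents $\xi$, then by the definition of the Gromov product $d(x,a_{n})-d(y,a_{n})=2(y,a_{n})_{x}-d(x,y)$, and letting $n\to +\infty$ (the Gromov product extends continuously in our setting) yields (\ref{buse}).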
A metric space $(X,d)$ is roughly geodesic if there exists $C=C_X>0$ such that for all $x,y\in X$ there exists a rough geodesic joining $x$ and $y$, that is, a map $r:[a,b]\subset \mathbb{R}\rightarrow X$ with $r(a)=x$ and $r(b)=y$ such that
\begin{equation}\label{roughgeo} |t-s|-C_X \leq d(r(t),r(s))\leq |t-s|+C_X
\end{equation}
for all $t,s\in [a,b]$.
We say that two rough geodesic rays
$r,r':[0,+\infty)\rightarrow X$ are equivalent if $\sup_{t}d(r(t),r'(t))<+\infty$. We write $\partial_{r} X$ for the set of equivalence classes of rough geodesic rays. When $(X,d)$ is a proper roughly geodesic space, $\partial X$ and $\partial_{r} X$ coincide.
\subsection{Hyperbolic groups}
For an introduction to the theory of hyperbolic groups we refer to \cite{Gr} and \cite{G}.\\
Recall that a group $\Gamma$ acts properly discontinuously on a proper metric space $X$ if for all compact sets $K,L\subset X$, the set $\{\gamma\in \Gamma\;;\; \gamma K\cap L\neq \emptyset\}$ is finite. A {\it group $\Gamma$ is said to be hyperbolic} if it acts by isometries on some proper hyperbolic metric space $(X,d)$ such that $X/\Gamma$ is compact. A hyperbolic group is necessarily finitely generated (by the \v{S}varc-Milnor lemma). For such $\Gamma$, any finite set of generators $\Sigma$ gives rise to a Cayley graph $({\mathcal G}(\Gamma, \Sigma),d_{\Sigma})$ whose vertex set is $\Gamma$, two vertices being linked by a length-one edge if and only if they differ by an element of $\Sigma$. Every {\it geodesic} hyperbolic metric space $(X,d)$ on which $\Gamma$ acts by isometries properly discontinuously with compact quotient is quasi-isometric to a Cayley graph of a hyperbolic group. If $\Gamma$ is a hyperbolic group
endowed with a left invariant metric quasi-isometric to a word metric, it turns out that the metric space $(\Gamma,d)$ is a proper roughly geodesic $\delta$-hyperbolic metric space, see for example \cite[Section 3.1]{Ga}.\\
The limit set of $\Gamma$ denoted by $\Lambda_{\Gamma}$ is the set of accumulation points in $\partial X$ of an (actually any) orbit. Namely $\Lambda_{\Gamma}:=\overline{\Gamma . o}\cap \partial X$, with the closure in $\overline{X}$. We say that $\Gamma$ is non-elementary if $|\Lambda_{\Gamma}|>2$ (and in this case, $|\Lambda_{\Gamma}|=\infty$). If $\Gamma$ is non-elementary and if the action is cocompact then $\Lambda_{\Gamma}=\partial X$.
Finally, note that a combination of results due to Blach\`ere, Ha\"issinsky and Mathieu \cite{BHM} and of Nica and \v{S}pakula \cite{NS} provides
\begin{theorem}
A hyperbolic group acts by isometries, properly discontinuously and cocompactly on a proper roughly geodesic $\epsilon$-good $\delta$-hyperbolic space.
\end{theorem}
\subsection{To sum up}\label{class}
We assume that the metric space we are considering satisfies the following conditions:
\begin{itemize}
\item The metric space $(X,d)$ is $\delta$-hyperbolic.
\item The metric space $(X,d)$ is proper.
\item The metric space $(X,d)$ is roughly geodesic.
\item The metric space $(X,d)$ is $\epsilon$-good for some $\epsilon>0,$
and we let a non-elementary group $\Gamma$ act on $(X,d)$ under the following conditions:
\begin{itemize}
\item The action of $\Gamma$ is by isometries.
\item The action is properly discontinuous.
\item The action is cocompact.
\end{itemize}
In other words, the group $\Gamma$ is a non-elementary hyperbolic group and thus $\Gamma$ is infinite, discrete, countable and non-amenable.
\subsection{The Patterson-Sullivan measure}
Fix such $(X,d)$, pick an origin $o\in X$ and set $B(o,R)=\{ x\in X|d(o,x)<R\}$.
Consider a family of visual metrics $(d_{x,\epsilon})_{x\in X}$ associated with a parameter $\epsilon$. The compact metric space $(\partial X,d_{o,\epsilon})$ admits a Hausdorff measure of dimension \begin{equation}\label{DQ} D:={Q\over \epsilon} \end{equation}
where
\begin{equation}\label{volumegrowth}
Q=Q_{\Gamma,d}:=\limsup_{R\to +\infty}\frac{1}{R}\log |\Gamma.o\cap B(o,R)|,
\end{equation}
is the critical exponent of $\Gamma$ (w.r.t. its action on $(X,d)$).
This $D$-dimensional Hausdorff measure is nonzero, finite, unique up to a constant, and denoted by \(\nu_{o}\) when normalized to be a probability measure. The fundamental property we use is Ahlfors regularity: the support of $\nu_{o}$ lies in $\partial X$, and we say that $\nu_{o}$ is Ahlfors regular of dimension \(D\) if we have the following estimate for the volumes of balls: there exists $C>0$ such that for all $\xi \in \Lambda_{\Gamma}$ and all \(r \leq \mathrm{Diam}(\partial X)\)
\begin{equation}\label{Ahlfors}
C^{-1} r^{D}\leq \nu_{o}(B(\xi,r)) \leq C r^{D}.
\end{equation}
The \emph{class} of measures $\nu_o$ is invariant under the action of $\Gamma$ and independent of the choice of \(\epsilon\). We refer to \cite{Pa}, \cite{Su}, \cite{BMo} and \cite{Co} for Patterson-Sullivan measures theory.
\subsection{Shadows and control of Busemann functions}
\subsubsection*{Upper Gromov bounded by above}
This assumption appears in the work of Connell and Muchnik in \cite{CM} as well as in the work of Garncarek on boundary unitary representations \cite{Ga}. We say that a space $X$ is \emph{upper Gromov bounded by above} with respect to $o$ if there exists a constant $M>0$ such that for all $x\in X$ we have
\begin{equation} \sup_{\xi \in \partial X}(\xi,x)_{o}\geq d(o,x)-M.
\end{equation}
Morally, this definition allows us to choose a point in the boundary playing the role of the forward endpoint of a geodesic starting at $o$ and passing through $x$, as in the setting of a simply connected Riemannian manifold of negative curvature. \\
We denote by $\hat{x}_{o}$ a point in the boundary satisfying
\begin{equation} \label{endpoint}
(\hat{x}_{o},x)_{o}\geq d(o,x)-M.
\end{equation}
In particular, every roughly geodesic metric space is upper Gromov bounded by above
(see for example \cite[Lemma 4.1]{Ga}).
\subsubsection*{Definition of shadows}
Let $(X,d)$ be a roughly geodesic, $\epsilon$-good, $\delta$-hyperbolic space.
Fix $r>0$ and a base point $o \in X$.
Define a shadow for any $x\in X$ denoted by $O_{r}(o,x)$ as
\begin{equation}
O_{r}(o,x):=\{ \xi\in \partial X | (\xi,x)_{o}\geq d(x,o)-r\}.
\end{equation}
\begin{lemma}\label{ombre} Let $r>M+\delta$. Then
$$B(\hat{x}_{o},e^{-\epsilon(d(o,x)-r+\delta)})\subset O_{r}(o,x) \subset B(\hat{x}_{o},e^{-\epsilon(d(x,o)-r-\delta)}). $$
\end{lemma}
\begin{proof}
Assume $r>M+\delta$.
For the left inclusion we have
\begin{align*}
(\xi,x)_{o}&\geq \min \{ (\xi,\hat{x}_{o})_{o},(\hat{x}_{o},x)_{o}\}-\delta\\
&\geq\min \{d(o,x)-r+\delta,d(o,x)-M \}-\delta \\
&=d(o,x)-r,
\end{align*}
using $r>M+\delta$ in the last equality.
For the other inclusion
\begin{align*}
(\xi,\hat{x}_{o})_{o}&\geq \min \{ (\xi,x)_{o},(\hat{x}_{o},x)_{o}\}-\delta\\
&\geq \min \{ d(x,o)-r,d(o,x)-M \}-\delta\\
&\geq d(o,x)-r -\delta.
\end{align*}
\end{proof}
The above lemma combined with Ahlfors regularity of $\nu_{o}$ provides
\begin{lemma}\label{shadow}
There exists $C>0$ such that for any $x \in X$, and for $r>M+\delta$ $$C^{-1}e^{-Qd(o,x)}\leq\nu_{o}(O_{r}(o,x))\leq C e^{-Qd(o,x)}.$$
\end{lemma}
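For the reader's convenience, here is a sketch of the (standard) argument: by Lemma \ref{ombre}, the shadow $O_{r}(o,x)$ contains and is contained in balls centered at $\hat{x}_{o}$ of radii comparable to $e^{-\epsilon d(o,x)}$ (up to factors depending only on $r$ and $\delta$), so Ahlfors regularity (\ref{Ahlfors}) shows that $\nu_{o}(O_{r}(o,x))$ is comparable to $e^{-\epsilon D d(o,x)}=e^{-Qd(o,x)}$, by (\ref{DQ}).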
Here is a lemma dealing with coverings of the boundary by shadows and with the multiplicity of such coverings.
\begin{lemma}\label{multiplicity}
We have the following two properties:
\begin{enumerate}
\item \label{item1}For $R>0$ large enough, there exists $r>0$ such that $$\cup_{\gamma \in S^{\Gamma}_{n,R}} O_{r}(o,\gamma o)=\partial X.$$
\item \label{item2}For all $R,r>0$ large enough, there exists an integer $m$ such that for all $\xi \in \partial X$ we have for all $n\in \mathbb{N}$,
$\sum_{\gamma \in S^{\Gamma}_{n,R}} \textbf{1}_{O_{r}(o,\gamma o)}(\xi)\leq m.$
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\kappa$ be the diameter of a relatively compact fundamental domain, containing $o$, of the action of $\Gamma$ on $X$. Set \begin{equation}\label{choice}
R>3(C_{X}+\kappa)
\end{equation} where $C_{X}$ is the constant from (\ref{roughgeo}).\\
We prove (\ref{item1}).
Let $\xi\in \partial X$ and consider a rough geodesic $r_{o}$ representing $\xi$. Define $z_{\xi}:=r_{o}(nR+R/2)$; hence $nR\leq d(o,z_{\xi})< (n+1)R$, by (\ref{roughgeo}) and (\ref{choice}). Since the action is cocompact, there exists $\gamma \in \Gamma$ such that $d(\gamma o,z_{\xi})\leq \kappa.$
The choice of (\ref{choice}) ensures $ nR \leq d(\gamma o,o)<(n+1)R.$ Therefore,
\begin{align*}
(\xi, \gamma o)_{o}&\geq \min\{(\xi,z_{\xi})_{o} ,(z_{\xi},\gamma o)_{o}\}-\delta\\
&\geq \min\{(n+1)R-3C_{X},nR-\kappa\}-\delta\\
&\geq nR -\kappa-\delta\\
&\geq |\gamma|-R -\kappa-\delta,
\end{align*}
and thus $\xi \in O_{r}(o,\gamma o)$ with $r=R +\kappa+\delta$.\\
We now prove (\ref{item2}). Take $r>0$ and $R$ satisfying (\ref{choice}), and keep the notation $z_{\xi}$ from the proof of (\ref{item1}). For any $\gamma \in S^{\Gamma}_{n,R}$
and for all $\xi \in O_{r}(o,\gamma o)$,
\begin{align*}
(\gamma o,z_{\xi})_{o}&\geq \min\{(\gamma o,\xi)_{o},(\xi,z_{\xi})_{o} \}-\delta\\
&\geq \min\{|\gamma|-r ,(n+1)R-3C_{X} \}-\delta\\
&\geq \min\{nR-r ,(n+1)R-3C_{X} \}-\delta\\
&\geq \min\{nR-r ,nR-C_{X} \}-\delta\\
&\geq nR-r-\delta.
\end{align*}
By definition of the Gromov product we deduce that $d(\gamma o,z_{\xi})\leq \rho$ where $\rho=2(R+r+\delta)$.
Thus the set $\{\gamma \in S^{\Gamma}_{n,R} \,|\, O_{r}(o,\gamma o)\ni \xi \}$ is contained in $B(z_{\xi},\rho)\cap \Gamma .o$. Since the action is cocompact, there exists $g\in\Gamma$ with $d(go,z_{\xi})\leq\kappa$, so that, by invariance of the orbit, $|B(z_{\xi},\rho)\cap \Gamma .o| \leq |B(o,R')\cap \Gamma. o|$ with $R'=\rho+\kappa$. Set $m:=|B(o,R')\cap \Gamma .o|$ to conclude the proof.
\end{proof}
Recall that there exists $M>0$, such that if $\gamma$ is an element of $\Gamma$, one can choose $\hat{\gamma}_{o}$ a point in $\partial X$ satisfying (\ref{endpoint})
$$ (\hat{\gamma}_{o},\gamma o)_{o}\geq |\gamma|-M.$$
\begin{lemma}\label{crucial}
For $R>0$ large enough, there exists $C>0$ such that for all $n\in \mathbb{N}$ and all $\xi \in \partial X$, there exists $g_{\xi}\in S^{\Gamma}_{n,R}$ such that for all $\gamma \in S^{\Gamma}_{n,R}$,
$\beta_{\xi}(o,\gamma o)\leq \beta_{\hat{\gamma}_{o}}(o,g_{\xi} o)+C.$
\end{lemma}
\begin{proof}
Pick a point $\xi \in \partial X$. Consider a roughly geodesic $r_{o}$ starting at $o$ representing $\xi$ and choose a point $z_{\xi}$ on it such that $nR\leq d(z_{\xi},o)< (n+1)R$ (since $R$ is large enough).
We have for $\gamma \in S^{\Gamma}_{n,R}$ \begin{align*}
(\gamma o, z_{\xi})_{o}&\geq \min\{(\gamma o, \hat{\gamma o})_{o}, (\hat{\gamma o}, z_{\xi})_{o}\}-\delta\\
& \geq \min\{|\gamma|-M, (\hat{\gamma o}, z_{\xi})_{o}\}-\delta,
\end{align*}
and therefore either $(\hat{\gamma o}, z_{\xi})_{o}\leq (\gamma o, z_{\xi})_{o} +\delta $ or $ (\gamma o, z_{\xi})_{o}\geq |\gamma|-M-\delta \geq d(o,z_{\xi})-M-R-\delta\geq (\hat{\gamma o}, z_{\xi})_{o}-M-R-\delta.$ In other words,
\begin{equation}\label{equ}
(\hat{\gamma o}, z_{\xi})_{o}\leq (\gamma o, z_{\xi})_{o} +C_{\delta,M,R}
\end{equation}
with $C_{\delta,M,R}=M+R+\delta.$
It follows that
\begin{align*}
(\xi,\gamma o)_{o}&\geq \min\{ (\xi,z_{\xi})_{o}, (z_{\xi},\gamma o)_{o} \}-\delta \\
&\geq \min\{ nR-C_{X} , (z_{\xi},\gamma o)_{o} \}-\delta\\
&\geq \min\{ nR-C_{X} , (\hat{\gamma o}, z_{\xi})_{o}-C_{\delta,M,R} \}-\delta\\
&\geq (\hat{\gamma o}, z_{\xi})_{o}-C',
\end{align*}
with $C'=C_{X}+R+\delta+C_{\delta,M,R}>0$.
Equality (\ref{buse}) implies that
$\beta_{\xi}(o,\gamma o)\leq \beta_{\hat{\gamma o}}(o,z_{\xi})+2C'.$ Now, choose $R$ large enough so that there exists $g_{\xi}\in \Gamma$ with $d(g_{\xi}o,z_{\xi})\leq \kappa$ and $g_{\xi} \in S^{\Gamma}_{n,R}$, where $\kappa$ is the diameter of a fundamental domain, containing $o$, of the action of $\Gamma$ on $X$. Then\\
$$\beta_{\hat{\gamma o}}(o,z_{\xi})=\beta_{\hat{\gamma o}}(o,g_{\xi} o)+\beta_{\hat{\gamma o}}(g_{\xi} o,z_{\xi})\leq \beta_{\hat{\gamma o}}(o,g_{\xi} o)+d(g_{\xi} o,z_{\xi})\leq \beta_{\hat{\gamma o}}(o,g_{\xi} o)+\kappa.$$
To conclude the proof write
$$\beta_{\xi}(o,\gamma o)\leq \beta_{\hat{\gamma o}}(o,g_{\xi}o)+2C'+\kappa.$$
\end{proof}
\subsection{Equidistribution \`a la Roblin-Margulis}\label{equid}
The following theorem appears in this form for the first time in \cite[Theorem 3.2]{BG} and was inspired by results in \cite{Ro} and \cite{Ma}.
The unit Dirac mass centered at $x\in X$ is denoted by $D_{x}$.
\begin{theorem}\label{equi}
For any $R>0$ large enough, there exists a sequence of measures $\mu_{n,R}:\Gamma \rightarrow \mathbb{R}^{+}$ such that
\begin{enumerate}
\item \label{growth}There exists $C>0$ such that for all $n \in \mathbb{N}$ and all $\gamma \in S^{\Gamma}_{n,R}$, $$\mu_{n,R}(\gamma)\leq C / |S^{\Gamma}_{n,R}|.$$
\item
We have the following convergence: $$\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) D_{\gamma o} \otimes D_{\gamma^{-1} o} \rightharpoonup \nu_{o}\otimes \nu_{o},$$
as $n\to +\infty$, for the weak* convergence in $C(\overline{X}\times \overline{X})$.
\end{enumerate}
\end{theorem}
\subsection{$L^{p}$-representations}
The expression (\ref{representation}) of $\pi_{t}$ defines an isometric $L^{p}$-representation of $\Gamma$ for the exponent
\begin{equation}
p=\frac{2}{1+2t},
\end{equation}
with $0<t<1/2$.
Denote by $q$ its conjugate exponent:
\begin{equation}
q=\frac{2}{1-2t}.
\end{equation}
Observe that the contragredient representation of $(\pi_{t},L^{p})$ is $(\pi_{-t},\overline{L^{q}})$
with respect to the (non-degenerate) pairing $$\langle\cdot, \cdot\rangle:(v,w)\in L^{p}\times \overline{L^{q}} \rightarrow \int_{\partial X}v(\xi)\overline{w}(\xi) d\nu_{o}(\xi) \in \mathbb{C}.$$
In particular, the adjoint operator $\pi^{*}_{t}(\gamma)$ of $\pi_{t}(\gamma)$ is given for any $\gamma \in \Gamma$ by
\begin{equation}
\pi^{*}_{t}(\gamma)=\pi_{-t}(\gamma^{-1}).
\end{equation}
\subsection{An Intertwiner}\label{sec14}
Following \cite{BPi}, recall the definition of the operator $\mathcal{I}_{t}$ for $t>0$: for almost every $\xi \in \partial X$
\begin{equation}\label{intertwiner}
\mathcal{I}_{t}(v)(\xi):=\int_{\partial X} \frac{v(\eta)}{d^{(1-2t)D}_{o,\epsilon}(\xi,\eta)}d\nu_{o}(\eta).
\end{equation}
It has already been observed in \cite{BPi} that $\mathcal{I}_{t}$ is a self-adjoint compact operator on $L^{2}(\partial X,\nu_{o})$. We will show that for $0<t<1/2$, the formula (\ref{intertwiner}) defines $\mathcal{I}_{t}$ as a bounded operator from $L^{p}(\partial X,\nu_{o})$ to $L^{q}(\partial X,\nu_{o})$ with $1/p=1/2+t$ and $1/q=1/2-t$, see Proposition \ref{cont}. Moreover, working under the assumption that the space is $\epsilon$-good guarantees that the operator $\mathcal{I}_{t}$ intertwines $\pi_{t}$ and $\pi_{-t}$ from $L^{p}(\partial X,\nu_{o})$ to $L^{q}(\partial X,\nu_{o})$, thanks to the relation (\ref{conform}), see \cite[Proposition 3.17]{BPi}. Namely, for all $\gamma \in \Gamma$ and for all $v\in L^{p}(\partial X,\nu_{o}):$
\begin{equation}\label{intert}
\mathcal{I}_{t}\pi_{t}(\gamma)v=\pi_{-t}(\gamma)\mathcal{I}_{t}v.
\end{equation}
It will also be useful to consider \begin{equation}\label{sigmat}
\tilde{\sigma_{t}}:x\in \overline{X} \mapsto \int_{\partial X} e^{(1-2t)Q( x,\eta)_{o}}d\nu_{o}(\eta)\in \mathbb{R}^{+}.
\end{equation}
Observe that $\tilde{\sigma_{t}}$ restricted to $\partial X$ is nothing but $\sigma_{t}$ defined in (\ref{lafonction}): indeed, on the boundary $d_{o,\epsilon}(\xi,\eta)=e^{-\epsilon(\xi,\eta)_{o}}$, so that $d^{-(1-2t)D}_{o,\epsilon}(\xi,\eta)=e^{(1-2t)Q(\xi,\eta)_{o}}$ by (\ref{DQ}).
We recall that the function $\tilde{\sigma_{t}}$ is continuous on $\overline{X}$, see \cite[Proposition 3.4]{BPi}.
\section{Interpolation theory: Strong inequality of type $(p,q)$ for the intertwining operator and application of the Riesz-Thorin theorem}\label{interpolation}
The aim of this section is to provide some material from interpolation theory in order to prove the main result concerning the operator $\mathcal{I}_{t}$, with $0<t<1/2$, based on the \emph{weak-type Schur test}. The connection between interpolation theory, Lorentz spaces and boundary representations appears already in \cite{Cow} and \cite[Chapter 6]{FiPi}. Note also the very recent work \cite{GAG}.
\subsection{Lorentz spaces, interpolation and applications}
We follow \cite{Cow}.
Let $(\Omega,\mu)$ be a measure space. If $f:\Omega \rightarrow \mathbb{C}$ is a measurable function, define the nonincreasing rearrangement of $f$ as $$f^*(t):= \inf \bigg\{ s>0 \,\Big|\,\mu\big( \{|f|>s \}\big)\leq t \bigg\}.$$
The function $f^*$ is positive, nonincreasing, right continuous and equimeasurable with $|f|$. Define then, for $1<p,q<\infty$, the norm $$\|f\|_{L^{p,q}}=\bigg(\frac{p}{q} \int^{\infty}_{0}(t^{1/p}f^{*}(t))^{q}\frac{dt}{t}\bigg)^{1/q},$$ and, for $1<p<\infty$ with $q=\infty$,
$$\|f\|_{L^{p,\infty}}=\sup\{ t^{1/p}f^{*}(t)\,|\,t\in (0,+\infty)\}.$$
Define the Lorentz spaces, for $1<p<+\infty$ and $1< q\leq +\infty$, as
$$L^{p,q}(\Omega):=\{ f:\Omega \rightarrow \mathbb{C} \mbox{ measurable }\,|\, \|f\|_{L^{p,q}}<+\infty\}.$$
Here are some useful facts:
\begin{enumerate}
\item $L^{p,p}=L^{p}.$
\item $(L^{p,q})^{*}=L^{p',q'}.$
\item \label{point3} $L^{p,q_{1}}\subset L^{p,q_{2}}$ with $q_{1}<q_{2}$, that is $$\|v\|_{L^{p,q_{2}}}\leq C_{p,q_{1},q_{2}} \|v\|_{L^{p,q_{1}}},$$for some $C_{p,q_{1},q_{2}}>0.$
\end{enumerate}
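A standard example to keep in mind (on $(0,+\infty)$ with the Lebesgue measure): the function $f(x)=x^{-1/p}$ satisfies $f^{*}=f$ and $t^{1/p}f^{*}(t)=1$ for all $t>0$, hence $f\in L^{p,\infty}$ although $f\notin L^{p}$; the inclusion $L^{p}\subset L^{p,\infty}$ is therefore strict. The kernels $d^{-pD}_{o,\epsilon}(\xi,\cdot)$ appearing below are the boundary analogues of such homogeneous functions.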
Here is the fundamental tool, called the \emph{weak-type Schur test}, coming from interpolation theory.
\begin{prop}\label{weakschur}
Let $(X,\mathcal{M},\mu)$ and $(Y,\mathcal{N},\nu)$ be $\sigma$-finite measure spaces and let $1<p,q,r<\infty$ be such that $$\frac{1}{p}+\frac{1}{r}=\frac{1}{q}+1.$$
Let $k:X\times Y \rightarrow \mathbb{R}$ be a measurable function and suppose that there exists $A>0$ such that
\begin{align*}
\|k(\cdot,y)\|_{L^{r,\infty}}&\leq A \mbox{ for a.e. } y \in Y, \\
\|k(x,\cdot)\|_{L^{r,\infty}}&\leq A \mbox{ for a.e. } x \in X.
\end{align*}
Then the formula $Tv(x)=\int_{Y}k(x,y)v(y)d\nu(y)$ defines a.e. a function in $L^{q}(X,\mu)$ whenever $v$ is in $L^{p}(Y,\nu)$. Moreover, for all $1\leq s \leq \infty $ there exists a constant $C=C_{p,q,r,s}>0$ depending on $p,q,r,s$ such that
$$\|Tv\|_{L^{q,s}}\leq C A\|v\|_{L^{p,s}}.$$
\end{prop}
We refer to \cite[Proposition 6.1]{Tao} for a proof.
\subsubsection{Analogs of homogeneous functions on the boundary}
\begin{lemma} Let $0<p<1$, so that $1<1/p<\infty.$
We have for all $\xi\in \partial X$ $$\frac{1}{d^{ pD}_{o,\epsilon} (\xi,\cdot)}\in L^{1/p,\infty}.$$
\end{lemma}
\begin{proof}
Let $s>0.$ We have for all $\xi \in \partial X$
$$\nu_{o}(\{ \eta |d^{-pD}_{o,\epsilon}(\xi,\eta)>s\})=\nu_{o}(\{ \eta |d_{o,\epsilon}(\xi,\eta)<s^{-1/pD}\})=\nu_{o}\big(B(\xi,s^{-1/pD})\big).$$
We obtain for all $\xi \in \partial X$ and for $t>0:$
\begin{align*}
\bigg(\frac{1}{d^{ pD}_{o,\epsilon} (\xi,\cdot)}\bigg)^{*}(t)&=\inf \{ s>0\,|\, \nu_{o}(\{ \eta \,|\,d^{-pD}_{o,\epsilon}(\xi,\eta)>s\})\leq t\}\\
&= \inf \{ s>0 | \nu_{o}\big(B(\xi,s^{-1/pD})\big) \leq t \} \\
&\leq \inf \{ s>0 | C s^{-1/p} \leq t \} \\
&= C^{p}t^{-p},
\end{align*}
for a constant $C>0$ coming from the Ahlfors regularity property (\ref{Ahlfors}). It follows that $1/d^{ pD}_{o,\epsilon} (\xi,\cdot) \in L^{1/p,\infty}.$
\end{proof}
Applying the above lemma with $p=1-2t$ (so that $0<p<1$), we obtain, for $0<t<1/2$ and all $\xi\in \partial X$, $$\frac{1}{d^{(1-2t) D}_{o,\epsilon} (\xi,\cdot)}\in L^{r,\infty},$$
with $r=1/(1-2t).$ By symmetry, we have for all $\eta \in \partial X$ that $$\frac{1}{d^{(1-2t) D}_{o,\epsilon} (\cdot,\eta)}\in L^{r,\infty}$$ as well. We obtain the following result:
\begin{prop}\label{cont}
Let $0<t<1/2$ and let $p,q$ be such that $1/p=1/2+t$ and $1/q=1/2-t$.
The operator $\mathcal{I}_{t}$ is bounded from $L^{p}(\partial X,\nu_{o})$ to $L^{q}(\partial X,\nu_{o})$.
\end{prop}
\begin{proof}
Note that $1/p+(1-2t)=1/2+t+(1-2t)=1+1/2-t=1+1/q.$ Hence Proposition \ref{weakschur}, applied to the kernel $k=d^{-(1-2t)D}_{o,\epsilon}$ with $1/r=1-2t$, implies that $$\|\mathcal{I}_{t}(v)\|_{L^{q,s}}\leq C_{t} \|v\|_{L^{p,s}} $$ for all $1\leq s\leq \infty$.
Pick $s=q=\frac{2}{1-2t}$, so that $s\geq p$. Thus $\|\mathcal{I}_{t}(v)\|_{L^{q,s}}=\|\mathcal{I}_{t}(v)\|_{L^{q}}$ and $\|v\|_{L^{p,s}}=\|v\|_{L^{p,q}}\leq C_{p,q}\|v\|_{L^{p,p}}=C_{p,q}\|v\|_{L^{p}}$ by (\ref{point3}), which completes the proof.
\end{proof}
\subsection{Consequences of spectral gap estimates and Riesz-Thorin theorem}\label{section6}
\subsubsection{Spherical functions on hyperbolic groups }
As in \cite{Boy}, we recall the definition of a spherical function associated with $\pi_{t}$. This is the matrix coefficient: \begin{align}
\phi_{t}:\gamma \in \Gamma \mapsto \langle \pi_{t}(\gamma)\textbf{1}_{\partial X}, \textbf{1}_{\partial X}\rangle \in \mathbb{R}^{+},
\end{align}
and introduce the function $\omega_{t}(\cdot)$ for $t\in \mathbb{R}^{*} $ defined as
\begin{equation}
\omega_{t}(x) =\frac{2 \sinh\big( tQ x\big) }{e^{2tQ}-1}.
\end{equation}
Note that $\omega_{t}(\cdot)$ is a positive function for all $t\in \mathbb{R}^{*} $ converging uniformly on compact sets of $\mathbb{R}$ to
$ \omega_{0}(x)=x $ as $t\to 0$, and $\omega_{-t}(x)=e^{2tQ}\omega_{t}(x)$.\\
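Both claims follow from a direct computation:
$$\omega_{-t}(x)=\frac{2\sinh(-tQx)}{e^{-2tQ}-1}=\frac{2\sinh(tQx)}{1-e^{-2tQ}}=e^{2tQ}\,\frac{2\sinh(tQx)}{e^{2tQ}-1}=e^{2tQ}\omega_{t}(x),$$
and $\omega_{t}(x)\to x$ as $t\to 0$ since $2\sinh(tQx)\sim 2tQx$ and $e^{2tQ}-1\sim 2tQ$.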
In \cite{Boy}, the following estimates, called \emph{Harish-Chandra--Anker estimates} (the name refers to \cite{Ank}), have been proved:
there exists $C>0$ such that for any $t\in \mathbb{R}$ and all $\gamma \in \Gamma$,
\begin{align}\label{HCHestimates}
C^{-1}e^{-\frac{1}{2}Q|\gamma| }\big(1+\omega_{|t|}(|\gamma|)\big)
\leq \phi_{t}(\gamma)\leq
Ce^{-\frac{1}{2}Q|\gamma|}\big(1+\omega_{|t|}(|\gamma|)\big).
\end{align}
Set for all $x\in \mathbb{R}$ \begin{equation}
\widetilde{\phi_{t}}(x):= e^{-\frac{1}{2}Qx}\big(1+\omega_{|t|}(x)\big).
\end{equation}
\subsubsection{An $L^{p}$-spectral inequality} We briefly recall some facts.
In \cite{Boy}, the following spectral inequality, generalizing the so-called ``Haagerup property'' (or Property RD), has been proved. Pick $R>0$ large enough. There exists $C>0$ such that for any $t \in \mathbb{R}$ and for all $f_{n}\in \mathbb{C}[\Gamma]$ supported in $S^{\Gamma}_{n,R}$, we have
\begin{equation}\label{RDgeneral}
\|\pi_{t}(f_{n})\|_{L^{2}\to L^{2}}\leq C \omega_{|t|}(nR)\|f_{n}\|_{\ell^{2}}.
\end{equation}
For $R>0$ large enough and for any $n\in \mathbb{N}^{*}$, consider $f_{n}=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)D_{\gamma } \in \mathbb{C}[\Gamma]$ supported in $S^{\Gamma}_{n,R}$. Note that (\ref{growth}) of Theorem \ref{equi} implies the existence of some positive constant $C>0$ such that for any $n\in \mathbb{N}^*$ $$ \|f_{n}\|_{\ell^{2}}=\|\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)D_{\gamma }\|_{\ell^{2}}\leq C/|S^{\Gamma}_{n,R}|^{\frac{1}{2}}.$$
Lemma \ref{multiplicity} (\ref{item1}), combined with Lemma \ref{shadow}, implies the existence of $C>0$ such that $|S^{\Gamma}_{n,R}|\geq C^{-1} e^{Q nR}$ for all $n$.
From the lower bound of (\ref{HCHestimates}) together with the above growth estimate, we deduce the following ``spectral gap'': there exists a constant $C>0$ such that for any $t\in \mathbb{R}$ and all non-negative integers $n$,
\begin{equation}\label{radialestimates}
\|\pi_{t}(f_{n})\|_{L^{2}\to L^{2}}\leq C \widetilde{\phi_{t}}(nR),
\end{equation}
where $\widetilde{\phi_{t}}(nR)$ satisfies $C^{-1}\widetilde{\phi_{t}}(nR)\leq \phi_{t}(\gamma) \leq C\widetilde{\phi_{t}}(nR)$ for all $\gamma \in S^{\Gamma}_{n,R}$, where $C$ is a constant independent of $n$.\\
The aim of this subsection is to prove an $L^{p}$-version of the above inequality (\ref{radialestimates}).\\
Although $\pi_{t}$ is an isometric action on $L^{p}$ with $1/p=1/2+t$, it also defines a representation $\pi_{t}:\Gamma \rightarrow \mathbb{GL}(L^{r})$, where $ \mathbb{GL}(L^{r})$ stands for the group of bounded invertible linear operators acting on $L^{r}$. More precisely:
\begin{prop}
For any $-1/2\leq t\leq 1/2$ and all $\gamma \in \Gamma$, the operator $\pi_{t}(\gamma)$ is a bounded invertible operator on $L^{r}$ for all $1\leq r\leq \infty$, and moreover $\gamma \mapsto \pi_{t}(\gamma) \in \mathbb{GL}(L^{r})$ is a group morphism.
\end{prop}
\begin{proof}
Pick $-1/2\leq t\leq 1/2$. Assume $1\leq r < \infty$. We have for all $\gamma \in \Gamma$ and for all $v\in L^{r}$
\begin{align*}
\|\pi_{t}(\gamma)v\|^{r}_{r}&=\int_{\partial X}e^{r(1/2+t)Q\beta_{\xi}(o,\gamma o)}|v(\gamma^{-1}\xi)|^{r}d\nu_{o}(\xi)\\
&=\int_{\partial X}e^{Q\beta_{\xi}(o,\gamma o) +(r(1/2+t)-1) Q\beta_{\xi}(o,\gamma o) }|v(\gamma^{-1}\xi)|^{r}d\nu_{o}(\xi)\\
&\leq e^{|r(1/2+t)-1| Q |\gamma| } \int_{\partial X} e^{Q\beta_{\xi}(o,\gamma o) } |v(\gamma^{-1}\xi)|^{r}d\nu_{o}(\xi)\\
&= e^{|r(1/2+t)-1| Q |\gamma| } \|v\|^{r}_{r},
\end{align*}
where the inequality follows from the fact $|\beta_{\xi}(x,y)|\leq d(x,y)$ for all $\xi\in \partial X$ and for all $x,y\in X$. \\
For the case $r=\infty$, we have for all $\gamma \in \Gamma$ and all $v\in L^{\infty}$ that $$\|\pi_{t}(\gamma)v\|_{\infty}\leq e^{|1/2+t| Q |\gamma| } \|v\|_{\infty}.$$ Hence, for all $\gamma \in \Gamma$, the operator $\pi_{t}(\gamma)$ is a bounded invertible operator on $L^{r}$ for all $1\leq r\leq \infty$. The cocycle property of the Radon-Nikodym derivative implies that $\pi_{t}$ is a morphism, and thus $\pi^{-1}_{t}(\gamma)=\pi_{t}(\gamma^{-1})$. \end{proof}
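Note that the computation above yields, after taking $r$-th roots, the explicit bound
$$\|\pi_{t}(\gamma)\|_{L^{r}\to L^{r}}\leq e^{\frac{|r(1/2+t)-1|}{r} Q |\gamma| },\;\;\;\; 1\leq r<\infty,$$
a quantitative form that we record only for orientation.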
In order to prove an $L^{p}$-version of Inequality (\ref{radialestimates}), we need the following crucial lemma.\\
\begin{lemma}\label{BMtrick}
Let $R>0$ be large enough. For any $n\in \mathbb{N}$, set $f_{n}=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)D_{\gamma } \in \mathbb{C}[\Gamma]$. Consider
$\pi_{t}(f_{n}) $ as an operator from $L^{\infty} \to L^{\infty}$ with $t\in [-1/2,1/2]$.
There exists $C_{\infty}>0$ such that for all $t\in [-1/2,1/2]$ and for all $n\in \mathbb{N}$ $$ \|\pi_{t}(f_{n})\|_{L^{\infty} \to L^{\infty}}\leq C_{\infty} \widetilde{\phi_{t}}(nR).$$
\end{lemma}
\begin{proof}
Let $t\in \mathbb{R}$ such that $-1/2\leq t\leq1/2$ and $R>0$. Consider the sequence of functions $(G_{n})_{n}$ defined for each $n$ as $$G_{n}:\xi \in \partial X \mapsto \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}.$$ Consider also $$\check{G}_{n}:\xi \in \partial X \mapsto \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma^{-1})e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}. $$
Let $F\in L^{\infty}(\partial X,\nu_{o})$. We have for every $\xi\in \partial X$ and for all $n\in \mathbb{N}:$
\begin{align*}
|\pi_{t}(f_{n})F(\xi)|&=\big|\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}F(\gamma^{-1}\xi)\big|\\
&\leq \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}|F(\gamma^{-1}\xi)| \\
&\leq \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\xi}(o,\gamma o)}\|F\|_{\infty},
\end{align*}
and thus:
$$\|\pi_{t}(f_{n})F\|_{\infty}\leq \|G_{n}\|_{\infty}\|F\|_{\infty}.$$
In other words, it suffices to prove $$\sup_{n} \frac{ \|G_{n}\|_{\infty}}{\widetilde{\phi_{t}}(nR)}<\infty.$$ Pick $\xi\in \partial X$. Lemma \ref{crucial} implies that one can choose $R>0$ large enough such that there exist $C>0$ and $g_{\xi} \in S^{\Gamma}_{n,R}$ satisfying for all $\gamma \in S^{\Gamma}_{n,R}$
$$\beta_{\xi}(o,\gamma o)\leq \beta_{\hat{\gamma}_{o}}(o,g_{\xi} o)+C.$$ Furthermore, the right inclusion of Lemma \ref{ombre} implies that there exists $C'>0$ such that for all $\eta \in O_{r}(o,\gamma o)$
$$ \beta_{\hat{\gamma}_{o}}(o,g_{\xi} o)\leq \beta_{\eta}(o,g_{\xi} o)+C'.$$
This yields a ``quasi mean-value property'' that reads as follows:
$$ e^{(\frac{1}{2}+t)Q\beta_{\hat{\gamma}_{o}}(o,g_{\xi}o)} \leq \frac{C_{Q,t}}{\nu_{o}(O_{r}(o,\gamma o))} \int_{O_{r}(o,\gamma o)}e^{(\frac{1}{2}+t)Q\beta_{\eta }(o,g_{\xi}o)}d\nu_{o},$$
where $C_{Q,t}=e^{(\frac{1}{2}+t)QC'}.$
Therefore, using an absorbing constant $C$ independent of $t\in [-1/2,1/2]$, we obtain
\begin{align*}
G_{n}(\xi)&\leq C \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)e^{(\frac{1}{2}+t)Q\beta_{\hat{\gamma}_{o}}(o,g_{\xi}o)} \\
&\leq C \sum_{\gamma \in S^{\Gamma}_{n,R}} \frac{\mu_{n,R}(\gamma)}{\nu_{o}(O_{r}(o,\gamma o))}\int_{O_{r}(o,\gamma o)}e^{(\frac{1}{2}+t)Q\beta_{\eta}(o,g_{\xi}o)}d\nu_{o}(\eta) \\
&\leq C \sum_{\gamma \in S^{\Gamma}_{n,R}} \int_{O_{r}(o,\gamma o)}e^{(\frac{1}{2}+t)Q\beta_{\eta }(o,g_{\xi}o)}d\nu_{o}(\eta) \\
&\leq C \phi_{t}(g_{\xi}),
\end{align*}
where the first inequality follows from Lemma \ref{crucial}, the second from Lemma \ref{shadow} combined with the growth of $|S^{\Gamma}_{n,R}|$, and the last from the finite multiplicity of the covering $\cup_{ \gamma \in S^{\Gamma}_{n,R} }O_{r}(o,\gamma o)$ proved in Lemma \ref{multiplicity}. \\
The estimates of the spherical functions (\ref{HCHestimates}) applied to $g_{\xi}\in S^{\Gamma}_{n,R}$ together with the above inequality imply for almost every $\xi\in
\partial X$ and for all $n\in \mathbb{N}:$
$$G_{n}(\xi)\leq C \widetilde{\phi_{t}}(nR).$$ Thus
$$\sup_{n}\frac{\|G_{n}\|_{\infty}}{\widetilde{\phi_{t}}(nR)}<+\infty,$$ as required.
The above method applied to $\check{G}_{n}$ implies $$\sup_{n}\frac{\|\check{G}_{n}\|_{\infty}}{\widetilde{\phi_{t}}(nR)}<\infty.$$
\end{proof}
Finally, we obtain the $L^{p}$-version of Radial Property RD for $L^p$-boundary representations.
\begin{theorem}\label{Lpspectral}
Let $R>0$ be large enough and let $r\in [1,\infty]$. There exists $C>0$ such that for any $-\frac{1}{2}\leq t \leq\frac{1}{2}$ and all $n\in \mathbb{N}$, with $f_{n}=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)D_{\gamma } \in \mathbb{C}[\Gamma]$ supported on $S^{\Gamma}_{n,R}$, we have:
\begin{align}
\|\pi_{t} (f_{n})\|_{L^{r}\to L^{r}}\leq C \widetilde{\phi_{t}}(nR).
\end{align}
In particular, we have
\begin{align}
\sup_{\|v\|_{p},\|w\|_{q}\leq 1} |\langle \pi_{t} (f_{n})v,w\rangle |\leq C\widetilde{\phi_{t}}(nR).
\end{align}
\end{theorem}
\begin{remark}
It is worth noting that the above theorem can be viewed as a $L^{p}$-version of Radial Property RD for $L^p$-boundary representations of hyperbolic groups.
\end{remark}
\begin{proof}
The proof is based on the Riesz-Thorin theorem. We shall prove that for any $t\in [-1/2,1/2 ] $ and each $n\in \mathbb{N}$, the operator $\pi_{t}(f_{n})$, viewed both as an operator from $L^{1}$ to $L^{1}$ and as an operator from $L^{\infty}$ to $L^{\infty}$, has norm bounded by a constant multiple of $\widetilde{\phi_{t}}(nR)$, uniformly in $n$.\\
The second point follows from Lemma \ref{BMtrick}. We now prove the first point. First, observe that $\pi_{t}(f_{n})$ preserves the cone of positive functions, since $f_{n}$ is positive and $\pi_{t}$ itself preserves the cone of positive functions. By decomposing a function $v$ into real and imaginary parts, and these into positive and negative parts, it is enough to find a bound for positive functions. Assume $v \geq 0$. Then
\begin{align*}
\|\pi_{t}(f_{n})v\|_{1}&=\langle \pi_{t}(f_{n})v,\textbf{1}_{\partial X}\rangle \\
&=\langle v, \pi_{-t}(\check{f_{n}}) \textbf{1}_{\partial X}\rangle \\
&\leq \|\check{G}_{n}\|_{\infty} \langle v, \textbf{1}_{\partial X}\rangle \\
&\leq C_{\infty}\widetilde{\phi_{t}}(nR) \|v\|_{1},
\end{align*}
where the first inequality follows from the proof of Lemma \ref{BMtrick} (applied to $-t$, noting that $\widetilde{\phi_{-t}}=\widetilde{\phi_{t}}$).
Therefore the Riesz-Thorin theorem implies that
$\pi_{t}(f_{n})$ defines a bounded operator from $L^{r}$ to $L^{r}$, with norm at most $C\widetilde{\phi_{t}}(nR)$, for any $r$ such that $1/r\in [0,1]$.
\end{proof}
\section{Proofs}\label{section4}
The proof of Theorem \ref{BML2} proceeds in three steps.
\begin{proof}
Let $0<t<\frac{1}{2}$. \\
\textbf{Step 1:} Uniform boundedness. Consider, for $R>0$ and all non-negative integers $n$, the function $f_{n}$ supported on $S^{\Gamma}_{n,R}$ defined as: $$f_{n}=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)\frac{ D_{\gamma }}{\phi_{t}(\gamma)}.$$
Note that this is the function $\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) D_{\gamma }$ weighted by the spherical function $\phi_{t}$. The $L^{p}$-spectral inequality of Theorem \ref{Lpspectral}, together with the fact that there exists $C>0$ such that $C^{-1}\widetilde{\phi_{t}}(nR)\leq \phi_{t}(\gamma)$ for all $\gamma \in S^{\Gamma}_{n,R}$, implies
\begin{align}
\sup_{n}\| \pi_{t}(f_{n})\|_{L^{p}\to L^{p}} <+\infty.
\end{align}
Set $K:=\sup_{n}\| \pi_{t}(f_{n})\|_{L^{p}\to L^{p}} $. Given $f,g\in C(\overline{ X})$ we have for all $n\in \mathbb{N}$
\begin{align*}
| \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)g(\gamma^{-1} o)\frac{\langle\pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}|&\leq C \|f\|_{\infty}\|g\|_{\infty}\| \pi_{t}(f_{n})\|_{L^{p}\to L^{p}} \|v\|_{p} \|w\|_{q} \\
&\leq K \|f\|_{\infty}\|g\|_{\infty} \|v\|_{p} \|w\|_{q}.
\end{align*}
By the Banach-Alaoglu-Bourbaki theorem, and since on reflexive spaces the weak topology and the weak*-topology coincide, the limit $$\lim_{n\to +\infty} \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)g(\gamma^{-1} o)\frac{\langle\pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}$$ exists, up to extraction, for all $v\in L^{p}$, all $w\in L^{q}$ and all $f,g\in C(\overline{X})$.\\
\textbf{Step 2:} Computation of the limit.\\
We already know from \cite{BPi} that the desired result holds for $v,w$ in dense subspaces of $L^{p}$ and $L^{q}$; namely, for all $v,w \in \mathrm{Lip}(\partial X)$ and $f,g\in C(\overline{X})$,
\begin{align*}
\lim_{n\to +\infty}\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)g(\gamma^{-1} o)\frac{\langle\pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}=\langle g_{|_{\partial X}} \mathcal{R}_{t}(v), \textbf{1}_{\partial X}\rangle \langle f_{|_{\partial X}} ,w \rangle.
\end{align*}
\textbf{Step 3:} Conclusion.\\
Assume that $0<t<1/2$. Then $\mathcal{I}_{t}$, and thus $\mathcal{R}_{t}$, is bounded from $L^{p}$ to $L^{q}$, with $1/p=1/2+t$ and $1/q=1/2-t$.
The limit above, together with the uniform bound of \textbf{Step 1}, eventually implies that for $f,g\in C(\overline{ X})$ and for
$(v,w)\in L^{p}\times L^{q}$:
\begin{align*}
\lim_{n\to +\infty}\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)g(\gamma^{-1} o)\frac{\langle\pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}=\langle g_{|_{\partial X}} \mathcal{R}_{t}(v), \textbf{1}_{\partial X}\rangle \overline{ \langle w,f_{|_{\partial X}} \rangle}.
\end{align*}
\end{proof}
\subsection{Proof of irreducibility}
To prove irreducibility of the representations, our main tool is Theorem \ref{BML2}.
\begin{lemma}\label{cyclic}
Let $0<t<\frac{1}{2}$. The vector $\textbf{1}_{\partial X}$ is cyclic for $(\pi_{t},L^{p})$ and the vector $\sigma_{t}$ is cyclic for $(\pi_{-t},\overline{Im(\mathcal{I}_{t})} ^{\|\cdot\|_{q}})$.
\end{lemma}
\begin{proof}
Theorem \ref{BML2} implies that for all $w\in L^{q}$ and all continuous functions $f\in C(\overline{X})$ the following convergence holds:
$$\lim_{n\to +\infty}\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)f(\gamma o)\frac{\langle\pi_{t}(\gamma)\textbf{1}_{\partial X},w\rangle}{\phi_{t}(\gamma)}= \langle f_{|_{\partial X}},w \rangle. $$
Now, given a function $f\in C(\partial X)$ consider $\tilde{f}\in C(\overline{X})$ such that $\tilde{f}_{|_{\partial X}}=f$. Set
\begin{equation}\label{fn}
f_{n}:=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)\tilde{f}(\gamma o)\frac{\pi_{t}(\gamma)\textbf{1}_{\partial X}}{\phi_{t}(\gamma)}\in \pi_{t}(\mathbb{C}[\Gamma])\textbf{1}_{\partial X}.
\end{equation}
Hence, $f_{n}\to f$ with respect to the weak topology of $L^{p}$. Therefore, since $C(\partial X)$ is dense in $L^{p}$ and since, for convex sets, the weak closure and the $\|\cdot\|_{p}$-closure coincide, the vector $\textbf{1}_{\partial X}$ is cyclic for $(\pi_{t},L^{p})$.\\
We now prove that $\sigma_{t}$ is cyclic for $\pi_{-t}$. Recall that for all $v\in L^{p}$ we have $\mathcal{I}_{t}(v)\in L^{q}$ by Proposition \ref{cont}. Hence for all $v\in L^{p}$, with the same notation as in (\ref{fn}), we have
$$\langle f_{n},\mathcal{I}_{t}(v) \rangle \to \langle f_{|_{\partial X}},\mathcal{I}_{t}(v)\rangle.$$ Since $\mathcal{I}_{t}$ is self-adjoint, for all $v\in L^{p}$
\begin{align*}
\langle \mathcal{I}_{t}(f_{n}),v \rangle&=\langle \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma)\tilde{f}(\gamma o)\frac{\pi_{-t}(\gamma)\mathcal{I}_{t}(\textbf{1}_{\partial X})}{\phi_{t}(\gamma)},v\rangle\\
&\to \langle \mathcal{I}_{t}(f_{|_{\partial X}}),v \rangle,
\end{align*}
where $ \mathcal{I}_{t}(f_{n}) \in \pi_{-t}(\mathbb{C}[\Gamma])\sigma_{t}.$
Hence $$\overline{ \pi_{-t}(\mathbb{C}[\Gamma])\sigma_{t}} ^{\|\cdot\|_{q}}=\overline{\mathcal{I}_{t}(C(\partial X))}^{\|\cdot\|_{q}}.$$
Eventually, using the density of $C(\partial X)$ in $L^{p}$ and the continuity of $\mathcal{I}_{t}$ we deduce $$\overline{ \pi_{-t}(\mathbb{C}[\Gamma])\sigma_{t}} ^{\|\cdot\|_{q}}= \overline{\mathcal{I}_{t}(L^{p})}^{\|\cdot\|_{q}}.$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainT}.]
First of all, since $\mathcal{I}_{t}$ is continuous from $L^{p}$ to $L^{q}$, the subspace $\ker \mathcal{I}_{t}$ is a closed invariant subspace of $(\pi_{t},L^{p})$. Thus, if $\mathcal{I}_{t}$ is not injective, then $(\pi_{t},L^{p})$ is not irreducible.\\
We shall now prove that if $\mathcal{I}_{t}$ is injective, then $(\pi_{t},L^{p})$ is irreducible, for $1/p=1/2+t.$ Since $\mathcal{I}_{t}$ is a continuous operator from $L^{p}$ to $L^{q}$, a standard result in Banach space theory asserts that the dual space of $L^{p}/ \ker \mathcal{I}_{t}$ is the space $\overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}}$, where the weak closure in $L^{q}$ is the same as the $\|\cdot\|_{q}$-closure (see \cite[Chapter 3]{Me}). Hence, the dual representation of $(\pi_{t}, L^{p}/ \ker \mathcal{I}_{t} )$ is $(\pi_{-t}, \overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}})$. We shall prove that $(\pi_{-t}, \overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}})$ is irreducible, in order to obtain irreducibility of $(\pi_{t}, L^{p}/ \ker \mathcal{I}_{t} )$.\\ Recall that
$ \pi_{-t}(f) \sigma_{t}=\mathcal{I}_{t}(\pi_{t}(f)\textbf{1}_{\partial X} )$, for all $f\in \mathbb{C}[\Gamma]$ and the function $\sigma_{t}$ is cyclic in $\overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}}$ for $\pi_{-t}$ by Lemma \ref{cyclic}.\\
Now, let $0\neq K\subset \overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}} \subset L^{q}(\partial X,\nu_{o})$ be an $L^{q}$-closed subspace invariant under $\pi_{-t}$. Let $R>0$ be large enough. For any $w\in K \subset L^{q}$, define, for all $n\in \mathbb{N}$, the vector:
\begin{equation}\label{zzz}
w_{n}:=\sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) \tilde{ \sigma_{t}}(\gamma^{-1} o) \frac{\pi_{-t}(\gamma^{-1}) w }{\phi_{t}(\gamma)} \in K,
\end{equation}
where $\tilde{\sigma_{t}}$ has been defined in (\ref{sigmat}).
Theorem \ref{BML2} implies for all $v\in L^{p}(\partial X,\nu_{o})$ that as $n\to +\infty$:
$$ \sum_{\gamma \in S^{\Gamma}_{n,R}}\mu_{n,R}(\gamma) \tilde{\sigma_{t}}(\gamma^{-1} o) \frac{\langle v, \pi_{-t}(\gamma^{-1}) w\rangle }{\phi_{t}(\gamma)}\to \langle \mathcal{I}_{t}(v),\textbf{1}_{\partial X}\rangle \langle \textbf{1}_{\partial X} ,w \rangle=\langle v,\mathcal{I}_{t}(\textbf{1}_{\partial X})\rangle \langle\textbf{1}_{\partial X}, w \rangle,$$
and the above convergence reads as follows with respect to the weak topology on $L^{q}:$
$$w_{n}\to \langle \textbf{1}_{\partial X} , w \rangle \sigma_{t}.$$
Since $K$ is closed we have that $ \langle \textbf{1}_{\partial X} , w \rangle \sigma_{t} \in K.$ So, since $\sigma_{t}$ is cyclic, it is sufficient to show that there exists $0\neq w \in K$ such that $$\langle \textbf{1}_{\partial X} , w \rangle\neq 0.$$
Assume this is not the case: for all $w\in K$ we have $\langle \textbf{1}_{\partial X},w\rangle =0$. Then, for all $\gamma \in \Gamma$, $\langle \textbf{1}_{\partial X},\pi_{-t}(\gamma^{-1})w \rangle=0= \langle \pi_{t}(\gamma) \textbf{1}_{\partial X},w \rangle.$ Therefore, since $\textbf{1}_{\partial X}$ is cyclic for $(\pi_{t},L^{p}(\partial X,\nu_{o}))$, it follows that
$$ K \subset \{w\in L^{q}\,|\, \langle v,w\rangle=0,\ \forall v\in L^{p} \}.$$
Since the pairing is non-degenerate, $ K=\{0 \},$ a contradiction.
Hence $K$ contains $\sigma_{t}$; since $\sigma_{t}$ is cyclic, $K$ has to be $\overline{ Im (\mathcal{I}_{t})}^{\|\cdot\|_{q}}$, and the proof is done.
\end{proof}
\section{Application to rank one semisimple Lie groups}\label{section5}
Let $G$ be a connected semisimple Lie group with finite center and let $\frak g$ be its Lie algebra.
Let $K$ be a maximal compact subgroup of $G$ and let $\frak k$ be its Lie algebra. Let $\frak p$ be the orthogonal complement of $\frak k$ in $\frak g$
relative to the Killing form $B$. Among the abelian sub-algebras of $\frak g$ contained in the subspace $\frak p$, let $\frak a$ be a maximal one. We assume $\dim \frak a=1$, i.e. the real rank of $G$ equals $1$ (in particular $G$ is not compact). Let $\Sigma\subset \frak{a}^{*}$ be the root system associated to $(\frak g,\frak a)$. Let
\[
\frak g_{\alpha}=\{X\in\frak g: \mathrm{ad}(H)X=\alpha(H)X\ \ \forall H\in\frak a \}
\]
be the root space of $\alpha\in\Sigma$. Recall that $\Sigma=\{-\alpha,\alpha\}$ or $\Sigma=\{-2\alpha,-\alpha,\alpha,2\alpha\}$ where $\alpha$ is a positive root ($\alpha\in\Sigma$ is positive if and only if $\alpha(H)>0$ for all $H\in \frak a^+$). If $m_{1}=\dim \frak g_{\alpha}$ and $m_{2}=\dim \frak g_{2\alpha}$, denote by $\rho=\frac{1}{2}(m_{1}+2m_{2})\alpha$ the half sum of positive roots (counted with multiplicity). Let $H_{0}$
be the unique vector in $\frak{a}$ such that $\alpha(H_{0})=1$. Hence, $\frak{a}=\{ tH_{0},t\in\mathbb{R}\}$ and $\frak{a}_{+}=\{ H\in \frak{a}\,|\,\alpha(H)>0\}$ is identified with the set of strictly positive real numbers.
Let $\frak n$ be the nilpotent Lie algebra defined as the direct sum of the root spaces of the positive roots:
\[
\frak n=\bigoplus_{\alpha\in\Sigma^+}\frak g_{\alpha}.
\]
Let $A=\exp(\frak a)$, $A^+=\exp(\frak a^+)$ and $N=\exp(\frak n)$.
Let $G=KAN$ be the Iwasawa decomposition and $K\overline{A^{+}}K$ the Cartan decomposition defined by $\frak a^+$, where $\overline{A^{+}}$ denotes the closure of $A^{+}$. Let $Z(A)$ be the centralizer of $A$ in $G$ and
$M=Z(A)\cap K$. The group $M$ normalizes $N$. Let $P=MAN$ be the minimal parabolic subgroup of $G$ associated to $\frak a^+$.
Let $\nu$ be the unique Borel regular $K$-invariant probability measure on the Furstenberg-Poisson boundary $G/P$; it is quasi-invariant under the action of $G$ (we refer to \cite[Appendix B]{BDV} for a general discussion).
Let
\[
\rho_{t}:G\to \mathcal{U}(L^p(G/P,\nu))
\]
be the associated $L^{p}$-boundary representation of $G$, and define the corresponding spherical function
\[
\phi_{t}(g)=\langle \rho_{t}(g)\textbf{ 1}_{G/P}, \textbf{1}_{G/P}\rangle.\]
The globally symmetric space of noncompact type corresponding to $G$ is $G/K$, endowed with the $G$-invariant Riemannian metric $d$ induced by the Killing form on $\frak{g}/\frak{k}$, identified with the tangent space of $G/K$ at the point $o=eK$. A flat of dimension $k$ is the image of a locally isometric map $\mathbb{R}^{k}\rightarrow G/K$. The rank of $G$ is the largest dimension of a flat subspace of $G/K$. The rank one globally symmetric spaces of noncompact type are classified as follows: they are the real hyperbolic spaces $H^{n}(\mathbb{R})$, the complex hyperbolic spaces $H^{n}(\mathbb{C})$, the quaternionic hyperbolic spaces $H^{n}(\mathbb{H})$ for $n\geq 2$, and the exceptional hyperbolic space, namely the $2$-dimensional octonionic hyperbolic space $H^{2}(\mathbb{O})$. \\
If $(X,d)$ is one of the above hyperbolic spaces, it is a CAT(-1) space; in particular it is a proper geodesic $\delta$-hyperbolic space and fits into the class of spaces of Subsection \ref{class} with $\epsilon=1$. One can therefore consider its Gromov boundary $\partial X$, or equivalently the geometric boundary of $X$. The group $G$ acts by isometries on $(X,d)$ and its discrete subgroups act properly discontinuously on $X$. Assume that $\Gamma$ is a lattice (uniform or non-uniform) and perform the Patterson-Sullivan construction associated with $(\Gamma,d)$ and the base point $o=eK \in X$ to obtain a measure supported on $\partial X$, denoted by $\nu_{o}$. The Hausdorff dimension of $\nu_{o}$ is the critical exponent $Q_{\Gamma}$ of $\Gamma$, which coincides with the volume growth of the corresponding hyperbolic space: $Q_{G}=m_{1}+2m_{2}$.\\
A geodesic ray starting at the origin can be represented, using the Cartan decomposition, as $c(t)=ke^{tH_{0}}\cdot o=kMe^{tH_{0}}\cdot o$, where $t\in \mathbb{R}_{+}$ and $k\in K$. The Furstenberg-Poisson boundary $G/P$ can then be identified with the geometric boundary $\partial X$ in the case of a rank one symmetric space. Indeed, one can identify $G/P=K/M$ thanks to the Iwasawa decomposition $KAN$. It turns out that the Patterson-Sullivan measure $\nu_{o}$ associated with a lattice $\Gamma$, supported on $\partial X$ and of dimension $Q_{\Gamma}$, coincides with the unique $K$-invariant measure $\nu$ on $G/P$. \\
Thus, the $L^{p}$-boundary representation of $\Gamma$ is nothing but the restriction of $\rho_{t}$ to $\Gamma$
\begin{align}
\pi_{t}:=(\rho_{t})_{|_{\Gamma}}:\Gamma \rightarrow \mbox{Iso}(L^{p}(\partial X,\nu_{o}))
\end{align}
with $p$ such that $1/p=1/2+t$ and $0<t<1/2$. Since $\Gamma$ might be a non-uniform lattice, the results obtained above for hyperbolic groups do not apply to $\Gamma$. Nevertheless, we have the exact analog of Lemma \ref{BMtrick}.\\
In the following, for $R>0$ and any $n\in \mathbb{N}$, the spheres $S^{\Gamma}_{n,R}$ are defined with respect to the length function $|\gamma|:=d(o,\gamma o)$ corresponding to the Riemannian metric $d$ on the symmetric space $G/K$. Moreover, for $R>0$ and any $n\in \mathbb{N}$, one can take for $\mu_{n,R}$ the standard average
$$\frac{1}{|S^{\Gamma}_{n,R}|} \sum_{\gamma \in S^{\Gamma}_{n,R}} D_{\gamma }.$$
\begin{lemma}\label{BMtrick2}
Let $\Gamma$ be a lattice in $G$.
Let $R>0$. For any $n\in \mathbb{N}$, set $f_{n}=\frac{1}{|S^{\Gamma}_{n,R}|}\sum_{\gamma \in S^{\Gamma}_{n,R}}D_{\gamma } \in \mathbb{C}[\Gamma]$. Consider
$\pi_{t}(f_{n}) $ as an operator from $L^{\infty}$ to $L^{\infty}$, with $t\in [-1/2,1/2]$.
There exists $C_{\infty}>0$ depending on $R$ such that for all $t\in [-1/2,1/2]$ and all $n\in \mathbb{N}$ $$ \|\pi_{t}(f_{n})\|_{L^{\infty} \to L^{\infty}}\leq C_{\infty} \widetilde{\phi_{t}}(nR).$$
\end{lemma}
\begin{proof}
The proof follows the same ideas as \cite[Proposition 3.2]{BoyHCH} and \cite[Section 2.5]{BLP}.
\end{proof}
Therefore, following exactly the same method as in the proof of Theorem \ref{Lpspectral}, we obtain
\begin{theorem}\label{lattspectral}
Let $\Gamma$ be a lattice in a rank one connected semisimple Lie group $G$ with finite center.
Let $R>0$ be large enough and let $r\in [1,\infty]$. There exists $C>0$ such that for any $-\frac{1}{2}\leq t \leq\frac{1}{2}$ and all $n\in \mathbb{N}$, with $f_{n}=\frac{1}{|S^{\Gamma}_{n,R}|}\sum_{\gamma \in S^{\Gamma}_{n,R}}D_{\gamma } \in \mathbb{C}[\Gamma]$ supported on $S^{\Gamma}_{n,R}$, we have:
\begin{align}
\|\pi_{t} (f_{n})\|_{L^{r}\to L^{r}}\leq C \widetilde{\phi_{t}}(nR).
\end{align}
In particular, we have
\begin{align}
\sup_{\|v\|_{p},\|w\|_{q}\leq 1} |\langle \pi_{t} (f_{n})v,w\rangle |\leq C\widetilde{\phi_{t}}(nR).
\end{align}
\end{theorem}
The equidistribution theorem needed in Subsection \ref{equid} reads as follows in the context of lattices. We refer to \cite{Ro} for the next results in a more general setting.
\begin{theorem}\label{equilatt}
Let $\Gamma$ be a lattice in a rank one connected semisimple Lie group $G$ with finite center. For any $R>0$, we have the following convergence: $$\frac{1}{|S^{\Gamma}_{n,R}|}\sum_{\gamma \in S^{\Gamma}_{n,R}} D_{\gamma o} \otimes D_{\gamma^{-1} o} \rightharpoonup \nu_{o}\otimes \nu_{o},$$
as $n\to +\infty$, for the weak* convergence in $C(\overline{X}\times \overline{X})$.
\end{theorem}
Finally, we obtain
\begin{theorem}\label{BML3}
Let $\Gamma$ be a lattice in a rank one connected semisimple Lie group $G$ with finite center. For $R>0$ large enough, for all $0<t<1/2 $, for all $f,g\in C(\overline{X})$, for all $v\in L^{p}(\partial X,\nu_{o})$ and $w\in L^{q}(\partial X,\nu_{o})$:
$$\frac{1}{|S^{\Gamma}_{n,R}|}\sum_{\gamma \in S^{\Gamma}_{n,R}} f(\gamma o) g(\gamma^{-1} o) \frac{\langle \pi_{t}(\gamma)v,w\rangle }{\phi_{t}(\gamma)}\to \langle g_{|_{\partial X}}\mathcal{R}_{t}(v),\textbf{1}_{\partial X}\rangle \langle f_{|_{\partial X}},w \rangle, $$
as $n\to +\infty$.
\end{theorem}
\subsection{Intertwining operators}
The restriction of $\rho_{t}$ to $K$, acting on $L^{2}(K/M,\nu_{o})$, is given by
\begin{align*}
\rho_{t}(k)v(\xi)=v(k^{-1}\xi),
\end{align*}
since $\nu_{o}$ is $K$-invariant; note that this restriction no longer depends on $t$ and provides a unitary representation of $K$. Therefore the intertwining relation (\ref{intert}) reads as follows: for all $k\in K$ and all $t>0$
$$ \mathcal{I}_{t}\rho_{t}(k)= \rho_{t}(k) \mathcal{I}_{t}.$$
The Peter-Weyl theorem implies that $$ L^{2}(K/M,\nu_{o})=\oplus_{n\geq 0} V_{n},$$ where the $V_{n}$ are finite dimensional irreducible unitary representations of $K$. Therefore, Schur's lemma implies that there exists a sequence of scalars $(\lambda_{n})_{n\geq 0}$ such that $\mathcal{I}_{t}$ restricted to $V_{n}$ is the scalar operator $ \mathcal{I}_{t |_{V_{n}}}=\lambda_{n} Id_{ |_{V_{n} } } $, with $\lambda_{n}\neq 0$ for all $n\geq 0$. We deduce that $\mathcal{I}_{t}$ is injective viewed as an operator acting on $L^{2}$, and therefore it is injective as an operator from $L^{p}$ to $L^{q}$. Apply Theorem \ref{BML3} to lattices in rank one semisimple Lie groups and use exactly the same arguments as in the proof of Theorem \ref{mainT} to complete the proof of Theorem \ref{latt}.
\section*{Acknowledgments}
We thank the reviewers for their comments,
and Lukas Gonon, Lyudmila Grigoryeva and
Juan-Pablo Ortega for their useful suggestions
on the manuscript.
QL is supported by the National Research Foundation, Singapore,
under the NRF fellowship (project No. NRF-NRFF13-2021-0005).
HJ is supported by National University of Singapore under the PGF scholarship.
SW is supported by National University of Singapore under the Research scholarship.
\section{Sequence modelling as an approximation problem}
\label{sec:formulation}
We begin by formalizing the broad mathematical problem of approximation and some of the key questions
one may be interested in.
We then discuss how one may formulate sequence modelling
in the setting of approximation theory.
\subsection{The problem of approximation}
\label{subsec:formulation_approx}
Let us introduce the basic problem of approximation for functions on vector spaces.
Let $\set{X}$ and $\set{Y}$ be normed vector spaces.
We consider a family of \emph{target functions}, or simply \emph{targets},
which is a subset $\set{C}$ of all mappings $\set{X} \to \set{Y}$,
i.e. $\set{C} \subset \set{Y}^\set{X}$.
In the learning theory literature, one sometimes calls $\set{C}$ a \emph{concept space}.
These are the relationships we wish to learn, or approximate, by some
simpler candidate functions.
Let us denote this set of candidates by $\set{H} \subset \set{Y}^\set{X}$.
In learning theory, this is often called a \emph{hypothesis space}.
The problem of approximation concerns how well functions in $\set{H}$ can resolve
functions in $\set{C}$.
In broad terms, we may classify results on approximation theory into three types:
\emph{universal approximation results (density-type)},
\emph{approximation rate estimates (Jackson-type)},
and \emph{inverse approximation results (Bernstein-type)}.
Let us discuss each in turn.
\paragraph{Universal approximation results (density-type).}
Universal approximation theorems are the most basic approximation results.
We say that $\set{H}$ is an universal approximator for $\set{C}$ if
for every $H \in \set{C}$ and $\epsilon > 0$,
there exists $\h{H} \in \set{H}$ such that $\| H - \h{H} \| \leq \epsilon$.
In other words, $\set{H}$ is dense in $\set{C}$ in the topology generated
by $\| \cdot \|$.
The choice of the norm depends on applications.
We illustrate this with the following example.
We consider approximating scalar functions by trigonometric polynomials.
Here, we set $\set{X} = [0, 2\pi]$ and $\set{Y} = \mathbb{R}$.
The target space is $\set{C} = C^{\alpha}_{\text{per}}([0,2\pi])$,
the set of $\alpha$-times continuously differentiable, periodic functions on $[0,2\pi]$.
The hypothesis space is
\begin{equation}\label{eq:H_trig}
\set{H} =
\cup_{m\in \mathbb{N}_+}
\left\{
\h{H}(x) =
\sum_{i=0}^{m-1}
a_i \cos(i x)
+
b_i \sin(i x)
:
a_i, b_i \in \mathbb{R}
\right\}.
\end{equation}
As a direct consequence of the Stone-Weierstrass theorem,
$\set{H} \subset \set{C}$ is dense in $\set{C}$ with respect to the norm
$\| H \| = \sup_{x\in[0,2\pi]} |H (x)|$
\citep[p.~32]{achieser2013.TheoryApproximation}.
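As a quick numerical illustration of this density statement, the following minimal Python sketch (not part of the formal development; the target $H(x)=e^{\sin x}$ and the use of Fourier truncation, which yields one particular element of $\set{H}^m$, are our own illustrative choices) shows the sup-norm error shrinking as the budget $m$ grows:
\begin{verbatim}
import numpy as np

# Approximate a smooth, 2*pi-periodic target by the trigonometric
# polynomial obtained by truncating its Fourier series; the sup-norm
# error decreases towards zero as the budget m grows.
n = 4096
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
H = np.exp(np.sin(x))             # illustrative smooth periodic target

c = np.fft.rfft(H)                # discrete Fourier coefficients of H
for m in [1, 2, 4, 8, 16]:
    c_m = c.copy()
    c_m[m:] = 0.0                 # keep frequencies 0, ..., m-1 only
    H_m = np.fft.irfft(c_m, n=n)  # a (sampled) element of H^m
    print(m, np.max(np.abs(H - H_m)))
\end{verbatim}
The printed errors decay very rapidly here because $e^{\sin x}$ is analytic; the notion of approximation rate discussed next quantifies such behavior.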
\paragraph{Approximation rate estimates (Jackson-type).}
Universal approximation (density) ensures that our hypothesis space $\set{H}$
is in a sense ``big enough'', so that we can use it to approximate a reasonably large
variety of target functions.
However, such results do not tell us precisely what types of functions in $\set{C}$
are ``easy'' (or ``hard'') to approximate using $\set{H}$.
In other words, two hypothesis spaces $\set{H}_1$ and $\set{H}_2$ may both be dense in
$\set{C}$ but can be naturally adapted to approximate functions of different types.
To resolve this, we may ask a finer question on the rate of approximation.
Fix a hypothesis space $\set{H}$.
Let $\{ \set{H}^m : m \in \mathbb{N}_+ \}$ be a collection of subsets of $\set{H}$ such that
$\set{H}^m \subset \set{H}^{m+1}$ and $\cup_{m \in \mathbb{N}_+} \set{H}^m = \set{H}$.
Here, $m$ is a measure of complexity of the approximation candidates,
and $\set{H}^m$ is the subset of hypotheses with complexity at most $m$.
The index $m$ is also called the approximation budget.
Then, the approximation rate estimate is an inequality of the form
\begin{equation}\label{eq:qn_jackson_type}
\inf_{\h{H} \in \set{H}^m}
\| H - \h{H} \|
\leq
C_{\set{H}}(H, m).
\end{equation}
\cref{eq:qn_jackson_type} tells us the best possible approximation error
one can hope to obtain under approximation budget $m$.
Note that $\set{H}$ is dense if and only if $\lim_{m\to\infty} C_{\set{H}}(H,m) = 0$
for every $H \in \set{C}$.
The speed at which $C_{\set{H}}(H,m)$ decays as $m$ increases is the approximation rate,
and its dependence on $H$ measures the complexity of a particular target $H$
under the current approximation scheme $\set{H}$.
Returning to the example in~\cref{eq:H_trig}, the hypothesis space with budget $m$ is
\begin{equation}
\set{H}^m =
\left\{
\h{H}(x) =
\sum_{i=0}^{m-1}
a_i \cos(i x)
+
b_i \sin(i x)
:
a_i, b_i \in \mathbb{R}
\right\}.
\end{equation}
The classical Jackson's theorem~\citep[p.~187]{achieser2013.TheoryApproximation}
gives a rate estimate of the form
\begin{equation}\label{eq:jackson_trig}
\inf_{\h{H} \in \set{H}^m}
\| H - \h{H} \|
\leq
\frac
{
c_\alpha
\max_{0\leq r\leq \alpha}
\| H^{(r)} \|
}
{
m^{\alpha}
},
\end{equation}
where $c_\alpha$ is a constant depending only on $\alpha$.
Observe that the rate of decay of the approximation error is $m^{-\alpha}$
and the complexity of a target function under the trigonometric polynomial approximation scheme
is its norm associated with the Sobolev space
$\mathcal{W}^{\alpha,\infty} := \{ H: \max_{0\leq r\leq \alpha} \| H^{(r)} \| < \infty \}$.
The key insight here is that a function $H$ is easy to approximate using trigonometric polynomials
if it has small gradient (Sobolev) norm.
We will hereafter refer to estimates in the form of~\cref{eq:qn_jackson_type} as
\emph{Jackson-type} results.
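As a sanity check of this rate, the following minimal Python sketch (our own illustration, not taken from the cited references) measures the error of truncated Fourier series, which upper-bounds the best approximation error in $\set{H}^m$, for a periodic target of limited smoothness:
\begin{verbatim}
import numpy as np

# Target: a periodic function that is roughly C^2 on [0, 2*pi).
n = 2 ** 14
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
H = np.abs(np.sin(x)) ** 3

c = np.fft.rfft(H) / n              # Fourier coefficients of H
for m in [4, 8, 16, 32, 64]:
    ck = c.copy()
    ck[m:] = 0.0                    # keep frequencies below budget m
    H_hat = np.fft.irfft(ck * n, n)
    print(f"m={m:3d}  sup-error={np.max(np.abs(H - H_hat)):.3e}")
# The printed errors should decay roughly like m^{-3}, consistent
# with the Jackson-type prediction for this degree of smoothness.
\end{verbatim}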
\paragraph{Inverse approximation results (Bernstein-type).}
Jackson-type results tell us that if a target function $H$ possesses some property related to $\{\set{H}^m\}$
(e.g. smoothness, small gradient norm), then it is in fact easy to approximate with $\{\set{H}^m\}$.
Inverse approximation results are converse statements:
they identify properties that $H$ ought to possess if one starts from the assumption that it
can be well-approximated (in a sense to be made precise in each case) by $\{ \set{H}^m \}$.
In the case of trigonometric polynomial approximation,
the following inverse approximation result is due to Bernstein~\citep[p.~206]{achieser2013.TheoryApproximation}.
Fix some periodic $H:[0,2\pi]\to \mathbb{R}$, and suppose that there exist
constants $c>0$, $\delta \in (0,1)$ and $\alpha \in \mathbb{N}_+$
such that for every $m\in \mathbb{N}_+$, one has
\begin{equation}\label{eq:berstein_trig}
\inf_{\h{H} \in \set{H}^m}
\| H - \h{H} \| \leq
\frac
{c}
{m^{\alpha + \delta}}.
\end{equation}
Then, $H$ is $\alpha$-times continuously differentiable and its $\alpha$-th
derivative is $\delta$-H\"{o}lder continuous.
Intuitively, this result says that if a function $H$ can be approximated at the rate
in~\cref{eq:berstein_trig}, then it must be in $C^\alpha_{\text{per}}([0,2\pi])$.
Combined with Jackson's result, one obtains a complete characterization of the type of functions
(namely smooth functions and their associated Sobolev spaces)
that can be efficiently approximated with trigonometric polynomials.
We will hereafter refer to these inverse approximation theorems as
\emph{Bernstein-type} results.
\subsection{Sequence modelling as an approximation problem}
\label{subsec:formulation_seq}
Now, we introduce the problem of sequence approximation,
which can be regarded as a particular class of approximation problems
as introduced in~\cref{subsec:formulation_approx}.
The key difference with classical approximation theories is that the
input spaces $\set{X}$ and the output spaces $\set{Y}$
are now spaces of sequences, and may be infinite-dimensional.
We consider an input sequence indexed by a totally ordered index set $\set{T}$
\begin{equation}
\seq{x}
=
\{
x(t):
t \in \set{T}
\}.
\end{equation}
There are two main choices of the index set $\set{T}$.
For discrete sequences (e.g. sequences of word embeddings), $\set{T}$ is (a subset of) $\mathbb{Z}$.
For continuous sequences (e.g. measurements of a continuous-time control system),
$\set{T}$ is (a subset of) $\mathbb{R}$.
The input space is a collection $\set{X}$ of such sequences.
Correspondingly, the output space $\set{Y}$ is another collection of sequences.
Each input sequence $\seq{x} \in \set{X}$ corresponds to an output sequence
$\seq{y}$ with
\begin{equation}
y(t) = H_t(\seq{x}),
\qquad
t\in\set{T}.
\end{equation}
That is, the sequence $\seq{H} = \{ H(t)\equiv H_t : t\in\set{T} \}$ is our target.
In this case, the target is in general an infinite-dimensional operator mapping $\set{X} \to \set{Y}$,
and for each $t$, $H_t$ is a functional on $\set{X}$.
We will hereafter refer to operators of this type as functional sequences.
Now, we seek to approximate $\seq{H}$ by candidates from a hypothesis space $\set{H}$.
The latter may be recurrent neural networks, convolutional networks or other types of models.
In each case, one first identifies appropriate target spaces $\set{C}$ for which $\set{H}$ is dense.
Then, one seeks Jackson-type and Bernstein-type results that characterize the types of sequence
relationships that can be efficiently approximated by each hypothesis space.
From the viewpoint of classical approximation theory,
one novel aspect of sequence approximation is that the input and output spaces are infinite-dimensional,
provided that the index set $\set{T}$ is infinite.
In fact, many interesting aspects of sequence modelling, such as those associated with memory,
precisely result from an unbounded index set, e.g. $\set{T} = \mathbb{R}$ or $\mathbb{Z}$.
We note that while sequence modelling is in effect an infinite-dimensional approximation problem,
it should be contrasted with generic operator learning problems
~\citep{kovachki2022.NeuralOperatorLearning,neufeld2022.ChaoticHedgingIterated,lu2021.LearningNonlinearOperators,chen1995.ApproximationCapabilityFunctions,benth2023.NeuralNetworksFrechet,cuchiero2022.UniversalApproximationTheorems,stinchcombe1999.NeuralNetworkApproximation}.
Here, a sequence is not a generic function but one whose domain is a totally ordered index set.
Therefore, the sequences and their corresponding vector spaces contain temporal structure
that should be highlighted in the approximation results.
It is for this reason that we do not call $\seq{H}$ operators,
but functional sequences, to highlight the presence of its sequential structure.
We close this part with a final remark.
There are many applications where the output is not a sequence but rather a finite-dimensional vector.
Examples include sequence regression~\citep{xing2010.BriefSurveySequence}
and sentiment analysis~\citep{tang2015.DocumentModelingGated}.
The present formulation includes these cases by writing $y \equiv y(\infty) = H_{\infty}(\seq{x})$ as the input-output
functional relationship.
In the next sections, we give a brief but structured overview of the approximation results for
sequence modelling, paying particular attention to the theoretical insights
and their consequences on practical architecture design.
\section{Introduction}
\label{sec:intro}
The modelling of relationships between sequences is an important task that enables
a wide array of applications, including
classical time-series prediction problems in finance~\citep{taylor2008.ModellingFinancialTime},
and modern machine learning problems in natural language processing~\citep{bahdanau2016.NeuralMachineTranslation}.
Another class of engineering applications involving sequential relationships is control systems,
which study the dependence of a dynamical trajectory on an input
control sequence~\citep{brunton2019.DatadrivenScienceEngineering}.
In general, sequence-to-sequence relationships can be very complex.
For example, when the index set for the sequences is infinite,
one can understand these relationships as mappings between infinite-dimensional spaces.
Thus, traditional modelling techniques are limited in their efficacy,
especially when there is little prior knowledge on the system of interest.
To address these difficulties,
an increasingly popular method to model sequence relationships is to
leverage machine learning.
To date, a large variety of machine learning paradigms have been proposed to
model sequential relationships.
One of the earliest attempts is the class of neural networks called
\emph{recurrent neural networks (RNN)}%
~\citep{rumelhart1986.LearningRepresentationsBackpropagating},
and their variants~\citep{cho2014.LearningPhraseRepresentations,hochreiter1997.LongShortTermMemory}.
Besides the RNN family of models, many other alternatives have also been explored.
These include convolution-based models~\citep{oord2016.WaveNetGenerativeModela},
encoder-decoder based models~\citep{cho2014.LearningPhraseRepresentations},
attention-based models~\citep{bahdanau2016.NeuralMachineTranslation},
and their combinations.
For example, the powerful transformer architecture~\citep{vaswani2017.AttentionAllYou}
combines encoder-decoder and attention architectures.
Despite the rapid developments in the practical domain,
the theoretical foundation of data-driven sequence modelling is still in its nascent stages.
For example, the most basic question of how the aforementioned architectures differ,
and how practitioners should select a model architecture for their application,
remains largely unanswered, and practice relies on trial and error.
Thus, an important direction of theoretical research is to understand the essential properties,
and most importantly, distinctions between different sequence modelling paradigms.
The present survey aims to provide an overview of the theoretical research
on sequence modelling in the specific direction of \emph{approximation theory}
\citep{lorentz2005.ApproximationFunctions,devore1998.NonlinearApproximation}.
In a nutshell, approximation theory is the study of how a complex relationship (say a function)
can be broken down into a combination of simpler, more readily computable building blocks.
A theoretical foundation of sequence modelling requires the understanding of how and when
a sequential relationship can be approximated by simpler components realized as various
neural network architectures.
The theory of sequence modelling is an active area of research that spans decades
of work both in machine learning and the study of nonlinear dynamics.
Thus, the purpose of this survey is not to give an exhaustive summary of all relevant results in the literature,
but rather to highlight some interesting insights for approximation theory gained from existing works.
Moreover, we discuss some classes of open questions that are significant
for advancing the understanding of the approximation theory for sequences.
The survey is organized as follows.
In~\cref{sec:formulation}, we introduce the mathematical problem of approximation,
including the key questions one is interested in answering.
In particular, we highlight the new aspects of sequence approximation as compared to classical paradigms
of function approximation.
In~\cref{sec:rnn}, we discuss approximation results on recurrent neural networks, where much more is known
compared with other architectures.
In~\cref{sec:others}, we consider the approximation theory of other model architectures, including those of convolutional,
encoder-decoder and attention types.
In~\cref{sec:outlook}, we summarize the known results and motivate some future directions of interest.
\paragraph{Notation.}
Let us introduce some notational conventions.
Throughout this paper, we use lower-case letters to denote scalars and vectors.
Boldface letters are reserved for sequences, e.g. $\seq{x} = \{ x(t) : t\in \set{T} \}$.
As in the previous formula, script letters such as $\set{T}$
are used to represent sets of scalar or vectors, both finite and infinite-dimensional.
Capital letters are used to denote mappings between vector spaces.
Correspondingly, a bold-faced capital letter is a sequence of such mappings.
Sometimes, we wish to refer to a portion of a sequence.
Let $\set{S} \subset \set{T}$,
then $\seq{x}_{\set{S}} := \{x(t) : t \in \set{S}\}$.
We use $|\cdot|$ to denote the Euclidean norm,
and reserve $\| \cdot \|$, possibly with subscripts, for
norms over function (e.g. sequence) spaces.
We use $\dot{x}(t)$ to denote the derivative of $t \mapsto x(t)$.
Higher derivatives of order $r\geq 0$ are written as $x^{(r)}(t)$.
Throughout this survey, we reserve the letters $m,n,d,i,j$
to represent integers.
Sequence indices, equivalently referred to as time indices,
are written as $t$ or $s$.
\section{Other architectures}
\label{sec:others}
Let us now expand our discussion to models beyond
the RNN model family.
Many of these architectures were proposed or popularized
in fairly recent years.
A partial but important motivation for developing these
alternative model architectures is precisely the memory
limitations discussed in~\cref{sec:rnn}.
Very often in practical applications,
we want to model sequence relationships having long
and irregular memory patterns.
For example, in machine translation tasks,
an output word at the end of the sentence in one language
may depend on the very first word in the corresponding sentence
in another language.
Moreover, the number of words in the original and translated
sentences are often not the same.
For these reasons, a variety of alternative models to the RNN
have been proposed.
Each of them is competitive in different domains of application.
The subsequent discussions will highlight a number of such examples.
However, to concretely understand the gains of using alternative
architectures to RNN, it is necessary to develop some
theoretical understanding of their comparison.
For example, can an alternative architecture such as
a convolutional-based architecture overcome the
curse of memory related to RNNs?
This often requires the development of
Jackson-type estimates in similar approximation settings,
which tell us precisely which functional sequences are easy to approximate
under a particular hypothesis space corresponding to
a model architecture of interest.
\subsection{Convolution-based architectures}
\label{sec:others_cnn}
We begin with results for convolution-based architectures.
While convolutional neural networks (CNN) were originally developed
for computer vision applications
\citep{krizhevsky2017.ImageNetClassificationDeep},
temporal versions of the CNNs have been shown to be
effective in many sequence modelling tasks~\citep{bai2018.EmpiricalEvaluationGeneric}.
Since convolution operations are easier to describe
using a discrete index set, we shall assume throughout
this subsection that $\set{T} = \mathbb{Z}$.
The basic building block of temporal CNNs is
the causal dilated convolution operation
\begin{equation}\label{eq:dilated_conv}
(\seq{u} \Conv{}_{l} \, \seq{v})(t)
=
\sum_{s \geq 0} u(s)^\top v(t-ls),
\qquad
l \in \mathbb{Z}_+.
\end{equation}
Note that the summation is taken over $s\geq 0$
to ensure causality,
meaning that the outcome at time $t$ depends only on the past information.
When $l=1$, this is the usual convolution.
Dilations $l\geq 2$ result in larger receptive fields with the same
number of parameters, and are hence useful in processing long sequences.
For example, successful temporal CNN architectures,
including the WaveNet~\citep{oord2016.WaveNetGenerativeModela}
and the TCN~\citep{lea2017.TemporalConvolutionalNetworks},
contain stacks of dilated convolutions with increasing dilation rates.
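To make~\cref{eq:dilated_conv} concrete, here is a minimal Python sketch of the scalar causal dilated convolution ($d = 1$); the function name and the zero-padding of the past are our own conventions for illustration:
\begin{verbatim}
import numpy as np

def causal_dilated_conv(u, v, l=1):
    """(u *_l v)(t) = sum_{s >= 0} u[s] * v[t - l*s], with v taken
    to be zero before time 0 (scalar case)."""
    out = np.zeros(len(v))
    for t in range(len(v)):
        for s in range(len(u)):
            if t - l * s >= 0:
                out[t] += u[s] * v[t - l * s]
    return out

x = np.random.randn(16)
w = np.array([0.5, -0.25])            # filter of support 2
y1 = causal_dilated_conv(w, x, l=1)   # ordinary causal convolution
y4 = causal_dilated_conv(w, x, l=4)   # dilated: wider receptive field
\end{verbatim}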
We can write a general dilated temporal CNN model with
$K$ layers and $M$ channels at each layer as
\rev{\begin{equation}\label{eq:CNNdynamics}
\begin{aligned}
\seq h_{0,i}
&=
\seq x_i,\\
\seq h_{k+1,i}
&=
\sigma \left(\sum_{j=1}^{M_k} {\seq w}_{kji}
\Conv{}_{d_k} \seq h_{k,j} \right),
\quad i=1, \dots, M_{k+1},
\quad k=0, \dots, K-1\\
\seq {\h y} &= \seq h_{K,1},
\end{aligned}
\end{equation}
where $M_0=d$ is the input dimension,
$M_K=1$ is the output dimension.
$M_k=M$ is the number of channels at layer $k$ for $k=1, \dots, K-1$.
}
Here, $\seq x_{i}$ is the scalar sequence
corresponding to the $i^\text{th}$ element
of the vector sequence $\seq x$,
and ${\seq w}_{kji}$ is the convolutional filter at layer $k$,
mapping from channel $j$ at layer $k$ to
channel $i$ at layer $k+1$.
A common choice for the dilation rate in applications is $d_k = 2^k$,
so we adopt this choice for the subsequent exposition.
Furthermore, for establishing approximation results,
it suffices to assume that each filter
$\seq{w}_{kji}$ has support 2, since filters
of larger size include this case.
This gives rise to the temporal CNN hypothesis space
\begin{equation}
\set{H}_{\text{CNN}} =
\bigcup_{K, M}
\set{H}_{\text{CNN}}^{(K,M)}
=
\bigcup_{K, M}
\Big\{
\seq{x} \mapsto \seq{\h{y}}
\text{ in \cref{eq:CNNdynamics}}
\Big\}.
\end{equation}
Density-type results have been studied for general CNNs
mostly for two-dimensional image applications,
and some of them can be adapted to the one-dimensional,
causal case here.
For brevity, we will not give an exhaustive list of this literature.
We mention however that most existing results are not directly applicable
to the sequence modelling case due to the requirement of shift-equivariance.
For example, the works of
\citet{
oono2019.ApproximationNonparametricEstimation,
zhou2020.UniversalityDeepConvolutional,
okumoto2022.LearnabilityConvolutionalNeural}
consider approximating general functions,
and shift-equivariance is violated at the boundaries.
Density results for fully convolutional cases
\citep{
li2022.DeepNeuralNetwork,
lin2022.UniversalApproximationProperty,
petersen2020.EquivalenceApproximationConvolutional,
yarotsky2022.UniversalApproximationsInvariant}
are more relevant for the present application.
Nevertheless, since image data have finite support,
none of these results consider an unbounded index set.
However, for sequence approximation,
the problem of memory should be studied precisely on unbounded index sets.
If we assume some form of memory decay such as the fading memory property (FMP),
then a truncation argument can be used to show that the temporal
CNN hypothesis space is dense in sequence spaces (e.g. $\ell^p$),
as a corollary of these results.
For Jackson-type theorems, the current understanding is again
limited to the simple but interesting case of linear temporal CNNs,
i.e. $\sigma(z) = z$.
This gives the linear temporal CNN hypothesis space
\begin{equation}\label{eq:linearhcnn}
\begin{aligned}
\set{H}_{\text{L-CNN}} = \bigcup_{K, M}
\set{H}_{\text{L-CNN}}^{(K,M)} = &\Big\{\seq{\h{H}}:
\h{H}_t(\seq x) = \sum_{s=0}^\infty \h{\rho}(s)^\top x(t-s)
\Big\},
\end{aligned}
\end{equation}
where $\seq{\h{\rho}}$ is a finitely-supported
sequence determined by the filters $\{\seq{w}_{kji}\}$:
\begin{equation}\label{eq:cnn_representation}
\seq{\h \rho}_i
=
\sum_{i_1, \dots, i_{K-1}}
\seq{w}_{K-1, i_{K-1}, 1}
\Conv{}_{2^{K-1}}
\seq{w}_{K-2, i_{K-2}, i_{K-1}}
\Conv{}_{2^{K-2}}
\dots
\Conv{}_{2}
\seq{w}_{0, i, i_1}.
\end{equation}
Observe the striking similarity between~\labelcref{eq:linearhcnn}
and the linear RNN hypothesis space~\labelcref{eq:hlrnn}.
The key difference is that in the RNN case,
the sequence $\seq{\h \rho}$ is an exponential sum with infinite support,
whereas in the case of CNNs it is a sum of repeated
dilated convolutions resulting in a finite support.
This in turn leads to, as investigated in
\citet{jiang2021.ApproximationTheoryConvolutional},
vastly different Jackson-type estimates.
In particular, one can identify different approximation
spaces that suggest how RNN and CNN approximations
differ when modelling sequence relationships.
Concretely,~\citet{jiang2021.ApproximationTheoryConvolutional}
proved the following Jackson-type estimate for linear,
causal and shift-equivariant
functional sequences $\seq{H}$:
\begin{equation}\label{eq:jackson_cnn}
\inf_{\seq{\h{H}} \in \set{H}_{\text{L-CNN}}^{(K,M)}}
\| \seq{H} - \seq{\h{H}} \|
\leq
G(KM^{\frac 1 K}-K)C_1(\seq{H})d + C_2(\seq H, K).
\end{equation}
Recall that $M$ is the number of convolution filters at each layer
and $K$ is the number of layers.
Together, $(M,K)$ control the complexity of the CNN hypothesis space.
The function $G : \mathbb{R} \to \mathbb{R}$ is a non-increasing function tending to 0,
to be explained later.
Let us now clarify the form of $C_1,C_2$.
Let $\seq\rho$ be the Riesz representation of $\seq H$,
\begin{equation}
H_t(\seq x) = \sum_{s\geq 0} \rho(s)^\top x(t-s).
\end{equation}
Then, $C_2(\seq{H}, K)=\| \seq\rho_{[2^K, \infty) }\|_{\ell^2}$
is determined by the rate of decay of the memory
of the target functional sequence.
In particular, since the truncation point $2^K$ grows exponentially
with the depth of the neural network ($K$),
$C_2$ decays rapidly in the depth even for targets whose memory decays slowly.
The term $C_1(\seq{H})$ is a complexity measure of the target functional sequence,
determined by the effective rank of $\seq{H}$ after a tensorisation transformation.
Let us motivate its definition by an example.
Set $d=1$ and suppose the goal is to model a target functional sequence
\begin{equation}
H_t(\seq{x})
=
r_0 x(t)
+
r_1 x(t-1)
+
r_2 x(t-2)
+
r_3 x(t-3),
\qquad
r_s \in \mathbb{R}.
\end{equation}
In this case, the Riesz representation for $\seq{H}$ has support 4, i.e.
$
\seq{\rho}
=
(r_0, r_1, r_2, r_3)
$.
\rev{
A temporal CNN approximates $\seq{\rho}$ via product-sums in
the form of~\labelcref{eq:cnn_representation}.
Let us take $K=2$ and $M=1$.
Then, notice that we are seeking the approximation of
\begin{equation}
\seq{\rho}
=
(r_0, r_1, r_2, r_3)
\qquad
\text{by}
\qquad
\seq{\h \rho}
=
(w_{0,0}, w_{0,1})
\Conv{}_2
(w_{1,0}, w_{1,1}),
\end{equation}
which we can rewrite in matrix form as the approximation of
\begin{equation}
\tens{\seq{\rho}}
=
\begin{pmatrix}
r_0 & r_1\\
r_2 & r_3
\end{pmatrix}
\qquad
\text{by}
\qquad
\tens{\seq{\h \rho}}
=
\begin{pmatrix}
w_{0,0} \\
w_{0,1}
\end{pmatrix}
\begin{pmatrix}
w_{1,0} & w_{1,1}
\end{pmatrix}.
\end{equation}
Then, the approximation error becomes clear.
If $\tens{\seq{\rho}}$ is rank 1, then it can be represented
exactly by the 2-layer CNN with channel size 1.
Otherwise, there will be an approximation error,
and the optimal approximation error is the second singular
value of $\tens{\seq\rho}$ as a consequence of the Eckart-Young theorem.
This argument can be generalized to any $K$ and $M$.
For $K \geq 3$ the reshaping operation $\tens{\cdot}$
acting on a length $2^K$ sequence produces an order-$K$ tensor of size $2$
in each dimension,
\begin{equation}
\tens{\seq\rho_{[0,2^K]}}_{i_1,\dots,i_K}
=
\rho_{[0,2^K]}
\left(
\sum_{j=1}^{K}
i_j 2^{j-1}
\right),
\qquad
i_j \in \{0, 1\}.
\end{equation}
Then, a temporal CNN approximates this tensor as a sum
of rank 1 tensors.
The optimal approximation error is hence a consequence of
an Eckart-Young type theorem for higher-order singular value decomposition (HOSVD)%
~\citep{kolda2009.TensorDecompositionsApplications}.
This motivates the definition of an approximation
space that depends on the tail of the singular value sequence.
Let us now make this more precise.
We fix a CNN of depth $K$,
and consider the tensorisation $\tens{\seq{\rho}_{[0,2^K]}}$.
In the theory of HOSVD~\citep{kolda2009.TensorDecompositionsApplications},
this tensor has $2K$ singular values
\begin{equation}\label{eq:hosvs}
\sigma_1^{(K)}\ge\sigma_2^{(K)}\ge\cdots\ge\sigma_{2K}^{(K)}\geq 0,
\end{equation}
the first $K$ of which are equal and redundant.
The last $K$ singular values determine the error
of low rank approximation of this tensor,
much in the same way as ordinary singular value decay rates
determine the accuracy of low rank approximation of matrices.
Thus, we may consider specifying some decay rate $G$ so that
the tail sum of singular values (which corresponds to low rank approximation error)
satisfies
\begin{equation}\label{eq:g_decay}
\left(\sum_{i=s+K}^{2K}|\sigma_i^{(K)}|^2\right)^{\frac 1 2} \leq c G(s),
\end{equation}
with $G(s) \rightarrow 0$ as $s\rightarrow \infty$.
Now, we can build an approximation space by considering target functional
sequences whose Riesz representation $\seq \rho$ satisfies the following property:
for each $K$, the singular value tail sum of $\tens{\seq\rho_{[0,2^K]}}$
has a decay rate of at least $G$ (\cref{eq:g_decay}).
Then, the error of low rank approximation of these functional sequences
can be described by $G$.
This leads to the definition of a complexity measure in~\citet{jiang2021.ApproximationTheoryConvolutional}
of the form
\begin{equation}\label{eq:cnn_complexity}
\begin{aligned}
C_1(\seq H) =
\inf \Bigg\{c: \left(\sum_{i=s+K}^{2K}|\sigma_i^{(K)}|^2\right)^{\frac 1 2} \leq c G(s),
~s\geq 0, K\ge 1\Bigg\},
\end{aligned}
\end{equation}
and $G$ is a specified rate of decay of the singular values.
The Jackson-type rate estimate in \cref{eq:jackson_cnn} then follows
from the fact that the maximum rank of a CNN with $K$ layers and $M$ channels is at least $KM^{1/K}$.
The class of functional sequences where $C_1$ is finite defines an approximation
space (with respect to $G$) of sufficiently regular functional sequences
that admits efficient approximation by temporal CNNs.
This is analogous to the characterization of classical smoothness spaces by
the decay rate of series coefficients,
e.g. wavelet coefficients~\citep{mallat2009.WaveletTourSignal}.
Here, we can understand $C_1(\seq{H})$ as a measure of how easy
it is to approximate $\seq{H}$ by tensor product-sums.
In particular, it can be shown~\citep{jiang2021.ApproximationTheoryConvolutional}
that if $\seq{H}$ has a sparse Riesz representation (memory),
then it has small $C_1(\seq{H})$.
This supports the empirical observation that temporal CNNs
excel in applications such as text-to-speech~\citep{oord2016.WaveNetGenerativeModela},
where such sparsity patterns are expected.
}
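Before moving on, the Eckart--Young step above is easy to verify numerically. The following Python sketch (with a hypothetical kernel chosen purely for illustration) computes the best rank-1 approximation of $\tens{\seq\rho}$ for $K=2$, $M=1$, and reads off the corresponding filters:
\begin{verbatim}
import numpy as np

rho = np.array([0.9, -0.4, 0.2, 0.1])   # hypothetical memory kernel
T = rho.reshape(2, 2)                   # tensorisation for K = 2

U, s, Vt = np.linalg.svd(T)
T1 = s[0] * np.outer(U[:, 0], Vt[0])    # best rank-1 approximation
print(np.linalg.norm(T - T1), s[1])     # optimal error == sigma_2

# Filters of the 2-layer, 1-channel temporal CNN realizing T1:
w0 = U[:, 0] * np.sqrt(s[0])            # (w_{0,0}, w_{0,1})
w1 = Vt[0] * np.sqrt(s[0])              # (w_{1,0}, w_{1,1})
print(np.outer(w0, w1).ravel())         # the realized kernel rho-hat
\end{verbatim}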
Let us now contrast this insight with that obtained for the RNN,
which excels at modelling memory patterns that decay exponentially
but are not necessarily sparse.
Consider a target functional sequence with Riesz representation
\begin{equation}
\rho(t)= \delta(t-2^{K_0}) =
\begin{cases}
1 & t = 2^{K_0}, \\
0 & t \neq 2^{K_0}.
\end{cases}
\end{equation}
This corresponds to a shift operation, where the output is the result of
shifting the input by $2^{K_0}$ units.
Observe that this target functional sequence
is inside the temporal CNN hypothesis space,
hence it can be exactly represented by setting $K = K_0$ and $M=1$.
However, when $K_0$ is large, it becomes increasingly difficult for a
power sum
$
u(t) = c_0 + \sum_{i=1}^m c_i\ \gamma_i^{ t}
$
to approximate this function.
The form of $u$ here is a simplified discrete
analogue of the exponential sum
in \cref{eq:hlrnn}.
For any such $u$,
we have the following property
due to~\citet{erdelyi1996.SharpBernsteintypeInequality},
\rev{\begin{equation}\label{eq:exp_property}
m
\geq
\frac{t}{2 \sup_{s\in [0,2t+2]} u(s)}
|{u(t+1)-u(t)}|.
\end{equation}}%
Since $\rho(t)$ has a sudden change at $t = 2^{K_0}$,
$u(t)$ needs at least $2^{K_0-1}$ terms to achieve a good approximation,
making this target challenging for an RNN to learn.
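A crude numerical illustration of this obstruction is given below; note that the dictionary of decay rates $\gamma_i$ is fixed here, whereas the bound above allows the $\gamma_i$ to be optimized as well, so this sketch is only suggestive:
\begin{verbatim}
import numpy as np

T, K0 = 128, 6
t = np.arange(T)
rho = (t == 2 ** K0).astype(float)        # spike at t = 64

for m in [2, 8, 32]:
    gammas = np.linspace(0.5, 0.99, m)    # fixed decay rates
    B = np.column_stack([np.ones(T)] + [g ** t for g in gammas])
    c, *_ = np.linalg.lstsq(B, rho, rcond=None)
    print(f"m={m:3d}  residual={np.linalg.norm(B @ c - rho):.3f}")
# The residual stays close to 1 (the norm of the spike): smooth
# exponential sums cannot localize around t = 2^{K0}.
\end{verbatim}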
Conversely, there exist targets which are easily approximated
(in fact, exactly represented) by $\set{H}_{\text{RNN}}$ but have high complexity
when approximated by $\set{H}_{\text{CNN}}$~\citep{jiang2021.ApproximationTheoryConvolutional}.
These Jackson-type results highlight the interesting
differences between the RNN and the CNN architectures
with respect to the types of sequential relationships
they are adapted to approximating.
\subsection{Encoder-decoder architectures}
\label{sec:others_encdec}
Encoder-decoder architectures%
~\citep{cho2014.LearningPhraseRepresentations,
cho2014.PropertiesNeuralMachine,
sutskever2014.SequenceSequenceLearninga,
kalchbrenner2013.RecurrentContinuousTranslationb}
are a class of sequence to sequence models where
an encoder first maps the input sequence into a fixed-sized context vector,
and then a decoder maps the context vector into the output sequence.
The development of encoder-decoder models was motivated by the need
to handle input and output sequences with varying lengths.
The encoder-decoder architecture is flexible and allows
for the use of various configurations for the encoder and decoder components.
We consider the simplest setting where the encoder and the decoder are both
recurrent networks~\citep{cho2014.LearningPhraseRepresentations,sutskever2014.SequenceSequenceLearninga}.
This has the advantage that we can compare the results here
with those in ordinary RNNs in~\cref{sec:rnn}.
The RNN encoder-decoder ({{REncDec}}) architecture
(ignoring bias for simplicity) can be written as
\begin{equation}\label{eq:reddynamics}
\begin{aligned}
\dot h(s) &= \sigma_E(Wh(s)+Ux(s)),
& v &= Qh(0), \quad s\leq 0\\
\dot g(t) &= \sigma_D(Vg(t)),
& g(0) &= Pv, \\
\h{y}(t) &= c^\top g(t),
& t &\ge 0,
\end{aligned}
\end{equation}
where $W\in\mathbb{R}^{m\times m}$, $U\in\mathbb{R}^{m\times d}$, $Q\in\mathbb{R}^{N\times m}$, $V\in\mathbb{R}^{m\times m}$,
$P\in\mathbb{R}^{m\times N}$ and $c\in\mathbb{R}^{m}$.
Sequences $\seq h$ and $\seq g$ are the RNN-type hidden states
corresponding to the encoder and decoder dynamics, respectively.
The encoder is first applied to the entire input sequence $\seq{x}$
in order to produce a fixed-size context vector $v$,
which is the final hidden state of the encoder.
This context vector summarizes the input sequence.
The vector is then utilized as the initial state of the decoder,
which generates an output at each time step.
This defines a sequential relationship between two semi-infinite sequences,
with the input $\seq{x}$ having support in $(-\infty, 0]$
and the output $\seq{y}$ having support in $[0, \infty)$.
The complexity of these maps (approximation budget)
is controlled by the RNN width $m$ and context vector size $N$.
Approximation properties of the {{REncDec}} architecture are
investigated in detail for the linear case
($\sigma_E,\sigma_D$ are identity maps) in~\citet{li2022.ApproximationPropertiesRecurrent}.
For simplicity of presentation, we take $d=1$,
corresponding to scalar input sequences.
Then, one can rewrite the {{REncDec}} hypothesis space as
\begin{equation}\label{eq:hred}
\set{H}_{\text{L-\rencdec}} =
\bigcup_{m, N}
\set{H}_{\text{L-\rencdec}}^{(m,N)} =
\bigcup_{m, N}
\Big\{\seq{\h{H}}:
\h{H}_t(\seq x)
=
\int_0^\infty \sum_{n=1}^N \h{\psi}_n(t)\h{\phi}_n(s) x(-s)ds
\Big\},
\end{equation}
where one may recall that
$m$ is the width of the RNNs used for the
encoder and the decoder,
and $N$ is the size of the context vector.
The sequences $\seq{\h \psi_n}$ and $\seq{\h \phi_n}$
are in exponential sum forms
\begin{equation}\label{eq:phi_psi}
\h\psi_n(t)
=
\Bigg(\sum_{i,j=1}^{m}c_i P_{jn}
\left[e^{Vt}\right]_{ij}\Bigg),
\qquad
\h\phi_n(t)
=
\Bigg(\sum_{i,j=1}^{m}u_i Q_{nj}
\left[e^{Wt}\right]_{ji}\Bigg).
\end{equation}
Since the {{REncDec}} architecture maps sequences of disjoint support,
it is no longer meaningful to consider time-homogeneity and causality.
Indeed, causality is always satisfied and time-homogeneity is not satisfied.
This is the case by design: the {{REncDec}} architecture is used to model
sequential relationships without the shift-equivariant condition.
Consequently, the target functional sequences considered here are
only assumed to be continuous and linear.
In this case, the Riesz representation of these targets takes the form
\begin{equation}\label{eq:riesz_encdec_target}
H_t(\seq{x}) = \int_0^\infty \rho(t,s)^\top x(-s) ds,
\qquad
t \geq 0.
\end{equation}
This is a more general form where $\seq \rho$ depends on
two temporal indices $t$ (outputs) and $s$ (inputs) simultaneously.
The density of the hypothesis space~\labelcref{eq:hred} in the space of
sufficiently regular continuous linear functionals is established
in~\citet{li2022.ApproximationPropertiesRecurrent}.
This result follows from the observation that we now seek approximations
of $\rho(t,s)$ via a product of two exponential sums.
Hence, one may follow essentially the same approach as in the RNN case
to prove density.
More interestingly, Jackson-type estimates can also be derived.
In particular, one has the following approximation rate
under similar settings as in~\cref{eq:jackson_rnn}
\begin{align}\label{eq:jackson_rencdec}
\inf_{\seq{\h{H}} \in \set{H}_{\text{L-\rencdec}}^{(m,N)}}
\| \seq{H} - \seq{\h{H}} \|
\leq
\frac{C_1(\alpha) \gamma}{\beta^2 m^\alpha}
+
C_2(\seq{H}, N),
\end{align}
where the various constants are defined similarly as
in~\cref{eq:jackson_rnn}.
Observe that the first term is similar to the RNN rate
\labelcref{eq:jackson_rnn}, as both the encoder and decoder are implemented
using RNNs.
The estimate $C_2(\seq H, N)$
highlights the new complexity measure associated with
encoder-decoder architectures,
since $N$ is the complexity of the context (coding) vector that acts as the only
intermediary between the encoder and decoder components.
To see what the complexity measure may be,
let us compare~\cref{eq:phi_psi}
and~\cref{eq:riesz_encdec_target}.
Observe that approximating a target $\seq{H}$ simply amounts
to approximating its Riesz representation $\seq{\rho}$ by
a tensor-product summation of the form
\begin{equation}\label{eq:model_svd}
\h{\rho}(t,s)
=
\sum_{n=1}^N \h{\psi}_n(t)\h{\phi}_n(s).
\end{equation}
One may immediately notice that this is a rank-$N$ approximation
of a two-variable function as sums of products of univariate functions.
The optimal approximation is obtained through the proper orthogonal decomposition
(POD)~\citep{chatterjee2000.IntroductionProperOrthogonala},
which is an infinite-dimensional version of the optimal low rank approximation
of matrices via truncated singular value decomposition.
In fact, we may write the formal POD expansion for $\seq{\rho}$ as
\begin{equation}\label{eq:target_svd}
{\rho}(t,s) = \sum_{n=1}^\infty \sigma_n{\psi}_n(t){\phi}_n(s),
\end{equation}
where $\sigma_n$ are the singular values and $\psi_n,\phi_n$
are the left and right singular vectors (functions).
This is called a \emph{temporal product} structure
in~\citet{li2022.ApproximationPropertiesRecurrent}.
Then, an analogue of the classical Eckart-Young theorem implies
that the optimal approximation error is simply the tail-sum of the singular
values. This is precisely the estimate in $C_2$, i.e.
\begin{equation}
C_2(\seq{H}, N)
\propto
\left(\sum_{n=N+1}^\infty \sigma_n^2\right)^{\frac 1 2}.
\end{equation}
This tail sum measures the effective rank of the target,
and the Jackson-type estimate in~\cref{eq:jackson_rencdec}
says that a target with low effective rank can be approximated
efficiently with {{REncDec}} (with small context vector).
Note that this notion of rank is different from the
tensorisation rank discussed for CNNs in~\cref{sec:others_cnn}.
The concept of effective rank of the sequential relationship
under temporal products is similar to that in linear algebra,
where the rank of a matrix is the dimension of its range space.
This definition can be extended to apply to sequential relationships.
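The following Python sketch (with a hypothetical kernel chosen purely for illustration) discretizes a kernel $\rho(t,s)$ and reads off the optimal rank-$N$ errors, i.e. the quantity $C_2(\seq{H}, N)$ up to discretization:
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 5.0, 200)[:, None]
s = np.linspace(0.0, 5.0, 200)[None, :]
rho = np.exp(-(t + s)) + 0.1 * np.exp(-(t - s) ** 2)  # hypothetical

sv = np.linalg.svd(rho, compute_uv=False)   # POD of the kernel
for N in [1, 2, 4, 8]:
    tail = np.sqrt(np.sum(sv[N:] ** 2))     # optimal rank-N error
    print(f"context size N={N}:  error ~ {tail:.3e}")
# A rapidly decaying tail means a small context vector suffices.
\end{verbatim}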
\Cref{fig:high_low_rank} illustrates this idea:
A low rank temporal relationship results in a more
regular output sequence.
In particular, local perturbations of the input sequence result in global
perturbations of the output sequence.
This is very different from both the CNN and the RNN architectures,
and the Jackson-type estimate makes this difference precise.
\begin{figure}[H]
\centering
\includegraphics[draft=false,width=0.8\textwidth]{figures/highlowrank.pdf}
\hfill
\caption{
Schematic illustration of a high-rank vs low-rank
sequential relationship under the temporal product structure.
A dataset of input sequences (left)
are fed into a functional sequence producing
the corresponding output sequences (right).
The top (resp. bottom) right plot shows the resulting sequence
of a high-rank (resp. low-rank) relationship.
Observe that the high rank relationship yields a
complex and input-sensitive temporal structure.
In contrast, the outputs of the low rank relationship
exhibit greater regularity, with only macroscopic structures present.
It is precisely the latter that {{REncDec}} is adapted
to model.
}
\label{fig:high_low_rank}
\end{figure}
Currently known approximation results focus only on the linear RNN
encoder-decoder.
However, the density result can be extended to non-linear cases by following
the same approaches outlined in~\cref{sec:rnn},
due to the similarity with RNNs.
The rate estimate is less straightforward to extend to non-linear activations.
Nevertheless, one may expect that the uncovered relationship
between the size of the context vector and a low-rank type of approximation should
hold generally for encoder-decoder architectures. This is because in all such
structures, the input and output sequences (both possibly infinite-dimensional)
communicate only through a bottleneck coding vector (finite-dimensional), and thus
the approximation should be viewed as a generalized low-rank approximation.
\subsection{Attention-based architectures}
\label{sec:others_attn}
In the final part of this section, we discuss approximation theory
for the increasingly popular attention-based architectures.
The attention mechanism was first proposed in~\citet{bahdanau2016.NeuralMachineTranslation}
in the context of RNNs.
Subsequently, it was employed in a variety of practical network architectures.
The attention mechanism, much like the encoder-decoder mechanism,
is a component that can be incorporated into existing models.
Since its introduction, the attention mechanism has become a popular tool in applications,
including natural language processing~\citep{vaswani2017.AttentionAllYou}
and computer vision~\citep{dosovitskiy2021.ImageWorth16x16}.
In fact, one of the most successful model families,
the Transformer~\citep{vaswani2017.AttentionAllYou},
is based on both the attention mechanism and the encoder-decoder mechanism.
However, our theoretical understanding of the attention mechanism is currently limited,
particularly with regard to its approximation properties.
Let us focus our discussion on the Transformer family of
attention-based architectures.
Currently established approximation results include
the universal approximation capabilities of Transformer
networks~\citep{yun2020.AreTransformersUniversala}
and its sparse variants~\citep{yun2020.ConnectionsAreExpressivea}.
It is important to note that in this context,
the term ``Transformer'' refers specifically to the encoder component
of the original architecture proposed in~\citet{vaswani2017.AttentionAllYou}.
In order to study the Transformer under the sequence approximation setting,
it is convenient to restrict the index set $\set{T}$ to a finite set
$\set{T} = \{1,2,\dots,\tau\}$.
Then, the approximation problem becomes finite-dimensional.
The reason is as follows.
The use of position encoding in Transformer networks is necessary to eliminate
their permutation equivariance (we make this precise below).
Position encoding is a sequence $\seq e$
where $t\mapsto e(t)$ is a fixed or trainable
function, independent of $\seq{x}$.
\rev{The sequence $\seq e$ preserves the information of temporal order.}
For training convenience, the length of this encoding is fixed.
As a result, Transformer networks are unable to directly process
infinite-length sequences,
unlike RNNs and CNNs based architectures.
The simplest transformer block consists of the following components,
\begin{equation}\label{eq:transformer_block}
\begin{aligned}
\text{Attn}(\seq x)(t) &= x(t) + \sum_{i=1}^q
W_o^{i} \sum_{s=1}^\tau\sigma [(W_{Q}^{i}x(t))^\top W_{K}^{i}x(s)] \, W_{V}^{i} x(s),\\
\text{Trans}(\seq x)(t) &= \text{Attn}(\seq x)(t) + f(\text{Attn}(\seq x)(t)),
\end{aligned}
\end{equation}
where
$
W_{Q}^{i}, W_{K}^{i}, W_{K}^{i}
\in
\mathbb{R}^{m\times d}$, $W_{o}^{i} \in \mathbb{R}^{d\times m}
$.
Here, $\text{Attn}(\seq x)$ is the attention block,
$\sigma$ is a normalization usually taken as the softmax function,
and $\tau$ is the maximum input sequence length.
The attention mechanism produces an output which is
subsequently fed into a common trainable feed-forward network $f$,
pointwise in time.
This constitutes a Transformer block.
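For concreteness, the following minimal Python sketch implements a single-head version of the attention block in~\labelcref{eq:transformer_block}; all names are our own, and practical details of the full Transformer (layer normalization, masking, scaling) are omitted:
\begin{verbatim}
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attn_block(x, Wq, Wk, Wv, Wo):
    """x has shape (tau, d); returns Attn(x) of the same shape."""
    scores = (x @ Wq.T) @ (x @ Wk.T).T     # (tau, tau) dot products
    A = softmax(scores, axis=-1)           # normalization over s
    return x + (A @ (x @ Wv.T)) @ Wo.T     # residual connection

tau, d, m = 8, 4, 16
rng = np.random.default_rng(0)
x = rng.normal(size=(tau, d))
Wq, Wk, Wv = (rng.normal(size=(m, d)) for _ in range(3))
Wo = rng.normal(size=(d, m))
y = attn_block(x, Wq, Wk, Wv, Wo)          # shape (tau, d)
\end{verbatim}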
Define the Transformer hypothesis space by
\begin{equation}
\set{H}_{\text{Trans}}^{(m_1, m_2, q, l)}
=
\Big\{\seq{\h{H}}:
\seq{\h{H}}
\text{ is a composition of $l$ Transformer blocks } t^{(m_1,m_2,q)}
\Big\},
\end{equation}
where $t^{(m_1,m_2,q)}=\text{Trans}(\cdot)$ is a Transformer block defined
in \labelcref{eq:transformer_block},
$m_1$ is the trainable dimension of the attention block
(total degrees of freedom of $W_{o}, W_{Q}, W_{K}, W_{V}$),
$q$ is the number of attention heads
and $m_2$ is the dimension of the trainable parameters in the
pointwise feed-forward network $f$.
In~\citet{yun2020.ConnectionsAreExpressivea}, a sparse variant is defined,
where the $W_Q^i$ matrix in the attention block satisfies certain sparsity conditions.
We denote the sparse Transformer hypothesis space by
\begin{equation}
\set{H}_{\text{SpTrans}}^{\,(m_1, m_2, q, l)} \subset \set{H}_{\text{Trans}}^{\,(m_1, m_2, q, l)},
\end{equation}
which is a subset of the Transformer hypothesis space.
We start with density results for the Transformer.
First, note that without position encoding,
the Transformer hypothesis space is permutation equivariant.
Concretely, let $p$ be a permutation of the sequence index,
which is a bijection on $\{1,\dots,\tau\}$.
For a sequence $\seq{x}$, we denote by $\seq{x}\circ p$
the permuted sequence $[\seq{x}\circ p](t) = x(p(t))$.
A functional sequence $\seq{H}$ is said to be
permutation equivariant if for all $p$ and $\seq{x}$
we have $\seq{H}(\seq{x} \circ p) = \seq{H}(\seq{x}) \circ p$.
We can check that the Transformer block~\labelcref{eq:transformer_block}
is permutation equivariant if one does not perform positional encoding.
This certainly limits approximation properties,
and thus we hereafter assume that
a fixed position encoding is added to the input $\seq x$,
such that the model input becomes $\seq x + \seq e$.
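Permutation equivariance, and how position encoding breaks it, can be checked numerically by reusing the attn_block sketch above:
\begin{verbatim}
p = rng.permutation(tau)                   # a random permutation
lhs = attn_block(x[p], Wq, Wk, Wv, Wo)     # Attn(x o p)
rhs = attn_block(x, Wq, Wk, Wv, Wo)[p]     # Attn(x) o p
print(np.max(np.abs(lhs - rhs)))           # ~ 1e-15: equivariant

e = rng.normal(size=(tau, d))              # a (random) position encoding
lhs = attn_block(x[p] + e, Wq, Wk, Wv, Wo)
rhs = attn_block(x + e, Wq, Wk, Wv, Wo)[p]
print(np.max(np.abs(lhs - rhs)))           # O(1): equivariance broken
\end{verbatim}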
In~\citet{yun2020.AreTransformersUniversala,yun2020.ConnectionsAreExpressivea},
a density result for the Transformer is proved under the following conditions:
assuming that the target $\seq H$ is continuous and that the input sequence space
is uniformly bounded,
$\seq{H}$ can be approximated by
\begin{equation}
\seq{\h H} \in \bigcup_l
\set{H}_{\text{SpTrans}}^{( 1, 4 ,2, l)}\subset \bigcup_l \set{H}_{\text{Trans}}^{( 1, 4 ,2, l)}.
\end{equation}
This result is proved by a special construction.
First, one uses a stack of attention blocks to achieve the following conditions:
\begin{enumerate}
\item For any input $\seq{x}$,
the values of the output $\seq{\tilde{x}}$ are all distinct.
\item For all inputs $\seq{x_1}, \seq{x_2}$ such that
$\seq{x_1} \neq \seq{x_2}$,
their outputs $\seq{\tilde{x}_1}$ and $\seq{\tilde{x}_2}$ have no common value.
\end{enumerate}
These conditions can be understood as requiring that, for each $t$,
$\tilde{x}(t)$ captures the information of the entire input sequence.
Next, a deep stack of pointwise feed-forward blocks
are constructed to map each $\tilde{x}(t)$ to the desired output.
This construction results in a deep Transformer architecture
with a small width.
However, this construction is not generally how the Transformer operates,
since the first part of the construction is an attention-only network,
which has been shown to degenerate quickly~\citep{dong2021.AttentionNotAlla}.
In a similar vein,
several studies such as~\citet{cordonnier2020.RelationshipSelfAttentionConvolutional}
and~\citet{li2021.CanVisionTransformersa}
have demonstrated that a Transformer can represent
a CNN through careful parameterization.
Therefore, density results for CNNs imply
density for the Transformer.
Again, there is little empirical evidence that the Transformer
behaves like a CNN in applications.
The ability for the Transformer to mimic other architectures
is not surprising, since it has many highly flexible
components (encoder-decoders, fully connected networks, attention mechanisms)
that can be carefully, but often artificially, adjusted to represent
other known architectures as a special case.
In fact, we give here another example of such a representation
that, to the best of our knowledge,
has not been reported in the literature,
but is straightforward to derive.
We can show that a two-layer Transformer can mimic the form of
a generalized Kolmogorov representation theorem~\citep{ostrand1965.DimensionMetricSpaces}.
This result states that for $d$-dimensional compact sets $I_1,\dots,I_\tau \subset \mathbb R^d$,
any continuous
$H : I_1 \times \cdots \times I_\tau \to \mathbb{R}$
can be written as
\begin{equation}
H(\seq x) = \sum_{q=0}^{2d\tau}
\Phi_q \left( \sum_{s=1}^\tau \phi_{q,s}(x(s))\right),
\end{equation}
where $\Phi_q$ and $\{\phi_{q,s}\}$ are continuous functions.
It is possible to design a two-layer Transformer
exhibiting a similar form, implying density.
For simplicity, we consider $d=1$ and only the output $H_t$ at $t = 1$.
The general case can be constructed similarly.
One can ensure that with position encoding,
a pointwise feed-forward function is able to apply different
mappings at each temporal index position.
To see this, observe that for a collection of continuous functions
$f_i: [0,1]^d \to \mathbb R$,
$i=1,\dots,\tau$,
we can find vectors $\{e_i\}$
and a continuous function $F:\mathbb{R}^d\to\mathbb{R}$
such that
$
F(x + e_i) = f_i(x).
$
Now suppose we have an input sequence $\seq x$.
\begin{itemize}
\item Layer 1.
We set $W_o^i = 0$ in the attention block,
so that the input directly goes into the pointwise feed-forward block.
From the previous discussion,
the pointwise feed-forward network
can be constructed to give an output
$y^{(1)} :\mathbb{R} \to \mathbb{R}^{2\tau + 1}$,
such that
$
y^{(1)}(s) =
c_s[
\h{\phi}_{0,s}(x(s)),
\dots,
\h{\phi}_{2\tau,s}(x(s))
]^\top
$,
with $c_1 = 1/2$ and $c_s=1$ when $s>1$.
Due to the density of feed-forward neural networks,
each $\h{\phi}_{j,s}$ can be chosen to approximate any continuous function.
\item Layer 2.
In the attention block, by letting $W^i_{K}=0$,
the softmax function gives a constant output where
$\text{softmax} [(W_{Q}^{i}x(t))^\top W_{K}^{i}x(s)]\equiv \frac{1}{\tau}$.
Let $W_{V}^i = I, W_o^i = \tau I$ and $q = 1$, then we have
$
\text{Attn}(\seq y^{(1)})(1) = \sum_{s=1}^\tau y^{(1)}(s).
$
Hence, the final output after the feed-forward network
with linear readout $c^\top = (1,\dots, 1) \in \mathbb{R}^{2\tau+1}$ gives
\begin{equation}
\begin{aligned}
H_1(\seq x)
&=
c^\top \h{\Phi}(\text{Attn}(\seq y^{(1)})(1)), \\
&=
\sum_{q=0}^{2\tau}\h{\Phi}_q\left(\sum_{s=1}^{\tau} [y^{(1)}(s)]_q\right), \\
&=
\sum_{q=0}^{2\tau}\h{\Phi}_q\left(\sum_{s=1}^{\tau} \h{\phi}_{q,s}(x(s))\right),
\end{aligned}
\end{equation}
where $\h{\Phi}$ is again a feed-forward neural network
that can be adjusted to approximate any continuous function.
Thus, the Kolmogorov representation can be approximated through
this particular construction.
\end{itemize}
This highlights a common issue in current approximation results for
complex structures such as the Transformer.
Density-type results are rarely illuminating, since the structural
flexibility of such models allows density to be established through
many different constructions.
However, they rarely reveal the working principles of the complex
model under study.
In particular, it gives little insights to why and when these
models should be used for applications.
Such insights may result from finer analysis of approximation properties,
including Jackson-type and Bernstein-type results as outlined
for the other architectures previously.
\revtwo{
To date, there are few, if any, Jackson-type or Bernstein-type
results for sequence modelling using the Transformer.
We mention a related series of works on static function approximation
with a variant of the Transformer architecture%
~\citep{kratsios2022.SmallTransformersCompute,kratsios2022.UniversalApproximationConstraints,acciaio2022.DesigningUniversalCausal}.
Here, the targets are continuous functions $H: [0, 1]^\tau \to K$,
and $K\subset \mathbb{R}^n$ is a compact set.
Examples include classification problems
where $K$ is a probability simplex, and
covariance matrix prediction problems with $K$ being
the set of symmetric positive semi-definite matrices.
The authors consider a variant of the transformer architecture
to approximate this target function family.
For $x \in [0,1]^\tau$, and $Y_1, \cdots, Y_N \in K$,
an approximant of the following form is considered,
\begin{equation}
\begin{aligned}
\h H(x) &= \text{Attn}(\h D(\h E (x)),Y) \\
&= \sum_{i=1}^N \text{softmax}(\h D(\h E (x)))_{i} \delta_{Y_i},
\end{aligned}
\end{equation}
where
$\h E: \mathbb R^\tau \to \mathbb R^m$,
$\h D: \mathbb R^m \to \mathbb R^N$ are two fully-connected neural networks,
and $\delta_{Y_i}$ is a point mass at $Y_i$.
The softmax is taken along the $i=1,\dots,N$ direction.
One may observe how this architecture deviates
from the Transformer used
in sequence modelling \labelcref{eq:transformer_block}.
For this modified architecture,
the authors derive an approximation error estimate
based on increasing the complexities of the encoder $\h E$ and the decoder $\h D$.
The identified notion of regularity for the target
to induce efficient approximation is smoothness,
similar to classical approximation of functions.
However, the sequence approximation properties of
the practical Transformer architecture \labelcref{eq:transformer_block},
and in particular its relation to memory structures in the data,
remains an open problem.
This is an important direction of future research.
}
\section{Discussion and outlook}
\label{sec:outlook}
Let us first summarize in~\cref{tab:summary}
the approximation results we discussed
in~\crefrange{sec:rnn}{sec:others}.
Observe that most results in the literature
are of the density-type,
and current Jackson and Bernstein-type results
are often limited to the simplified setting of linear activations.
Nevertheless, these rate estimates are instructive
in revealing some key insights on the approximation
of sequence relationships using different architectures.
\begin{table}[ht!]
\begin{center}
\caption{Summary of approximation results for sequence modelling.}
\label{tab:summary}
\begin{tabular}{l|ccc}
\toprule
& \textbf{Density-type}
& \textbf{Jackson-type}
& \textbf{Bernstein-type}\\
\midrule
\textbf{RNN} & \checkmark & \rev{Barron, linear} & linear\\
\textbf{CNN} & \checkmark & linear & -\\
\textbf{{REncDec}} & \checkmark & linear & - \\
\textbf{Transformer} & \checkmark & - & -\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
We can collectively summarize this insight as
a form of \emph{structural compatibility}.
That is to say:
\begin{verse}
\emph{
Each model architecture is efficient in approximating
precisely those targets that resemble its temporal structure.
}
\end{verse}
For example, we saw that RNNs are particularly good at
approximating relationships with an exponentially decaying memory pattern.
We can attribute this to the fact that RNNs themselves have
an exponentially decaying memory structure, as evidenced
by the expression~\cref{eq:hlrnn}.
Similarly, temporal CNNs are effective in approximating
targets whose memory structure has low-rank under tensorisation,
i.e. can be written as the product-sum of few tensors.
\rev{This is indeed what the temporal CNN itself looks like:
we recall that the tensorisation rank of the temporal CNN
with $K$ layers and $M$ channels scales like $K M^{1/K}$.}
The same holds for the RNN encoder-decoder with respect
to its low-rank structure under temporal products,
induced by the context vector.
\rev{
We emphasize that this notion of rank is very different from that
in temporal CNN approximation.
In the convolution case, the rank refers to
the tensorisation procedure related to the stacked
convolutional structure of the temporal CNN.
In the case of recurrent encoder-decoders,
the rank refers to the amount of temporal coupling between the input
sequence and the output sequence.
Mathematically, this coupling is measured by the
rate of decay of the singular values in the expansion
\labelcref{eq:target_svd}.
The recurrent encoder-decoder with a small context vector
precisely parameterizes a temporal relationship that
has little coupling between inputs and outputs.
The Jackson-type results show that the recurrent
encoder-decoder is indeed adapted to approximate
target relationships having the same property.
}
The requirement of structural compatibility is consistent
with classical approximation theory.
For example, trigonometric polynomials with low-orders
are themselves smooth functions with small gradient-norms,
and thus are adapted to approximating these functions.
The same pattern is observed for non-linear approximation,
wavelets and multi-resolution analysis, where weakened smoothness,
sparsity and multi-scale structures dictate both the model
structures and effective targets for their application~\citep{devore1998.NonlinearApproximation}.
Now, let us discuss future research directions to further
our understanding of the approximation theory of sequence modelling.
Besides the obvious task of completing~\cref{tab:summary},
we may wish to ask:
\emph{What does a successful
theory of sequence approximation entail?}
While there is no singular definition of success,
it is reasonable to discuss desired outcomes
in two broad categories.
On the practical side,
one pressing need is to reduce the amount of
trial and error during model selection.
The understanding of the suitability of different model
architectures for different problem types
is essential in guiding implementations in practice.
Therefore, an important task is to formalize
a model selection workflow for sequence modelling.
This certainly requires more than approximation theory,
but the current understanding already suggests that
we should quantify the memory patterns observed in
datasets to select the model archetype.
Developing this concrete pipeline based on
well-understood theory is of great interest
and importance.
Another practical application worth noting
is the simplification of model architectures.
Modern architectures developed for specific
applications may be very complex,
and it is likely that some of their components
are not performance critical.
A theory of sequence modelling should help
to identify the components that may not be
necessary, so as to simplify and distil
the essential modelling techniques.
On the mathematical side,
following the development of classical approximation
theory~\citep{devore1998.NonlinearApproximation},
it is of interest to characterize the so-called approximation
spaces that are associated with each sequence modelling hypothesis space.
Recall that the results for the RNN in~\cref{sec:rnn}
suggest a type of approximation space in the form of
\begin{equation}
\set{C}_{\text{RNN}}
=
\{
\seq{H} \in \set{C}
:
\normCrnn{\seq H} < \infty
\},
\end{equation}
where the norm $\normCrnn{\cdot}$ may take the form
\begin{equation}
\normCrnn{\seq H}
=
\|\seq{H}\|
+
|\seq{H}|_*.
\end{equation}
Here, $\|\seq{H}\|$ is the usual uniform norm
$\|\seq{H}\| = \sup_{t}\sup_{\|\seq{x}\|_{L^\infty}\leq 1} |H_t(\seq{x})|$,
and $|\seq{H}|_*$ is a suitable semi-norm
measuring exponential decay.
For example, motivated by \cref{eq:jackson_rnn},
we may take
\begin{equation}
|\seq{H}|_*
=
\sup_{t\in\mathbb{R}}
\max_{r=1,\dots,\alpha+1}
\sup_{\seq{x} \in \set{X}_0}
|e^{\beta t} H_t^{(r)}(\seq{x})|
\end{equation}
where $\beta$ is the supremum value for which
$|e^{\beta t} H_t^{(r)}(\seq{x}) |$
remains finite for all $t$, $\seq{x} \in \set{X}_0$,
and $r=1,\dots,\alpha+1$.
The set $\set{X}_0\subset\set{X}$
is a suitable set of test sequences.
One can check that $(\set{C}_{\text{RNN}}, \normCrnn{\cdot})$
forms a normed linear subspace.
Then, the Jackson-type estimate can be
rewritten as the familiar form
\begin{equation}
\inf_{\seq{\h H} \in \set{H}_{\text{L-RNN}}^m}
\| \seq{H} - \seq{\h H} \|
\leq
\text{Constant }
\times
\frac{
\normCrnn{\seq H}
}{
m^\alpha
}.
\end{equation}
Hence, this suggests that the approximation space $\set{C}_{\text{RNN}}$
is the RNN analogue of the usual Sobolev spaces
($\mathcal{W}^{\alpha,\infty}$) that characterizes
trigonometric polynomial approximation.
The space $\set{C}_{\text{RNN}}$ is reminiscent of the
Schwartz spaces~\citep{stein2011.FourierAnalysisIntroduction}
arising in Fourier analysis,
except that we are now concerned with exponentially
(instead of polynomially) decaying derivatives.
\rev{
Note that here, we are primarily concerned with
the effect of temporal structure on approximation.
Since time is one-dimensional, regularity related
to smoothness should be expected.
In the case where the ambient dimension of the
input sequence $d$ is large (and the relationship is non-linear),
one expects that smoothness alone is insufficient
to ensure efficient approximation.
In this case, one may envision approximation spaces
with a combination of smoothness conditions in the temporal direction
and Barron-type conditions in the spatial direction.
}
Similar constructions of approximation spaces
can be made from Jackson-type results for the other architectures
we described before.
Characterizing these spaces, developing their interpolation theory,
and determining whether they correspond to familiar spaces arising in
analysis are all of keen mathematical interest.
Another aspect is characterizing the difference
between linear and non-linear approximation.
Taking the RNN as an example,
the usual RNN (with trainable $W,U,b$) is a
non-linear hypothesis space, in the sense that
the linear combination of two functional sequences from $\set{H}_{\text{RNN}}^m$
is in general a new functional sequence not in $\set{H}_{\text{RNN}}^m$,
but in $\set{H}_{\text{RNN}}^{2m}$.
On the contrary, reservoir computing systems
take $W,U,b$ as fixed random realizations,
and $\set{H}_{\text{ESN}}^m \oplus \set{H}_{\text{ESN}}^m = \set{H}_{\text{ESN}}^m$.
That is to say, $\set{H}_{\text{ESN}}^m$ is a linear approximation
space.
In classical approximation theory,
linear and non-linear (adaptive) approximation
lead to different approximation spaces%
~\citep{devore1998.NonlinearApproximation}.
It is thus of interest to investigate this distinction
for sequence modelling, e.g.,
clarifying the difference of using ESNs versus RNNs
for approximation.
Beyond approximation theory,
it is important to note that a comprehensive understanding
of sequence modelling should also account for
optimization and generalization aspects.
Indeed, principled sequence modelling in machine learning
is not only the design of model architectures,
but also how to train them
and how to regularize them to maximize
testing performance.
For example, it is observed that while
RNN training can be shown to be stable
in certain regimes%
~\citep{hardt2018.GradientDescentLearns,allen-zhu2019.ConvergenceRateTraining},
it can sometimes be provably ineffective
in the presence of long-term memory%
~\citep{li2021.CurseMemoryRecurrent,li2022.ApproximationOptimizationTheorya}.
Generalization theories have also been explored,
e.g. in
\citet{chen2019.GeneralizationBoundsFamily,tu2020.UnderstandingGeneralizationRecurrent,wang2021.ProvableGeneralizationRecurrent}.
\revtwo{
Other than deterministic model families,
there are also results on modelling sequential data via
(latent) neural controlled (stochastic) differential equations,
such as hybrid architectures with GANs~\citep{kidger2021.NeuralSdesInfinitedimensional},
universal neural operators for causality~\citep{galimberti2022.DesigningUniversalCausal},
and neural SPDE models motivated by
mild solutions~\citep{hu2022.NeuralOperatorRegularity,salvi2021.NeuralStochasticPDEs}.
Applications include time series generation~\citep{lozano2023.NeuralSDEsConditional},
irregular and long time series analysis~\citep{kidger2020.NeuralControlledDifferential,morrill2021.NeuralRoughDifferential},
and online prediction~\citep{morrill2022.ChoiceInterpolationScheme}.
These interesting aspects of sequence modelling theory
are beyond the scope of the current survey.
}
\section{Recurrent neural networks}
\label{sec:rnn}
Recurrent neural networks (RNN) are one of the earliest model
architectures proposed for modelling
sequential relationships~\citep{rumelhart1986.LearningRepresentationsBackpropagating}.
The key idea is the introduction of a hidden dynamical system
that captures the memory patterns in the sequences.
We begin by introducing the RNN architecture and its corresponding hypothesis space.
\subsection{Recurrent neural network hypothesis space}
We first consider modelling a sequential relationship on the
index set $\set{T}=\mathbb{Z}$.
Suppose that at each time $t$, the input $x(t) \in \mathbb{R}^d$ is a vector.
Without much loss of generality, we can consider the output sequence
as a scalar sequence, i.e. $y(t) \in \mathbb{R}$.
For vector-valued output sequences, one may consider each output
dimension separately to deduce corresponding results.
The (one-layer) recurrent neural network parametrizes the relationship
between an input sequence $\seq{x}$ and an output sequence $\seq{y}$
as the following discrete dynamical system%
\footnote{
There are notational variants in the literature,
e.g. sometimes the index for the input is $t-1$ instead of $t$.
Such minor variations do not affect approximation results.
}
\begin{equation}\label{eq:rnn_discrete_dynamics}
\begin{aligned}
h(t+1) &= \sigma(W h(t) + U x(t) + b), \\
y(t) &= c^\top h(t),
\end{aligned}
\qquad
t \in \mathbb{Z}.
\end{equation}
Here, $\seq{h}$ is a hidden state sequence, with each $h(t) \in \mathbb{R}^m$.
Thus, the trainable parameters are $W\in\mathbb{R}^{m\times m}$,
$U\in\mathbb{R}^{m\times d}$, $b\in\mathbb{R}^m$, and $c\in\mathbb{R}^m$.
Conventionally, we impose a zero initial condition on $\seq{h}$,
i.e. if the input sequence first becomes non-zero at a particular $t_0$
then $h(t_0) = 0$.
For theoretical treatments, we can also take $t_0 = -\infty$
to handle inputs of unbounded support.
The function $\sigma$ is an activation function, which is a scalar
function acting element-wise.
In typical RNNs, $\sigma$ is taken as the hyperbolic tangent function ($\tanh$),
but many other choices are possible.
Observe that \cref{eq:rnn_discrete_dynamics} defines a functional sequence
$\h{\seq{H}}$, with $y(t) = \h{H}_t(\seq x) = c^\top h(t)$,
and $h(t)$ satisfies the dynamics in \cref{eq:rnn_discrete_dynamics}.
Formally, we can write the RNN hypothesis space as
\begin{equation}\label{eq:Hrnn_discrete}
\begin{aligned}
\set{H}_{\text{RNN}}
&=
\bigcup_{m \in \mathbb{N}_+}
\set{H}_{\text{RNN}}^m \\
\set{H}_{\text{RNN}}^m
&=
\left\{
\begin{aligned}
\seq{\h{H}}
:
&\h{H}_t(\seq x)
=
c^\top h(t),
\seq{h} \text{ follows } \cref{eq:rnn_discrete_dynamics}
\text{ with } \\
&W\in \mathbb{R}^{m\times m},
U\in \mathbb{R}^{m\times d},
b\in \mathbb{R}^{m},
c\in \mathbb{R}^{m}
\end{aligned}
\right\}
\end{aligned}
\end{equation}
The approximation budget here is $m$, which is the width of the RNN,
or the dimension of the hidden state sequence $\seq{h}$.
Approximation theory of RNNs investigates the ability of $\set{H}_{\text{RNN}}$ and $\{\set{H}_{\text{RNN}}^m\}$
to approximate appropriate target functional sequences.
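As a concrete illustration of this hypothesis space, the following Python sketch evaluates one element of $\set{H}_{\text{RNN}}^m$ on a finite input sequence; the width $m$, the $\tanh$ activation, and the random parameter values are illustrative choices only, not constructions from the results surveyed here.
\begin{verbatim}
import numpy as np

def rnn_functional(x, W, U, b, c):
    # Evaluate y(t) = c^T h(t), where h(t+1) = tanh(W h(t) + U x(t) + b),
    # starting from the zero initial condition h(t_0) = 0.
    h = np.zeros(W.shape[0])
    y = np.zeros(len(x))
    for t in range(len(x)):
        y[t] = c @ h
        h = np.tanh(W @ h + U @ x[t] + b)
    return y

# One element of H_RNN^m with illustrative width m = 16 and input dim d = 2.
rng = np.random.default_rng(0)
m, d, T = 16, 2, 50
W = rng.standard_normal((m, m)) / np.sqrt(m)
U = rng.standard_normal((m, d))
b, c = rng.standard_normal(m), rng.standard_normal(m)
print(rnn_functional(rng.standard_normal((T, d)), W, U, b, c)[:5])
\end{verbatim}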
It is often convenient to consider a continuous variant of the RNN, i.e. $\set{T} = \mathbb{R}$.
In this case, the RNN hidden state equation is now continuous in $t$, and it can be viewed
as a time index.
The only change is that we replace the difference equation \cref{eq:rnn_discrete_dynamics}
by the differential equation
\begin{equation}\label{eq:rnn_cts_dynamics}
\begin{aligned}
\dot{h}(t) &= \sigma(W h(t) + U x(t) + b), \\
y(t) &= c^\top h(t),
\end{aligned}
\qquad
t \in \mathbb{R}.
\end{equation}
Besides theoretical advantages, some practical applications (e.g. irregularly-sampled time series)
require a continuous-index model.
The corresponding hypothesis space is analogous to \cref{eq:Hrnn_discrete} with \cref{eq:rnn_cts_dynamics}
in place of \cref{eq:rnn_discrete_dynamics}.
A remark is in order on the choice of time-index for sequence approximation.
Generally, $\set{T}$ can be discrete or continuous, and bounded or unbounded, leading to four different settings.
In addition, in each setting there is a choice of the norm that measures the approximation error.
In the simplest case where $\set{T}$ is bounded and discrete, the approximation problem is finite-dimensional.
Beyond this setting, the choice of norm generally matters.
For density-type results, the choice of discrete vs continuous $\set{T}$ is usually not important,
since they can be bridged by a discretization argument in one way and taking limits in the other.
The distinction between bounded and unbounded $\set{T}$ is however significant, and the latter
generally requires more stringent conditions and is also more important for analyzing
memory behavior that occurs at asymptotic regimes of $\set{T}$.
On the other hand, for Jackson/Bernstein-type theorems, there is a difference between
discrete and continuous $\set{T}$.
Typically, approximation error estimates for a discrete $\set{T}$ are grid-dependent,
and do not readily translate to a uniform error estimate over all discrete grid partitions.
In this sense, uniform-in-$t$ estimates for the continuous case are stronger results,
as they imply approximation rates for any grid using a discretization argument,
given some regularity conditions on the sequences to allow one to estimate the discretization error.
\subsection{Density-type results}
\label{sec:rnn_density}
As with most machine learning models, density-type results are the most basic and thus prevalent.
Such results are minimal guarantees for the general applicability of a machine learning model.
At the same time, the most theoretically interesting part about these results is
the identification of appropriate target spaces $\set{C}$ in which a particular $\set{H}$ is dense.
\paragraph{Hidden dynamic functionals.}
By observing the RNN structure, it is natural to consider target functionals that are themselves
defined via observations of a hidden dynamical system that has a compatible structure.
For instance, in continuous time index case one can consider
\begin{equation}\label{eq:fg_dynamics}
\seq x \mapsto \seq{H}(\seq{x}) = \seq{y}
\quad
\text{with}
\quad
\begin{aligned}
\dot{h}(t) &= f(h(t), x(t)), & h(t) &\in \mathbb{R}^{n},\\
y(t) &= g(h(t)), & h(-\infty) &= 0,
\end{aligned}
\end{equation}
where $f:\mathbb{R}^n \times \mathbb{R}^d \to \mathbb{R}^n$ and $g:\mathbb{R}^n \to \mathbb{R}$.
We may assume that $f$ is Lipschitz and $g$ is continuous so that
$\seq{H}$ is well-behaved.
The function $g$ is called a readout map.
Since the functions $f,g$ parameterize a functional
sequence via a hidden dynamical system,
we call them \emph{hidden dynamic functional sequences},
or $\set{C}_{HD}$ for short.
The discrete or bounded index cases are defined similarly.
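As a concrete illustration, the following sketch simulates a hidden dynamic functional of the form \cref{eq:fg_dynamics} by a forward-Euler discretization; the particular $f$ (a stable linear part plus a bounded non-linearity) and the readout $g$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

def hidden_dynamic_functional(xs, f, g, n, dt):
    # Forward-Euler simulation of h'(t) = f(h, x), y(t) = g(h(t)),
    # started from the zero initial condition.
    h = np.zeros(n)
    ys = []
    for x in xs:
        ys.append(g(h))
        h = h + dt * f(h, x)
    return np.array(ys)

# Illustrative Lipschitz f and continuous readout g (n = 4, d = 1).
n, d = 4, 1
A, B = -np.eye(n), np.ones((n, d))
f = lambda h, x: A @ h + np.tanh(B @ x)
g = lambda h: h.sum()

xs = np.sin(np.linspace(0.0, 10.0, 1000)).reshape(-1, d)
ys = hidden_dynamic_functional(xs, f, g, n, dt=0.01)
\end{verbatim}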
In the non-linear dynamics literature, \cref{eq:fg_dynamics}
is often called a non-linear time-invariant system
and the corresponding functional sequence $\seq{H}$ is
referred to as a time-invariant filter.
The term time-invariant (strictly, equivariant) highlights that $\seq{H}$ commutes with time-shifts.
To see this, denote by $\seq{S}_\tau$ the shift operator $\seq{S}_\tau(\seq{x})(t) = x(t-\tau)$,
then $\seq{H}$ satisfies $\seq{H} \circ \seq{S}_\tau = \seq{S}_\tau \circ \seq{H}$.
However, in this survey we refrain from calling them time-invariant filters,
because there may exist functional sequences that commute with time shifts,
but are not readily written in the form \cref{eq:fg_dynamics},
e.g. the shift functional sequence $H_t(\seq{x}) = \seq{S}_\tau(\seq{x})(t) = x({t-\tau})$.
\rev{
Density-type results for $\set{C}_{HD}$ are also called
\emph{universal simulation},
since they require an RNN to approximately simulate the dynamics driven by $f$
and the readout map defined by $g$.
Earlier results on {hidden dynamic functional sequences} focus on a bounded index set
(see the survey of~\citet{sontag1992.NeuralNetsSystems} and references therein,
and also~\citet{chow2000.ModelingContinuousTime,li2005.ApproximationDynamicalTimevariant}).
In these works, the main technique is to appeal to the universal approximation
theory of feed-forward networks (e.g. \citet{cybenko1989.ApproximationSuperpositionsSigmoidal}).
The simple observation is that the right-hand side of the RNN dynamics
is the feature map of a fully connected network.
Thus, by increasing $m$ one can construct an approximation of $f$ as
\begin{equation}\label{eq:rnn_rhs}
(h, x) \mapsto f(h, x) \approx (h_1, x) \mapsto \sigma(W(h_1, h_2)^\top + Ux + b),
\end{equation}
where $h_1 \in \mathbb{R}^n$ and $h_2 \in \mathbb{R}^{m-n}$.
The readout map $g$ can be handled likewise.
A similar approach is developed in~\citet{schafer2006.RecurrentNeuralNetworks}
in the discrete-time setting,
and~\citet{funahashi1993.ApproximationDynamicalSystems} for simulating
dynamics without inputs.
Since the results concern a compact time interval,
to approximate dynamics it is enough to approximate $f$.
This is in general not true for the unbounded case,
as the approximation error can be magnified by the dynamics.}
To handle unbounded $\set{T}$ (e.g. $\set{T}=\mathbb{R}$),
one strategy is to introduce
some decay properties to the targets.
One such property is the \emph{fading memory property} (FMP)
\citep{boyd1985.FadingMemoryProblema}.
Let $\seq{x_1}, \seq{x_2}$ be bounded sequences indexed by $\mathbb{R}$,
and let $\seq{H}$ be a sequence of causal, shift-equivariant
(also called time-homogeneous) functionals.
Here, causal means $H_t(\seq{x}) = H_t(\seq{x}_{(-\infty,t]})$ for all $t$.
We say that $\seq{H}$ has the FMP if
there is a monotonically decreasing function $\seq{w}:\mathbb{R}_+ \to (0, 1]$
such that for any $\epsilon > 0$ there exists $\delta > 0$
with
\begin{equation}\label{eq:fmp}
|H_t(\seq{x_1}) - H_t(\seq{x_2})| < \epsilon
\text{ whenever }
\sup_{s \in (-\infty, t]} |x_1(s) - x_2(s)| w(t-s)
<
\delta.
\end{equation}
Intuitively, this says that two input sequences
that differ more and more in their history ($t\to-\infty$)
still produce similar outputs at the present.
This is in effect requiring the memory of $\seq{H}$ to decay.
Note that due to time-equivariance, it is enough to check
this for just one $t$, say $t=0$.
Then, we can define a weighted norm on the space of
semi-infinite sequences on $(-\infty, 0]$ by
\begin{equation}
\| \seq{x} \|_{\seq w}
=
\sup_{s\in (-\infty, 0]}
|
x(s) w(-s)
|.
\end{equation}
Consequently, the FMP (\cref{eq:fmp}) is simply a continuity requirement
of $H_0$ with respect to $\normw{\cdot}$.
We denote by $\set{C}_{\text{FMP}}$ the set of causal, shift-equivariant
functional sequences satisfying the FMP.
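The weighted norm also makes the intuition behind the FMP easy to check numerically: with $w(s) = e^{-s}$, two inputs that disagree only in the distant past are close in $\normw{\cdot}$. The following sketch, with an assumed uniform sampling grid, illustrates this.
\begin{verbatim}
import numpy as np

def weighted_distance(x1, x2, dt=0.1, w=lambda u: np.exp(-u)):
    # sup_{s <= 0} |x1(s) - x2(s)| w(-s) on a grid of past times;
    # samples run from the far past (index 0) up to time 0.
    s = -dt * np.arange(len(x1))[::-1]
    return np.max(np.abs(np.asarray(x1) - np.asarray(x2)) * w(-s))

x1 = np.zeros(1000)
x2 = np.zeros(1000)
x2[:100] = 1.0  # disagreement confined to times s <= -90
print(weighted_distance(x1, x2))  # tiny: about e^{-90}
\end{verbatim}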
The FMP allows one to prove density on unbounded $\set{T}$,
e.g. in~\citet{grigoryeva2018.EchoStateNetworks}
and~\citet{gonon2021.FadingMemoryEcho}.
Indeed, the FMP property allows one to approximate $\seq{H} \in \set{C}_{\text{FMP}}$
by a truncated version on a bounded interval.
Then, approximation results can be deduced using methodologies
for the bounded case.
Note that the FMP is defined for general functional sequences,
and is not limited to the form of hidden dynamic functional sequences.
Thus, this idea can also be used to prove density
for general functionals on unbounded index sets.
\rev{
In the specific setting of hidden dynamic functional sequences, another technique for handling unbounded index sets was
proposed in~\citet{hanson2020.UniversalSimulationStable}.
Here, the authors consider dynamics driven by
$f$ that satisfy a property called
``uniformly asymptotically incrementally stable''.
This roughly says that the flow maps of $\dot{h}=f(h, x)$ are uniformly
continuous, uniformly in $\seq{x}$, and that $h(t)$ is independent
of initial condition at large $t$.
One can understand this as again a memory decay condition,
as any initial condition on $h$ is forgotten in the large time limit.
This allows one to localize the approximation of $f$ and $g$
to a compact set, which then allows one to appeal to standard
approximation results from the feed-forward networks.
}
\paragraph{General functional sequences.}
Now, we turn to more general functional sequences.
Since the RNN architecture (\cref{eq:Hrnn_discrete}) is causal
and shift-equivariant, we should restrict our attention
to target spaces satisfying the same properties.
However, we no longer assume that
these functional sequences admit a representation in the form of
\cref{eq:fg_dynamics}.
For density-type results, this distinction is not important.
This is because it is known that $\set{C}_{HD}$ is dense in $\set{C}_{\text{FMP}}$ in the norm
$\| \seq{H} \| = \sup_{t\in\mathbb{R},\seq{x}\in \set{K}} |H_t(\seq{x})|$,
where $\set{K}$ is a bounded equicontinuous set in $C(\mathbb{R})$%
~\citep[Thm.~2]{boyd1984.AnalyticalFoundationsVolterra};
see also~\citet{grigoryeva2019.DifferentiableReservoirComputing}.
The idea relies on approximation of FMP functionals by a
Volterra series~\citep{volterra1930.TheoryFunctionalsIntegral}.
The density can also be established without appealing to
the Volterra series~\citep[Thm.~8]{grigoryeva2018.EchoStateNetworks}.
Therefore, density-type results on $\set{C}_{HD}$ can be passed onto $\set{C}_{\text{FMP}}$,
provided the norms are compatible.
In the RNN case, this program is carried out in%
~\citet{grigoryeva2018.UniversalDiscretetimeReservoir,grigoryeva2018.EchoStateNetworks}.
However, we will see later that for Jackson-type results,
the choice of target spaces is important:
the rate of approximation generally depends on such choices.
It is also possible to construct an RNN approximation in $\set{C}_{\text{FMP}}$ directly,
without the need to use $\set{C}_{HD}$ as an intermediate.
For example, in~\citet{gonon2021.FadingMemoryEcho} the authors
first use the FMP to reduce the approximation problem to one over
a finite, bounded index set, and then appeal to the density of
fully connected neural networks to obtain the approximation.
It remains then to construct a (large) RNN to represent the
fully connected network.
A similar result for stochastic inputs is proved in
\citet{gonon2018.ReservoirComputingUniversality}.
Many of the aforementioned density results stem from the
\emph{reservoir computing} literature,
where researchers are interested in studying systems
such as the RNN, but with the internal weights
($W,U,b$ in \cref{eq:rnn_cts_dynamics})
being random variables.
This random version of the RNN is called an echo-state network (ESN).
From the machine learning viewpoint,
one can understand ESNs as an analogue of random feature models
corresponding to RNNs.
These models have the nice property that the hypothesis space is linear
and training these networks is a convex problem,
since only $c$ needs to be trained.
Previously mentioned results show existence of $(W,U,b)$ and $c$
to approximate each $\seq{H}$, but do not address the approximation
of classes of $\seq{H}$ by choosing only $c$ and using a
common random realization of $(W,U,b)$.
The latter approximation problem is studied in%
~\citet{gonon2021.ApproximationBoundsRandom},
where a density result with some explicit error estimates
is obtained.
Here, the primary idea is to constrain target functionals
to a subset of $\set{C}_{\text{FMP}}$ whose Fourier transform has finite third moment.
\rev{
This builds on the idea of \citet{barron1992.NeuralNetApproximation,barron1993.UniversalApproximationBounds}
where functions of this type (but with finite first and second moments)
were shown to be approximated by feed-forward neural networks without suffering the curse of dimensionality.
This is to be contrasted with a related line of work%
~\citep{e2019.PrioriEstimatesPopulation,e2020.MathematicalUnderstandingNeural,wojtowytsch2020.BanachSpacesAssociated},
which introduces a probabilistic definition of Barron-type functions via an expectation
in place of a moment condition on its Fourier transform.
In both cases, it is known that such functions can be approximated by randomly sampling
neural network weights according to a distribution to achieve approximation.
This is used in~\citet{gonon2021.ApproximationBoundsRandom} to prove
density for ESNs with random weights.
We note that in general Barron function approximations,
the random weight distributions depend on the target functions to be approximated,
whereas in~\citet{gonon2021.ApproximationBoundsRandom} the distribution
of the reservoir weights is fixed as uniform.
This comes with the cost of stronger regularity conditions,
as we will discuss later.
}
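To make the reservoir-computing setup concrete, the following sketch fixes random internal weights and trains only the linear readout $c$ by least squares; the reservoir size, the spectral-radius scaling, and the toy target are illustrative assumptions rather than the constructions used in the cited proofs.
\begin{verbatim}
import numpy as np

def esn_fit(xs, ys, m=200, spectral_radius=0.9, seed=0):
    # Echo state network: W, U, b are fixed random draws; training
    # reduces to the convex least-squares problem for the readout c.
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (m, m))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    U = rng.uniform(-1.0, 1.0, (m, xs.shape[1]))
    b = rng.uniform(-0.1, 0.1, m)
    h, H = np.zeros(m), []
    for x in xs:
        h = np.tanh(W @ h + U @ x + b)
        H.append(h)
    H = np.array(H)
    c, *_ = np.linalg.lstsq(H, ys, rcond=None)
    return c, H @ c

# Toy target with exponentially decaying memory of the input.
rng = np.random.default_rng(1)
xs = rng.standard_normal((500, 1))
ys = np.convolve(xs[:, 0], np.exp(-np.arange(50)), mode="full")[:500]
c, fit = esn_fit(xs, ys)
print(np.mean((fit - ys) ** 2))
\end{verbatim}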
\subsection{Jackson-type results}
\label{subsec:rnn_jackson}
\rev{
Compared to density-type results,
there are fewer Jackson-type results for RNNs.
In the aforementioned work of~\citet{gonon2021.ApproximationBoundsRandom},
a quantitative error estimate can be obtained by a time-truncation argument
in the discrete time index setting.
Let $\seq{H}|_T$ denote the restriction of $\seq{H}$
to sequences of length $T+1$.
Then, for each $t$ we can identify $H|_T(t)$ with a function
$H_{T,t} : \mathbb{R}^{d\times(T+1)} \rightarrow \mathbb{R}$.
If one imposes additional regularity conditions by requiring
$H_{T,t} \in \mathcal{W}^{k,2}$ for each $t$, then
one can deduce an error estimate of the form
\begin{equation}
\inf_{\seq{\h{H}} \in \set{H}_{\text{ESN}}^m}
\mathbb{E}
\left[
\| \seq{H} - \seq{\h{H}} \|^2
\right]^{1/2}
\leq
c_1
\frac{\| \seq{H}|_T \|_{W^{k,2}}}{m^{1/\alpha}}
+
c_2
\sum_{i=T+1}^{\infty} w(-i),
\end{equation}
where $\alpha > 2$ and $w$ is the weighting function
used in the definition of $\set{C}_{\text{FMP}}$.
In particular, if we consider approximation on a
bounded index set the last term vanishes,
and we obtain an approximate Monte-Carlo rate $1/\sqrt{m}$.
However, a caveat is that the smoothness requirement $k$
for this estimate to hold increases linearly with $dT$,
i.e. it becomes increasingly stringent on larger time intervals
or input dimensions.
In other words, this estimate is more useful for bounded
index sets and low input dimensions.
In the setting of {hidden dynamic functional sequences}, a similar estimate is proved in~\citet{hanson2020.UniversalSimulationStable}
for unbounded index sets.
The key assumption of uniformly asymptotically incrementally stable dynamics
(c.f. the discussion in \cref{sec:rnn_density})
is combined with the additional assumption that $f,g$ are Barron-type functions.
Then, one can obtain a Monte-Carlo error rate that decays as $1/\sqrt{m}$.
The argument is a combination of the localization argument outlined
previously for the density result, and the application of the results of
~\citet{barron1992.NeuralNetApproximation,barron1993.UniversalApproximationBounds}
on the localized compact domain.
}
A general property of these results is their reliance on time truncation;
thus the rate estimates do not explicitly account for the behavior on large
time intervals.
Jackson-type error estimates that directly operate on unbounded time domains
are proved in the linear RNN case
($\sigma(z) = z$ and $b=0$ in \labelcref{eq:rnn_cts_dynamics})%
~\citep{li2021.CurseMemoryRecurrent,li2022.ApproximationOptimizationTheorya}.
Let us call these hypothesis spaces $\set{H}_{\text{L-RNN}}$ and $\{\set{H}_{\text{L-RNN}}^m\}$.
Observe that each $\seq{\h{H}} \in \set{H}_{\text{L-RNN}}^m$ has the form
\begin{equation}\label{eq:hlrnn}
\h{H}_t(\seq{x})
=
\int_{0}^{\infty}
c^\top
e^{Ws}
U x(t-s)
ds,
\qquad
c \in \mathbb{R}^{m},
W \in \mathbb{R}^{m\times m},
U \in \mathbb{R}^{m\times d}.
\end{equation}
Here, the input space considered is $\set{X} = C_0(\mathbb{R}, \mathbb{R}^d)$,
the space of continuous vector-valued sequences
vanishing at infinity.
We will also assume that $W$ is Hurwitz
(i.e. all of its eigenvalues have negative real parts),
so that the dynamics is stable.
In this case, one can check that each $\seq{\h H}$ is linear,
continuous in the uniform norm and shift-equivariant (time-homogeneous).
In addition, it is regular in the sense that if $\seq{x}_n(t) \to 0$
for almost every $t$ then $\seq{H}(\seq{x}_n) \to 0$.
It turns out that these conditions are sufficient
for functionals in $\set{C}$ to be uniformly approximated by
linear RNNs~\citep{li2022.ApproximationOptimizationTheorya}.
The idea is straightforward:
one first shows that any linear functional sequence $\seq{H}$
satisfying these conditions admits a common Riesz representation
\begin{equation}\label{eq:commonriesz}
H_t(\seq{x})
=
\int_{-\infty}^{t}
\rho(t-s)^\top x(s) ds
=
\int_{0}^{\infty}
\rho(s)^\top x(t-s) ds.
\end{equation}
In other words, $\seq{H}$ and $\seq{\rho} \in L^1$ can be identified.
Note that the application of Riesz representation is valid since
$C_0(\mathbb{R},\mathbb{R}^d)$ is taken as the input sequence space.
In broader settings, e.g. $C(\mathbb{R},\mathbb{R}^d)$ where input sequences
need not decay at infinity,
more assumptions are required for the existence of this representation.
For example, \citet[Thm.~5]{boyd1985.FadingMemoryProblema}
shows that if $\set{X} = C(\mathbb{R},\mathbb{R})$,
$\seq{H}$ admits the form \eqref{eq:commonriesz}
if and only if $\seq{H}$ has fading memory,
in addition to the aforementioned assumptions.
Now, comparing \cref{eq:hlrnn} and \cref{eq:commonriesz},
linear RNN approximation of these functionals boils down to
\begin{equation}
|H_t(\seq{x}) - \h{H}_t(\seq{x})|
\leq
\| \seq{x} \|_{L^\infty}
\| \seq{\rho} - \seq{\h{\rho}} \|_{L^1},
\end{equation}
where $\h{\rho}(s) = [c^\top e^{Ws} U]^\top$.
Therefore, we may deduce approximation properties of targets
by linear RNNs by approximation of functions in $L^1$
by exponential sums of the form $[c^\top e^{Ws} U]^\top$.
The density of such exponential sums can be derived using
the M\"{u}ntz–Sz\'{a}sz theorem~\citep{lorentz2005.ApproximationFunctions}.
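As a numerical companion to this reduction, the following sketch fits a target kernel $\seq{\rho} \in L^1$ by an exponential sum $\sum_i c_i e^{-\lambda_i s}$, i.e. a kernel of the form $[c^\top e^{Ws}U]^\top$ with diagonal Hurwitz $W = \operatorname{diag}(-\lambda)$ in the scalar-input case; the grid, the fixed rates $\lambda_i$, and the least-squares fit are illustrative choices.
\begin{verbatim}
import numpy as np

def expsum_fit(rho_vals, rates, s):
    # Least-squares fit of rho(s) by sum_i c_i exp(-lambda_i s),
    # the memory kernel of a width-m linear RNN with W = diag(-lambda).
    Phi = np.exp(-np.outer(s, rates))
    c, *_ = np.linalg.lstsq(Phi, rho_vals, rcond=None)
    l1_err = np.trapz(np.abs(Phi @ c - rho_vals), s)
    return c, l1_err

s = np.linspace(0.0, 20.0, 4000)
rho_vals = s * np.exp(-s)  # a smooth L^1 target kernel
for m in (2, 4, 8, 16):
    _, err = expsum_fit(rho_vals, np.linspace(0.2, 5.0, m), s)
    print(m, err)  # the L^1 error typically shrinks as m grows
\end{verbatim}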
Similarly, \citet{li2022.ApproximationOptimizationTheorya}
further use this idea to prove a Jackson-type result for the error estimate.
Here enters the crucial property of memory decay.
There exists a vast literature on possible notions of memory decay
for functional sequences, see e.g.~\cite{boyd1985.FadingMemoryProblema,gonon2020.ReservoirComputingUniversality} and references therein.
In the linear case, the following simple definition suffices.
Let $\seq{e}_i = e_i \mathbf{1}_{t\geq 0}$,
$i=1,\dots,d$ with $e_i$ the unit vector in the $i$-th axis direction.
We consider targets $\seq{H}$ for which there exist
$\alpha \in \mathbb{Z}_+$ and $\beta > 0$ such that
\begin{equation}\label{eq:efmp}
e^{\beta t} H^{(r)}_t(\seq{e_i})
=
o(1),
\qquad
t \to \infty,
\qquad
i = 1, \dots, d,
\quad
1 \leq r \leq \alpha+1.
\end{equation}
Intuitively, these functionals forget input history
at a rate of at least $e^{-\beta t}$.
Thus, we may also understand them as possessing
an \emph{exponentially} decaying memory.
The main result in~\citet{li2021.CurseMemoryRecurrent}
is a Jackson-type error estimate
\begin{equation}\label{eq:jackson_rnn}
\inf_{\seq{\h H} \in \set{H}_{\text{L-RNN}}^m}
\| \seq{H} - \seq{\h H} \|
\leq
\frac{c_\alpha d \gamma}{\beta m^\alpha},
\qquad
\gamma =
\sup_{t\geq 0}
\max_{i=1,\dots,d}
\max_{r=1,\dots,\alpha+1}
\frac{|e^{\beta t} H^{(r)}_t(\seq{e}_i)|}{\beta^r},
\end{equation}
where $\|\seq{H}\| = \sup_{t} \sup_{\|\seq{x}\|_{L^\infty} \leq 1} |H_t(\seq{x})|$.
Comparing with \cref{eq:jackson_trig}, we see that
one obtains a similar rate characterized by the smoothness parameter
$\alpha$.
The new phenomenon is the assumption of exponentially decaying memory
in \cref{eq:efmp}.
The key insight here is as follows.
If we assume, in addition to the usual smoothness requirements,
that the memory of the targets decays like an exponential
(\cref{eq:efmp}), then we can efficiently approximate them using
linear RNNs.
We remark here that this result demonstrates the importance of
considering more general functional sequences than $\set{C}_{HD}$
in establishing Jackson-type results.
Assume instead that one considers {hidden dynamic functional sequences} with both $f,g$
as linear functions, i.e.
\begin{equation}
f(h, x) = W_* h + U_* x,
\quad
g(h) = c_*^\top h,
\qquad
W_*\in\mathbb{R}^{n\times n},
U_*\in\mathbb{R}^{n\times d},
c_*\in\mathbb{R}^{n}.
\end{equation}
Then, the rate estimate becomes trivial:
If $m \geq n$, then the approximation error is 0
and we have perfect representation.
However, in practice it is generally not possible
to know the precise mechanism for the generation
of the sequence data,
and a theory should handle general functional sequences.
From the Riesz representation \labelcref{eq:commonriesz}
of general linear, causal and shift-equivariant target functional sequences,
$\seq \rho$ can be any $L^1$ function, and
need not take the form of an exponential sum.
In this case, the approximation rate estimate
becomes non-trivial.
We close this discussion with the so-called
\emph{curse of memory} phenomenon identified in the above analysis
and in~\citet{li2021.CurseMemoryRecurrent}.
The density type results, including the linear RNN case,
do not require the targets to have an exponentially decaying memory in the sense of \cref{eq:efmp}.
However, the rate estimate in \cref{eq:jackson_rnn} does have this requirement.
The natural question is therefore, what if one has a slower memory decay rate?
For example, we may replace \cref{eq:efmp} by
\begin{equation}
H^{(r)}_t(\seq{e_i})
\sim
e^{-\beta t}
\quad
\longrightarrow
\quad
H^{(r)}_t(\seq{e_i})
\sim
t^{-(r+\omega)}
\quad
(\omega > 0).
\end{equation}
Then, a truncation argument in~\citet{li2022.ApproximationOptimizationTheorya}
shows that to obtain an approximation error of $\epsilon$,
the size of the RNN may need to grow rapidly,
as $m \sim \epsilon^{-1/\omega}$;
for instance, with $\omega = 1/2$, halving the error
already requires quadrupling the width $m$.
While this is not a lower bound for the optimal approximation error,
it suggests that in sequence approximation,
one may observe an issue very similar to that of approximating
ordinary functions in high dimensions.
There, it is known that the approximation budget required to
achieve a prescribed approximation error grows
like an exponential function of the dimension of the function domain.
This is known as the \emph{curse of dimensionality}.
Here, the results suggest that in sequence approximation problems using RNNs,
there lies a \emph{curse of memory}.
In particular, this affirms the empirical observation that RNNs
usually perform well when the memory in the system is short,
but suffer when approximating relationships with long-term memory%
~\citep{bengio1994.LearningLongtermDependencies}.
The result in \cref{eq:jackson_rnn} confirms the first part of
the observation.
The second part can be further demonstrated by optimization analysis
\citep{li2022.ApproximationOptimizationTheorya}
and also a Bernstein-type result, as we discuss next.
\subsection{Bernstein-type results}
Recall that Bernstein-type results deduce properties
of targets assuming that they can be efficiently approximated
by a hypothesis space.
Known Bernstein-type results for RNNs are currently
limited to linear functional sequences.
With the same set-up as the Jackson-type theorem,
\citet{li2022.ApproximationOptimizationTheorya}
proves a Bernstein-type result, which we now describe.
Let us assume that we have a target functional sequence $\seq{H}$
such that it (and its derivatives in time) can be uniformly
approximated by a sequence of linear RNNs.
That is, we assume that there is a sequence
$\seq{\h H}_m \in \set{H}_{\text{L-RNN}}^m$ such that
$\| \seq{H} - \seq{\h H}_m \| \to 0$ and that
\begin{equation}
\sup_{t\geq 0}
|
H_t^{(k)}(\seq{e}_i)
-
{\h{H}}_{m, t}^{(k)}(\seq{e}_i)
|
\to 0,
\qquad
k = 1,\dots,\alpha+1.
\end{equation}
Then, under additional technical conditions,
there must exist a $\beta > 0$
such that
\begin{equation}\label{eq:berstein_rnn}
e^{\beta t} H^{(r)}_t(\seq{e_i})
=
o(1),
\qquad
t \to \infty,
\qquad
i = 1, \dots, d,
\quad
1 \leq r \leq \alpha + 1.
\end{equation}
In other words, a target can be effectively approximated
by linear RNNs only if it has exponentially decaying memory.
This is in a sense a partial converse to the Jackson-type result
in \cref{eq:jackson_rnn}.
Together, these results show that, at least in the linear setting,
effective RNN approximation occurs if and only if
the target functional sequence has an exponentially decaying memory pattern.
Bernstein-type results can assist in designing architectures
for sequence modelling:
if one aims to model a sequential relationship whose memory
pattern does not decay exponentially,
then it is necessary to go beyond the RNN setting due to the limitations
posed by the inverse approximation result.
\rev{
At the end of \cref{sec:others_cnn},
we discuss an example given in~\citet{jiang2021.ApproximationTheoryConvolutional}
where the target functional sequence does not have an exponentially decaying memory,
and alternative architectures such as dilated convolutions are shown to be
more effective than RNNs.
}
We end this section on RNN approximation by discussing some variants of the architecture.
In the practical literature, a number of generalizations
of the simple RNN hypothesis space (\cref{eq:Hrnn_discrete})
have been proposed.
Examples include the long short-term memory (LSTM) network~\citep{hochreiter1997.LongShortTermMemory}
and gated recurrent units (GRU)~\citep{cho2014.PropertiesNeuralMachine}.
Density-type results for these networks can be directly deduced
since they often include the classical RNN as a special case
by a proper choice of its trainable parameters.
In some cases
(%
e.g. normalized RNNs in~\citet{schafer2006.RecurrentNeuralNetworks},
and deep variants with fixed width in~\citet{song2022.MinimalWidthUniversal}%
),
additional analysis is required to establish density.
However, Jackson-type rate estimates and Bernstein-type inverse theorems
(beyond those known for classical RNNs)
are generally not available for these more complex structures,
and they form an interesting direction for future work. |
{
"arxiv_id": "2302.13701",
"language": "en",
"timestamp": "2023-02-28T02:27:55",
"url": "https://arxiv.org/abs/2302.13701",
"yymm": "2302"
} | \section{Introduction}
In the interval scheduling problem, the input is a set of intervals with integral endpoints, each representing timesteps at which a process starts and ends. A scheduler's task is to decide whether to accept or reject each job so that the intervals of accepted jobs do not overlap except possibly at one of their endpoints. The objective is to maximize the number of accepted intervals, referred to as the \emph{payoff} of the scheduler. This problem is also known as \emph{fixed job scheduling} and \emph{k-track assignment}
~\cite{kolen2007interval}.
Interval scheduling is a special case of the \emph{disjoint path allocation problem},
where the input is a graph $G$ and a set of $n$ \emph{requests}, each defined by a pair of vertices in $G$. An algorithm can accept or reject each pair, given that it can form edge-disjoint paths between vertices of accepted pairs. Interval scheduling is the particular case when $G$ is a path graph.
The disjoint path allocation problem can be solved in polynomial time for
trees~\cite{GVV97} and outerplanar graphs
by a combination of~\cite{WW95,MNS85,F85},
but the problem is NP-complete for general graphs~\cite{EIS76},
and even on quite restricted graphs such as series-parallel graphs~\cite{NVZ01}.
The disjoint path problem is the same as call
control/call allocation with all bandwidths (both of the calls and the edges
they would be routed on) being equal to 1 and as the maximum multi-commodity
integral flow problem with edges having unit capacity.
In this work, we focus on the online variant of the problem, in which the set of requests is not
known in advance but is revealed in the form of a sequence $I$ of intervals. A new request must either be irrevocably accepted or rejected, subject to maintaining disjoint paths between accepted requests.
We analyze an online algorithm via a comparison with an optimal offline algorithm, \ensuremath{\operatorname{\textsc{Opt}}}\xspace.
The \emph{competitive ratio} of an online algorithm \ensuremath{\operatorname{\textsc{Alg}}}\xspace is defined as $ \inf_{I} \SET{\ensuremath{\operatorname{\textsc{Alg}}}\xspace(I)/\ensuremath{\OPT}\xspace(I)}$, where $\ensuremath{\operatorname{\textsc{Alg}}}\xspace(I)$ and $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)$, respectively, denote the payoff of \ensuremath{\operatorname{\textsc{Alg}}}\xspace and \ensuremath{\operatorname{\textsc{Opt}}}\xspace for intervals in $I$ (for randomized algorithms, $\ensuremath{\operatorname{\textsc{Alg}}}\xspace(I)$ is the expected payoff of $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$).
Since we consider a maximization problem, our ratios are between zero and one.
For interval scheduling on a path graph with $m$ edges, the competitive ratios of the best deterministic and randomized algorithms are respectively $1/m$ and $1/\lceil \log m \rceil$~\cite{BE98}. These results suggest that the constraints on online algorithms must be relaxed to compete with \ensuremath{\operatorname{\textsc{Opt}}}\xspace. Specifically, the problem has been considered in the \emph{advice complexity model} for path graphs~\cite{BBFGJKSS14,GKKKS15}, trees~\cite{BBK22}, and grid graphs~\cite{BKW22}. Under the advice model, the online algorithm can access error-free information on the input, called advice.
The objective is to quantify the trade-offs between the competitive ratio and the size
of the advice.
In recent years, there has been an increasing interest in improving the performance of online algorithms via the notion of \emph{prediction}. Here, it is assumed that the algorithm has access to machine-learned information in the form of a prediction. Unlike the advice model, the prediction may be erroneous and is quantified by an \emph{error measure $\eta$}. The objective is to design algorithms whose competitive ratio degrades gently as a function of $\eta$.
Several online optimization problems have been studied under the prediction model, including non-clairvoyant scheduling~\cite{NIPS2018_8174,WeiZ20}, makespan scheduling~\cite{lattanzi2020online}, contract scheduling~\cite{AK21,abs-2111-05281}, and other variants of scheduling problems~\cite{AzarLT21,LeeMHLSL21,BampisDKLP22,BalkanskiGT23}.
Other online problems studied under the prediction model include
bin packing~\cite{0001DJKR20,ijcaiKS22}, knapsack~\cite{Zeynali0HW21,knapsack22,BoyarFL22}, caching~\cite{DBLP:conf/icml/LykourisV18,rohatgi2020near},
matching problems~\cite{AntoniadisGKK20,LLMV20,LavastidaM0X21}, and various graph problems~\cite{ChenSVZ22,EberleLMNS22,ChenEILNRSWWZ22,AzarPT22,BanerjeeC0L23}.
See also the survey by Mitzenmacher and Vassilvitskii~\cite{mitzenmacher2020algorithms} and the collection at~\cite{ALPS}.
\subsection{Contributions}
We study the disjoint path allocation problem under a setting where the scheduler is provided with a set $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ of intervals predicted to form the input sequence $I$.
Given the erroneous nature of the prediction, some
intervals in $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ may be incorrectly predicted to be in $I$ (false positives), and some intervals in $I$ may not be included in $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ (false negatives). We let the \emph{error set} be the set of intervals that are false positives or false negatives and define the error parameter $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)$ to be the cardinality of the largest set of non-overlapping intervals in the error set, i.e., $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) = \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace)$.
We explain later that this definition of $\eta$ satisfies specific desired properties for the prediction error (Proposition~\ref{propo:proper}).
In the following, we use $\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace, I)$ to denote the payoff of an algorithm \ensuremath{\operatorname{\textsc{Alg}}}\xspace for prediction $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ and input $I$. We also define $\gamma(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) = \eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)/\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)$; this \emph{normalized error} measure is helpful in describing our results because the point of reference in the competitive analysis is $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)$.
Our first result concerns general graphs:
\begin{itemize}
\item \textbf{Disjoint-Path Allocation:} We first study a simple algorithm \ensuremath{\operatorname{\textsc{Trust}}}\xspace, which accepts a request only if it belongs to the set of intervals in a given optimal solution for $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$. We show that, for any graph $G$, any input sequence $I$, and any prediction $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$,
$\ensuremath{\operatorname{\textsc{Trust}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) \geq (1-2\ensuremath{\gamma(\IPRED,I)}\xspace) \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) $
(Theorem~\ref{pr:trustupper}). Furthermore, for any algorithm \ensuremath{\operatorname{\textsc{Alg}}}\xspace and any positive integer $p$,
there are a worst-case input sequence $I_w$ and a prediction set $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w$ over a star graph with $8p$ leaves such that
$\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) = p$
and
$\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_{w},I_{w}) \leq (1-2\ensuremath{\gamma(\IPRED_w,I_w)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_{w})\,$
(Theorem~\ref{th:lowerstar}).
Thus, \ensuremath{\operatorname{\textsc{Trust}}}\xspace achieves an optimal competitive ratio in
any graph that contains $S_8$ as a subgraph, i.e.,
any graph of maximum degree at least~8.
(A minimal sketch of \ensuremath{\operatorname{\textsc{Trust}}}\xspace, specialized to intervals, follows this list.)
\end{itemize}
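The following sketch makes the behaviour of \ensuremath{\operatorname{\textsc{Trust}}}\xspace concrete in the interval special case; the helper \texttt{opt\_schedule} (the classic earliest-finish-time greedy rule) and the representation of requests as endpoint pairs are our illustrative assumptions.
\begin{verbatim}
def opt_schedule(intervals):
    # Optimal offline interval scheduling: earliest-finish-time greedy;
    # intervals may share an endpoint without overlapping.
    chosen, last_end = [], float("-inf")
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_end:
            chosen.append((s, e))
            last_end = e
    return chosen

def trust(requests, predicted):
    # Accept a request iff it belongs to a fixed optimal solution
    # computed for the predicted instance I_hat.
    plan = set(opt_schedule(predicted))
    return [r for r in requests if r in plan]

# With a perfect prediction, Trust recovers an optimal payoff.
I = [(0, 2), (1, 3), (2, 4)]
print(trust(I, predicted=I))  # [(0, 2), (2, 4)]
\end{verbatim}
With an error-free prediction ($\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace = I$, so $\ensuremath{\gamma(\IPRED,I)}\xspace = 0$), \ensuremath{\operatorname{\textsc{Trust}}}\xspace attains $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)$, consistent with Theorem~\ref{pr:trustupper}.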
The above result demonstrates that even for trees,
the problem is so hard that no algorithm can do better than the trivial \ensuremath{\operatorname{\textsc{Trust}}}\xspace.
Therefore, our main results concern the more interesting case of path graphs, that is, interval scheduling:
\begin{itemize}
\item \textbf{Interval Scheduling:}
We first show a negative result for deterministic interval scheduling algorithms.
Given any deterministic algorithm \ensuremath{\operatorname{\textsc{Alg}}}\xspace and integer $p$, we show there are worst-case instances $I_w$ and predictions $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w$ such that $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) = p$ and
$ \ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) \leq (1 - \ensuremath{\gamma(\IPRED_w,I_w)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w)$
(Theorem~\ref{thm:generallower}, setting $c=2$).
Next, we present a negative result for \ensuremath{\operatorname{\textsc{Trust}}}\xspace.
For any positive integer $p$, we show there are worst-case instances $I_w$ and predictions $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w$ such that $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w)=p$ and
$ \ensuremath{\operatorname{\textsc{Trust}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) = (1-2\ensuremath{\gamma(\IPRED_w,I_w)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w)$
(Theorem~\ref{th:intervaltrustlower}). This suggests that there is room for improvement over \ensuremath{\operatorname{\textsc{Trust}}}\xspace.
Finally, we introduce our main technical result, a deterministic algorithm \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace that achieves an optimal competitive ratio for interval scheduling. \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace is similar to \ensuremath{\operatorname{\textsc{Trust}}}\xspace in that it maintains an optimal solution for $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$, but unlike \ensuremath{\operatorname{\textsc{Trust}}}\xspace, it updates its planned solution to accept requests greedily when it is possible without a decrease in the payoff of the maintained solution.
For any input $I$ and prediction $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$, we show that
$ \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) \geq (1-\ensuremath{\gamma(\IPRED,I)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)\,$
(Theorem~\ref{th:trustgreedymain}), which proves optimality of \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace in the light of Theorem~\ref{thm:generallower}. (A speculative sketch of \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace follows this list.)
\item \textbf{Consistency-Robustness Trade-off}: We study the trade-off between \emph{consistency} and \emph{robustness}, which measure an algorithm's competitive ratios in the extreme cases of error-free prediction (consistency) and adversarial prediction (robustness)~\cite{DBLP:conf/icml/LykourisV18}. We focus on randomized algorithms because a non-trivial trade-off is infeasible for deterministic algorithms (Proposition~\ref{prop:deterconstrob}).
Suppose that for any input $I$, an algorithm \ensuremath{\operatorname{\textsc{Alg}}}\xspace
guarantees a consistency of $\alpha < 1$ and robustness of $\beta \leq \frac{1}{\lceil \log m\rceil}$.
We show
$\alpha \leq 1-\frac{\lfloor\log m\rfloor-1}{2}\beta$
and
$\beta \leq \frac{2}{\lfloor\log m\rfloor -1}\cdot (1-\alpha)$
(Theorem~\ref{th:constrob}).
For example, to guarantee a robustness of $\frac{1}{10\lfloor \log m \rfloor}$, the consistency must be at most $19/20$, and to guarantee a consistency of $\frac{2}{3}$, the robustness must be at most $\frac{2}{3}\frac{1}{\lfloor \log m \rfloor -1}$.
We also present a family of randomized algorithms that provides an almost \emph{Pareto-optimal} trade-off between consistency and robustness (Theorem~\ref{th:robtrust}).
\item \textbf{Experiments on Real-World Data}: We compare our algorithms with the online \ensuremath{\operatorname{\textsc{Greedy}}}\xspace algorithm (which accepts an interval if and only if it does not overlap previously accepted intervals), and \ensuremath{\operatorname{\textsc{Opt}}}\xspace on real-world scheduling data from~\cite{ChapinCFJLSST99}. Our results are in line with our theoretical analysis: both \ensuremath{\operatorname{\textsc{Trust}}}\xspace and \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace are close-to-optimal for small error values; \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace is almost always better than \ensuremath{\operatorname{\textsc{Greedy}}}\xspace even for large values of error, while \ensuremath{\operatorname{\textsc{Trust}}}\xspace is better than \ensuremath{\operatorname{\textsc{Greedy}}}\xspace only for small error values.
\end{itemize}
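The following sketch is one natural reading of the description of \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace above, again specialized to intervals; the precise bookkeeping of the actual algorithm may differ, so this should be read as a speculative sketch rather than the algorithm analyzed in Theorem~\ref{th:trustgreedymain}.
\begin{verbatim}
def opt_size(intervals):
    # Payoff of an optimal schedule (earliest-finish-time greedy).
    last_end, k = float("-inf"), 0
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_end:
            last_end, k = e, k + 1
    return k

def overlaps(a, b):
    # Intervals may share an endpoint without overlapping.
    return a[0] < b[1] and b[0] < a[1]

def trust_greedy(requests, predicted):
    # Keep a planned solution over the predicted intervals; accept an
    # arriving request greedily whenever forcing it into the schedule
    # does not decrease the payoff of the maintained plan.
    accepted, plan = [], list(predicted)
    for r in requests:
        if any(overlaps(r, a) for a in accepted):
            continue
        compatible = [p for p in plan if not overlaps(p, r)]
        if 1 + opt_size(compatible) >= opt_size(plan):
            accepted.append(r)
            plan = compatible
    return accepted

# A false negative (0, 1) is accepted since it costs the plan nothing.
print(trust_greedy([(0, 1), (1, 3)], predicted=[(1, 3)]))
\end{verbatim}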
\section{Model and Predictions}
We assume that an oracle provides the online algorithm with a set \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace of requests predicted to form the input sequence $I$.
One may consider alternative predictions, such as statistical information about the input. While these predictions are compact and can be efficiently learned, they cannot help achieve close-to-optimal solutions. In particular, for interval scheduling on a path with $m$ edges, since the problem is AOC-complete, one cannot achieve a competitive ratio $c \leq 1$ with fewer than $c m/(e \ln 2)$ bits~\cite{BFKM17}.
In what follows, true positive (respectively, negative) intervals are correctly predicted to appear (respectively, not to appear) in the
request sequence. False positives and negatives are defined analogously as
those incorrectly predicted to appear or not appear.
We let \ensuremath{\operatorname{\textsc{TP}}}\xspace, \ensuremath{\operatorname{\textsc{TN}}}\xspace, \ensuremath{\operatorname{\textsc{FP}}}\xspace, \ensuremath{\operatorname{\textsc{FN}}}\xspace denote the four sets containing these different types
of intervals.
Thus, $I = \ensuremath{\operatorname{\textsc{TP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace$ and $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace = \ensuremath{\operatorname{\textsc{TP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FP}}}\xspace$.
We use $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)$ to denote the error for the input formed by the set $I$ when the set of predictions is \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace.
When there is no risk of confusion, we use $\eta$ instead of
$\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)$.
The error measure we use here is $\eta=\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)$, and hence, the normalized error measure is $\gamma=\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)/\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)$.
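For interval scheduling, the error measure is straightforward to evaluate, since $\ensuremath{\operatorname{\textsc{Opt}}}\xspace$ of a set of intervals is given by the classic earliest-finish-time greedy rule. The following sketch, with an assumed endpoint-pair representation of intervals, illustrates the computation.
\begin{verbatim}
def opt_payoff(intervals):
    # Size of a maximum set of non-overlapping intervals
    # (earliest-finish-time greedy; shared endpoints are allowed).
    count, last_end = 0, float("-inf")
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_end:
            count, last_end = count + 1, e
    return count

def error(predicted, actual):
    # eta(I_hat, I) = Opt(FP u FN).
    fp = set(predicted) - set(actual)
    fn = set(actual) - set(predicted)
    return opt_payoff(fp | fn)

I_hat = {(0, 2), (2, 4)}
I = {(0, 2), (2, 4), (4, 6), (6, 8)}
print(error(I_hat, I))  # two disjoint false negatives: eta = 2
\end{verbatim}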
Our error measure satisfies the following desirable properties, the first two of which were strongly recommended by Im et al.~\cite{IKQP21a}.
In Section~\ref{sec:otherMeasures}, we discuss natural error models, such as the Hamming distance between the request sequence and the prediction, and explain why these measures do not satisfy our desired properties.
\begin{itemize}
\item \emph{Monotonicity:}
This property ensures that increasing the number
of true positives or negatives does not
increase the error.
To be more precise, if we increase $|\ensuremath{\operatorname{\textsc{TP}}}\xspace|$ by one unit (decreasing $|\ensuremath{\operatorname{\textsc{FN}}}\xspace|$ by
one unit) or increase $|\ensuremath{\operatorname{\textsc{TN}}}\xspace|$ by one unit (decreasing $|\ensuremath{\operatorname{\textsc{FP}}}\xspace|$ by one unit),
the error must not increase.
Formally, for any $I$, \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace, the following must hold.
\begin{itemize}
\item For any $x\in I\setminus \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$,
$\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace \cup \{ x\}, I) \leq \eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)$.
\item For any $y\in \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace\setminus I$, $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace \setminus \{ y\}, I) \leq \eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)$.
\end{itemize}
\item \emph{Lipschitz property:} Let $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)$ denote the number of requests in an optimal solution for the input sequence, and $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace)$ denote the number of requests in an optimal solution for a set of predicted requests. The Lipschitz property requires the error to be at least equal to the net difference between $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)$ and $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace)$, that is,
\[\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) \geq |\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)-\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace)|\,.\]
Note that this property ensures that the error is not ``too
small''. In particular, we should not be able to decrease the error to an arbitrarily small value by adding ``dummy requests''.
For example, \emph{false discovery rate}, defined as $\frac{|\ensuremath{\operatorname{\textsc{FP}}}\xspace|}{|\ensuremath{\operatorname{\textsc{FP}}}\xspace|+|\ensuremath{\operatorname{\textsc{TP}}}\xspace|}$, does not satisfy the Lipschitz property:
an adversary can construct a bad
input and then add a lot of intervals to $I \cap \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$, contributing to $|\ensuremath{\operatorname{\textsc{TP}}}\xspace|$, that
neither the algorithm nor \ensuremath{\operatorname{\textsc{Opt}}}\xspace will choose, driving down the error.
\item \emph{Lipschitz completeness (or simply completeness):} We need the error measure to ensure that the error is not ``too large''. Consider the following example for the disjoint paths problem.
The input is formed by a set $I = A \cup B$ of requests, with $A=\{A_1, A_2, \ldots, A_k\}$ and $B = \{B_1, B_2, \ldots, B_{k-1}\}$, where the $A_i$'s are disjoint, the $B_i$'s are disjoint, and $B_i$ overlaps $A_i$ and $A_{i+1}$. The true optimal solution is then $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) = |A| = k$. Suppose the prediction is $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace = ( A \setminus \{A_1, A_2\} ) \cup B$, and note that $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace) = |B| = k-1$. The optimal solutions for $I$ and \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace are disjoint but $|\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) - \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace)| = 1$, $\ensuremath{\operatorname{\textsc{FP}}}\xspace=0$ and $\ensuremath{\operatorname{\textsc{FN}}}\xspace=2$. In this case, the error should be relatively small, independent of $k$. More generally,
the error measure must not grow with the dissimilarity between the optimal solutions for $I$ and $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$, but rather with the size of the optimal solution for $\ensuremath{\operatorname{\textsc{FP}}}\xspace$ and $\ensuremath{\operatorname{\textsc{FN}}}\xspace$. This is guaranteed by the
Lipschitz completeness, which requires
\[\eta(I,\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace) \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace \cup
\ensuremath{\operatorname{\textsc{FN}}}\xspace).\]
\end{itemize}
\begin{restatable}{proposition}{errorProposition}
\label{propo:proper}
The error measure $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) = \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)$ satisfies the properties of monotonicity, Lipschitz, and Lipschitz completeness.
\end{restatable}
\begin{proof}
\labelText{}{LLerrorProposition} We check all properties listed above:
\begin{itemize}
\item Monotonicity: First, consider increasing
the number of true positives. Let $x\in I\setminus \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$.
Since $x$ is a false negative, it
may or may not have been counted in $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)$, but removing it from
\ensuremath{\operatorname{\textsc{FN}}}\xspace (thus adding it to \ensuremath{\operatorname{\textsc{TP}}}\xspace) cannot make $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)$
larger, i.e., \[\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace\cup\{ x\},I) = \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup(\ensuremath{\operatorname{\textsc{FN}}}\xspace\setminus\{ x\})) \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)=\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)\,.\]
Similarly, for any $y \in \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace \setminus I$, $\ensuremath{\operatorname{\textsc{Opt}}}\xspace((\ensuremath{\operatorname{\textsc{FP}}}\xspace\setminus\{ y\})
\cup \ensuremath{\operatorname{\textsc{FN}}}\xspace)$ cannot be larger than $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)=\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)$, so
\[\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace\setminus\{ y\},I) = \ensuremath{\operatorname{\textsc{Opt}}}\xspace((\ensuremath{\operatorname{\textsc{FP}}}\xspace\setminus\{ y\})
\cup \ensuremath{\operatorname{\textsc{FN}}}\xspace) \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace) = \eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)\,.\]
\item Lipschitz property: We need to show that \[\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)
\geq |\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)-\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace)|\,.\]
We note that
\begin{align*}
\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) & = \ensuremath{\operatorname{\textsc{Opt}}}\xspace((\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace \setminus \ensuremath{\operatorname{\textsc{FP}}}\xspace) \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace)\\
& \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace)\\
& \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace) + \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace)\,,
\end{align*}
which implies
\[\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) - \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace) \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace) \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace).
\]
\item Lipschitz completeness: Follows trivially with the suggested bound,
since $\eta =\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)$.\qed
\end{itemize}
\end{proof}
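For intervals on a line, the error measure is also easy to compute in practice: $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(S)$ is the maximum number of pairwise disjoint intervals in $S$, which the classical earliest-finish-time greedy finds. The following Python sketch (an illustration only, not part of the formal development; intervals are pairs $(s,e)$, and touching endpoints do not count as overlapping) computes $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)=\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)$.
\begin{verbatim}
# Illustrative sketch: computing eta(pred, inp) = Opt(FP u FN)
# for intervals on a line.
def opt(intervals):
    """Maximum number of pairwise disjoint intervals,
    via the earliest-finish-time greedy (optimal on a line)."""
    count, last_end = 0, float("-inf")
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_end:            # touching endpoints do not overlap
            count, last_end = count + 1, e
    return count

def error(pred, inp):
    fp = set(pred) - set(inp)        # false positives
    fn = set(inp) - set(pred)        # false negatives
    return opt(fp | fn)
\end{verbatim}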
\subsubsection{Alternative Error Measures.}
\label{sec:otherMeasures}
In what follows, we review a few alternative error measures, each of which fails at least one of our desired properties of monotonicity, Lipschitz, and Lipschitz completeness (or simply completeness).
\begin{itemize}
\item Hamming distance between the bit strings representing the
request sequence and the predictions:
\[|\ensuremath{\operatorname{\textsc{FP}}}\xspace|+|\ensuremath{\operatorname{\textsc{FN}}}\xspace| = |I \cup \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace| - |I \cap \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace| = |(I \cup
\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace) \setminus (I \cap \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace)|\]
It fails completeness.
\item Using $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)$ and $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace)$ instead of $I$ and \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace
in the above measure:
\[\big| \big(\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) \cup \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace)\big) \setminus \big(\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)
\cap \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace)\big) \big|\]
also fails completeness, according to the example given in
connection with the definition of completeness.
\item Either $|\ensuremath{\operatorname{\textsc{FP}}}\xspace|$ or $|\ensuremath{\operatorname{\textsc{FN}}}\xspace|$ alone fails the Lipschitz property.
\item Normalizing the Hamming distance, we obtain the Jaccard
distance:
\[\frac{|I \cup \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace| - |I \cap \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace|}{|I \cup \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace|}\]
This measure is sensitive to dummy requests: the adversary can construct a bad
input and then add many intervals to $I \cap \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ that
neither the algorithm nor \ensuremath{\operatorname{\textsc{Opt}}}\xspace will choose, artificially driving down the error.
\item We also considered normalizing by the total number of possible
intervals (order $m^2$), but this measure fails the Lipschitz property, as we can make the error arbitrarily small by ``scaling up'' each edge to an arbitrarily long path, without changing the algorithms' payoffs.
\end{itemize}
\section{Disjoint-Path Allocation} \label{sect:dpa}
In this section, we show that a simple algorithm \ensuremath{\operatorname{\textsc{Trust}}}\xspace for the disjoint path allocation problem has an optimal competitive ratio for any graph of maximal degree at least~8.
\ensuremath{\operatorname{\textsc{Trust}}}\xspace simply relies on the predictions being
correct. Specifically,
it computes an optimal solution \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace in~\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace before
processing the first request. Then, it accepts
any interval in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace that arrives and rejects all others.
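For intuition, the following is a minimal Python sketch of \ensuremath{\operatorname{\textsc{Trust}}}\xspace, restricted to intervals on a line, where an optimal solution of the prediction can be computed greedily by earliest finish time; on general graphs, computing \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace requires an offline optimal disjoint-path algorithm instead.
\begin{verbatim}
# Illustrative sketch of Trust for intervals on a line.
def opt_solution(intervals):
    """One maximum set of pairwise disjoint intervals
    (earliest-finish-time greedy)."""
    chosen, last_end = [], float("-inf")
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_end:
            chosen.append((s, e))
            last_end = e
    return chosen

def trust(predicted, requests):
    plan = set(opt_solution(predicted))  # I*, fixed before the first request
    return [r for r in requests if r in plan]
\end{verbatim}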
We first establish that, on any graph,
$\ensuremath{\operatorname{\textsc{Trust}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) \geq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) - 2\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) = (1-2\ensuremath{\gamma(\IPRED,I)}\xspace) \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)$.
The proof follows by observing that
(i) false positives cause a deficit of at most $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace)$ in the schedule of $\ensuremath{\operatorname{\textsc{Trust}}}\xspace$ compared to the optimal schedule for $\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace$, (ii)
false negatives cause a deficit of at most $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace)$ in the optimal schedule for $\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace$, compared to the optimal schedule for $I$, and (iii) $ \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace) + \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace) \leq 2\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace) = 2\eta$.
\begin{restatable}{theorem}{trustUpperTheorem}
\label{pr:trustupper}
For any graph $G$, any prediction \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace, and input sequence $I$, we have
$\ensuremath{\operatorname{\textsc{Trust}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) \geq (1-2\ensuremath{\gamma(\IPRED,I)}\xspace) \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)\,.$
\end{restatable}
\begin{proof}
\labelText{}{LLtrustUpperTheorem}
Since \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace is an optimal selection from $\ensuremath{\operatorname{\textsc{TP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FP}}}\xspace$, the largest number of additional intervals that
\ensuremath{\operatorname{\textsc{Opt}}}\xspace
can accept from $I$, beyond $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace)$, is given by an
optimal selection from~\ensuremath{\operatorname{\textsc{FN}}}\xspace. Thus,
$\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace) + \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace)$, and so
$\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace) \geq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) - \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace)$.
Similarly, the largest loss that \ensuremath{\operatorname{\textsc{Trust}}}\xspace can suffer, realized when intervals that it planned
to accept from \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace do not appear, is $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace)$.
Therefore, $\ensuremath{\operatorname{\textsc{Trust}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) \geq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace) - \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace)$. Now,
\[
\begin{array}{rcl}
\ensuremath{\operatorname{\textsc{Trust}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) & \geq & \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace) - \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace) \\
& \geq & \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) - \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace) - \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace) \\
& \geq & \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) - 2\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)\\
& = & \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) - 2\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I)\\
& = & (1- 2\ensuremath{\gamma(\IPRED,I)}\xspace) \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) \end{array}
\] \qed
\end{proof}
The following result shows that Theorem~\ref{pr:trustupper} is tight for star graphs of degree 8. One can conclude that \ensuremath{\operatorname{\textsc{Trust}}}\xspace is optimal for any graph that contains stars of degree 8 as a subgraph, i.e., any graph of maximal degree at least~8.
\begin{restatable}{theorem}{djpAllocationLower}
\label{thm:star}
Let \ensuremath{\operatorname{\textsc{Alg}}}\xspace be any deterministic algorithm and $p$ be any positive integer.
On the star graph, $S_{8p}$, there exists a set of predicted intervals $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_{w}$ and a request sequence $I_w$
such that
$\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_{w},I_{w}) = p$ and
$\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_{w},I_{w}) \leq (1-2\ensuremath{\gamma(\IPRED,I)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_{w})\,$.
\label{th:lowerstar}
\end{restatable}
\begin{proof}
We consider the non-center vertices of $S_{8p}$ in $p$ groups of eight, and
handle them all identically, one group at a time, treating each group
independently.
The prediction is fixed, but the input sequence depends on the algorithm's actions.
For each group, we show that the error in the prediction is 1, and the payoff of \ensuremath{\operatorname{\textsc{Opt}}}\xspace is at least 2 units more than that of \ensuremath{\operatorname{\textsc{Alg}}}\xspace.
Given that groups do not share edges, the total error and the algorithms' payoffs are obtained by summing over all groups. Hence,
the total error equals $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_{w},I_w) = p$, and we can write $\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w) - 2 \eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w)$, that is, $\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) \leq (1-2\gamma(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w))\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w)$.
Next, we explain how an adversary defines the input for each group.
For group $0\leq i\leq p-1$, the non-center vertices are $8i+j$, where $1\leq j\leq 8$, but we refer to these vertices by the value $j$.
Let $
\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w = \SET{
(1, 2), (2, 3), (3, 4), (4, 5), (6, 7), (7, 8)}
$
be the part of the prediction relevant for the current group of eight vertices.
Both $(6,7)$ and $(7,8)$ are always included in the input sequence,
with $(6,7)$
arriving immediately before $(7,8)$. \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts at most one of them.
This is discussed in the cases below.
The first request in the input is always $(2,3)$, and \ensuremath{\operatorname{\textsc{Alg}}}\xspace can either accept or reject it.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.46\textwidth}
\centering
\scalebox{.74}{\figStarOneA}
\caption{\ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts $(2,3)$, rejects $(6,7)$ and $(7,8)$.}
\label{fig:staronea}
\end{subfigure}\hfill \vspace*{3mm}
\begin{subfigure}[b]{0.46\textwidth}
\hspace*{3mm}
\scalebox{.74}{\figStarOneB}
\caption{\ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts $(2,3)$ and $(7,x)$; here $x=6$, $y=8$.}
\label{fig:staroneb}
\end{subfigure}
\vspace*{1mm}
\begin{subfigure}[b]{0.46\textwidth}
\centering
\scalebox{.74}{\figStarTwoA}
\caption{\ensuremath{\operatorname{\textsc{Alg}}}\xspace rejects $(2,3)$, accepts $(3,4)$, and rejects both $(6,7)$ and $(7,8)$.}
\label{fig:startwoa}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.46\textwidth} \hspace*{3mm}
\scalebox{.74}{\figStarTwoB}
\caption{\ensuremath{\operatorname{\textsc{Alg}}}\xspace rejects $(2,3)$, accepts $(3,4)$ and $(7,x)$; here $x=6$ and $y=8$.}
\label{fig:startwob}
\end{subfigure} \ \\ \ \\
\centering
\begin{subfigure}[b]{0.46\textwidth}
\centering
\scalebox{.74}{\figStarTwoC}
\caption{\ensuremath{\operatorname{\textsc{Alg}}}\xspace rejects $(2,3)$, $(3,4)$, accepts $(7,x)$ and $(1,2)$; here $x=6$ and $y=8$.}
\label{fig:startwoc}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.46\textwidth} \hspace*{3mm}
\scalebox{.74}{\figStarTwoD}
\caption{\ensuremath{\operatorname{\textsc{Alg}}}\xspace rejects $(2,3)$, $(3,4)$, accepts $(7,x)$, and rejects $(1,2)$; here $x=6$ and $y=8$.}
\label{fig:startwod}
\end{subfigure}
\caption{Illustration of the proof of Theorem~\ref{th:lowerstar}. Highlighted edges indicate paths between accepted pairs.}
\end{figure}
{\bf Case \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts $\bm{(2,3)}$:}
The next interval to arrive is $(6,7)$. If \ensuremath{\operatorname{\textsc{Alg}}}\xspace rejects this interval,
the next to arrive is $(7,8)$. If \ensuremath{\operatorname{\textsc{Alg}}}\xspace also rejects this interval, then the
intervals $(1,2)$ and $(3,4)$ also arrive, but $(4,5)$ is a false positive (see Figure~\ref{fig:staronea}).
Then, \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts $\{ (1,2), (3,4), (6,7)\}$, \ensuremath{\operatorname{\textsc{Alg}}}\xspace only accepts
$\{ (2,3)\}$, and $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace\cup\ensuremath{\operatorname{\textsc{FP}}}\xspace)=1$. Thus, we may assume that \ensuremath{\operatorname{\textsc{Alg}}}\xspace
accepts at least one of $(6,7)$ and $(7,8)$, which we call $(7,x)$ where
$x\in\{ 6,8\}$. We call the other of these two edges $(7,y)$.
Then, the intervals $(1,2)$ and $(3,4)$ also arrive, along with a false
negative $(5,x)$. The interval $(4,5)$ is a false positive and is not
in the input (see Figure~\ref{fig:staroneb}). Since $(4,5)$ and $(5,x)$ share an edge, $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace\cup\ensuremath{\operatorname{\textsc{FP}}}\xspace)=1$.
$\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ accepts $\{ (2,3),(7,x)\}$, and \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts $\{ (1,2),(3,4),(5,x),
(7,y)\}$. To conclude, the error increases by 1, and \ensuremath{\operatorname{\textsc{Alg}}}\xspace's deficit to \ensuremath{\operatorname{\textsc{Opt}}}\xspace increases by 2.
{\bf Case \ensuremath{\operatorname{\textsc{Alg}}}\xspace rejects $\bm{(2,3)}$:}
The next interval to arrive is $(3,4)$.
{\bf Subcase \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts $\bm{(3,4)}$:}
As in the previous case, we consider which of $(6,7)$ and $(7,8)$ \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts.
If neither is accepted, in addition to $(2,3)$, $(4,5)$ arrives, but $(1,2)$
is a false
positive (Figure~\ref{fig:startwoa}). Again, payoffs of \ensuremath{\operatorname{\textsc{Alg}}}\xspace and \ensuremath{\operatorname{\textsc{Opt}}}\xspace are respectively 1 and 3,
and
$\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace\cup\ensuremath{\operatorname{\textsc{FP}}}\xspace)=1$. The error is increased by 1, and the net advantage of \ensuremath{\operatorname{\textsc{Opt}}}\xspace over $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ is increased by at least 2.
Next, we assume that \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts $(7,x)$ and rejects
$(7,y)$.
Then, in addition to the intervals $(2,3)$ and $(3,4)$, $(4,5)$ arrives, along with a false
negative $(1,x)$ (Figure~\ref{fig:startwob}). The interval $(1,2)$ is a false positive and is not
in the input. Since $(1,2)$ and $(1,x)$ share an edge, $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace\cup\ensuremath{\operatorname{\textsc{FP}}}\xspace)=1$.
$\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ accepts $\{ (3,4),(7,x)\}$, and \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts $\{ (1,x),(2,3),(4,5),
(7,y)\}$. Again, the error is increased by 1, and the net advantage of \ensuremath{\operatorname{\textsc{Opt}}}\xspace over $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ is increased by~2.
{\bf Subcase \ensuremath{\operatorname{\textsc{Alg}}}\xspace rejects $\bm{(3,4)}$:}
The next interval to arrive is $(1,2)$.
Regardless of whether \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts or rejects $(1,2)$,
as in the previous cases, we consider which of $(6,7)$ and $(7,8)$ \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts.
If neither is accepted, then $(2,3)$ and $(3,4)$ have already arrived, but
$(4,5)$ is a false positive. The payoff of \ensuremath{\operatorname{\textsc{Alg}}}\xspace is at most 1 if it accepts $(1,2)$ and 0 otherwise, while \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts $\{(1,2),(3,4),(6,7)\}$,
and
$\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace\cup\ensuremath{\operatorname{\textsc{FP}}}\xspace)=1$.
Thus, the error is increased by 1, and the net advantage of \ensuremath{\operatorname{\textsc{Opt}}}\xspace over $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ is increased by 2. In what follows, we assume \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts $(7,x)$ for $x\in\{6,8\}$.
{\bf Subsubcase \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts $\bm{(1,2)}$:}
Then, in addition to the intervals $(2,3)$ and $(3,4)$,
a false
negative, $(5,x)$, arrives. The interval $(4,5)$ is a false positive and is not
in the input.
Since $(4,5)$ and $(5,x)$ share an edge, $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace\cup\ensuremath{\operatorname{\textsc{FP}}}\xspace)=1$.
$\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ accepts $\{ (1,2),(7,x)\}$, and \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts
$\{ (1,2),(3,4),(5,x),(7,y)\}$ (Figure~\ref{fig:startwoc}).
As before, the error is increased by 1, and the net advantage of \ensuremath{\operatorname{\textsc{Opt}}}\xspace over $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ is increased by 2.
{\bf Subsubcase \ensuremath{\operatorname{\textsc{Alg}}}\xspace rejects $\bm{(1,2)}$:}
In this case, the interval $(4,5)$ is a false positive, and there
are no false negatives.
Thus, the payoffs of \ensuremath{\operatorname{\textsc{Alg}}}\xspace and \ensuremath{\operatorname{\textsc{Opt}}}\xspace are respectively 1 and 3, and
$\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace\cup\ensuremath{\operatorname{\textsc{FP}}}\xspace)=1$ (Figure~\ref{fig:startwod}). That is, the error is increased by 1, and \ensuremath{\operatorname{\textsc{Alg}}}\xspace's deficit compared to \ensuremath{\operatorname{\textsc{Opt}}}\xspace is increased by~2.
This completes the proof for one group of eight vertices. Repeating it independently for each of the $p$ groups of eight vertices gives the claimed result.
\qed
\end{proof}
\section{Interval Scheduling}
In this section, we show tight upper and lower bounds on the competitive ratio of a deterministic algorithm for interval scheduling.
As an introduction to the difficulties in designing algorithms for the problem,
we start by proving a general lower bound.
We show that for any deterministic
algorithm $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$, there exists an input sequence $I_w$ and a set of predictions
$\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w$ such that $\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) = \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w) - \eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w)$,
and that this can be established for any positive integer error. We also show that the competitive ratio of \ensuremath{\operatorname{\textsc{Alg}}}\xspace is arbitrarily small.
\begin{restatable}{theorem}{IntervalGeneralLower}
\label{thm:generallower}
Let $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ be any deterministic algorithm. For any positive integers $p$
and $c\in [2,m]$, there are instances $I_w$ and predictions $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w$ such
that $p \leq \ensuremath{\eta}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) \leq (c-1)p$ and
$ \ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) = (1 - \ensuremath{\gamma(\IPRED_w,I_w)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w) \leq \frac1c \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w)\,. $
\end{restatable}
\begin{proof}
$\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ will be presented with $p$ intervals of length~$c$, and
the remainder of the sequence will depend on which of these it accepts.
The prediction, however, will include the following $2p$ requests:
$\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w = \bigcup_{i=0}^{p-1} \big\{(c i, c (i+1) ), (c i, c i+1)\big\}\,.$
The input $I_w$ is formed by $p$ phases, $i\in [0,p-1]$. The
$i$th~phase starts with the true positive $(c i,c (i+1))$.
There are two cases to consider:
\begin{itemize}
\item If $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ accepts $(c i, c (i+1))$, then the phase
continues with \\
$\SETOF{(c i+j, c i+(j+1))}{0\leq j\leq c-1}.$
The first of these requests is a true positive, and the other $c-1$ are
false negatives. Note that $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ cannot accept any of these $c$
requests. The optimal algorithm rejects the
original request $(c i, c (i+1))$ and accepts all of the $c$
following unit-length requests.
\item If $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ rejects $(c i, c (i+1))$, the phase ends
with no further requests. In this case, $(c i, c i+1)$
is a false positive.
\end{itemize}
The contribution, $\eta_i$, of phase~$i$ to $|\ensuremath{\operatorname{\textsc{FP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace|$ is $\eta_i= c-1$
in the first case and $\eta_i=1$ in the second. Since the intervals in
$\ensuremath{\operatorname{\textsc{FP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace$ are disjoint, we can write
$ \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup \ensuremath{\operatorname{\textsc{FN}}}\xspace) = \sum_{i=0}^{p-1} \eta_i$ and it follows that
$p\leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup \ensuremath{\operatorname{\textsc{FN}}}\xspace) \leq(c-1)p$.
Moreover, the net advantage of \ensuremath{\operatorname{\textsc{Opt}}}\xspace over $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$
in phase~$i$ is at least~$\eta_i$:
in the first case, \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts $\eta_i+1$ and $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$ accepts one
request, and in the second case, \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts $\eta_i=1$ and $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$
accepts no requests. Given that there are $p$~phases, we can write
$\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) \leq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w) - \sum_{i=0}^{p-1} \eta_i = \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w)-\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace) = (1 - \ensuremath{\gamma(\IPRED_w,I_w)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w).$
In phases where \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts the first request, \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts $c$ times
as many requests as \ensuremath{\operatorname{\textsc{Alg}}}\xspace.
In phases where \ensuremath{\operatorname{\textsc{Alg}}}\xspace rejects the first request, \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts one
interval, and \ensuremath{\operatorname{\textsc{Alg}}}\xspace accepts no intervals.
Thus, $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w) \geq c \cdot \ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w)\,.$ \qed
\end{proof}
For $c=2$, we get $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w)=p$ and $\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) = (1-\ensuremath{\gamma(\IPRED_w,I_w)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w)$.
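For concreteness, the adversarial construction in the proof of Theorem~\ref{thm:generallower} can be phrased as the following Python sketch; the callback \texttt{alg\_accepts} (a hypothetical name) stands for the deterministic algorithm's decision on the long interval opening each phase.
\begin{verbatim}
# Illustrative sketch of the adversary from the proof above.
def adversary(alg_accepts, p, c):
    prediction = []
    for i in range(p):
        prediction += [(c*i, c*(i+1)), (c*i, c*i + 1)]
    requests = []
    for i in range(p):
        long_iv = (c*i, c*(i+1))
        requests.append(long_iv)   # phase i opens with the long interval
        if alg_accepts(long_iv):   # ... then c unit intervals arrive,
            requests += [(c*i + j, c*i + j + 1) for j in range(c)]
        # otherwise the phase ends, leaving (c*i, c*i+1) a false positive
    return prediction, requests
\end{verbatim}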
The next theorem shows that \ensuremath{\operatorname{\textsc{Trust}}}\xspace does not match the lower bound of Theorem~\ref{thm:generallower}.
The proof follows from an adversarial sequence similar to that of Theorem~\ref{thm:generallower} in which the payoff of \ensuremath{\operatorname{\textsc{Opt}}}\xspace and $\eta$ grow in phases while the payoff of \ensuremath{\operatorname{\textsc{Trust}}}\xspace stays 0.
\begin{restatable}{theorem}{intervaltrustlower}
\label{th:intervaltrustlower}
For any integer $p \geq 1$, there exists a prediction $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w$ and an
input sequence $I_w$ so that $\eta(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w, I_w) = p$
and
$ \ensuremath{\operatorname{\textsc{Trust}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w,I_w) = (1-2\ensuremath{\gamma(\IPRED_w,I_w)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w)\,. $
\end{restatable}
\begin{proof}
\labelText{}{LLintervaltrustlower}
Let the prediction be
$\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w = \bigcup_{i=0}^{p-1} \big\{(3i,3i+2), (3i+1,3i+3)\big\}\,.$
\ensuremath{\operatorname{\textsc{Trust}}}\xspace chooses an optimal solution \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace from $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w$. For each $i$,
\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace will contain either $(3i, 3i+2)$ or $(3i+1, 3i+3)$.
%
If $(3i, 3i+2)$ is in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace, that interval will be in \ensuremath{\operatorname{\textsc{FP}}}\xspace, and
\ensuremath{\operatorname{\textsc{Opt}}}\xspace will select $(3i+1, 3i+3)$, which will be a \ensuremath{\operatorname{\textsc{TP}}}\xspace-interval in~$I_w$.
Further, $I_w$ will contain the \ensuremath{\operatorname{\textsc{FN}}}\xspace-interval, $(3i, 3i+1)$.
If, instead, $(3i+1, 3i+3)$ is in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace, that interval will be in \ensuremath{\operatorname{\textsc{FP}}}\xspace, and
\ensuremath{\operatorname{\textsc{Opt}}}\xspace will select $(3i, 3i+2)$, which will be a \ensuremath{\operatorname{\textsc{TP}}}\xspace-interval in~$I_w$.
Further, $I_w$ will then contain the \ensuremath{\operatorname{\textsc{FN}}}\xspace-interval, $(3i+2, 3i+3)$.
Thus, $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w)=2p$, and for each $i$, the interval in \ensuremath{\operatorname{\textsc{FP}}}\xspace and the interval in
\ensuremath{\operatorname{\textsc{FN}}}\xspace overlap, so that $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace \cup \ensuremath{\operatorname{\textsc{FP}}}\xspace) = p$. Since $\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace=\ensuremath{\operatorname{\textsc{FP}}}\xspace$, \ensuremath{\operatorname{\textsc{Trust}}}\xspace does not
accept any intervals, so
\[\ensuremath{\operatorname{\textsc{Trust}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_w, I_w) = 0 = \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w) - 2\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace \cup \ensuremath{\operatorname{\textsc{FP}}}\xspace) = (1-2\ensuremath{\gamma(\IPRED_w,I_w)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I_w).\]
\qed\end{proof}
\subsection{\ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace}
In this section, we introduce an algorithm \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace, \ensuremath{\operatorname{\textsc{TG}}}\xspace, which achieves an optimal competitive ratio for interval scheduling.
\subsubsection{The algorithm.}
\ensuremath{\operatorname{\textsc{TG}}}\xspace starts by computing an optimal
solution, \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace, for the predicted instance \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace offline. It
plans to accept the intervals in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace and reject all others,
and it follows this plan, except possibly when the next request is
in~\ensuremath{\operatorname{\textsc{FN}}}\xspace.
\ensuremath{\operatorname{\textsc{TG}}}\xspace maintains an updated plan, $A$.
Initially, $A$ is \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace.
When a request, $r$, is in~\ensuremath{\operatorname{\textsc{FN}}}\xspace, \ensuremath{\operatorname{\textsc{TG}}}\xspace accepts $r$ if it overlaps no
previously accepted interval and can be accommodated by
replacing at most one other interval in $A$ that ends no earlier than~$r$.
In that case, $r$ is added to $A$, possibly replacing an overlapping interval
to maintain the feasibility of~$A$ (no two intervals overlap).
As a comment, only the first interval from \ensuremath{\operatorname{\textsc{FN}}}\xspace that replaces an interval~$r$
in the current~$A$ is said to ``replace'' it. There may be other intervals
from \ensuremath{\operatorname{\textsc{FN}}}\xspace that overlap~$r$ and are accepted by \ensuremath{\operatorname{\textsc{TG}}}\xspace, but they are not said to
``replace'' it.
We let $U$ denote the set of intervals in $\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace \cap \ensuremath{\operatorname{\textsc{FP}}}\xspace$ that are
not replaced during the execution of \ensuremath{\operatorname{\textsc{TG}}}\xspace.
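The following Python sketch illustrates the replacement rule, again for intervals on a line, reusing the earliest-finish-time helper \texttt{opt\_solution} from the sketch of \ensuremath{\operatorname{\textsc{Trust}}}\xspace; the half-open overlap test reflects the convention that touching endpoints do not overlap.
\begin{verbatim}
# Illustrative sketch of TrustGreedy for intervals on a line.
def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def trust_greedy(predicted, requests):
    plan = set(opt_solution(predicted))   # the plan A, initially I*
    predicted_set = set(predicted)
    accepted = []
    for r in requests:
        if r in plan:                     # a planned interval arrives
            plan.discard(r)
            accepted.append(r)
        elif r not in predicted_set:      # r is a false negative
            if any(overlaps(r, a) for a in accepted):
                continue                  # overlaps an accepted interval
            blocking = [a for a in plan if overlaps(r, a)]
            if not blocking:
                accepted.append(r)        # fits the plan without replacement
            elif len(blocking) == 1 and blocking[0][1] >= r[1]:
                plan.discard(blocking[0]) # r replaces one planned interval
                accepted.append(r)
    return accepted
\end{verbatim}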
\subsubsection{Analysis.}
Let $\ensuremath{\TG}\xspace$ denote the set of intervals chosen by \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace on input~$I$
and prediction~\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace, and \ensuremath{\OPT}\xspace the intervals chosen by the optimal algorithm.
We define the following subsets of \ensuremath{\operatorname{\textsc{TG}}}\xspace and \ensuremath{\operatorname{\textsc{Opt}}}\xspace:
\begin{itemize}
\item$ \ensuremath{\TG^{\FN}}\xspace = \ensuremath{\TG}\xspace \cap \ensuremath{\operatorname{\textsc{FN}}}\xspace$ and $\ensuremath{\OPT^{\FN}}\xspace = \ensuremath{\OPT}\xspace \cap \ensuremath{\operatorname{\textsc{FN}}}\xspace$
\item $\ensuremath{\TG^{\TP}}\xspace = \ensuremath{\operatorname{\textsc{TG}}}\xspace \cap \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace = \ensuremath{\operatorname{\textsc{TG}}}\xspace \cap \ensuremath{\operatorname{\textsc{TP}}}\xspace$ and $\ensuremath{\OPT^{\TP}}\xspace = \ensuremath{\OPT}\xspace
\cap \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace = \ensuremath{\OPT}\xspace \cap \ensuremath{\operatorname{\textsc{TP}}}\xspace$
\end{itemize}
\begin{lemma}
\label{lemma:IOPT}
Each interval $i \in \ensuremath{\OPT^{\TP}}\xspace$ overlaps an interval in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace extending no
further to the right than $i$.
\end{lemma}
\begin{proof}
Assume to the contrary that there is no interval in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace that
overlaps $i$ and ends no later than $i$. If $i$ does not overlap
anything in~\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace, we could have added $i$ to \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace and have a
feasible solution (non-overlapping intervals), contradicting the
fact that \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace is optimal.
Thus, $i$ must overlap an interval $r$ in~\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace, which, by assumption,
must end strictly later than~$i$. This
contradicts the construction of~\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace, since $i$ would have been in
\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace instead of~$r$.
\qed\end{proof}
We define a set \ensuremath{O^{\FN}}\xspace consisting of a copy of each interval in \ensuremath{\OPT^{\FN}}\xspace and
let $\ensuremath{\operatorname{\mathcal{F}}}\xspace = \ensuremath{O^{\FN}}\xspace \cup U$.
We define a mapping $f \colon \ensuremath{\OPT}\xspace \rightarrow \ensuremath{\TG}\xspace \cup \ensuremath{\operatorname{\mathcal{F}}}\xspace$ as
follows.
For each $i \in \ensuremath{\OPT}\xspace$:
\begin{enumerate}
\item \label{stepI} If there is an interval in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace that overlaps $i$ and ends no
later than $i$, then let $r$ be the rightmost such interval.
\begin{enumerate}
\item \label{stepFU} If $r \in U \cup \ensuremath{\TG^{\TP}}\xspace$, then $f(i)=r$.
\item \label{stepTGFN} Otherwise, $r$ has been replaced by some
interval $t$.
In this case, $f(i)=t$.
\end{enumerate}
\item Otherwise, by Lemma~\ref{lemma:IOPT}, $i$ belongs to \ensuremath{\OPT^{\FN}}\xspace.
\begin{enumerate}
\item \label{stepTGmI}If there is an interval in \ensuremath{\TG^{\FN}}\xspace that overlaps
$i$ and ends no later than $i$ and an interval in $U$ that overlaps
$i$'s right endpoint, let $r$ be the rightmost
interval in \ensuremath{\TG^{\FN}}\xspace that overlaps $i$ and ends no later than $i$.
In this case, $f(i)=r$.
\item \label{stepFFN} Otherwise, let $o_i$ be the copy
of $i$ in \ensuremath{O^{\FN}}\xspace. In this case, $f(i)=o_i$.
\end{enumerate}
\end{enumerate}
We let \ensuremath{F}\xspace denote the subset of \ensuremath{\operatorname{\mathcal{F}}}\xspace mapped to by $f$
and note that in step~\ref{stepFU}, intervals are
added to $F \cap U$ when $r\in U$.
In step~\ref{stepFFN}, all
intervals are added to $F \cap \ensuremath{O^{\FN}}\xspace$.
\begin{lemma}
\label{lemma:injection}
The mapping $f$ is an injection.
\end{lemma}
\begin{proof}
Intervals in $U \cup \ensuremath{\TG^{\TP}}\xspace$ are only mapped to in
step~\ref{stepFU}. Note that $U$ and $\ensuremath{\TG}\xspace$ are disjoint.
If an interval $i \in \ensuremath{\OPT}\xspace$ is mapped to an interval $r \in U \cup \ensuremath{\TG}\xspace$ in
this step, $i$ overlaps the right endpoint of $r$. There can be only
one interval in \ensuremath{\OPT}\xspace overlapping the right endpoint of $r$, so this part of the
mapping is injective.
Intervals in \ensuremath{\TG^{\FN}}\xspace are only mapped to in steps~\ref{stepTGFN}
and~\ref{stepTGmI}.
In step~\ref{stepTGFN}, only intervals that replace intervals in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace are
mapped to.
Since each interval in \ensuremath{\TG^{\FN}}\xspace replaces at most one interval in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace and
the right endpoint of each interval in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace overlaps at most one interval
in $\ensuremath{\OPT}\xspace$, no interval is mapped to twice in step~\ref{stepTGFN}.
If, in step~\ref{stepTGmI}, an interval, $i$, is mapped to an interval,
$r$, $i$ overlaps the right endpoint of $r$.
There can be only one interval in \ensuremath{\OPT}\xspace overlapping the right endpoint
of $r$, so no interval is mapped to twice in step~\ref{stepTGmI}.
We now argue that no interval is mapped to in both steps~\ref{stepTGFN}
and~\ref{stepTGmI}.
Assume that an interval, $i_1$, is mapped to an interval, $t$, in
step~\ref{stepTGFN}.
Then, there is an interval, $r$, such that $r$ overlaps the right
endpoint of $t$ and $i_1$ overlaps the right endpoint of $r$.
This means that the right endpoint of $i_1$ is no further to the left
than the right endpoint of $t$.
Assume for the sake of contradiction that an interval $i_2 \neq i_1$
is mapped to $t$ in step~\ref{stepTGmI}.
Then, $i_2$ overlaps the right endpoint of $t$, and there is an
interval, $u \in U$, overlapping the right endpoint of $i_2$.
Since $i_2$ overlaps $t$, $i_2$ must be to the left of $i_1$.
Since $i_2$ is mapped to $t$, $t$ extends no further to the right than
$i_2$.
Thus, since $r$ overlaps both $t$ and $i_1$, $r$ must overlap the
right endpoint of $i_2$, and hence, $r$ overlaps $u$.
This is a contradiction since $r$ and $u$ are both in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace.
Intervals in $F \cap \ensuremath{O^{\FN}}\xspace$ are only mapped to in
step~\ref{stepFFN} and no two intervals are mapped to the same
interval in this step.
\qed\end{proof}
\begin{lemma}
\label{lemma:feasible}
The subset \ensuremath{F}\xspace of \ensuremath{\operatorname{\mathcal{F}}}\xspace mapped to by $f$ is a feasible
solution.
\end{lemma}
\begin{proof}
We first note that $F \cap U$ is feasible since $F \cap U
\subseteq U \subseteq \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace$ and \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace is feasible.
Moreover, $\ensuremath{F}\xspace \cap \ensuremath{O^{\FN}}\xspace$ is feasible since the intervals of $\ensuremath{F}\xspace
\cap \ensuremath{O^{\FN}}\xspace$ are identical to the corresponding subsets of \ensuremath{\OPT}\xspace.
Thus, we need to show that
no interval in $\ensuremath{F}\xspace \cap U$ overlaps any interval in $\ensuremath{F}\xspace \cap \ensuremath{O^{\FN}}\xspace$.
Consider an interval $u \in \ensuremath{F}\xspace \cap U$ mapped to from an interval
$i \in \ensuremath{\OPT}\xspace$.
Since $i$ is not mapped to its own copy in \ensuremath{\operatorname{\mathcal{F}}}\xspace, its copy does not belong to \ensuremath{F}\xspace.
Since $i \in \ensuremath{\OPT}\xspace$, no interval in $\ensuremath{F}\xspace \cap \ensuremath{O^{\FN}}\xspace$ overlaps $i$.
Thus, we need to argue that $\ensuremath{F}\xspace \cap
\ensuremath{O^{\FN}}\xspace$ contains no interval strictly to the left of $i$ overlapping $u$.
Assume for the sake of contradiction that there is an interval $\ell \in
\ensuremath{F}\xspace \cap \ensuremath{O^{\FN}}\xspace$ to the left of $i$ overlapping $u$.
Since $\ell$ ended up in \ensuremath{F}\xspace although its right endpoint is
overlapped by an interval from $U$, there is no interval in \ensuremath{\operatorname{\mathit{I^\ast}}}\xspace (because of
step~\ref{stepI} in the mapping algorithm) or in $\ensuremath{\TG^{\FN}}\xspace$ (because of step~\ref{stepTGmI} in the
mapping algorithm)
overlapping $\ell$ and ending no later than $\ell$.
Thus, $\ensuremath{\operatorname{\mathit{I^\ast}}}\xspace \cup \ensuremath{\TG^{\FN}}\xspace$ contains no interval strictly to the left of $u$
overlapping $\ell$.
This contradicts the fact that $u$ has not been replaced since the
interval in \ensuremath{\OPT^{\FN}}\xspace corresponding to $\ell$ could have replaced it. \qed
\end{proof}
The following theorem follows from Lemmas~\ref{lemma:injection} and \ref{lemma:feasible}.
\begin{restatable}{theorem}{trustgreedymain}
\label{th:trustgreedymain}
For any prediction \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace and any input sequence $I$, we have
\[\ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) \geq (1-\ensuremath{\gamma(\IPRED,I)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)\,.\]
\end{restatable}
\begin{proof}
\labelText{}{LLtrustgreedymain}
We show that
\[\ensuremath{\operatorname{\textsc{TG}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) \geq \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) - \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace) = (1-\ensuremath{\gamma(\IPRED,I)}\xspace)\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I):\]
\begin{align*}
\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) & \leq |\ensuremath{\TG}\xspace| + |\ensuremath{F}\xspace|,
\text{ since, by Lemma~\ref{lemma:injection}, $f$ is an injection}\\
& \leq |\ensuremath{\TG}\xspace| + \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\mathcal{F}}}\xspace),
\text{ since, by Lemma~\ref{lemma:feasible}, \ensuremath{F}\xspace is feasible} \\
&\leq |\ensuremath{\TG}\xspace| + \ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace \cup \ensuremath{\operatorname{\textsc{FN}}}\xspace), \text{ since } U \subseteq
\ensuremath{\operatorname{\textsc{FP}}}\xspace \text{ and } \ensuremath{\OPT^{\FN}}\xspace \subseteq \ensuremath{\operatorname{\textsc{FN}}}\xspace
\end{align*}
\qed\end{proof}
\section{Consistency-Robustness Trade-off}
We study the trade-off between the competitive ratio of the interval scheduling algorithm when predictions are error-free (consistency) and when predictions are adversarial (robustness).
The following proposition shows an obvious trade-off between the consistency and robustness of deterministic algorithms.
\begin{restatable}{proposition}{deterconstrob}
If a deterministic algorithm has non-zero consistency, $\alpha$, it has robustness $\beta \leq \frac{1}{m}$.
\label{prop:deterconstrob}
\end{restatable}
\begin{proof}
\labelText{}{LLdeterconstrob}
Consider a prediction consisting of a single long interval,
$\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace = \{(0,m)\}$. In order to have non-zero consistency, $\alpha$,
the algorithm must accept this interval when it arrives as the first request,
since it might be the only interval in the sequence.
Suppose the input $\sigma$ is
$(0,m),(0,1),(1,2),(2,3),\ldots, (m-1,m)\,.$
Then the algorithm accepts only $(0,m)$, while \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts the $m$ intervals of length~$1$, giving robustness~$\frac{1}{m}$.
\qed\end{proof}
The more interesting case is randomized algorithms. The proof of the following was inspired
by the proof of Theorem 13.8 in~\cite{BE98}
for the online case without predictions, and that $\mathrm{\Omega}(\log m)$ result was
originally proven in~\cite{ABFR94}.
\begin{theorem}\label{th:constrob}
If a (possibly randomized) algorithm \ensuremath{\operatorname{\textsc{Alg}}}\xspace is both $\alpha$-consistent and $\beta$-robust,
then
$\alpha \leq 1-\frac{\lfloor\log m\rfloor-1}{2}\beta$
and
$\beta \leq \frac{2}{\lfloor\log m\rfloor -1}\cdot (1-\alpha)$.
\end{theorem}
\begin{proof}
Let $r = \lfloor\log m\rfloor-1$ and let $m'=2^{r+1}$.
Consider a prediction
$\sigma=\langle\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_0, \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_1,\ldots, \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_{r}, \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace'\rangle$, where
$\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace'=\langle (0,1),(1,2),\ldots,(m'-1,m')\rangle$
and, for $0\leq i\leq r$,
$\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_i=\langle (0,m'/2^i),(m'/2^i,2m'/2^i),\ldots,(m'-m'/2^i,m')\rangle$. Note that $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_i$ consists of $2^i$ disjoint intervals of length $m'/2^i$.
For $0\leq i\leq r$, let $\sigma_i=\langle \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_0, \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_1,\ldots, \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_i\rangle$.
In order to maximize the number of small intervals that can be accepted
if they arrive, an algorithm would minimize the (expected) fraction of the line
occupied by the larger intervals, to leave space for the small intervals, while
maintaining $\beta$-robustness.
Since $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\sigma_0) = 1$ and \ensuremath{\operatorname{\textsc{Alg}}}\xspace is $\beta$-robust,
$E[\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\sigma_0)]\geq \beta$.
For $\sigma_i$ with $i\geq 1$, \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepts all intervals in $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace_i$, so $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\sigma_i)=2^i$. To be $\beta$-robust,
the
expected number of intervals of length at most $m'/2^i$ that $\ensuremath{\operatorname{\textsc{Alg}}}\xspace$
accepts is at least
$2^i\beta$.
Inductively, for $i\geq 1$, by the linearity of expectations, this is at
least
$2^{i-1}\beta$
intervals of length~$m'/2^i$, and these intervals have a total
expected size
of at least
$2^{i-1}\beta \times m'/2^i = \frac{m'}{2}\beta$.
Again, by the linearity of expectations, for $\sigma_r$, the expected sum
of the lengths of the accepted intervals is at least
$\sum_{i=0}^r \frac{m'}{2}\beta=\frac{m' (r+1)}{2}\beta$.
From $\sigma_r$, the expected number of intervals \ensuremath{\operatorname{\textsc{Alg}}}\xspace must have
accepted is at least
$2^r\beta$.
If $\sigma$ is the
actual input sequence, then the
predictions are correct, so for \ensuremath{\operatorname{\textsc{Alg}}}\xspace to be $\alpha$-consistent,
we must have
$E[\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\sigma)]\geq m'\alpha$.
Since also
$2^r\beta+(m'-\frac{m' (r+1)}{2}\beta)\geq E[\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\sigma)]$,
we can combine these two inequalities and obtain
$\frac{2^r}{m'}\beta+1-\frac{r+1}{2}\beta\geq\alpha$.
Since $\frac{2^r}{m'}=\frac12$,
this reduces to
$\alpha\leq 1-\frac{r}{2}\beta$.
Solving for $\beta$,
$\beta\leq \frac{2}{r}(1-\alpha)$.
\qed
\end{proof}
Note that as $\alpha$ approaches 1 (optimal consistency), $\beta$ goes to $0$ (worst-case robustness) and vice-versa.
Next, we present a family of algorithms, \textsc{RobustTrust}\xspace, which has a parameter $0\leq \alpha \leq 1$ and works as follows.
With a probability of
$\alpha$,
\textsc{RobustTrust}\xspace applies \ensuremath{\operatorname{\textsc{TG}}}\xspace.
(Applying \ensuremath{\operatorname{\textsc{Trust}}}\xspace, instead of \ensuremath{\operatorname{\textsc{TG}}}\xspace, gives the same consistency and robustness
results.) With probability
$1-\alpha$,
\textsc{RobustTrust}\xspace ignores the predictions, and applies the Classify-and-Randomly-Select (\ensuremath{\operatorname{\textsc{Crs}}}\xspace) algorithm described in Theorem 13.7 in~\cite{BE98}.
\ensuremath{\operatorname{\textsc{Crs}}}\xspace is strictly $\lceil \log m \rceil$-competitive (they use ratios at least one). A similar algorithm
was originally proven $O(\log m)$-competitive in~\cite{ABFR94}.
For completeness, we include the \ensuremath{\operatorname{\textsc{Crs}}}\xspace algorithm.
To avoid the problem of $m$ possibly not being a power of $2$, we define
$j=\lceil \log m \rceil$ and $m'=2^j$. Thus, the algorithm will define
its behavior for a longer line and some sequences that cannot exist.
We define a set of $\lceil\log m\rceil$ levels for the possible requests.
Since $m'$ is a power of two, there is an odd number of edges, so
the middle edge, $e_1$, in the line is well
defined. The set $E_1=\{ e_1\}$ and Level~1 consists of all intervals
containing $e_1$. After Levels~1 through $i$ are defined, we define $E_{i+1}$
and Level~$i+1$ as follows: After removing all edges in $E_1\cup E_2 \cup
\cdots \cup E_i$ from the line, we are left with $2^{i}$ segments, each consisting
of $2^{j-i}$ vertices. The set $E_{i+1}$ consists of the middle edges of these
segments, and Level $i+1$ consists of all intervals, not in any of the Levels
$1$ through $i$, but containing an edge in $E_{i+1}$. Thus, the levels create a partition
of all possible intervals.
The algorithm \ensuremath{\operatorname{\textsc{Crs}}}\xspace initially chooses a level $i$ between $1$ and $j$, each with
probability $\frac{1}{j}$. It accepts any interval in Level~$i$ that does not
overlap an interval it already has accepted. Any intervals not in Level~$i$
are rejected.
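A Python sketch of the level computation and of \ensuremath{\operatorname{\textsc{Crs}}}\xspace follows (an illustration only; we index the $m'$ vertices $0,\ldots,m'-1$, so an interval $(a,b)$ occupies edges $a,\ldots,b-1$, and \texttt{robust\_trust} reuses \texttt{trust\_greedy} from the earlier sketch).
\begin{verbatim}
import math, random

def crs_level(a, b, j):
    """Level of interval (a, b) on a line with 2**j vertices."""
    lo, hi = 0, 2 ** j
    for level in range(1, j + 1):
        mid = (lo + hi) // 2
        if a <= mid - 1 < b:       # contains the middle edge (mid-1, mid)
            return level
        if b < mid:                # lies entirely in the left segment
            hi = mid
        else:                      # lies entirely in the right segment
            lo = mid

def crs(requests, m):
    j = max(1, math.ceil(math.log2(m)))
    level = random.randint(1, j)   # one level, chosen uniformly at random
    accepted = []
    for r in requests:
        if crs_level(r[0], r[1], j) == level and \
           not any(a[0] < r[1] and r[0] < a[1] for a in accepted):
            accepted.append(r)
    return accepted

def robust_trust(alpha, predicted, requests, m):
    if random.random() < alpha:    # with probability alpha, trust
        return trust_greedy(predicted, requests)
    return crs(requests, m)        # otherwise ignore the predictions
\end{verbatim}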
When \textsc{RobustTrust}\xspace applies \ensuremath{\operatorname{\textsc{TG}}}\xspace and the predictions are correct, it accepts exactly as many intervals as there are in the optimal solution.
From these observations, we can get the following results.
\begin{restatable}{theorem}{robtrust}\label{th:robtrust}
\textsc{RobustTrust}\xspace (\ensuremath{\operatorname{\textsc{Rt}}}\xspace) with parameter $\alpha$
has consistency at least $\alpha$ and robustness at least $\frac{1-\alpha}{\lceil \log m \rceil}$.
\end{restatable}
\begin{proof}
\labelText{}{LLrobtrust}
We analyze \textsc{RobustTrust}\xspace both when all predictions are correct (consistency) and when some predictions may be incorrect (robustness).
Suppose all predictions are correct. \textsc{RobustTrust}\xspace applies \ensuremath{\operatorname{\textsc{TG}}}\xspace with probability $\alpha$.
Since \ensuremath{\operatorname{\textsc{TG}}}\xspace is optimal when all predictions are correct, the expected payoff of \textsc{RobustTrust}\xspace is at least $\alpha \cdot \ensuremath{\operatorname{\textsc{Opt}}}\xspace$.
Therefore, the competitive ratio (consistency) of \textsc{RobustTrust}\xspace is at least~$\alpha$.
Suppose some predictions are incorrect.
If the intervals in Level~$i$
are the only intervals given, and \ensuremath{\operatorname{\textsc{Crs}}}\xspace chooses that level,
\ensuremath{\operatorname{\textsc{Crs}}}\xspace accepts as many intervals as \ensuremath{\operatorname{\textsc{Opt}}}\xspace does,
since each interval in Level~$i$ contains an edge in $E_i$, and no intervals
containing more than one edge in $E_i$ exist. Since the number of levels
is $\lceil\log m\rceil$, the expected number of
intervals from \ensuremath{\operatorname{\textsc{Opt}}}\xspace's configuration that \ensuremath{\operatorname{\textsc{Crs}}}\xspace accepts on any given level is
$\frac{1}{\lceil\log m\rceil}$ times the number of intervals \ensuremath{\operatorname{\textsc{Opt}}}\xspace accepted from that level,
so by the linearity of expectations, this totals
$\frac{1}{\lceil \log m \rceil}\ensuremath{\operatorname{\textsc{Opt}}}\xspace$. \ensuremath{\operatorname{\textsc{Crs}}}\xspace is
chosen with probability $1-\alpha$, so the robustness is
at least $\frac{1-\alpha}{\lceil \log m \rceil}$.
\qed\end{proof}
\begin{table}[!b]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
name & input size ($N$) & no.\ timesteps ($m$) & max.\ interval length & avg.\ interval length \\
\hline \hline
LLNL-uBGL-2006-2 & 13,225 & 16,671,553 & 14,403 & 1,933.92\\
\hline
NASA-iPSC-1993-3.1 & 18,066 & 7,947,562 & 62,643 & 772.21 \\
\hline
CTC-SP2-1996-3.1 & 77,205 & 8,986,769 & 71,998 & 11,279.61 \\
\hline
SDSC-DS-2004-2.1 & 84,893 & 31,629,689 & 6,589,808 & 7,579.36 \\
\hline
\end{tabular}
\caption{Details on the benchmarks from~\cite{ChapinCFJLSST99} used in our experiments.\label{tablek}\label{table:stat}}
\end{table}
\section{Experimental Results}
We present an experimental evaluation of \ensuremath{\operatorname{\textsc{Trust}}}\xspace and \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace in comparison with the \ensuremath{\operatorname{\textsc{Greedy}}}\xspace algorithm, which serves as a baseline online algorithm, and \ensuremath{\operatorname{\textsc{Opt}}}\xspace, which serves as the performance upper bound.
We evaluate our algorithms using real-world scheduling data for parallel machines~\cite{ChapinCFJLSST99}. Each benchmark from~\cite{ChapinCFJLSST99} specifies the start and finish times of tasks as scheduled on parallel machines with several processors.
We use these tasks to generate inputs to the interval scheduling problem; Table~\ref{tablek} details the interval scheduling inputs we generated from benchmarks of~\cite{ChapinCFJLSST99}.
For each benchmark with $N$ tasks, we create an instance $I$ of an interval scheduling problem by randomly selecting $n = \lfloor N/2 \rfloor$ tasks from the benchmark and randomly permuting them. This sequence serves as the input to all algorithms. To generate the prediction, we consider $1000$ equally spaced values of $d \in [0,n]$. For each value of $d$, we initialize the prediction set $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ with the set of intervals in $I$, remove $|\ensuremath{\operatorname{\textsc{FN}}}\xspace|=d$ randomly selected intervals from $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$, and add to it $|\ensuremath{\operatorname{\textsc{FP}}}\xspace|=d$ randomly selected intervals from the remaining $N-n$ tasks in the benchmark. The resulting set $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ is given to \ensuremath{\operatorname{\textsc{Trust}}}\xspace and \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace as the prediction. For each value of $d$, we compute the normalized error $\gamma(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) = \frac{\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FN}}}\xspace\cup \ensuremath{\operatorname{\textsc{FP}}}\xspace)}{\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)}$, and report the payoff of \ensuremath{\operatorname{\textsc{Trust}}}\xspace and \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace as a function of $\gamma$.
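The generation procedure can be summarized by the following Python sketch (hypothetical function and variable names; the experiments follow the same steps), producing an input sequence and a prediction with $d$ false negatives and $d$ false positives.
\begin{verbatim}
import random

def make_instance(tasks, d, seed=0):
    """tasks: list of (start, finish) pairs from a benchmark."""
    rng = random.Random(seed)
    pool = list(tasks)
    rng.shuffle(pool)
    n = len(pool) // 2
    I, rest = pool[:n], pool[n:]       # input sequence and unused tasks
    pred = set(I)
    pred -= set(rng.sample(I, d))      # remove d intervals: false negatives
    pred |= set(rng.sample(rest, d))   # add d unused tasks: false positives
    return I, pred
\end{verbatim}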
\begin{figure}[!b]
\centering
\begin{subfigure}[b]{0.495\textwidth}
\hspace*{-.7cm}
\centering
\includegraphics[page=3,trim = 3.3cm 15.55cm 3.2cm 3.25cm,clip,scale=.55]{allplots}
\caption{LLNL-uBGL-2006-2}
\label{fig:LLNLEqual}
\end{subfigure}
\begin{subfigure}[b]{0.495\textwidth} \hspace*{-.7cm}
\centering
\includegraphics[page=11,trim = 3.3cm 6.7cm 3.2cm 12.1cm,clip,scale=.55]{allplots}
\caption{ SDSC-DS-2004-2.1-cln}
\label{fig:SDSCEqual}
\end{subfigure}
\begin{subfigure}[b]{0.495\textwidth}
\centering
\includegraphics[page=9,trim = 3.3cm 15.7cm 3.2cm 2.8cm,clip,scale=.55]{allplots}
\caption{CTC-SP2-1996-3.1-cln}
\label{fig:CTCEqual}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.495\textwidth}
\hspace*{-.7cm} \centering
\includegraphics[page=5,trim = 3.3cm 6.7cm 3.2cm 12.1cm,clip,scale=.55]{allplots}
\caption{ NASA-iPSC-1993}
\label{fig:NASAEqual}
\end{subfigure}
\caption{Payoff as a function of normalized error value \vspace*{1mm}}
\label{fig:mainexp}
\end{figure}
\begin{figure}[!b]
\centering
\begin{subfigure}[b]{0.495\textwidth}
\hspace*{-.7cm}
\centering
\includegraphics[page=1,trim = 3.3cm 15.55cm 3.2cm 3.25cm,clip,scale=.55]{allplots}
\caption{LLNL-uBGL-2006-2, no \ensuremath{\operatorname{\textsc{FP}}}\xspace}
\label{fig:LLNLNOFP}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.495\textwidth}
\hspace*{-.7cm}
\centering
\includegraphics[page=1,trim = 3.3cm 6.7cm 3.2cm 12.1cm,clip,scale=.55]{allplots}
\caption{LLNL-uBGL-2006-2, no \ensuremath{\operatorname{\textsc{FN}}}\xspace}
\label{fig:LLNLNoFN}
\end{subfigure} \vspace*{3mm} \\
\begin{subfigure}[b]{0.495\textwidth}
\hspace*{-.7cm} \centering
\includegraphics[page=9,trim = 3.3cm 5.6cm 3.2cm 13.2cm,clip,scale=.55]{allplots}
\caption{ SDSC-DS-2004-2.1-cln, no \ensuremath{\operatorname{\textsc{FP}}}\xspace}
\label{fig:SDSCNoFP}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.495\textwidth}
\hspace*{-.7cm} \centering
\includegraphics[page=11,trim = 3.3cm 15.55cm 3.2cm 3.25cm,clip,scale=.55]{allplots}%
\caption{ SDSC-DS-2004-2.1-cln, no \ensuremath{\operatorname{\textsc{FN}}}\xspace}
\label{fig:SDSCNoFN}
\end{subfigure}
\caption{Payoff as a function of normalized error value in the absence of false positives (a), (c) and false negatives (b), (d).}
\label{fig:three graphs}
\end{figure}
Figure~\ref{fig:mainexp} shows the results for four representative benchmarks from~\cite{ChapinCFJLSST99}, namely, LLNL (the workload of the BlueGene/L system installed at Lawrence Livermore National Lab), SDSC (the workload log from San Diego Supercomputer Center), NASA-iPSC (the scheduling log from the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center), and CTC-SP2 (the Cornell Theory Center IBM SP2 log). These four benchmarks are selected to represent a variety of input sizes and interval lengths. The results are aligned with our theoretical findings: \ensuremath{\operatorname{\textsc{Trust}}}\xspace quickly becomes worse than \ensuremath{\operatorname{\textsc{Greedy}}}\xspace as the error value increases, while \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace degrades gently as a function of the prediction error. In particular, \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace is better than \ensuremath{\operatorname{\textsc{Greedy}}}\xspace for almost all error values. We note that \ensuremath{\operatorname{\textsc{Greedy}}}\xspace
performs better when there is less overlap between the input intervals, which is the case in LLNL compared to SDSC. In an extreme case, when no two intervals overlap, \ensuremath{\operatorname{\textsc{Greedy}}}\xspace is trivially optimal. Nevertheless, even for LLNL, \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace is not much worse than \ensuremath{\operatorname{\textsc{Greedy}}}\xspace for extreme values of error: the payoffs for the largest normalized error of $\gamma = 1.87$ were 5149 and 5198 for \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace and \ensuremath{\operatorname{\textsc{Greedy}}}\xspace, respectively. Note that for SDSC, where there are more overlaps between intervals, \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace is strictly better than \ensuremath{\operatorname{\textsc{Greedy}}}\xspace, even for the largest error values. It is worth noting that, in an extreme case, where $\ensuremath{\operatorname{\textsc{FP}}}\xspace=\ensuremath{\operatorname{\textsc{FN}}}\xspace=n$, the prediction contains a completely different set of intervals from the input sequence. In that case, $|\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace| = 2n$, and
$\gamma = \frac{\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)}{\ensuremath{\operatorname{\textsc{Opt}}}\xspace(I)}$ takes values in $[1.5,2]$.
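For concreteness, $\ensuremath{\operatorname{\textsc{Opt}}}\xspace$ and the normalized error can be computed in a few lines; the following Python sketch is purely illustrative (the set-of-pairs interval representation is an assumption, not the benchmark format) and uses the classic earliest-finish-time greedy algorithm for $\ensuremath{\operatorname{\textsc{Opt}}}\xspace$:

\begin{verbatim}
def opt(intervals):
    # Earliest-finish-time greedy: computes Opt(S), the maximum
    # number of pairwise disjoint intervals in S.
    count, last_end = 0, float('-inf')
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:      # half-open intervals [start, end)
            count += 1
            last_end = end
    return count

def gamma(I, I_hat):
    # Normalized error Opt(FP u FN) / Opt(I), where FP are predicted
    # but absent intervals and FN are present but unpredicted ones.
    FP = set(I_hat) - set(I)
    FN = set(I) - set(I_hat)
    return opt(FP | FN) / opt(I)
\end{verbatim}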
We also experiment in a setting where false positives and false negatives contribute differently to the error set. We generate the input sequences in the same way as in the previous experiments. To generate the prediction set $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$, we consider $1000$ equally spaced values of $d$ in the range $[0,n]$ as before. We first consider a setting in which all error is due to false negatives; for that, we generate $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ by removing $d$ randomly selected intervals from $I$. In other words, $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ is a subset of the intervals in $I$.
Figures~\ref{fig:LLNLNOFP} and~\ref{fig:SDSCNoFP} illustrate the payoff of \ensuremath{\operatorname{\textsc{Trust}}}\xspace and \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace in this case. We note that \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace is strictly better than both \ensuremath{\operatorname{\textsc{Trust}}}\xspace and \ensuremath{\operatorname{\textsc{Greedy}}}\xspace. In the extreme case $d=n$, $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ becomes empty and \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace reduces to \ensuremath{\operatorname{\textsc{Greedy}}}\xspace; that is, \ensuremath{\operatorname{\textsc{Greedy}}}\xspace is the same algorithm as \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace with an empty prediction set.
We also consider a setting in which there are no false negatives. For that, we generate $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ by adding $d$ intervals to $I$; in other words, $\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace$ is a superset of the intervals in $I$. Figures~\ref{fig:LLNLNoFN} and~\ref{fig:SDSCNoFN} illustrate the payoff of \ensuremath{\operatorname{\textsc{Trust}}}\xspace and \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace in this case, which is similar to the setting where both false positives and false negatives contributed to the error set. In particular, \ensuremath{\operatorname{\textsc{Trust}}}\xspace quickly becomes worse than \ensuremath{\operatorname{\textsc{Greedy}}}\xspace as the error increases, while \ensuremath{\operatorname{\textsc{TrustGreedy}}}\xspace degrades gently as a function of the prediction error.
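Both error regimes are straightforward to reproduce; in the minimal sketch below, the helper names and the \texttt{horizon} parameter are hypothetical and for illustration only:

\begin{verbatim}
import random

def predictions_fn_only(I, d):
    # All error from false negatives: drop d random intervals,
    # so I_hat is a subset of I and FP is empty.
    return random.sample(list(I), len(I) - d)

def predictions_fp_only(I, d, horizon):
    # All error from false positives: add d random intervals not
    # in I, so I_hat is a superset of I and FN is empty.
    existing, extra = set(I), set()
    while len(extra) < d:
        a, b = sorted(random.sample(range(horizon), 2))
        if (a, b) not in existing:
            extra.add((a, b))
    return list(I) + list(extra)
\end{verbatim}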
\section{Related Problems: Matching and Independent Set}
In~\cite{GVV97}, the authors
observe that finding
disjoint paths on stars is equivalent to finding maximal matchings on
general graphs, where each request in the input to the disjoint path selection problem corresponds to an edge in the input graph for the matching problem. Therefore, we can extend the results of Section~\ref{sect:dpa} to the following \emph{online matching problem}. The input is a graph $G=(V, E)$, where $V$ is known, and the edges in $E$ appear in an online manner; upon arrival of an edge, it must be added to the matching or rejected. The prediction is a set $\ensuremath{\operatorname{\mathit{\hat{E}}}}\xspace$ that specifies the edges in $E$. As before, we use $\ensuremath{\operatorname{\textsc{FP}}}\xspace$ and $\ensuremath{\operatorname{\textsc{FN}}}\xspace$ to denote the sets of false positives and false negatives, and define $\ensuremath{\gamma(\EPRED,E)}\xspace = \frac{\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace \cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)}{\ensuremath{\operatorname{\textsc{Opt}}}\xspace(E)}$, where $\ensuremath{\operatorname{\textsc{Opt}}}\xspace(S)$ denotes the size of a maximum matching in the graph $G=(V,S)$.
The correspondence between the two problems is as follows. Consider a set of
intervals on a star; each such interval is a pair of vertices.
We can assume that no pair contains the star's center, since intervals containing the center should always be accepted whenever possible.
For the matching problem, the pairs of vertices from the disjoint paths
problem on the star become the edges of the graph. A feasible solution
to the disjoint paths problem then corresponds to a matching, and vice versa.
Conversely, given an instance of the matching problem, the
endpoints of the edges become the non-center vertices of the star in
the disjoint paths problem.
Using this correspondence between disjoint paths on a star and matchings in
general graphs, for the star $S_8$, we get the following graph for
matching: $G=(V,E)$, where $V=\SET{1,2,3,4,5,6,7,8}$ and
\[
E=\SET{(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,8),(5,8),(1,6),(1,8)}.
\]
\newsavebox\mysubpic
\sbox{\mysubpic}{%
\begin{tikzpicture}[scale=.5]
\draw[rotate=30,dashed] (0,0) ellipse (1.9cm and .75cm);
\end{tikzpicture}
}
\begin{figure}[!t]
\begin{center}
\raisebox{2em}{\begin{tikzpicture}[scale=.5]
\foreach \X [count=\Y] in {1,2,3,4,5,6,7,8}
{\node (cn\Y) at ({-(\Y+3)*360/8}:2.5) {$\X$}; }
\foreach \Y [remember=\Y as \LastY (initially 8)]in {1,...,8}
{\draw (cn\LastY) -- (cn\Y);}
\draw (cn8) -- (cn5);
\draw (cn1) to [out=240,in=270,looseness=1.5] (cn6);
\end{tikzpicture}}
\hspace{4em}
\begin{tikzpicture}[scale=.5]
\node (n1) at (-3,0) {12};
\node (n2) at (-2,2) {23};
\node (n3) at (0,3) {34};
\node (n4) at (2,2) {45};
\node (n5) at (4,1) {56};
\node (n6) at (4,-1) {67};
\node (n7) at (2,-2) {78};
\node (n8) at (0,-3) {18};
\node (n9) at (-2,-2) {16};
\node (n10) at (0,0) {58};
\draw (n9) -- (n1) -- (n2) -- (n3) -- (n4) -- (n5) -- (n6) -- (n7) -- (n8) -- (n9);
\draw (n10) -- (n4);
\draw (n10) -- (n5);
\draw (n10) -- (n7);
\draw (n10) -- (n8);
\draw (n1) -- (n8);
\draw (n9) to [out=270,in=270,looseness=.8] (n6);
\draw (n9) to [out=255,in=315,looseness=1.6] (n5);
\node at (3,-1.5) {\usebox{\mysubpic}};
\end{tikzpicture}
\vspace*{-1cm}
\end{center}
\caption{Left: The graph, $G$, giving a correspondence between disjoint paths on a star and matchings in general graphs. Right: The line graph, $G'$, corresponding to~$G$, where the two digits in a vertex name in~$G'$ indicate the edge (given by its two endpoints) from~$G$ that the vertex corresponds to; the dashed ellipse indicates the contraction of the two vertices.}
\label{figure-matchings-independent-set}
\end{figure}
See also Figure~\ref{figure-matchings-independent-set}.
Note that the edges in this graph correspond to the intervals that are
used in the proof of Theorem~\ref{thm:star}. The proof can be
simulated in this new setting so that the number of intervals accepted
in the different cases in Theorem~\ref{thm:star} is the same as the
number of edges in the matchings found in the corresponding subgraphs
of~$G$. Thus, the same result holds for matchings in any graph class
containing this graph.
Every edge of~$G$ has one even-numbered endpoint and one odd-numbered endpoint, so this includes
the class of bipartite graphs.
The graph~$G$ is also planar, but it is neither an interval graph nor a chordal graph.
Given the correspondence between interval scheduling and the matching problem, the following is immediate from Theorems~\ref{pr:trustupper} and~\ref{thm:star}.
\begin{proposition}
For any instance $G=(V,E)$ of the online matching problem under the edge-arrival model and a prediction set $\ensuremath{\operatorname{\mathit{\hat{E}}}}\xspace$, there is an algorithm \ensuremath{\operatorname{\textsc{Trust}}}\xspace that matches at least $(1-2\ensuremath{\gamma(\EPRED,E)}\xspace) \ensuremath{\operatorname{\textsc{Opt}}}\xspace(G)$ edges. Moreover, there are instances $G_w=(V,E_w)$ of the matching problem, along with predictions $\ensuremath{\operatorname{\mathit{\hat{E}}}}\xspace_w$, for which any deterministic algorithm matches at most $(1-2\gamma(\ensuremath{\operatorname{\mathit{\hat{E}}}}\xspace_w,E_w)) \ensuremath{\operatorname{\textsc{Opt}}}\xspace(G_w)$ edges.
\end{proposition}
Using the correspondence between matchings in a graph, $G$, and
independent sets in the line graph of~$G$,
we can get the same result for the independent set problem.
The line graph of a graph, $G$, has a vertex for each edge in~$G$
and an edge between two vertices if the corresponding edges in~$G$
share a vertex.
The line graph $G'=(V',E')$ of the graph above used for matching is defined by
$$V'=\SET{12, 23, 34, 45, 56, 67, 78, 18, 16, 58},$$
where, for brevity, we use the notation $12$ to denote the vertex
corresponding to the edge $(1,2)$ from~$G$.
The set of edges is then
\[\begin{array}{r@{}l}
E'=\{ & (12,23), (23,34), (34,45), (45,56), (56,67), (67,78), (78,18), (18,16), \\
& (16,12), (58,18), (58,78), (58,56), (58,45), (12,18), (16,67), (16,56)
\}.
\end{array}\]
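This construction can be checked mechanically; the short Python snippet below (an illustration, not part of the proof) rebuilds $E'$ from $E$ and confirms that it has exactly the 16 edges listed above:

\begin{verbatim}
from itertools import combinations

E = [(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,8),(5,8),(1,6),(1,8)]

# Vertices of G' are the edges of G; two vertices of G' are adjacent
# exactly when the corresponding edges of G share an endpoint.
E_line = [(e, f) for e, f in combinations(E, 2) if set(e) & set(f)]

assert len(E) == 10 and len(E_line) == 16
\end{verbatim}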
Intervals from the proof in Theorem~\ref{thm:star} correspond to vertices here.
See also Figure~\ref{figure-matchings-independent-set}.
We note that the graph $G'$ is planar, but not outerplanar, since,
contracting $67$ and $78$ into one vertex, $67\textrm{-}78$, the sets
$\SET{16,58}$ and $\SET{18,56,67\textrm{-}78}$ form a $K_{2,3}$ minor,
which is a forbidden minor for outerplanarity~\cite{CH67,H69}.
The graph $G'$ is not chordal either.
However, the lower bound from Theorem~\ref{thm:generallower}, that for
any deterministic algorithm~\ensuremath{\operatorname{\textsc{Alg}}}\xspace, there are instances $I$ and
predictions \ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace such that $\ensuremath{\operatorname{\textsc{Alg}}}\xspace(\ensuremath{\operatorname{\mathit{\hat{I}}}}\xspace,I) = \ensuremath{\operatorname{\textsc{Opt}}}\xspace(I) -
\ensuremath{\operatorname{\textsc{Opt}}}\xspace(\ensuremath{\operatorname{\textsc{FP}}}\xspace\cup\ensuremath{\operatorname{\textsc{FN}}}\xspace)$ clearly holds for independent sets in interval
graphs, too, by considering the interval graph corresponding to a set
of intervals on the line.
Using the correspondence between matchings in a graph, $G$, and
independent sets in the line graph of~$G$,
we get a similar result for the independent set problem under the vertex-arrival model.
|
{
"arxiv_id": "2302.13685",
"language": "en",
"timestamp": "2023-02-28T02:27:35",
"url": "https://arxiv.org/abs/2302.13685",
"yymm": "2302"
} | \section{Introduction}\label{sec:intro}
One-dimensional (1D) quantum gases can be experimentally realized by applying strong confinement in two transverse directions and allow free motion along the longitudinal direction \cite{Gorlitz2001,Schreck2001,Dettmer2001,Greiner2001,Moritz2003,Paredes2004,ObservationTGgas,Syassen2008,Haller2009,PhysRevLett.104.153203}.
1D quantum gases are very different from the ordinary three-dimensional (3D) quantum gases \cite{RevModPhys.83.1405,RevModPhys.85.1633}.
The three-body problem in 1D has been studied for many years \cite{PhysRevA.72.032728,Esry2007,PhysRevA.97.061603,PhysRevA.97.061605,Quantumdroplet2018,PhysRevA.100.013614}.
In this paper, we define and study the \emph{three-body scattering hypervolume} of identical spin-polarized fermions in 1D. The scattering hypervolume is a three-body analog of the two-body scattering length \cite{tan2008three}, which can be extracted from the wave function of two particles colliding at zero energy.
If the interaction is short-ranged, i.e., the interaction potential vanishes beyond a finite pairwise distance $r_e$,
the wave function of two particles colliding at zero energy in 1D is
\begin{equation}
\phi_l(s)=(|s|-a_{l})Y_l(s)
\end{equation}
at $|s|>r_e$ in the center-of-mass frame,
where $a_{l}$ is the two-body scattering length in 1D, $s$ is the difference of the coordinates of the two particles,
and $l$ can be 0 or 1 for $s$-wave collisions or $p$-wave collisions respectively.
$Y_0(s)=1$, and $Y_1(s)=\sign(s)$.
Here $\mathrm{sgn}(s)$ is the sign function. $\mathrm{sgn}(s)=1$ for $s>0$, $\mathrm{sgn}(s)=0$ for $s=0$, and $\mathrm{sgn}(s)=-1$ for $s<0$.
For particles in higher dimensional spaces, people have defined and studied the three-body scattering hypervolume in various systems
\cite{tan2008three,zhu2017threebody,mestrom2019scattering,mestrom2020van,wang2021threebody,mestrom2021pwave,mestrom2021spin1,wang2021fermion3D,wang2022fermion2D}. The three-body scattering hypervolumes have been defined and studied
for identical bosons in 3D \cite{tan2008three,zhu2017threebody,mestrom2019scattering,mestrom2020van,mestrom2021spin1}, distinguishable particles in 3D \cite{wang2021threebody,mestrom2021pwave}, identical spin-polarized fermions in 3D \cite{wang2021fermion3D} or in 2D \cite{wang2022fermion2D}.
In this paper, we define the scattering hypervolume $D_F$ of identical spin-polarized fermions in 1D, by studying the wave function of three such fermions colliding at zero energy, and study its analytical and numerical calculations and its physical implications.
Our results may be applicable to ultracold atomic Fermi gases confined in one dimension.
This paper is organized as follows.
In Sec.~\ref{sec:two-body} we define the two-body $p$-wave special functions.
In Sec.~\ref{sec:asymp} we derive the asymptotic expansions of the three-body wave function for zero energy collision. The scattering hypervolume $D_F$ appears in the coefficients in these expansions.
In Sec.~\ref{sec:value-DF}, we derive an approximate formula for $D_F$ for weak interaction potentials by using the Born expansion, and we numerically compute $D_F$ for the square-barrier, square-well, and Gaussian potentials at various interaction strengths.
In Sec.~\ref{energy} we consider the dilute spin-polarized Fermi gas in 1D and derive the shifts of its energy and pressure due to a nonzero $D_F$.
In Sec.~\ref{sec:recombination}, we derive the formula for the three-body recombination rate of the dilute spin-polarized Fermi gas in 1D in terms of the imaginary part of $D_F$.
\section{two-body special functions}\label{sec:two-body}
For identical spin-polarized fermions in 1D, the $s$-wave two-body scattering is forbidden due to Fermi statistics, and only the $p$-wave scattering is permitted.
The two-fermion scattering wave function $\Phi$ in the center-of-mass frame with collision energy $E=\hbar^2k^2/m$,
where $m$ is the mass of each fermion and $\hbar$ is Planck's constant over $2\pi$,
satisfies the following Schr\"{o}dinger equation:
\begin{equation}\label{two-body-equ}
\frac{d^2\Phi(s)}{ds^2}+\left[k^2-\frac{mV(s)}{\hbar^2}\right]\Phi(s)=0,
\end{equation}
where $V(s)$ is the two-body interaction potential.
We assume that $V(s)$ is an even function of $s$, namely $V(s)=V(|s|)$, and that it vanishes at $|s|>r_e$.
At $|s|>r_e$,
\Eq{two-body-equ} is simplified as
$\frac{d^2\Phi}{ds^2}+k^2\Phi=0$, and its solution is
\begin{equation}
\Phi(s)=A \sin\left(k|s|+\delta_p\right)\sign(s),\label{two-body-Phi}
\end{equation}
where
$\delta_p$ is the $p$-wave scattering phase shift which obeys the effective range expansion in 1D \cite{hammer2009causality,hammer2010causality}:
\begin{equation}
k\cot \delta_p=-\frac{1}{a_p}+\frac{1}{2}r_p k^2+\frac{1}{4!}r_p' k^4+O(k^6).
\end{equation}
Here $a_p$ is the $p$-wave scattering length in 1D, $r_p$ is the $p$-wave effective range, and $r_p'$ is the $p$-wave shape parameter.
If the collision energy is small, namely $|k| \ll 1/r_e$, the wave function can be expanded in powers of $k^2$:
\begin{equation}
\Phi^{(k)}(s)=\phi(s)+k^2 f(s)+k^4 g(s)+O(k^6),
\end{equation}
where $\phi,f,g,\dots$ are called the two-body special functions, and they satisfy the equations \cite{wang2021threebody,wang2021fermion3D}:
\begin{equation}
\widetilde{H}\phi=0,~~\widetilde{H}f=\phi,~~\widetilde{H} g=f,~\dots,
\end{equation}
where $\widetilde{H}$ is defined as
$
\widetilde{H} \equiv -\frac{d^2}{ds^2}+ \frac{m}{\hbar^2} V(s).
$
The two-body special functions at $|s|>r_e$ can be extracted from \Eq{two-body-Phi}. By choosing the coefficient $A=-a_p/\sin\delta_p$, we get
\begin{subequations}\label{phi-f-g}
\begin{align}
&\phi(s)=\left(|s|-a_p\right)\mathrm{sgn}(s),\\
&f(s)=\left(-\frac{|s|^3}{6}+\frac{a_p}{2}|s|^2-\frac{1}{2} a_p r_p |s|\right)\mathrm{sgn}(s),\\
&g(s)=\left(\frac{|s|^5}{120}-\frac{a_p}{24}|s|^4+\frac{a_p r_p}{12} |s|^3-\frac{a_p r_p'}{24}|s|\right)\mathrm{sgn}(s)
\end{align}
\end{subequations}
for $|s|>r_e$.
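As a sanity check, the chain $\widetilde{H}\phi=0$, $\widetilde{H}f=\phi$, $\widetilde{H}g=f$ can be verified symbolically outside the range of the potential, where $V=0$ and $\widetilde{H}=-d^2/ds^2$; the sketch below (purely illustrative) treats $s>0$, where $\mathrm{sgn}(s)=1$:

\begin{verbatim}
import sympy as sp

s, ap, rp, rpp = sp.symbols('s a_p r_p r_p_prime', positive=True)

phi = s - ap
f   = -s**3/6 + ap*s**2/2 - ap*rp*s/2
g   = s**5/120 - ap*s**4/24 + ap*rp*s**3/12 - ap*rpp*s/24

H = lambda u: -sp.diff(u, s, 2)   # H-tilde with V(s) = 0 for |s| > r_e

assert sp.simplify(H(phi)) == 0
assert sp.simplify(H(f) - phi) == 0
assert sp.simplify(H(g) - f) == 0
\end{verbatim}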
\section{ASYMPTOTICS OF THE THREE-BODY WAVE FUNCTION}\label{sec:asymp}
We consider the collision of three fermions with finite range interactions at zero energy in the center-of-mass frame.
The three-body wave function $\Psi(x_1,x_2,x_3)$ satisfies the following Schr\"{o}dinger equation:
\begin{equation}\label{3body_equ}
-\sum_{i=1}^{3}\frac{\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x_i^2}+ \sum_{i=1}^{3}V(s_i)\Psi+U(s_1,s_2,s_3)\Psi=0,
\end{equation}
where $x_i$ is the coordinate of the $i$th fermion, and $s_i\equiv x_j-x_k$. The indices $(i,j,k)=(1,2,3)$, $(2,3,1)$, or $(3,1,2)$.
$U$ is the three-body potential.
We assume that the interactions among these fermions depend only on the interparticle distances.
The total momentum of the three fermions is zero such that the wave function is translationally invariant.
We assume that $V(s_i)=0$ if $|s_i|>r_e$, and that $U(s_1,s_2,s_3)=0$ if $|s_1|$, $|s_2|$, or $|s_3|$ is greater than $r_e$.
To uniquely determine the wave function for the zero energy collision, we need to also specify the asymptotic behavior of $\Psi$ when the three particles are far apart.
Suppose that the leading-order term $\Psi_0$ in the wave function scales as $B^p$ at large $B$, where $B=\sqrt{(s_1^2+s_2^2+s_3^2)/2}$ is the hyperradius. $\Psi_0$ should also satisfy the free Schr\"{o}dinger equation $(\partial_1^2+\partial_2^2+\partial_3^2)\Psi_0=0$.
The most important channel for zero-energy collisions, for purposes of understanding ultracold collisions, should be the one with the minimum value of $p$ \cite{wang2021fermion3D}.
We find that the minimum value of $p$ for three identical fermions in 1D is $p_{\textrm{min}}=3$, and the leading order term $\Psi_0$ is
\begin{equation}\label{leading-order}
\Psi_0=s_1 s_2s_3=(x_2-x_3)(x_3-x_1)(x_1-x_2).
\end{equation}
One can check that $\Psi_0$ in \Eq{leading-order} is translationally invariant and it obeys the Fermi statistics.
As in previous works \cite{tan2008three,wang2021threebody,wang2021fermion3D,wang2022fermion2D}, we derive the corresponding 111 expansion and 21 expansion for the three-body wave function $\Psi$.
When the three particles are all far apart from each other, such that the pairwise distances $|s_1|$, $|s_2|$, $|s_3|$ go to infinity simultaneously for any fixed ratio $s_1:s_2:s_3$, we expand $\Psi$ in powers of $1/B$ and this expansion is called the 111 expansion.
When one fermion is far away from the other two, but the two fermions are held at a fixed distance $s_i$, we expand $\Psi$ in powers of $1/R_i$, where $R_i=x_i-(x_j+x_k)/2$ is a Jacobi coordinate, and this is called the 21 expansion.
These expansions can be written as
\begin{subequations}
\begin{align}
&\Psi=\sum_{p=-3}^{\infty} \mathcal{T}^{(-p)}(x_1,x_2,x_3),\label{111-form}\\
&\Psi=\sum_{q=-2}^{\infty}\mathcal{S}^{(-q)}(R,s),\label{21-form}
\end{align}
\end{subequations}
where $\mathcal{T}^{(-p)}$ scales like $B^{-p}$, $\mathcal{S}^{(-q)}$ scales like $R^{-q}$,
and $R\equiv R_i$ and $s\equiv s_i$ for any $i$.
$\mathcal{T}^{(-p)}$ satisfies the free Schr\"odinger equation outside of the interaction range:
\begin{equation}\label{T-p}
-\left( \frac{\partial^2}{\partial s^2}+\frac{3}{4}\frac{\partial^2}{\partial R^2}\right) \mathcal{T}^{(-p)}=0.
\end{equation}
If one fermion is far away from the other two, \Eq{3body_equ} becomes
\begin{equation}
\left(\widetilde{H}-\frac{3}{4}\frac{\partial^2}{\partial R^2}\right)\Psi =0,
\end{equation}
where $\widetilde{H}\equiv-\frac{\partial^2}{\partial s^2}+\frac{m}{\hbar^2}V(s)$.
Therefore, $\mathcal{S}^{(-q)}$ satisfies the following equations,
\begin{align}
&\widetilde{H} \mathcal{S}^{(2)}=0,\quad\widetilde{H} \mathcal{S}^{(1)}=0,\nonumber\\
&\widetilde{H} \mathcal{S}^{(-q)}=\frac{3}{4}\frac{\partial^2}{\partial R^2} \mathcal{S} ^{(-q+2)}\quad (q\geqslant 0).
\end{align}
\begin{figure}[htb]
\includegraphics[width=0.5\textwidth]{expansion.pdf}
\caption{\label{fig:expansion} Diagram of the points representing $t^{(i,j)}$ on the $(i,j)$ plane. Each point with coordinates $(i,j)$ represents $t^{(i,j)}$ which scales like $R^i s^j$. Thick dots represent those points at which $t^{(i,j)}\ne0$.
The term $\mathcal{T}^{(-p)}$ in the 111 expansion is represented by a red dashed line satisfying the equation $i+j=-p$. The term $\mathcal{S}^{(-q)}$ in the 21 expansion is represented by a blue dashed line satisfying the equation $i=-q$.}
\end{figure}
To derive the two expansions, we start from the leading-order term in the 111 expansion (which fixes the overall amplitude of $\Psi$):
\begin{equation}
\mathcal{T}^{(3)}=\Psi_0=\frac{1}{4}s^3-sR^2.
\end{equation}
We then first derive $\mathcal{S}^{(2)}$, and then derive $\mathcal{T}^{(2)}$, and then derive $\mathcal{S}^{(1)}$, and then derive $\mathcal{T}^{(1)}$, and so on, all the way until $\mathcal{S}^{(-8)}$. At every step, we require the 111 expansion and the 21 expansion to be consistent in the region $r_e\ll |s|\ll |R|$, in which the wave function has a double expansion:
\begin{equation}
\Psi=\sum_{i,j}t^{(i,j)},
\end{equation}
where $t^{(i,j)}$ scales as $R^i s^j$,
and
\begin{align}
\mathcal{T}^{(-p)}&=\sum_{i}t^{(i,-p-i)},\\
\mathcal{S}^{(-q)}&=\sum_j t^{(-q,j)}.
\end{align}
In Fig.~\ref{fig:expansion} we show the points on the $(i,j)$ plane for which $t^{(i,j)}$
is nonzero.
Our resultant 111 expansion is
\begin{widetext}
\begin{align}
\Psi&=s_1 s_2 s_3\left(1-\frac{3\sqrt{3}D_F}{2\pi B^6}\right)+\sum_{i=1}^{3}\bigg[ -a_p B^2 \cos( 2\Theta_i )\mathrm{sgn}(s_i)-\frac{6}{\pi}a_p^2 B \theta_i\sin\theta_i\mathrm{sgn}(s_i)+\frac{3}{4}\left(2a_p^3+a_p^2 r_p\right) \mathrm{sgn}(s_i)\nonumber\\
&-\frac{3\sqrt{3}\,a_pD_F}{2\pi B^4} \cos (4\Theta_i)\mathrm{sgn}(s_i)-\frac{18\sqrt{3}\,a_p^2D_F}{\pi^2 B^5} \theta_i \sin (5\theta_i) \mathrm{sgn}(s_i)+\frac{45\sqrt{3}D_F}{4\pi B^6}\left(2a_p^3+a_p^2r_p\right) \cos (6\Theta_i) \mathrm{sgn}(s_i)\nonumber\\
&+\frac{405\sqrt{3}}{2\pi^2 B^7}a_p^3 r_pD_F \theta_i \sin (7\theta_i) \mathrm{sgn}(s_i)-\frac{945\sqrt{3}D_F}{32\pi B^8}\left(6a_p^3r_p^2+a_p^2r_p'\right) \cos (8\Theta_i) \mathrm{sgn}(s_i) \bigg]+O(B^{-9}),\label{111}
\end{align}
where $D_F$ is the three-body scattering hypervolume.
The coefficient in $\mathcal{T}^{(-3)}$ is chosen such that
$
(\partial_s^2+\frac{3}{4}\partial_R^2)\mathcal{T}^{(-3)}=\frac{3}{4} D_F [\delta'(s)\delta''(R)-\frac{4}{9}\delta'''(s)\delta(R)],
$
and this coefficient will simplify the expression for the shift of the energy of three fermions along a periodic line; see \Eq{energy-3fermion}.
$\Theta_i$ is called the hyperangle and is defined via the following equations:
\begin{equation}
\frac{\sqrt{3}}{2}s_i=B \cos \Theta_i,~~R_i=B\sin\Theta_i.
\end{equation}
One can verify that the three hyperangles satisfy
$\Theta_1=\Theta_2-\frac{2\pi}{3}+2n\pi$,
$\Theta_3=\Theta_2+\frac{2\pi}{3}+2n'\pi$,
where $n$ and $n'$ are integers. We also define the reduced hyperangle
$
\theta_i \equiv \arctan\frac{2|R_i|}{\sqrt{3} |s_i|},~~\theta_i\in[0,\frac{\pi}{2}].
$
Three fermions in 1D have 6 different sorting orders. If $x_1<x_2<x_3$, the 111 expansion is simplified as
\begin{align}\label{111-B-th}
\Psi=&\frac{2}{3\sqrt{3}}B^3 \cos (3\theta_2)-2a_p B^2 \cos (2\theta_2)+2\sqrt{3} a_p^2 B \cos \theta_2-\frac{3}{4}\left(2a_p^3+a_p^2 r_p\right)\nonumber\\
&-\frac{D_F}{\pi B^3}\cos (3\theta_2)-\frac{3\sqrt{3}D_Fa_p}{\pi B^4}\cos (4\theta_2)-\frac{18D_F a_p^2 }{\pi B^5}\cos (5\theta_2)-\frac{45\sqrt{3}D_F}{4\pi B^6}\left(2a_p^3+a_p^2 r_p\right)\cos (6\theta_2)\nonumber\\
&-\frac{405D_F a_p^3 r_p}{2\pi B^7}\cos (7\theta_2)-\frac{945\sqrt{3}D_F}{16\pi B^8}\left(6a_p^3r_p^2+a_p^2 r_p'\right)\cos (8\theta_2)+O(B^{-9}).
\end{align}
Our resultant 21 expansion is
\begin{align}\label{21}
\Psi=&\Bigg[-R^2+3a_p|R|-\frac{3}{4}\left(2a_p^2+a_p r_p\right)+\frac{3\sqrt{3}D_F}{2\pi R^4}+\frac{9\sqrt{3}\,a_pD_F}{\pi |R|^5}+\frac{45\sqrt{3}D_F}{4\pi R^6}\left(2a_p^2+a_p r_p\right)\nonumber\\
&\quad+\frac{405\sqrt{3}D_F}{4\pi|R|^7} a_p^2 r_p + \frac{945\sqrt{3}D_F}{32\pi R^8} \left(6a_p^2 r_p^2+a_p r_p'\right)\Bigg]\phi(s)\nonumber\\
&+\left[-\frac{3}{2}+\frac{45\sqrt{3}D_F}{2\pi R^6}+\frac{405\sqrt{3}\,a_pD_F}{2\pi|R|^7} +\frac{2835\sqrt{3}D_F}{8\pi R^8} \left(2a_p^2+a_p r_p\right) \right]f(s)+\frac{2835\sqrt{3}D_F}{4\pi R^8} g(s)+O(R^{-9}).
\end{align}
\end{widetext}
We emphasize that \Eq{21} is applicable when the interaction does not support any two-body bound states.
If the interaction supports $n_b$ two-body bound states, three colliding fermions may form such a two-body bound state plus a free fermion, which fly
apart with total kinetic energy equal to the released two-body binding energy. In this case, the 21 expansion is modified as \cite{zhu2017threebody}
\begin{equation}\label{21mod}
\Psi=\Psi_{21}+\sum_{n=1}^{n_b} c_n \phi_n(s) \exp\left(i\frac{2}{\sqrt{3}}\kappa_n R\right),
\end{equation}
where $\Psi_{21}$ is defined as the right-hand side of \Eq{21}.
The second term on the right-hand side of \Eq{21mod}
is the outgoing wave with wave number $2\kappa_n/\sqrt{3}>0$. Here $\phi_n$ is the wave function of the $n$th two-body $p$-wave bound state with energy $E_n=-\hbar^2\kappa_n^2/m$, and satisfies the Schr\"{o}dinger equation and the normalization condition:
\begin{align}
&\left(-\frac{d^2}{ds^2}+\frac{mV(s)}{\hbar^2}+\kappa_n^2\right)\phi_n(s)=0,\\
&\int_{-\infty}^{\infty} ds~ |\phi_n(s)|^2=1.
\end{align}
As the outgoing wave contributes a positive probability flux towards the outside of a large circle centered at the origin in the plane of coordinates $(\frac{\sqrt3}{2}s,R)$, $D_F$ gains a negative imaginary part to make the total flux through the circle vanish and conserve the probability. From this conservation of probability we derive the relation between the imaginary part of $D_F$ and the coefficients $c_n$:
\begin{equation}
\mathrm{Im}D_F=-\frac{3\sqrt{3}}{2}\sum_{n=1}^{n_b} \kappa_n |c_n|^2.
\end{equation}
In Sec.~\ref{sec:recombination} we will study the relation between $\im D_F$ and the three-body recombination rates of one-dimensional ultracold spin-polarized Fermi gases.
\section{Evaluation of the scattering hypervolume for several interaction potentials}\label{sec:value-DF}
In this section, we first derive an approximate formula for the hypervolume $D_F$ for weak potentials by using the Born expansion.
We then numerically compute $D_F$ for square-barrier, square-well, and Gaussian pairwise potentials of various strengths.
\subsection{Weak interaction potentials}
If the potentials $V(s)$ and $U(s_1,s_2,s_3)$ are weak, we can express the wave function as a Born expansion \cite{zhu2017threebody,wang2022fermion2D}:
\begin{equation}
\Psi=\Psi_0+\Psi_1+\Psi_2+\cdots,
\end{equation}
where $\Psi_0=s_1s_2s_3=s^3/4-sR^2$ is the wave function of three free fermions, $\Psi_n=(\widehat{G}\mathcal{V})^n\Psi_0$,
$\widehat{G}=-\widehat{H}_0^{-1}$ is the Green's operator, $\widehat{H}_0$ is the three-body kinetic-energy operator, and $\mathcal{V}=U(s_1,s_2,s_3)+\sum_i V(s_i)$ is the interaction potential.
We derive the first-order and the second-order corrections at $|s_i|\gg r_e$:
\begin{subequations}\label{Born}
\begin{align}
&\Psi_1= -\frac{3\sqrt{3}s_1s_2s_3}{4\pi B^6}\Lambda -\sum_{i=1}^{3}\left( \alpha_1 B^2\cos 2\Theta_i+\frac{\alpha_3}{2}\right) \mathrm{sgn}(s_i)\nonumber\\
&\quad\quad\,\,+O(UB^{-9}),\label{Born1}\\
&\Psi_2=\sum_{i=1}^{3}\left[ \beta_1 B^2 \cos 2\Theta_i -\frac{6\alpha_1^2}{\pi}R_i \theta_i +\beta_3\right] \mathrm{sgn}(s_i)\nonumber\\
&\quad\quad~-\frac{3\sqrt{3}s_1s_2s_3}{20\pi B^6}\left(25\alpha_3^2-7\alpha_1\alpha_5\right)+O(V^2B^{-9})\nonumber\\
&\quad\quad\,\,+O(UV)+O(U^2),\label{Born2}
\end{align}
\end{subequations}
where
\begin{subequations}
\begin{align}
&\alpha_n=\frac{m}{\hbar^2}\int_0^{\infty}\!\!\!ds \:s^{n+1}V(s),\label{alpha}\\
&\beta_1=\frac{m^2}{\hbar^4}\int_0^{\infty}\!\!\!ds\int_0^{s}\!\!\!ds'\: 2s s'^2 V(s)V(s'),\\
&\beta_3=\frac{m^2}{\hbar^4}\int_0^{\infty}\!\!\!ds\int_0^{s}\!\!\!ds'\: (ss'^4+2s^3 s'^2 ) V(s)V(s'),\\
&\Lambda=\frac{m}{\hbar^2}\int_{-\infty}^{\infty}\!\!\!ds'\int_{-\infty}^{\infty}\!\!\!dR'\: \Big(\frac{1}{4}s'^3-s' R'^2\Big)^2 U(s',R').
\end{align}
\end{subequations}
See Appendix \ref{sec:Born} for details of the derivation.
By comparing the results in Eqs.~\eqref{Born} with the 111 expansion in \Eq{111}, we find the expansions of $a_p$ and $D_F$ in powers of the interaction potential:
\begin{align}
a_p&=\alpha_1-\beta_1+O(V^3),\label{ap-Born}\\
D_F&=\frac{\Lambda}{2}+\frac{1}{10}(25\alpha_3^2-7\alpha_1\alpha_5)+O(V^3)\nonumber\\
&~~~+O(UV)+O(U^2).\label{DF-Born}
\end{align}
For any particular two-body potential $V(s)$, e.g., the square-well potential or the Gaussian potential, one can calculate $a_p$ by solving the two-body Schr\"{o}dinger equation and verify that the result is consistent with \Eq{ap-Born} if $V$ is weak.
Equation \eqref{DF-Born} shows that $D_F$ is quadratically dependent on the
two-body potential $V$ if $V$ is weak and the three-body potential $U$ is absent. On the other hand, $D_F$ is linearly dependent on $U$ if $U$ is weak.
If the interactions are not weak, one can solve the three-body Schr\"{o}dinger equation numerically at zero energy and match the resultant wave function with the asymptotic expansions in \Eq{111} and \Eq{21mod} to numerically extract the value of $D_F$.
\subsection{Numerical computations}
The three-body problem in 1D for zero total momentum is equivalent to a one-body problem on a 2D plane.
The three-body wave function $\Psi$ here depends only on $(s,R)$ or $(B,\Theta)$,
where $s\equiv s_2$, $R\equiv R_2$, and $\Theta\equiv\Theta_2$.
We define the two-dimensional vector $\vect{B}=(\frac{\sqrt{3}}{2}s,R)$, and $\Psi=\Psi(\vect{B})$.
The zero energy Schr\"{o}dinger equation is
\begin{equation}\label{SEin2D}
-\nabla^2 \Psi+\frac{4m}{3\hbar^2}\mathcal{V}\Psi=0,
\end{equation}
where $\nabla^2$ is the Laplace operator in 2D:
\begin{equation}
\nabla^2=\frac{1}{B}\frac{\partial}{\partial B}\left(B\frac{\partial}{\partial B}\right)+\frac{1}{B^2}\frac{\partial^2}{\partial \Theta^2}.
\end{equation}
Because the interaction potential conserves parity and $\Psi_0$ has odd parity, we can assume that
$\Psi$ has odd parity, namely
\begin{equation}
\Psi(-x_1,-x_2,-x_3)=-\Psi(x_1,x_2,x_3).
\end{equation}
From the above equation and the Fermi statistics we can show that
\begin{equation}\label{even}
\Psi(B,-\Theta)=\Psi(B,\Theta)
\end{equation}
and
\begin{equation}\label{periodic}
\Psi\left( B,\Theta+\frac{\pi}{3}\right)=-\Psi(B,\Theta).
\end{equation}
We can divide the 2D plane into six regions; see Fig.~\ref{fig:2Dplane}. Each region corresponds to a specific order of the coordinates of the three fermions, and we only need to solve \Eq{SEin2D} in one of the six regions.
In the remainder of this section, we always choose to solve the problem in the region $-\pi/6<\Theta<\pi/6$ which corresponds to the order of the coordinates $x_1<x_2<x_3$.
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth,height=0.40\textwidth]{2D-plane.png}
\caption{\label{fig:2Dplane} The possible configurations of three particles in one dimension. The potential vanishes outside of the colored belts. The whole plane can be divided into six regions corresponding to six different orders of the coordinates of the three particles. The corresponding order of the three particles is labeled in each region in the figure.}
\end{figure}
According to \Eq{even} and \Eq{periodic},
the wave function can be expanded as the following Fourier series:
\begin{equation}\label{Psi-fourier}
\Psi(B,\Theta)=\sum_{i=1}^{\infty} \frac{1}{\sqrt{B}}f_{i}(B)\cos (6i-3) \Theta.
\end{equation}
The potential $\mathcal{V}$ can also be expanded as
\begin{equation}\label{V-fourier}
\mathcal{V}(B,\Theta)=\frac{\nu_0(B)}{2}+\sum_{i=1}^{\infty} \nu_{6i}(B)\cos 6i\Theta.
\end{equation}
The Schr\"odinger equation \eqref{SEin2D} can be written as coupled ordinary differential equations:
\begin{equation}\label{SEvector}
-f''+\mathcal{U} f=0,
\end{equation}
where $f=(f_1,f_2,f_{3}...)^{\textrm{T}}$ is a column vector, $f''$ means $d^2 f/dB^2$, and $\mathcal{U}=\mathcal{U}(B)$ is a symmetric matrix dependent on $B$.
The matrix elements of $\mathcal{U}$ are
\begin{subequations}\label{Umatrix}
\begin{align}
&\mathcal{U}_{ii}=\frac{(6i-3)^2-1/4}{B^2}+\frac{2}{3}\left( \nu_0+\nu_{12i-6}\right) ,\\
&\mathcal{U}_{ij}=\frac{2}{3}\left( \nu_{6|i-j|}+\nu_{6(i+j-1)}\right),~~\textrm{if}~i\neq j .
\end{align}
\end{subequations}
Given the wave function on a circle of radius $B$ centered at the origin in the $(\frac{\sqrt3}{2}s_2,R_2)$ plane,
one can use the Schr\"{o}dinger equation to uniquely determine the wave function inside the circle,
and therefore the partial derivative of the wave function with respect to $B$ on the circle.
This partial derivative thus depends linearly on the wave function on the circle, so there is a matrix $F$ such that
\begin{equation}\label{def-of-F}
f'=F f.
\end{equation}
Substituting the above equation into \Eq{SEvector}, and requiring that \Eq{SEvector} be satisfied for all $f$, we find that
$F$ satisfies a first-order differential equation:
\begin{equation}\label{eq-F}
F'=\mathcal{U}-F^2.
\end{equation}
At small $B$, we can solve \Eq{SEvector} to find the analytical solution for $f_{i}$ (for the square-barrier and square-well potentials) or an expansion of
$f_{i}$ in powers of $B$ (for other potentials); from these we can analytically determine $F$ at infinitesimal $B$, and see that it is diagonal.
Using the result of $F$ at infinitesimal $B$ as our initial condition,
we then solve \Eq{eq-F} numerically and determine $F$ at $B=B_0$ for some large $B_0$.
Matching \Eq{def-of-F} at $B=B_0$ with the 111 and the 21 expansions of $\Psi$, we can approximately determine $D_F$.
We then compare the approximate values of $D_F$ determined in this way, using various large values of $B_0$.
We approximately extrapolate to the $B_0\to\infty$ limit
to find the value of $D_F$ with some numerical uncertainty.
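As a concrete illustration of this scheme, the sketch below integrates \Eq{eq-F} for the square-barrier potential of the next subsection, truncating the Fourier basis at $M$ modes. The mode count, barrier strength, matching radius, and tolerances are arbitrary illustrative choices, and the final matching step that extracts $D_F$ from $F(B_0)$ is omitted:

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import iv, ivp

# Illustrative, non-tuned settings (hbar = m = r0 = 1, as in the text):
V0, M = 2.0, 6                 # barrier strength and number of Fourier modes
Bs, B0 = np.sqrt(3)/2, 40.0    # start of the outer region, matching radius

def nu(k, B):
    # Fourier components nu_k(B) of the potential for B > sqrt(3)/2.
    th0 = np.arcsin(np.sqrt(3)/(2*B))
    if k == 0:
        return V0*(12/np.pi)*th0
    return V0*(-1)**(k//6)*(12/np.pi)*np.sin(k*th0)/k

def U(B):
    # Truncated coupling matrix of Eq. (U-matrix).
    Umat = np.empty((M, M))
    for a in range(1, M + 1):
        for b in range(1, M + 1):
            if a == b:
                Umat[a-1, b-1] = ((6*a - 3)**2 - 0.25)/B**2 \
                    + (2/3)*(nu(0, B) + nu(12*a - 6, B))
            else:
                Umat[a-1, b-1] = (2/3)*(nu(6*abs(a - b), B)
                                        + nu(6*(a + b - 1), B))
    return Umat

def riccati(B, Fflat):
    # Matrix Riccati equation F' = U - F^2, flattened for solve_ivp.
    F = Fflat.reshape(M, M)
    return (U(B) - F @ F).ravel()

# Initial condition at B = sqrt(3)/2: F is diagonal with F_ii = f_i'/f_i,
# where f_i(B) = sqrt(B) I_{6i-3}(2 sqrt(V0) B)  [Eq. (f-ana), V0 > 0].
z = 2*np.sqrt(V0)*Bs
F0 = np.diag([1/(2*Bs) + 2*np.sqrt(V0)*ivp(6*i - 3, z)/iv(6*i - 3, z)
              for i in range(1, M + 1)])

sol = solve_ivp(riccati, (Bs, B0), F0.ravel(), rtol=1e-10, atol=1e-12)
F_B0 = sol.y[:, -1].reshape(M, M)  # matched against Eqs. (111) and (21)
\end{verbatim}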
\subsubsection{Square-barrier and square-well potentials}
For the square-barrier ($V_0>0$) or square-well ($V_0<0$) potential with strength $V_0$,
\begin{equation}\label{squarewell}
V(s)=V_0 \frac{\hbar^2}{mr_0^2}\times
\begin{cases}
1, & |s|<r_0,\\
0, & |s|>r_0.
\end{cases}
\end{equation}
In the remainder of this subsection, we set $\hbar=m=r_0=1$ for simplicity.
We can analytically calculate all the Fourier components of $\mathcal{V}$,
\begin{equation}
\nu_{0}=V_0\times
\begin{cases}
6, & 0\leqslant B\leqslant\frac{\sqrt{3}}{2},\\
\frac{12}{\pi}\arcsin\left(\frac{\sqrt{3}}{2B}\right), & \frac{\sqrt{3}}{2}<B,
\end{cases}
\end{equation}
\begin{equation}
\nu_{6i}=V_0\times
\begin{cases}
0, & 0\leqslant B\leqslant\frac{\sqrt{3}}{2},\\
(-1)^i\frac{12}{\pi}\frac{\sin 6i\theta_0}{6i}, & \frac{\sqrt{3}}{2}<B,
\end{cases}
\end{equation}
for $i\geqslant 1$, where $\theta_0=\arcsin\big(\sqrt{3}/(2B)\big)$.
In the region $B\leqslant\sqrt{3}/2$, the potential $\mathcal{V}=3V_0$ is a constant, and $\nu_0=6V_0$, $\nu_{6i}=0$ for $i\geqslant1$.
So $\mathcal{U}$ is diagonal in this region and $f$ can be analytically determined:
\begin{equation}\label{f-ana}
f_{i}=\begin{cases}
c_{i}\sqrt{B}I_{6i-3}(2\sqrt{V_0}B), & V_0>0,\\
c_{i}'\sqrt{B}J_{6i-3}(2\sqrt{-V_0}B), & V_0<0,
\end{cases}
\end{equation}
where $I_j$ is the modified Bessel function of the first kind, and $J_j$ is the Bessel function of the first kind.
At $0<B\leqslant\sqrt{3}/2$, $F$ is diagonal and its elements can be easily calculated by using \Eq{f-ana}.
Equation~\eqref{eq-F} is a first-order ordinary differential equation, and the initial value of $F$ at $B=\sqrt{3}/2$ is known, so we can compute $F$ numerically at any $B>\sqrt{3}/2$.
At large $B$, we use the 111 and the 21 expansions of the wave function in \Eq{111} and \Eq{21mod} to determine $f_1,f_2,f_{3},...$ approximately. By solving \Eq{def-of-F}, we get the numerical value of the scattering hypervolume $D_F$.
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{square_Df_Born.pdf}
\caption{\label{fig:square-born} $D_F$ for weak square-barrier or square-well potentials. The blue solid line shows the numerical results and the red dashed line shows the Born approximation.}
\end{figure}
Fig.~\ref{fig:square-born} shows our results for $D_F$ at small $V_0$.
According to \Eq{alpha} we have
\begin{align}
\alpha_n=\frac{V_0}{n+2} r_0^n.
\end{align}
If $V_0$ is small, by using \Eq{DF-Born} we get
\begin{equation}\label{DF-Born-square}
D_F=\frac{1}{15}V_0^2r_0^6+O(V_0^3).
\end{equation}
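The coefficient $1/15$ follows directly from \Eq{DF-Born}; a short symbolic check (illustrative, with $U=0$) is:

\begin{verbatim}
import sympy as sp

V0, r0 = sp.symbols('V_0 r_0', positive=True)
alpha = lambda n: V0*r0**n/(n + 2)   # Eq. (alpha) for the square barrier

DF_born = (25*alpha(3)**2 - 7*alpha(1)*alpha(5))/10
assert sp.simplify(DF_born - V0**2*r0**6/15) == 0
\end{verbatim}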
The blue solid line in Fig.~\ref{fig:square-born} shows the numerical results and the red dashed line corresponds to the Born approximation $D_F\simeq\frac{1}{15}V_0^2r_0^6$.
The numerical results agree quite well with the Born approximation for small values of $V_0$.
Fig.~\ref{fig:square-Df-rep} shows the full curve of $D_F$ for repulsive $V_0$.
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{square_Df_rep.pdf}
\caption{\label{fig:square-Df-rep} The value of $D_F$ for the repulsive square-barrier potential defined in \Eq{squarewell}.}
\end{figure}
$D_F$ increases with $V_0$ for $0<V_0<V_c$, where $V_c\simeq1.325$; it reaches a maximum of about $0.0099\,r_0^6$ at $V_0=V_c$ and decreases for $V_0>V_c$.
In the following we prove that, for square-barrier potentials, $D_F$ approaches zero as $V_0\rightarrow +\infty$, scaling as $1/V_0^3$ at large $V_0$.
If $V_0= +\infty$, the square-barrier potential becomes the hard-core potential. In this case,
the wave function goes to zero in the blue banded region in Fig.~\ref{fig:2Dplane}. We use the new coordinates $\vect{B}'=(\frac{\sqrt{3}}{2} (s-2),R)$.
$\Psi(\vect B)\equiv\widetilde{\Psi}(\vect{B}')$ satisfies the Laplace equation in the sector area, and $\widetilde{\Psi}(\vect{B}')$ satisfies the following boundary conditions:
\begin{equation}
\widetilde{\Psi}\left(B',\Theta'=-\frac{\pi}{6}\right)=\widetilde{\Psi}\left(B',\Theta'=\frac{\pi}{6}\right)=0,
\end{equation}
where $B',\Theta'$ are defined via
$
\frac{\sqrt{3}}{2}(s-2)=B' \cos \Theta',~~R=B'\sin\Theta'.
$
In the domain $-\pi/6<\Theta'<\pi/6$, one can easily find the analytical solution:
\begin{align}\label{HSsolution}
\widetilde{\Psi}(\vect{B}')=\frac{2}{3\sqrt{3}}B'^3 \cos 3\Theta'.
\end{align}
If we change back to the coordinates $\vect{B}=(\frac{\sqrt{3}}{2}s,R)$, we get
\begin{equation}\label{HSsolution1}
\Psi=\frac{2}{3\sqrt{3}} B^3\cos3\theta_2-2B^2\cos 2\theta_2+2\sqrt{3}B\cos\theta_2-2.
\end{equation}
Note that at $V_0=+\infty$ \Eq{HSsolution1} is the exact solution and is not just the asymptotic expansion of $\Psi$.
On the other hand, the 111 expansion in this area is simplified as \Eq{111-B-th}.
For the hard-core potential with $r_0=1$, we have $a_p=1$, $r_p=2/3$. One can check that \Eq{HSsolution1} agrees with \Eq{111-B-th} if $D_F=0$.
So $D_F=0$ for the hard-core potential, and this is consistent with our numerical results
for the values of $D_F$ for the square-barrier potential at $V_0\to\infty$.
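The algebra behind \Eq{HSsolution1} is just the coordinate shift $x\to x-\sqrt{3}$, where $x=B\cos\theta_2$ and $y=B\sin\theta_2$ (here $r_0=1$); the identity can be checked symbolically (an illustrative verification, not part of the original computation):

\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', real=True)  # x = B cos(theta_2), y = B sin(theta_2)
c = 2/(3*sp.sqrt(3))

# B^n cos(n theta) = Re[(x + i y)^n], so B^3 cos(3 theta) = x^3 - 3 x y^2
# and B^2 cos(2 theta) = x^2 - y^2.  With r_0 = 1: x' = x - sqrt(3), y' = y.
lhs = c*((x - sp.sqrt(3))**3 - 3*(x - sp.sqrt(3))*y**2)  # c B'^3 cos(3 Theta')
rhs = c*(x**3 - 3*x*y**2) - 2*(x**2 - y**2) + 2*sp.sqrt(3)*x - 2

assert sp.expand(lhs - rhs) == 0
\end{verbatim}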
If $V_0$ is large but finite, we also get an expansion in powers of $1/\sqrt{V_0}$:
\begin{widetext}
\begin{align}
&\widetilde{\Psi}(\vect{B}')=\frac{2}{3\sqrt{3}}B'^3 \cos 3\theta'+\frac{2}{\sqrt{V_0}}B'^2\cos 2\theta'+\frac{2\sqrt{3}}{V_0}B'\cos\theta'+\frac{9}{4V_0^{3/2}}+\O(B'^{-3})\nonumber\\
&=\frac{2}{3\sqrt{3}} B^3\cos3\theta_2-2B^2\cos 2\theta_2\left( 1-\frac{1}{\sqrt{V_0}}\right) +2\sqrt{3}B\cos\theta_2\left( 1-\frac{1}{\sqrt{V_0}}\right)^2-\left( 2-\frac{6}{\sqrt{V_0}}+\frac{6}{V_0}-\frac{9}{4V_0^{3/2}}\right) +\O(B^{-3}).\label{largeV0}
\end{align}
\end{widetext}
If $1/\sqrt{V_0}\ll B'\ll r_0$, the wave function $\widetilde{\Psi}(\vect{B}')$ satisfies a scaling law: if $\widetilde{\Psi}(\vect{B}')$ is the solution at interaction strength $V_0$, then $\widetilde{\Psi}(\sqrt{\lambda}\vect{B}')$ is the solution at interaction strength $\lambda V_0$.
According to this scaling law, the next term in the first line of \Eq{largeV0} should take the form $1/(V_0^3 B'^3)$, which implies that $D_F$ scales as $V_0^{-3}$ at large $V_0$:
\begin{equation}
D_F=\frac{\mathcal{C}}{V_0^3}+o(V_0^{-3}).
\end{equation}
Fig.~\ref{fig:DF-V03} shows that our numerical results agree with this.
\begin{figure}[htb]
\subfloat[]
{
\includegraphics[width=0.25\textwidth]{DF-V03.pdf}
}
\subfloat[]
{
\includegraphics[width=0.25\textwidth]{DFV03.pdf}
}
\caption{(a) $D_F$ vs $1/V_0^3$ for the repulsive square-barrier potentials. (b) $D_F V_0^3$ vs $V_0$ for these potentials.
The subfigures (a) and (b) both show that $D_F$ is proportional to $1/V_0^3$ if $V_0$ is large.}
\label{fig:DF-V03}
\end{figure}
From the numerical results we get $\mathcal{C}\simeq 0.79$.
\subsubsection{Gaussian potential}
In this subsection we consider the Gaussian potential
\begin{equation}
V(s)=V_0 \frac{\hbar^2}{m r_0^2} e^{-s^2/r_0^2},
\end{equation}
where the strength $V_0$ can be positive or negative.
According to \Eq{alpha} we get
\begin{align}
\alpha_n=\frac{1}{2}\Gamma\left(1+\frac{n}{2}\right)V_0 r_0^n.
\end{align}
If $V_0$ is small, by using \Eq{DF-Born} we get the Born approximation of $D_F$:
\begin{equation}\label{DF-Born-gaussian}
D_F=\frac{3\pi}{16}V_0^2 r_0^6+O(V_0^3).
\end{equation}
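This coefficient again follows from \Eq{DF-Born}, now with the moments of \Eq{alpha} evaluated for the Gaussian profile; a short symbolic check (illustrative, in units $\hbar=m=1$, with $U=0$) is:

\begin{verbatim}
import sympy as sp

V0, r0, s = sp.symbols('V_0 r_0 s', positive=True)
V = V0/r0**2*sp.exp(-s**2/r0**2)      # Gaussian potential, hbar = m = 1

alpha = lambda n: sp.integrate(s**(n + 1)*V, (s, 0, sp.oo))  # Eq. (alpha)

DF_born = (25*alpha(3)**2 - 7*alpha(1)*alpha(5))/10
assert sp.simplify(DF_born - 3*sp.pi*V0**2*r0**6/16) == 0
\end{verbatim}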
To numerically compute the value of $D_F$, we also Fourier-expand the wave function and the potential function.
The Fourier components of $\mathcal{V}$ for the Gaussian potential can be calculated analytically:
\begin{equation}
\nu_{6i}(B)= (-1)^i 6V_0 e^{-2B^2/3}I_{3i}\Big(\frac{2}{3}B^2\Big),
\end{equation}
where $i=0,1,2,\dots$.
At small $B$, unlike the case of the square-well potential, we cannot obtain an analytical expression for the matrix $F$ for the Gaussian potential. However, we can solve \Eq{SEvector} to find an expansion of $f_{i}$ in powers of $B$, and thus an approximate expression for the matrix $F$ at small $B$. The remaining algorithm is similar to the case of the square-well potential, and we get the numerical values of $D_F$ for the repulsive and the attractive Gaussian potentials.
Fig.~\ref{fig:gaussian-born} shows our numerical results of $D_F$ at small $V_0$. We see that the results are consistent with the Born approximation.
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{Gaussian_Df_Born.pdf}
\caption{\label{fig:gaussian-born} The values of $D_F$ for weak Gaussian potentials. The blue solid line shows the numerical results and the red dashed line shows the Born approximation.}
\end{figure}
Fig.~\ref{fig:gaussian-rep} shows the values of $D_F$ for repulsive Gaussian potentials. $D_F/r_0^6$ has a maximum of about $0.144$ at $V_0\simeq1.91$ and decreases at larger $V_0$; the rate of this decrease is slower than in the case of the square-barrier potentials.
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{Gaussian_Df_rep.pdf}
\caption{\label{fig:gaussian-rep} The values of $D_F$ for repulsive Gaussian potentials.}
\end{figure}
Fig.~\ref{fig:gaussian-attr} shows our results of $D_F$ for attractive Gaussian potentials.
If the potential strength is weak, there is no two-body bound state. As the depth of the potential increases, two-body bound states appear one by one. At $V_0=V_{c1}\simeq-2.684$ the first $p$-wave resonance occurs, and the first two-body bound state appears. When $V_0$ is close to $V_{c1}$ we find an approximate formula for $a_p/r_0$:
\begin{equation}\label{ap-resonance}
a_p/r_0\simeq-3.007/(V_0-V_{c1})+1.041.
\end{equation}
At $V_0=V_{c2}\simeq-17.796$, the second $p$-wave resonance occurs, and the second two-body bound state appears.
These resonances are indicated by the vertical black dot-dashed lines in Fig.~\ref{fig:gaussian-attr}.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{DF-Gaussian-attr.pdf}
\caption{\label{fig:gaussian-attr} The values of $D_F$ for attractive Gaussian potentials. The red dots represent the real part of $D_F/r_0^6$ and the blue triangles represent the imaginary part of $D_F/r_0^6$. The vertical dashed lines show the critical strengths of the Gaussian potential at which the $p$-wave resonances occur. }
\end{figure*}
At $V_{c1}<V_0<0$ there is no two-body bound state and $D_F$ is real. When $V_0$ approaches $V_{c1}$ from above, $D_F$ diverges.
To understand the behavior of $D_F$ when $V_0$ is close to $V_{c1}$, we plot $\ln(D_F/r_0^6)$ vs. $\ln(V_0-V_{c1})$ for $V_0$ slightly greater than $V_{c1}$ in Fig.~\ref{fig:fitB}. The relationship appears linear; doing a linear fit, we find that $D_F$ is proportional to $(V_0-V_{c1})^{-6}$,
and we derive an approximate formula: $D_F\simeq 0.74 a_p^6$ when $V_0$ is slightly greater than $V_{c1}$.
\begin{figure*}[htb]
\subfloat[]
{
\includegraphics[width=0.5\textwidth]{fitB.pdf}
\label{fig:fitB}
}
\subfloat[]
{
\includegraphics[width=0.5\textwidth]{fitA.pdf}
\label{fig:fitA}
}
\caption{
(a) $\ln(D_F/r_0^6)$ vs. $\ln(V_0-V_{c1})$ when $V_0$ is slightly greater than $V_{c1}$.
Doing a linear fit in this double-log plot, we find that
$D_F\simeq\frac{542.9}{(V_0-V_{c1})^{5.963\pm0.002}}\simeq0.74 a_p^6$.
(b)
$\ln[\re(D_F/r_0^6)]$ (red squares) and $\ln[-\im(D_F/r_0^6)]$ (blue dots)
plotted against $\ln(V_{c1}-V_0)$ when $V_0$ is slightly less than $V_{c1}$. Doing linear fits in these double-log plots, we find that
$ \textrm{Re} D_F\simeq\frac{796}{(V_{c1}-V_0)^{4.98\pm0.36}}$ and
$\textrm{Im} D_F\simeq-\frac{337}{(V_{c1}-V_0)^{6.00\pm0.02}}\simeq-0.46 a_p^6$
, where we have used the approximate formula \Eq{ap-resonance}.
}
\end{figure*}
At $V_{c2}<V_0<V_{c1}$ there is one two-body $p$-wave bound state, and in this case $D_F$ gains a negative imaginary part, $D_F=\textrm{Re} D_F+i \textrm{Im} D_F$.
The absolute value of $\textrm{Im} D_F$ is smaller than the absolute value of $\re D_F$ for most values of $V_0$ in this range.
When $V_0$ approaches $V_{c1}$ from below, $\re D_F$ and $\im D_F$ both diverge.
We plot $\ln[\re(D_F/r_0^6)]$ and $\ln[-\im(D_F/r_0^6)]$ vs. $\ln(V_{c1}-V_0)$ when $V_0$ is slightly less than $V_{c1}$, in Fig.~\ref{fig:fitA}. We again see approximately linear relationships. Doing linear fits, we find
that $\re D_F$ seems to be proportional to $(V_{c1}-V_0)^{-5}$ but $\im D_F$ is perhaps proportional to $(V_{c1}-V_0)^{-6}$, and we get an approximate formula: $\im D_F\simeq-0.46 a_p^6$ when $V_0$ is slightly less than $V_{c1}$.
According to the results in Sec.~\ref{sec:recombination}, the divergence of $\im D_F$ indicates that a one-dimensional spin-polarized Fermi gas will suffer strong three-body recombination losses near such resonances.
If $V_0$ is slightly less than $V_{c1}$, $a_p$ is positive and very large, and the two-body bound state is very shallow. The energy of the shallow bound state satisfies the universal formula:
\begin{equation}\label{E2universal}
E_2\simeq-\hbar^2/ma_p^2.
\end{equation}
According to the Bose-Fermi duality \cite{Girardeau1960,Fermion-Boson-duality}, the properties of the one-dimensional Fermi system with a large and positive scattering length are similar to those of a weakly attractive bosonic system, which can be described by the Lieb-Liniger model \cite{LiebLiniger1963} with the repulsive contact interaction replaced by an attractive one; this model can be exactly solved by using the Bethe ansatz \cite{Bethe1931}. Ref.~\cite{McGuire1964} shows that such a bosonic system has a three-body bound state with energy $E_3=4E_2$. Mapping this bosonic system to the fermionic system with two-body $p$-wave scattering length $a_p\gg r_0$,
we infer a three-body bound state with energy
\begin{equation}\label{E3universal}
E_3\simeq-4\hbar^2/ma_{p}^2.
\end{equation}
When $V_0$ is slightly less than $V_{c1}$, we indeed find that a three-body bound state appears. We have numerically solved the Schr\"{o}dinger equations to find the energies of the two-body and the three-body bound states with Gaussian pairwise interactions.
These energies are plotted in Fig.~\ref{fig:3bodybound}. We find that when $V_0$ is less than but close to $V_{c1}$,
these bound state energies are indeed close to the predictions of the aforementioned universal formulas.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{3bodybound.pdf}
\caption{\label{fig:3bodybound}The energies of bound states with Gaussian pairwise interactions vs. the interaction strength $V_0$. The blue solid line shows the two-body bound state energy, and the red solid line shows the three-body bound state energy.
The dashed lines correspond to the universal formulas in Eqs.~\eqref{E2universal} and \eqref{E3universal}.}
\end{figure}
\section{energy shifts due to $D_F$}\label{energy}
We consider three identical spin-polarized fermions on a line with length $L$, and impose the periodic boundary condition on the wave function: $\Psi(x_1+L,x_2,x_3)=\Psi(x_1,x_2,x_3)$. Consider an energy eigenstate in which the momenta of the fermions are $k_1$, $k_2$ and $k_3$ in the absence of interactions.
When we introduce interactions that give rise to a nonzero $D_F$, the shift of the energy eigenvalue due to a nonzero $D_F$ is
\begin{equation}\label{energy-3fermion}
\mathcal{E}_{k_1k_2k_3}=\frac{\hbar^2D_F}{12m L^2}(k_1-k_2)^2(k_2-k_3)^2(k_3-k_1)^2.
\end{equation}
See Appendix~\ref{sec:energy} for the details of the derivation of this formula.
In addition, if there are two-body interactions, in general the shift of the energy of the three fermions will also contain terms due to the two-body parameters, including $a_p$, $r_p$, etc.; nevertheless, the shift due to $D_F$ in \Eq{energy-3fermion} is still valid. We can also calculate the leading-order shift of the three-body energy due to $a_p$ by using a method similar to the one used in Appendix~\ref{sec:energy}:
\begin{equation}
\mathcal{E}_{k_1k_2k_3}^\text{2-body}=\frac{\hbar^2a_p}{mL}[(k_1-k_2)^2+(k_2-k_3)^2+(k_3-k_1)^2].
\end{equation}
We then generalize the energy shift to $N$ fermions in the periodic length $L$. The number density of the fermions is $n=N/L$.
We define the Fermi wave number $k_F=\pi n$, the Fermi energy $\epsilon_F=\hbar^2 k_F^2/2m$, and the Fermi temperature $T_F=\epsilon_F/k_B$, where $k_B$ is the Boltzmann constant.
\subsection{Adiabatic shifts of energy and pressure in the thermodynamic limit due to $D_F$}
Starting from a many-body state at a finite temperature $T$, if we introduce a nonzero $D_F$ \emph{adiabatically}, the energy shift at first
order in $D_F$ is equal to the sum of the contributions from all the triples of fermions, namely
\begin{equation}
\Delta E=\frac{1}{6}\sum_{k_1k_2k_3}\mathcal{E}_{k_1k_2k_3}\, n_{k_1}n_{k_2}n_{k_3},
\end{equation}
where $n_{k}=(e^{\beta(\epsilon_{k}-\mu)}+1)^{-1}$ is the Fermi-Dirac distribution function, $\beta=1/k_B T$, $\epsilon_{k}=\hbar^2 k^2/2m$ is the kinetic energy of a fermion, and $\mu$ is the chemical potential. The summation over $k$ can be replaced by a continuous integral $\sum_{k}=L \int dk/(2\pi)$ in the thermodynamic limit. Carrying out the integral, we get
\begin{align}
\Delta E(T)&=\frac{N\hbar^2 D_F}{768\sqrt{\pi} m}k_F^8\nonumber\\
&\times\widetilde{T}^{9/2}\left[ 3 f_{1/2}(z)f_{3/2}(z)f_{5/2}(z)-f_{3/2}^3(z)\right] ,
\end{align}
where $\widetilde{T}=T/T_F$, $z=e^{\beta \mu}$, and the function $f_{\nu}(z)$ is defined as
\begin{equation}
f_{\nu}(z)\equiv -\mathrm{Li}_{\nu}(-z)=\frac{2}{\Gamma(\nu)}\int_0^{\infty}\!\! dx~ \frac{x^{2\nu-1}}{1+e^{x^2}/z},
\end{equation}
where $\mathrm{Li}_{\nu}$ is the polylogarithm function.
The number of fermions satisfies
$
N=\sum_{k} \frac{1}{e^{\beta (\epsilon_{k}-\mu)}+1},
$
and this leads to the equation of the chemical potential $\mu$:
\begin{equation}
\frac{2}{\sqrt{\pi}}=\sqrt{\widetilde{T}} ~f_{1/2}(e^{\widetilde{\mu}/\widetilde{T}}),
\end{equation}
where $\widetilde{\mu}=\mu/\epsilon_F$.
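For numerical evaluation, $f_\nu$ is available through the polylogarithm. The sketch below is purely illustrative (the function names are our own, not from any cited code); it solves the number equation for $\widetilde{\mu}$ at a given $\widetilde{T}$ and evaluates the bracket entering $\Delta E(T)$:

\begin{verbatim}
import mpmath as mp

f = lambda nu, z: -mp.polylog(nu, -z)        # f_nu(z) = -Li_nu(-z)

def mu_tilde(T):
    # Solve 2/sqrt(pi) = sqrt(T) f_{1/2}(e^{mu/T}) for mu = mu/epsilon_F,
    # with T = T/T_F; the starting guess 1.0 is arbitrary.
    eq = lambda mu: mp.sqrt(T)*f(0.5, mp.exp(mu/T)) - 2/mp.sqrt(mp.pi)
    return mp.findroot(eq, 1.0)

def bracket(T):
    # The factor 3 f_{1/2} f_{3/2} f_{5/2} - f_{3/2}^3 entering Delta E(T).
    z = mp.exp(mu_tilde(T)/T)
    return 3*f(0.5, z)*f(1.5, z)*f(2.5, z) - f(1.5, z)**3

# Consistency check: T^{9/2} * bracket(T) -> 768/(405 pi^{3/2}) ~ 0.3406
# as T -> 0, reproducing the low-temperature limit quoted below.
print(0.05**4.5 * bracket(0.05))
\end{verbatim}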
In the low temperature limit, namely $T\ll T_F$,
\begin{equation}
\Delta E(T)=\frac{N\hbar^2 D_F}{405\pi^2 m}k_F^8\left[ 1+\frac{3}{2}\pi^2 \widetilde{T}^2
+\O(\widetilde{T}^4)\right].
\end{equation}
In an intermediate temperature regime, $T_F\ll T\ll T_e$,
\begin{equation}
\Delta E(T)=\frac{N\hbar^2 D_F}{48\pi^2 m}k_F^{8}\widetilde{T}^3
\left[1+\frac{9}{4 \sqrt{2 \pi \widetilde{T}}}+\O(\widetilde{T}^{-1})\right],
\end{equation}
where $T_e=\frac{\hbar^2}{2mr_e^2k_B}$.
If $T$ is comparable to or higher than $T_e$, the de Broglie wave lengths of the fermions will be comparable to or shorter than the range $r_e$ of interparticle interaction potentials, and we can no longer use the effective parameter $D_F$ to describe the system.
See Fig. \ref{ep1} for $\Delta E$ as a function of the initial temperature.
\begin{figure*}[htb]
\centering
\subfloat[Energy]
{ \label{ep1}
\includegraphics[width=0.5\textwidth]{energy-new.pdf}
}
\subfloat[Pressure]
{ \label{ep2}
\includegraphics[width=0.5\textwidth]{pressure-new.pdf}
}
\caption{The shifts of energy (a) and pressure (b) caused by the adiabatic (red solid lines) or isothermal (blue dashed lines) introduction of $D_F$ vs the temperature $T$. At $T\simeq0.2268T_F$, the isothermal energy shift $\Delta E$ changes sign.}
\label{fig:energy and pressure}
\end{figure*}
The pressure of the spin-polarized Fermi gas changes by the following amount due to the adiabatic introduction of $D_F$:
\begin{equation}
\Delta p=-\left(\frac{\partial \Delta E}{\partial L}\right)_{S,N}=\frac{8\Delta E}{L},\label{pressure adia}
\end{equation}
where the subscripts $S,N$ prescribe that we keep the entropy $S$ and the particle number $N$ fixed when taking the partial derivative. See Fig. \ref{ep2} for $\Delta p$ as a function of the initial temperature.
\subsection{Isothermal shifts of energy and pressure in the thermodynamic limit due to $D_F$}
If the interaction is introduced adiabatically, the temperature will increase (if $D_F>0$) or decrease (if $D_F<0$). The change of temperature is
\begin{equation}
\Delta T=\left(\frac{\partial \Delta E}{\partial S}\right)_{N,L}.
\end{equation}
So if we introduce $D_F$ isothermally, the energy shift $\Delta E'$ should be
\begin{equation}
\Delta E'=\Delta E-C \Delta T=\left(1-T \frac{\partial}{\partial T}\right)\Delta E,
\end{equation}
where $C$ is the heat capacity of the noninteracting Fermi gas at constant volume.
In the low temperature limit, $T\ll T_F$,
\begin{equation}\label{DeltaE'lowT}
\Delta E'(T)=\frac{N\hbar^2 D_F k_F^8}{405\pi^2 m}\left[ 1-\frac{3}{2}\pi^2 \widetilde{T}^2
+\O(\widetilde{T}^4)\right].
\end{equation}
In an intermediate temperature regime, $T_F\ll T\ll T_e$,
\begin{equation}\label{DeltaE'highT}
\Delta E'(T)=\frac{N\hbar^2 D_F}{48\pi^2 m}k_F^{8}\widetilde{T}^3
\left[-2-\frac{27}{8 \sqrt{2 \pi \widetilde{T}}}+\O(\widetilde{T}^{-1})\right].
\end{equation}
According to Eqs.~\eqref{DeltaE'lowT} and \eqref{DeltaE'highT}, $\Delta E'$ changes sign as we increase the temperature.
Therefore, there is a critical temperature $T_c$ at which $\Delta E'=0$. We find
\begin{equation}
T_c \simeq 0.2268 T_F.
\end{equation}
The pressure of the spin-polarized Fermi gas changes by the following amount due to the isothermal introduction of $D_F$:
\begin{equation}
\Delta p'=\Delta p-\frac{2 C\Delta T}{L}=
\left(1-\frac{1}{4}T \frac{\partial}{\partial T}\right)\Delta p.
\end{equation}
In the low temperature limit, $T\ll T_F$,
\begin{equation}
\Delta p'=\frac{8n\hbar^2 D_F}{405\pi^2 m}k_F^{8}
\left[1+\frac{3}{4}\pi^2 \widetilde{T}^2+O(\widetilde{T}^4)\right].
\end{equation}
In an intermediate temperature regime, $T_F\ll T\ll T_e$,
\begin{equation}
\Delta p'=\frac{n\hbar^2 D_F}{6\pi^2 m}k_F^{8}\widetilde{T}^3
\left[\frac{1}{4}+\frac{27}{32\sqrt{2\pi\widetilde{T}}}+O(\widetilde{T}^{-1})\right].
\end{equation}
The shifts of energy and pressure are plotted as functions of temperature in Fig.~\ref{ep1} and Fig.~\ref{ep2} respectively.
\section{The Three-body recombination rate\label{sec:recombination}}
If the collision of the three particles is purely elastic, $D_F$ is a real number.
But if the two-body interaction supports bound states, then the three-body collisions are usually not purely elastic, and the three-body recombination may occur. In this case $D_F$ becomes complex, and the three-body recombination rate constant is proportional to the imaginary part of $D_F$ \cite{zhu2017threebody,braaten2006universality}.
Within a short time $\Delta t$, the probability that no recombination occurs is $\mathrm{exp}(-2|\mathrm{Im} {E}|\Delta t/\hbar)\simeq 1-2|\mathrm{Im} {E}|\Delta t/\hbar$. Then the probability for one recombination is $2|\mathrm{Im} {E}|\Delta t/\hbar$.
Since each recombination event causes the loss of three low-energy fermions, the change of the number of remaining low-energy fermions in the short time $dt$ is
\begin{equation}
d N=-\frac{1}{6}\sum_{k_1 k_2 k_3}3\frac{2d t}{\hbar}|\mathrm{Im}\mathcal{E}_{k_1k_2k_3}| n_{k_1}n_{k_2}n_{k_3}.
\end{equation}
This leads to
\begin{equation}
\frac{d n}{d t}=-L_3 n^{3},
\end{equation}
and the coefficient $L_3$ is
\begin{align}
L_3&=\frac{\pi^{3/2}}{128}\frac{\hbar |\mathrm{Im}D_F|}{m}k_F^6\nonumber\\
&\times\widetilde{T}^{9/2}\left[ 3 f_{1/2}(z)f_{3/2}(z)f_{5/2}(z)-f_{3/2}^3(z)\right].
\end{align}
$L_3$ depends on the density $n$ and the temperature $T$.
In the low temperature limit, $T\ll T_F$,
\begin{equation}
L_3\simeq \frac{2}{135}\left(1+\frac{3\pi^2}{2}\widetilde{T}^2\right)\frac{\hbar|\mathrm{Im}D_F|}{m}k_F^6.
\end{equation}
In particular, at $T=0$,
\begin{equation}
L_3=\frac{2\hbar|\mathrm{Im} D_F|}{135m} k_F^6,
\end{equation}
and $L_3$ is proportional to $n^6$.
In an intermediate temperature regime, $T_F\ll T\ll T_e$, we find that
\begin{equation}\label{L3highT}
L_3\simeq \frac{m^2}{\hbar^5}|\mathrm{Im}D_F|(k_B T)^3,
\end{equation}
and $L_3$ is approximately proportional to $T^3$, which is consistent with the prediction in Ref. \cite{Esry2007}.
\section{Summary and Discussion}
We derived the asymptotic expansions of the three-body wave function $\Psi$ for identical spin-polarized fermions colliding at zero energy in one dimension, and defined the three-body scattering hypervolume $D_F$. With this work, the scattering hypervolumes of spin-polarized fermions have now been defined in 3D \cite{wang2021fermion3D}, 2D \cite{wang2022fermion2D}, and 1D.
For weak interaction potentials, we derived an approximate formula for $D_F$ by using the Born expansion.
For stronger interactions, one can solve the three-body Schr\"{o}dinger equation numerically at zero energy and match the resultant wave function with the asymptotic expansion formulas we have derived in this paper to numerically compute the values of $D_F$.
We did such numerical calculations for the square-barrier, square-well, and Gaussian potentials.
We considered three fermions on a line with periodic boundary conditions
and derived the shifts of their energy eigenvalues due to a nonzero $D_F$, and then considered the dilute spin-polarized Fermi gas in 1D and derived the shifts of its energy and pressure due to a nonzero $D_F$.
Finally, we studied the dilute spin-polarized atomic Fermi gas in 1D with interaction potentials that support two-body bound states, for which we have three-body recombination processes and $D_F$ has nonzero imaginary part,
and we derived formulas for the three-body recombination rate constant $L_3$ in terms of the imaginary part of $D_F$ and the temperature and density of the Fermi gas.
One can similarly define the three-body scattering hypervolumes for identical bosons or for distinguishable particles in 1D and study their physical implications.
For ultracold atoms, one can use an optical lattice to confine them in quasi-1D; the van der Waals range of the interatomic potential
is usually much shorter than the radial confinement length. One can then solve the three-body problem in three-dimensional space to numerically
determine the one-dimensional scattering hypervolume of the three atoms.
\begin{acknowledgments}
This work was supported by the National Key R\&D Program of China (Grants No.~2019YFA0308403 and No.~2021YFA1400902).
\end{acknowledgments}
\section*{Abstract}
Particle tracking has several important applications for solute transport studies in aquifer systems. Travel time distributions at observation points, particle coordinates in time, and streamlines are practical results providing information on expected transport patterns and their interaction with boundary conditions. However, flow model complexity and the simultaneous displacement of multiple particle groups lead to a rapid increase of computational requirements. MODPATH is a particle tracking engine for MODFLOW models whose source code displays potential for parallel processing of particles. This article addresses the implementation of this feature with the OpenMP library. Two synthetic aquifer applications are employed for performance tests on a desktop computer with an increasing number of particles. Speed up analysis shows that dynamic thread scheduling is preferable for highly heterogeneous flows, providing processing adaptivity to the presence of slow particles. In simulations writing particle positions in time, thread-exclusive output files lead to higher speed up factors. Results show that above a threshold number of particles, simulation runtimes become independent of flow model grid complexity and are controlled by the large number of particles; in this regime, parallel processing reduces simulation runtimes for the particle tracking model MODPATH.
\section*{Introduction}
MODPATH is a particle tracking post-processing program for MODFLOW-based groundwater flow models developed by the U.S. Geological Survey \citep{Pollock2016}. The program computes three-dimensional advective displacement of particles following the semi-analytical solution of \citet{Pollock1988}. The methodology allows particles to move efficiently towards a cell interface in a single displacement step, after which they are transferred to a connected neighbor cell. This process continues until the particle encounters one of several possible stopping conditions, and the displacement of one particle is independent of the others.
Particle tracking has a variety of applications in studies of groundwater systems. Different simulation kinds provide the results needed for the construction of travel time distributions (TTD) at observation points and for the spatiotemporal characterization of particles and streamlines. These properties are often used to understand flow patterns in groundwater systems \citep[e.g.][]{Buxton1991}, delineate sources of water to discharging areas \citep[e.g.][]{Eberts2012}, determine time-dependent capture zones of wells \citep[e.g.][]{Bair1991,Riva2006}, and characterize the interplay between chemical reactions, dispersion and boundary conditions \citep[e.g.][]{Gusyev2014}, among others. Groundwater flow models with complex distributions of hydraulic properties or multiple boundary conditions might require the simultaneous displacement of a large number of particles and, consequently, high CPU demand. In reactive transport, different sets of particles are needed (one for each chemical substance), so computational requirements grow rapidly with increasing complexity of chemical systems. MODPATH source code \citep[see][]{Pollock2017} is written as a serial program, thus model runs are handled by a single CPU. Because particle displacements are independent, the program stages that process particles have clear potential for parallelization, which would allow all available computational resources to be used simultaneously, leading to faster results. In this regard, a prototype GPU implementation of the MODPATH algorithm has exemplified the potential for parallelization of this method \citep{Ji2020}.
The objective of this work is to incorporate distributed processing of particles into MODPATH, starting from the current public version of the program \citep{Pollock2017}. The source code is written in Fortran and parallelization is implemented using the OpenMP library \citep{Openmp2020}. The performance of the parallel implementation is compared against single-processor runs for two synthetic test cases: a two-dimensional heterogeneous aquifer with variable degree of heterogeneity, and a three-dimensional layered aquifer with multiple flow boundary conditions and different levels of grid complexity. Speed up is discussed for a variable number of particles and processing threads, also considering different OpenMP library configuration scenarios.
The paper is organized as follows. The Methods section presents a discussion of the MODPATH flow chart, followed by a summary of considerations regarding possible OpenMP configurations and the presentation of synthetic test cases. Results and discussion elaborate on the speed up quantification of endpoint and timeseries simulations of the test cases. The conclusions summarize the results of the implementation, revisiting the interaction between aquifer model characteristics and parallel library configuration.
\section*{Methods}
\subsection*{MODPATH}
The current version of the software is implemented in Fortran following an object-oriented programming paradigm and works as a module independent of MODFLOW \citep{Pollock2016}; thus the particle tracking process is decoupled from the groundwater flow model. It provides compatibility for models based on MODFLOW-2005 \citep{Harbaugh2005} and MODFLOW-6 \citep{Langevin2017}, for both structured and rectangular unstructured grids \citep[see][]{Pollock2015}.
MODPATH displaces particles through flow model cells until a stopping condition is met. These include, for example, encountering a boundary face, reaching the maximum tracking time, or landing in a cell with sink flows. The latter depends on the specific configuration of the particle tracking model because in some scenarios, mostly while considering flow models with coarse resolution, particles might still be allowed to move through these cells, in which case the cells are known as weak sinks. Different approaches for modeling weak sink cells have been addressed in the literature \citep{Visser2009, Abrams2012}. Since the displacement of one individual particle is independent of the others, there is significant potential for parallelization, in particular for particle tracking models with high computational requirements. For example, a transient groundwater flow model of an aquifer with a complex distribution of hydraulic properties, multiple flow boundary conditions, and simultaneous displacement of several particle groups is a scenario easily found in reactive transport analyses of hydrogeologic units.
\begin{figure}[ht!]
\centering
\includegraphics[scale=1]{figure_1_modpath_flowchart.pdf}
\caption{MODPATH simplified flow chart. Particles loop is parallelized with OpenMP.}
\label{fig:flowchart}
\end{figure}
The program initializes by reading a configuration file that specifies data related to the MODFLOW model and the particle tracking process (Fig. \ref{fig:flowchart}). Depending on the type of simulation (endpoint, timeseries, pathline) and configuration parameters, a simulation stop time $T_{stop}$ is determined. The program has three relevant nested loop structures. The outermost structure is the time step loop, which runs individually over the MODFLOW time steps defined from all of the flow stress periods. For each time step, MODPATH computes the maximum time $ts_{max}$ for the current flow conditions, and once this limit is reached, flow related arrays are updated. The middle loop structure is the tracking loop, which sets the maximum tracking time $t_{max}$ depending on the simulation type. This time is not necessarily the same as $ts_{max}$. For example, in a timeseries simulation it is possible that the user defines output time steps smaller than the time step of the flow model. In this case, the program will perform several tracking stages until $ts_{max}$ is reached. The innermost loop runs over particles, where each one is displaced individually; this is the stage with the highest potential for parallelization, as shown in the skeleton below. At this point, it is important to remark that the algorithm for particle displacement and the determination of stopping conditions remains exactly the same in the parallel implementation as in the original MODPATH. For more details about these procedures the reader is referred to \citet{Pollock1988,Pollock2016}.
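The nested structure is summarized by the following minimal runnable skeleton (a sketch only: all names, constants, and stub bodies are hypothetical simplifications, not MODPATH code):
\begin{verbatim}
! Skeleton of the three nested loops; only the innermost is parallelized.
program modpath_loops_sketch
  implicit none
  integer, parameter :: nparticles = 1000
  real(8) :: time, tstop, tsmax, tmax
  integer :: p
  time = 0d0; tstop = 100d0
  do while (time < tstop)              ! time step loop (per MODFLOW step)
     tsmax = min(time + 10d0, tstop)   ! stub: validity of current flow arrays
     do while (time < tsmax)           ! tracking loop
        tmax = min(time + 2d0, tsmax)  ! stub: next output time
        !$omp parallel do schedule(dynamic)
        do p = 1, nparticles           ! particles loop: parallel region
           call track_particle(p, tmax)
        end do
        !$omp end parallel do
        time = tmax
     end do
  end do
contains
  subroutine track_particle(p, tmax)
    integer, intent(in) :: p
    real(8), intent(in) :: tmax
    ! stub: displace particle p until tmax or a stopping condition
  end subroutine track_particle
end program modpath_loops_sketch
\end{verbatim}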
In MODPATH, timeseries simulations can be chosen to visualize the temporal evolution of particle clouds. This is often used to understand, for instance, the interaction between transport and flow patterns in groundwater systems. Usually, these types of simulations are configured in such a way that particle positions are written to output files at user-defined times or equispaced by a given time interval. These simulations are particularly challenging when considering multiprocessing because the writing to output files is performed within the particles loop. This and other related aspects of multiprocessing for the program are discussed in the following.
\subsection*{Multiprocessing}
Practical considerations have arisen from integrating multiprocessing into MODPATH due to the interactions between the program structure and the OpenMP library configurations. Before discussing these in detail, some concepts related to OpenMP need to be introduced. Specifically, sections of the code executed in parallel are known as \emph{parallel regions} and a single process within these sections is known as a \emph{thread} \citep{Openmp2020}. Throughout this article, multiprocessing and parallel processing are used interchangeably to indicate that tasks are distributed among several threads on a single computer, with the concept of threads closely related to computing cores.
In MODPATH, particles are displaced by a tracking engine, represented in the source code by an object class that manages the displacement procedure between flow cells and verifies stopping conditions. The current implementation of the tracking engine contains flow-related arrays as class properties, used for the initialization of flow cell velocities. In order to parallelize MODPATH, the tracking engine needs to be independent for each processing thread to avoid memory inconsistencies. That is, each thread has to manage a different particle history and current flow model cell. The OpenMP library allows defining thread-private objects by creating independent copies managed by each thread. Replication of the tracking engine in its current form, however, would mean that flow-related arrays are replicated unnecessarily inside the parallel region. Since flow models may be composed of millions of cells, this can be memory demanding. To overcome this problem, the implementation of particle multiprocessing introduces an intermediate object that handles flow model information, which is stored centrally. The latter is accessed from the tracking engine through a pointer, which can be easily replicated without significant impact on system memory.
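A minimal Fortran sketch of this design follows (type and member names are hypothetical, not MODPATH's actual classes):
\begin{verbatim}
! Sketch only: hypothetical types illustrating the pointer-based design.
module flow_data_sketch
  implicit none
  type :: FlowDataType                       ! one shared instance
     real(8), allocatable :: faceFlows(:,:)  ! large flow-model arrays
     real(8), allocatable :: heads(:)
  end type FlowDataType
  type :: TrackingEngineType                 ! replicated per thread
     type(FlowDataType), pointer :: flow => null() ! only the pointer is copied
     integer :: currentCell = 0              ! per-thread tracking state
  end type TrackingEngineType
end module flow_data_sketch
\end{verbatim}
Declaring the engine \texttt{firstprivate} then replicates only the pointer and the scalar tracking state per thread, while the arrays behind the pointer remain stored once.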
Similarly, some considerations should be taken into account for timeseries and pathline simulations. For these, MODPATH writes output records while inside the particles loop. As each processing thread operates independently, it is possible that more than one thread tries to write a record to an output unit, which may generate data corruption if not handled appropriately. For such situations, the OpenMP library provides thread-exclusive clauses that block a portion of code execution for other threads while one is performing such instructions. Nevertheless, when thread-exclusive operations are performed with high frequency, blocking clauses may reduce the efficiency of the parallel implementation. For comparison purposes, three different output procedures are discussed for timeseries simulations: (i) all threads write exclusively to a single output unit using the OpenMP \texttt{critical} directive; (ii) threads write to specific binary output units which are then consolidated into a plain-text file at the end of each timeseries step; and (iii) thread-exclusive plain-text output units are not consolidated. In the case of the consolidated protocol, the output of thread-specific units is collected after each timeseries step in order to preserve the sorting of time indexes.
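A sketch of protocols (i) and (iii) is given below; the record layout, unit handling, and names are hypothetical, and the output units are assumed to be opened elsewhere:
\begin{verbatim}
! Sketch of output protocols (i) and (iii); names are hypothetical.
subroutine write_ts_record(baseUnit, pid, t, x, y, protocol)
  use omp_lib
  implicit none
  integer, intent(in) :: baseUnit, pid, protocol
  real(8), intent(in) :: t, x, y
  if (protocol == 1) then
     ! (i) one shared unit: serialize so records cannot interleave
     !$omp critical (tsoutput)
     write(baseUnit,'(i10,3es16.8)') pid, t, x, y
     !$omp end critical (tsoutput)
  else
     ! (iii) thread-exclusive units: one file per thread, no locking
     write(baseUnit + omp_get_thread_num(),'(i10,3es16.8)') pid, t, x, y
  end if
end subroutine write_ts_record
\end{verbatim}
Protocol (ii) combines the second branch with a serial pass that copies the thread-specific records into one consolidated file after each timeseries step.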
Another important aspect to consider for multiprocessing is the workload distribution protocol while MODPATH is executing the parallel loop. This is known as \textit{scheduling} and specifies the distribution of loop indexes (particles) to be processed by each thread. In this article, two scheduling strategies are discussed. The first one is \texttt{static} scheduling with a balanced distribution of particles. In this case, the number of iterations is approximately the same for each thread, and the particles to be processed are assigned only once, before entering the parallel region. It is a good approach for models where the simulation time is approximately the same for all particles. The latter, however, is not necessarily true in real groundwater systems.
Particles are displaced sequentially between flow model cells, involving a cell initialization stage before computing displacements. This means that the computational time required for processing the trajectory of particles can be different, most notably, in scenarios with non-uniform flow where the length of streamlines is influenced by the spatial variability of hydraulic properties. To exemplify this, consider a timeseries simulation writing particles position at regular time intervals, using a model aquifer with non-uniform velocity distribution. A particle moving along a streamline of relatively high velocity will travel through a higher number of cells compared to slower particles for the same simulation time interval. Consequently, faster particles will require the initialization of a higher number of flow cells. Differences in loop time for each particle may be further influenced by outflow boundaries that also act as stop conditions.
Different processing times for particles motivate the analysis of \texttt{dynamic} scheduling, which distributes particles to be processed by each thread during runtime. A new particle is assigned after the completion of the current one. The advantage of this approach is that it reduces the likelihood of threads being idle. A thread processing a slow particle might, at the end of the simulation, have processed a smaller total number of particles than the others, without necessarily delaying the total simulation time. The possibility of an unbalanced distribution of processed particles gives the program some adaptivity to the particles' travel time distribution. This can favor some hydrogeological settings, for example a highly heterogeneous system. The distribution of particles during runtime in \texttt{dynamic} scheduling introduces some overhead in comparison to \texttt{static} scheduling. The impact that this effect might have on MODPATH simulation times is discussed in the following sections.
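The contrast can be reproduced with a toy program (a hypothetical workload, not MODPATH code; compile with \texttt{-fopenmp}) in which the cost per ``particle'' grows with its index, so \texttt{static} chunks are unbalanced while the \texttt{dynamic} hand-out adapts:
\begin{verbatim}
program scheduling_sketch
  use omp_lib
  implicit none
  integer, parameter :: np = 20000
  integer :: p
  real(8) :: t0, total(np)
  t0 = omp_get_wtime()
  !$omp parallel do schedule(dynamic)  ! change to schedule(static) to compare
  do p = 1, np
     total(p) = busy_work(p)
  end do
  !$omp end parallel do
  print '(a,f8.3,a)', 'elapsed: ', omp_get_wtime() - t0, ' s'
contains
  real(8) function busy_work(p)        ! cost proportional to index p
    integer, intent(in) :: p
    integer :: i
    busy_work = 0d0
    do i = 1, p
       busy_work = busy_work + sin(real(i,8))
    end do
  end function busy_work
end program scheduling_sketch
\end{verbatim}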
An additional aspect to consider for multiprocessing is that MODPATH stores the particles in a list, assigning particle indexes according to the position in which they appear in this list. In the serial implementation particles are processed sequentially, which means that output files will present sorted particle indexes. With the parallel implementation the list of particles is not necessarily processed in order, hence output files will not display sorted particle indexes, regardless of the scheduling protocol. Because particles are displaced independently, this does not cause issues in the context of the program; however, it leads to visible differences in output files, so attention should be paid in cases where post-processing tools rely on sorted particle indexes while reading output files from parallel model runs. At this point, it is important to remark that output results from single-thread and parallel runs of the models discussed in the following sections were verified to be equivalent in value. For reference, the OpenMP specification enabling the parallel particles loop is shown in Figure (\ref{fig:openmp}), and a reduced runnable sketch in the same spirit is given after the figure. By default, the OpenMP library considers all variables within parallel loops to be \texttt{shared} between threads, except the loop index. However, in some cases this may lead to memory inconsistencies, especially in complex loop structures. To avoid this issue, the memory state of all loop variables is forced to be explicitly defined with the clause \texttt{default(none)}. Depending on their functionality, memory states are declared as \texttt{shared}, thread \texttt{private}, or initialized with the same values for all threads and then private (\texttt{firstprivate}). Similarly, counters increasing their values within the parallel loop should be declared with a \texttt{reduction} clause for consistent results.
\begin{figure}
\centering
\includegraphics[scale=1]{figure_2_openmp_code.pdf}
\caption{OpenMP specification for parallel particles loop in MODPATH. Memory state for all loop variables is explicitly declared as \texttt{shared}, \texttt{private} or \texttt{firstprivate}. Counters are declared with a summation \texttt{reduction} clause.}
\label{fig:openmp}
\end{figure}
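A reduced, self-contained sketch in the spirit of Figure (\ref{fig:openmp}) follows; the loop body is a stand-in and the variable names are hypothetical:
\begin{verbatim}
program clauses_sketch
  implicit none
  integer :: n, status, activeCount
  real(8) :: particles(1000), offset
  particles = 1d0; offset = 0.5d0; activeCount = 0
  ! default(none) forces the memory state of every variable to be explicit
  !$omp parallel do schedule(dynamic) default(none) &
  !$omp& shared(particles) firstprivate(offset) &
  !$omp& private(status) reduction(+:activeCount)
  do n = 1, size(particles)
     status = merge(0, 1, particles(n) + offset > 1d0)  ! stand-in work
     if (status == 0) activeCount = activeCount + 1     ! reduced counter
  end do
  !$omp end parallel do
  print *, 'active particles:', activeCount
end program clauses_sketch
\end{verbatim}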
\subsection*{Synthetic test cases}
This section presents synthetic test cases aimed at evaluating the performance of the MODPATH parallelization under different scenarios. Simulations are performed on a desktop computer with an Intel\textsuperscript{\textregistered{}} Core\texttrademark{} i7-9700 CPU @ 3.00GHz. For each test case, the number of processing threads $N$ is varied in integer powers of 2 between $1$ and $8$, and the parallel MODPATH code is compiled with \texttt{[email protected]} \citep{Gfortran2020} on a \texttt{linux} system. CPU performance is compared considering the elapsed time reported by MODPATH, which measures the time employed by the outermost loop (Time Step Loop in Fig. \ref{fig:flowchart}), including the writing to output files in the case of timeseries and pathline runs, but not the reading of input configuration and MODFLOW files. In this regard, an additional variable that could influence overall performance is the kind of disk where data is being written (Solid State Drive, SSD; or Hard Disk Drive, HDD). This is not the case, however, for the time reported in endpoint simulations, because output files in that case are written after the outermost loop. For the simulations of this article, the observed speed up due to parallelization was in general similar while writing to SSD or HDD, with the exception of one scenario addressed in more detail in the results and discussion.
The first test case (TC1) consists of a two-dimensional heterogeneous aquifer under steady-state flow conditions (Fig. \ref{fig:streamlines}). The objective of this test is to evaluate the performance of parallel MODPATH under different scenarios of heterogeneity, and hence, spatial variability of flow velocities. For these purposes, a domain discretized in $1500\times300$ cells of size $\Delta_x=\Delta_y=1[m]$ is considered (Table \ref{tab:tc1}), with a spatial distribution of hydraulic conductivity constructed from one realization of a sequential Gaussian simulation, denoted $Y(\bold{x})$, characterized by an isotropic exponential variogram with correlation length $I_Y=10[m]$, zero mean, and unit variance. The hydraulic conductivity is determined according to $K(\bold{x}) = \exp\left(\sigma_{Y} Y(\bold{x}) \right)$. The degree of heterogeneity is therefore controlled by the variance $\sigma^2_{Y}$, while preserving the underlying conductivity patterns across test runs. Groundwater flow is induced by a unit mean pressure gradient oriented along the $x$-axis. Models are solved with MODFLOW-6 and convergence is verified for each simulation. Particles are injected near the aquifer inlet, uniformly distributed at $x_o = 10[m]$. The number of injected particles $N_p$ is varied between $10^3$ and $10^7$. A set of endpoint simulations for different values of $\sigma^2_Y$ is used to compare thread scheduling strategies. Simulations are configured to displace all particles until they reach the aquifer outlet.
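For illustration, the conductivity scaling can be sketched as follows; the Box--Muller noise below is only a stand-in for the correlated sequential Gaussian realization actually used in TC1:
\begin{verbatim}
! Sketch: lognormal conductivity K = exp(sigma_Y * Y) on the TC1 grid.
program tc1_conductivity
  implicit none
  integer, parameter :: nx = 1500, ny = 300
  real(8), allocatable :: y(:,:), k(:,:), u1(:,:), u2(:,:)
  real(8) :: twopi, sigma_y
  allocate(y(nx,ny), k(nx,ny), u1(nx,ny), u2(nx,ny))
  twopi = 8d0*atan(1d0)
  sigma_y = sqrt(2.5d0)                     ! e.g. sigma_Y^2 = 2.5
  call random_number(u1); call random_number(u2)
  y = sqrt(-2d0*log(1d0-u1))*cos(twopi*u2)  ! N(0,1) noise, uncorrelated
  k = exp(sigma_y*y)                        ! K = exp(sigma_Y * Y)
end program tc1_conductivity
\end{verbatim}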
\begin{figure}[]
\centering
\includegraphics[scale=1]{figure_3_tc1_streamlines.pdf}
\caption{Synthetic two-dimensional heterogeneous aquifer (TC1). Vertical black line near the origin marks particles injection and white lines show reference streamlines for $\sigma^2_Y=2.5$.}
\label{fig:streamlines}
\end{figure}
\begin{table}[ht!]
\centering
\caption{Parameters for synthetic test case TC1.}
{\small
\begin{tabularx}{8.2cm}{@{} l l c*{2}{Y}}
\toprule
\multicolumn{2}{l}{Parameter} & Value & Unit \\
\midrule
Cell size & $\Delta_{x,y}$ & $1$ & $m$ \\
Correlation length & $I_Y$ & $10$ & $m$ \\
Aquifer length & $L_x$ & $1500$ & $m$ \\
Aquifer width & $L_y$ & $300$ & $m$ \\
Inlet constant head & $C_h^{in}$ & 1510 & $m$ \\
Outlet constant head & $C_h^{out}$ & 10 & $m$ \\
Aquifer variance & $\sigma_Y^2$ & $0.1 - 5$ & - \\
\bottomrule
\end{tabularx}
}
\label{tab:tc1}
\end{table}
The second synthetic test case (TC2) considers a layered three-dimensional aquifer subject to several flow boundary conditions. The objective of this test case is to analyze the improvement due to parallelization in more practical and complete groundwater model scenarios, involving recharge, pumping wells, river, and drain boundary conditions. The problem is based on one of the examples provided in the current MODPATH repository \citep[{\texttt{ex03\_mf6} in}][]{Pollock2017}, with model parameters reinterpreted in SI units (Table \ref{tab:tc2}). The example has been modified with grid refinement near relevant boundary conditions to also consider an unstructured \textit{quadtree} grid (Fig. \ref{fig:complexdiagram}). The domain is composed of three homogeneous layers that represent two aquifers separated by an aquitard. The hydraulic conductivity of both the top and bottom aquifers presents vertical anisotropy ($K_{zz}<K_{xx},K_{yy}$), while the two horizontal values are considered to be the same. The original structured grid was composed of $N_{cpl}^{str}=420$ cells per layer; after refinement this number grows to $N_{cpl}^{usg}=1464$. The system is subjected to homogeneous recharge with an east-side river boundary condition. The problem analyzes the trajectories of particles released from the surface layer, influenced by two pumping wells acting individually on the top and bottom aquifer (W1 and W3, respectively). The default test case configuration allows particles to pass through weak sink cells. The MODPATH simulation considers timeseries runs with 5 and 30 time snapshots of particle locations (TS5 and TS30, respectively) to quantify performance improvements under different output conditions. Snapshots are homogeneously distributed along the timeseries simulation time. Unstructured grid simulations are also performed considering the same output conditions. As in the previous test case, the total number of injected particles is systematically increased. Particles are uniformly released from the uppermost face of four cells in the top aquifer, injected in 10 stages every 20 days of simulation time. The final injection time is small compared to the final timeseries time ($T_{ts}=60000 [d]$, $T_{inj}/T_{ts}\approx0.3\%$). This test considers 3 flow stress periods, although values for boundary conditions remain the same during the simulation: an initial steady state, a second transient stress period with 10 time steps, and a final stress period also at steady state. This means that for TC2, MODPATH will employ some simulation time in updating the flow model arrays, which is currently performed in serial. This reduces the potential speed up factor due to parallelization of the particles loop, because the total elapsed time used for comparison contains a higher proportion of mandatory serial operations than a fully steady-state flow model.
\begin{figure}[]
\centering
\includegraphics[scale=1]{figure_4_tc2_grid.pdf}
\caption{Synthetic three-dimensional layered aquifer (TC2). $a)$ Original structured grid, $b)$ modified unstructured grid. In both panels, scatter points indicate the particles release area.}
\label{fig:complexdiagram}
\end{figure}
\begin{table}[ht!]
\centering
\caption{Parameters for synthetic test case TC2. Layered properties are shown in curly brackets from top to bottom layer.}
{\small
\begin{tabularx}{8.2cm}{@{} l l c*{2}{Y}}
\toprule
\multicolumn{2}{l}{Parameter} & Value & Unit \\
\midrule
Flow rate W1 & $Q_{W1}$ & $-7.5\times10^4$ & $m^3/d$ \\
Flow rate W3 & $Q_{W3}$ & $-1\times10^5$ & $m^3/d$ \\
Hydr. cond. & $K_{xx,yy}$ & \{50 , 0.01, 200\} & $m/d$ \\
Hydr. cond. & $K_{zz}$ & \{10 , 0.01, 20 \} & $m/d$ \\
Layer height & $\Delta_z$ & \{130, 20 , 200\} & $m$ \\
Spec. yield & $S_y$ & 0.1 & - \\
Spec. storage & $S_s$ & $1\times10^{-4}$ & $1/m$ \\
Recharge & $q_R$ & $5\times10^{-3}$ & $m/d$ \\
River stage & $R_s$ & 320 & $m$ \\
River bot. & $R_b$ & 317 & $m$ \\
River cond. & $C_R$ & $1\times10^{5}$ & $m^2/d$ \\
Drain elev. & $D_e$ & 322.5 & $m$ \\
Drain cond. & $C_D$ & $1\times10^{5}$ & $m^2/d$ \\
\bottomrule
\end{tabularx}
}
\label{tab:tc2}
\end{table}
\section*{Results and discussion}
\subsection*{Thread scheduling}
Runtimes from TC1 simulations obtained with different thread scheduling strategies are used to determine the best protocol for parallelizing MODPATH under different scenarios of heterogeneity. Regions of low and high pixel values of the aquifer realization $Y(\bold{x})$ will have lower and higher hydraulic conductivity values, respectively, when increasing the aquifer variance $\sigma_Y^2$. For a fixed pressure gradient, this means that minimum and maximum flow velocities are also influenced by this parameter. MODPATH will displace all particles until they reach the aquifer outlet, so increasing the variance leads to longer simulation times, as shown in Figure (\ref{fig:omp:endpoint}$a$). This occurs consistently for both \texttt{static} and \texttt{dynamic} thread scheduling protocols. Simulation times with the latter are lower for all scenarios of heterogeneity, indicating that the time gains from dynamic scheduling compensate for the expected overhead of assigning particles during runtime.
Increasing the number of processing threads leads to significant differences in the runtimes obtained with the different threading protocols. The ratio between simulation times $T_{dyn}/T_{sta}$ for a fixed number of particles is shown in Figure (\ref{fig:omp:endpoint}$b$). In runs with 1 or 2 threads, the ratio is close to 1, and the impact of the scheduling strategy is not noticeable, even for large aquifer variances. However, simulations with 4 and 8 threads exhibit an important reduction of the computational time when \texttt{dynamic} scheduling is used. For higher values of aquifer variability and $N=8$, the simulated ratio indicates that \texttt{dynamic} simulations require up to $25\%$ less runtime than the corresponding \texttt{static} simulations for the variances investigated, illustrating the interplay between the threading protocol (a parallel library configuration) and aquifer heterogeneity on the total simulation time.
\begin{figure}[]
\centering
\includegraphics[scale=1]{figure_5_omp_endpoint.pdf}
\caption{Performance of endpoint simulations with $N_p=10^7$ particles as a function of aquifer variability and different number of threads $N$. $a)$ Computational time for \texttt{static} (solid, squares) and \texttt{dynamic} (dashed, circles) scheduling with 4 and 8 threads; $b)$ measured ratio $T_{dyn}/T_{sta}$ for all thread configurations. }
\label{fig:omp:endpoint}
\end{figure}
The speed up factor relating simulation times between single-thread and parallel simulations, $T_1/T_N$, provides a complementary picture of the parallelization performance. Results show (Fig. \ref{fig:omp:endpoint:speedup}$a,b$) that for a given aquifer variance, the speed up factor is limited by the number of particles. Above $N_p=10^5$, the speed up approaches an asymptotic value, which occurs for both scheduling protocols. However, the speed up displays higher values when employing \texttt{dynamic} scheduling for simulations with more than $N_p=10^5$ particles. For this number of particles, $\sigma_{Y}^2=2.5$, and the maximum number of threads, the speed up of the \texttt{static} and \texttt{dynamic} scheduling protocols was $T_1/T_8=5.09$ and $T_1/T_8=6.63$, respectively. Notice that the expected speed up values from this implementation of parallel MODPATH will not be as high as those observed in prototypical GPU codes \citep[{e.g.}][]{Ji2020}, where the hardware can provide thousands of processing threads, in contrast to classical desktop CPU hardware. Still, the development presented here has the inherent advantage of being integrated into the MODPATH source code, without any change to current input files, so adoption is straightforward.
Another interesting aspect of the speed up factor and the threading protocol is its dependence on the degree of heterogeneity $\sigma_Y^2$ (Fig. \ref{fig:omp:endpoint:speedup}$c,d$). When using \texttt{static} scheduling and the maximum number of threads, the speed up factor decreases consistently with the degree of heterogeneity, which is explained by both larger travel times to the aquifer outlet and the even distribution of particles to be processed. Under this scenario of workload distribution, consider the simplified case of a single slow particle with all the rest moving at the same velocity: it is evident that the other threads will finish and then wait for the completion of the slow particle. In contrast, speed up factors achieved with \texttt{dynamic} scheduling are essentially independent of the aquifer variance for a given number of particles in test case TC1.
\begin{figure}[]
\centering
\includegraphics[scale=1]{figure_6_omp_benchmark_2d_speedup.pdf}
\caption{Speed up for each threading protocol. $a),b)$ as a function of the number of particles with aquifer variance of $\sigma_Y^2=2.5$, $c),d)$ as a function of aquifer variance with number of particles $N_p=10^7$.}
\label{fig:omp:endpoint:speedup}
\end{figure}
\subsection*{Timeseries}
The performance of endpoint simulations provides a baseline for the discussion of timeseries, given that in some cases endpoint runs can be seen as timeseries with no output stage. Still, printing out particle positions at runtime requires some considerations in order to preserve the parallel speed ups seen in the corresponding endpoint simulations.
The obtained speed up factors differ depending on the output protocol. To exemplify this, a timeseries simulation with \texttt{dynamic} scheduling is performed using TC1, considering $\sigma_Y^2=2.5$ and 10 writing stages. Results show (Fig. \ref{fig:tsout}$a,c,e$) that parallel output leads to higher values of speed up in comparison to the other output protocols, with magnitudes close to those obtained in endpoint runs. Employing the blocking clause (\texttt{critical}) leads to somewhat smaller speed up factors, although without a large penalty in comparison to the case of parallel output files. In contrast, the consolidated protocol is the slowest, due to the time required to load thread-specific data and write it back to the consolidated file after each timeseries snapshot.
\begin{figure}[ht!]
\centering
\includegraphics[scale=1]{figure_7_omp_parallel_output_2d_3d.pdf}
\caption{Speed up factors for parallel output protocols. Panels $a),c),e)$ (first column) present results for timeseries in test case TC1 with 10 writing stages considering $\sigma^2_Y=2.5$. Panels $b),d),f)$ (second column) display results for test case TC2 with 5 (solid, TS5) and 30 (dashed, TS30) writing stages. Corresponding output protocol is indicated in each panel.}
\label{fig:tsout}
\end{figure}
Timeseries runs using test case TC2 with the unstructured grid, considering 5 and 30 writing stages, show a different performance (Fig. \ref{fig:tsout}$b,d,f$). Specifically, employing the critical or consolidated output formats leads to a significant decrease in the speed up factor relative to parallel output. Moreover, in some cases of the consolidated output protocol and TS30, the performance decreases in comparison to single-thread runs. Parallel output preserves the magnitude of speed up observed in previous numerical tests. Notably for this case, changes in the speed up factor with respect to the number of particles exhibit a peak when employing the maximum number of threads, with the number of particles of maximum performance depending on the timeseries output frequency. A smaller frequency (TS5) leads to a maximum speed up factor at a higher number of particles ($N_p=10^6$) than that obtained for the simulation with high frequency (TS30), for which the number of particles of maximum speed up is $N_p=10^5$. This effect could be explained by differences in disk writing speeds. Simulations with high output frequency and a high number of particles are expected to generate large output volumes. When writing in parallel, all threads write large volumes of data simultaneously, generating simulation output faster than single-thread runs. Simulations with the parallel output protocol shown in Figure (\ref{fig:tsout}$f$) were performed writing to HDD hardware, and the decreasing speed up for a high number of particles may be explained by limitations on output writing speed. For comparison, TC2 runs with the highest output frequency (TS30) and the parallel output protocol were also configured to write output files to SSD hardware. Speed up factors were similar to those obtained from HDD, except for the case with the highest number of particles ($N_p=10^{7}$) and $N=8$ threads. In this scenario, the speed up when writing to HDD was $T_1/T_8=3.8$ (Fig. \ref{fig:tsout}$f$), whereas when writing to SSD it reached a value of $T_1/T_8=5.1$. A similar analysis was performed for the other two output protocols in TC2, with small influence of disk hardware on speed up results, although simulations were in general faster with SSD.
There are significant differences in timeseries performance between the two test cases while employing the blocking directive for output files. Besides the time required in TC2 for updating flow model arrays due to the transient stress period, this difference in speed up is explained by the influence that heterogeneity in TC1 has on the simulation time required for a particle to reach the timeseries output time. As seen previously, non-uniform flow leads to differences in the processing time for each particle, depending on the characteristics of the streamline and influenced by the number of cells initialized during displacement. This means that particles traveling through streamlines with different velocities will reach the instant where output is required at different processing times. This decreases the likelihood of the output file being busy because another thread is writing. To emphasize this point, the opposite case of a fully homogeneous domain can be considered, which is closer to the conditions in TC2. The processing time needed to reach the timeseries writing stage will be almost the same for all particles (traveling along horizontal streamlines with the same velocity). As a result, all threads will try to write simultaneously to the output unit, and the blocking clause will force some threads to wait.
In any case, results from both test cases show that parallel output with thread-exclusive units is by far the fastest approach and provides significant speed ups for increasing numbers of threads and particles. Thus, in general, writing to thread-specific output units reduces the simulation runtime; however, this format requires reading thread-specific output files during post-processing stages. These files have been configured to preserve the same data structure used in the original MODPATH program.
\subsection*{Grid complexity}
An interesting result is obtained when comparing the computational time of simulations from test case TC2 with the unstructured grid versus the regular structured grid, $T_{usg}/T_{str}$, with an increasing number of particles. Runs are performed with \texttt{dynamic} scheduling, and increasing the number of particles leads to almost the same computational time for the regular and unstructured grids (Fig. \ref{fig:complex:strusg}). These results confirm that above a threshold number of particles, the runtime of the particle tracking model appears to be independent of the groundwater flow complexity or the number of cells. It is rather controlled by the total number of particles, in concordance with results from both the endpoint and timeseries simulations discussed in previous sections.
Overall, results from this test case show that there are scenarios in which better grid model resolutions can be achieved without sacrificing runtime. Notice that when employing the parallel output protocol, some difference in runtimes between the flow model grids is preserved for the case with smaller output frequency (TS5). Still, the ratio $T_{usg}/T_{str}$ is close to one, and above $N_p=10^{5}$ it remains practically constant for all numbers of threads. In particle tracking problems with a large number of particles, the runtime is controlled by this quantity rather than by specific features of the flow model grid, at least for the conditions presented in test case TC2. The range of parameters over which these observations remain valid should be further evaluated in future research.
\begin{figure}[ht!]
\centering
\includegraphics[scale=1]{figure_8_omp_benchmark_3d_strusg.pdf}
\caption{Relative time of simulations with unstructured grid in comparison to structured in test case TC2, for different number of particles and output protocols.}
\label{fig:complex:strusg}
\end{figure}
\section*{Conclusions}
In this article, the integration of parallel particle processing into the semi-analytical particle tracking program MODPATH has been presented and discussed. The potential for parallel computing was identified from an initial diagnostic of the source code and the flow diagram of the program. The parallel particles loop is achieved by integrating the OpenMP library into the Fortran source code, allowing the management of a large number of particles on a desktop computer with efficient runtimes compared to serial processing of particles. Besides OpenMP directives, the implementation required the introduction of an intermediate object class to manage groundwater flow model data, which avoids unnecessary replication of arrays when entering the parallel region.
Two synthetic test cases were used to quantify the improvements in computational times. Results show that speed up factors are limited by the number of particles, meaning that they approach an asymptotic value while increasing the total number of particles for a given number of threads. Endpoint simulations in aquifers with varying degrees of heterogeneity were used to determine an efficient thread scheduling strategy. The analyses showed that \texttt{dynamic} scheduling is convenient for MODPATH in practical groundwater applications, as the runtime assignment of particles to threads naturally adapts to per-particle displacement times, which vary with streamline velocities. The influence of the scheduling protocol is not that important in simulations with low spatial variability of groundwater velocity and a low number of particles. However, as aquifer heterogeneity and the number of particles increase, the chosen scheduling protocol clearly impacts the overall program performance, most noticeably when employing a number of threads equal to the maximum number of cores of the benchmark system.
For timeseries simulations, three output protocols compatible with parallel computing were discussed. Model runs using thread-specific output units were significantly faster than the other approaches writing to a single output unit. Moreover, depending on the number of timeseries snapshots, consolidating data from thread-specific units into a single file introduces an important computational overhead that could ultimately degrade the advantages provided by parallel processing. Thread-specific output files preserve parallelization speed ups, but require adapting post-processing tools to load data from different files and, if necessary, sorting particle indexes. Output protocols for timeseries were configured to preserve the order of time indexes; the main difference in the output files of the parallel implementation in comparison to previous serial models is the sorting of particle indexes. Comparison of simulation runtimes for unstructured and structured grids from the second synthetic test case showed that above a certain number of particles, the total simulation time is similar for both grid types. That is, it is possible to obtain a higher spatiotemporal resolution from particle tracking models without significantly sacrificing runtimes.
As MODPATH already provides code infrastructure for managing MODFLOW models, it is a good starting point for the development of particle-based transport models that aim to be integrated with this program. In this regard, it is relevant to improve the performance of the MODPATH base code with parallel particle processing. Parallelization has been implemented minimizing interventions to the current public version of the source code. Future developments may consider further applications of the OpenMP library to other serial stages within the program outside the particles loop, and the integration of additional parallelization methodologies like the Message Passing Interface (MPI), which could be justified for high-performance computing architectures simulating several particle groups, each with a large number of particles.
\section*{Acknowledgments}
We thank Alden Provost, Chris Langevin, and Joseph Hughes of the U.S. Geological Survey for discussions regarding the MODPATH source code.
The research leading to these results has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 814066 (Managed Aquifer Recharge Solutions Training Network – MARSoluT).
DFG acknowledges financial support provided by the AGAUR (Agència de Gestió d’Ajuts Universitaris i de Recerca, Generalitat de Catalunya), through project AGAUR-2017-SGR1485.
\section*{Supporting Information}
Source code for the parallel version of MODPATH: \href{https://github.com/MARSoluT/modpath-omp}{https://github.com/MARSoluT/modpath-omp}. Supporting Information is generally \emph{not} peer reviewed.
\printbibliography
\clearpage
\pagebreak
\section*{Figure captions}
Figure 1: MODPATH simplified flow chart. Particles loop is parallelized with OpenMP.
Figure 2: OpenMP specification for parallel particles loop in MODPATH. Memory state for all loop variables is explicitly declared as \texttt{shared}, \texttt{private} or \texttt{firstprivate}. Counters are declared with a summation \texttt{reduction} clause.
Figure 3: Synthetic two-dimensional heterogeneous aquifer (TC1). Vertical black line near the origin marks particles injection and white lines show reference streamlines for $\sigma^2_Y=2.5$.
Figure 4: Synthetic three-dimensional layered aquifer (TC2). $a)$ Original structured grid, $b)$ modified unstructured grid. In both panels, scatter points indicate the particles release area.
Figure 5: Performance of endpoint simulations with $N_p=10^7$ particles as a function of aquifer variability and different number of threads $N$. $a)$ Computational time for \texttt{static} (solid, squares) and \texttt{dynamic} (dashed, circles) scheduling with 4 and 8 threads; $b)$ measured ratio $T_{dyn}/T_{sta}$ for all thread configurations.
Figure 6: Speed up for each threading protocol. $a),b)$ as a function of the number of particles with aquifer variance of $\sigma_Y^2=2.5$, $c),d)$ as a function of aquifer variance with number of particles $N_p=10^7$.
Figure 7: Speed up factors for parallel output protocols. Panels $a),c),e)$ (first column) present results for timeseries in test case TC1 with 10 writing stages considering $\sigma^2_Y=2.5$. Panels $b),d),f)$ (second column) display results for test case TC2 with 5 (solid, TS5) and 30 (dashed, TS30) writing stages. Corresponding output protocol is indicated in each panel.
Figure 8: Relative time of simulations with unstructured grid in comparison to structured in test case TC2, for different number of particles and output protocols.
\end{document}
\section{Introduction}
The purpose of this paper is to prove some enumeration formulae, conjectured by Egge~\cite{Egge2022PatternAvoidingFP}, of pattern-avoiding Fishburn permutations.
Enumerations of pattern-avoiding permutations have attracted a great deal of interest.
Recall that the reduced form of a permutation on the set $\{j_1,j_2,\cdots,j_r\} $ with $j_1<j_2<\cdots<j_r$ is the permutation of $1\ 2\cdots r$ obtained by replacing $j_i$ with $i$ for $1\leq i\leq r$ without changing the relative order of the letters.
For example, the reduced form of the permutation 342 is 231.
We say a permutation $\pi$ \emph{contains} a \emph{classical pattern} $\tau$ if $\pi$ has a subsequence (of the same length as $\tau$) whose reduced form is $\tau$.
A permutation $\pi$ \emph{avoids} a pattern $\tau$ if $\pi$ does not contain $\tau$.
We refer the readers to the book of Kitaev~\cite{Kitaev2011} for a comprehensive study.
Fishburn permutations are permutations avoiding the Fishburn pattern.
A permutation $\pi=\pi_1\pi_2\cdots\pi_n$ contains the Fishburn pattern, denoted by $f$ (see \figref{fishburnpattern}), if $\pi$ has a subsequence $\pi_i,\pi_{i+1},\pi_j$ with $1\leq i\leq n-2$ and $i+2\leq j\leq n$ such that $\pi_i<\pi_{i+1}$ and $\pi_i=\pi_j+1$. For example, the permutation $3412$ contains $f$ through the subsequence $3,4,2$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1, baseline=(current bounding box.center)]
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\draw[blue, ultra thick] (0,1) -- (3,1);
\draw[blue, ultra thick] (1,0) -- (1,3);
\filldraw (0.5,1.5) circle (4pt);
\filldraw (1.5,2.5) circle (4pt);
\filldraw (2.5,0.5) circle (4pt);
\end{tikzpicture}
\caption{The graph of the Fishburn pattern $f$.}
\label{fishburnpattern}
\end{figure}
Bousquet-M\'{e}lou, Claesson, Dukes, and Kitaev~\cite{BousquetMlou200822freePA} gave bijections between permutations avoiding the pattern $f$ and ascent sequences, unlabeled (2+2)-free posets, and linearized chord diagrams. They also showed that the generating function is\[
1+\sum_{n\geq1}\prod_{i=1}^{n}(1-(1-t)^i).
\]
Since the enumeration is related to the Fishburn numbers~\cite{Stoimenow1998,Levande2013}, Gil and Weiner~\cite{Gil2021} call permutations avoiding the Fishburn pattern \emph{Fishburn permutations}.
In this paper, we are interested in the enumeration of Fishburn permutations which avoid some classical patterns.
Gil and Weiner~\cite{Gil2021} studied the enumeration of Fishburn permutations which avoid classical patterns of size 3 or 4, in both regular and indecomposable forms. Based on Gil and Weiner's work, Egge~\cite{Egge2022PatternAvoidingFP} settled one of their conjectures and further studied the enumeration of Fishburn permutations that avoid sets of classical patterns of size 3 or 4, using generating tree techniques. Furthermore, he proposed several conjectures on the pattern avoidance of Fishburn permutations.
Let $\sigma_1,\cdots,\sigma_k$ be some classical patterns. Denote the set of Fishburn permutations of length $n$ which avoid each $ \sigma_1,\cdots,\sigma_k $ by $F_n(\sigma_1,\cdots,\sigma_k)$.
We prove the enumeration conjecture of Egge~\cite[Conjecture 10.17]{Egge2022PatternAvoidingFP}, except for two cases concerning classical patterns of size 5.
Table~\ref{tab-1} summarizes the main results of this paper.
\begin{table}[ht]
{
\renewcommand{\arraystretch}{1.7}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Patterns $\sigma_1,\cdots,\sigma_k$ & |$F_{n}(\sigma_1,\cdots,\sigma_k)$|&Reference\\
\hline \hline
321, 1243 & $n^2-3n+4\quad(n\geq2)$&Theorem \ref{1243}\\[5pt]
\hline
321, 2134 & $n^2-3n+4\quad(n\geq2)$& Theorem \ref{2134} \\[5pt]
\hline
321, 1324 & $\frac{3}{2}n^{2}-\frac{13}{2}n+10\quad(n\geq3)$& Theorem \ref{1324}\\
\hline \hline
321, 1423, 2143 & $\binom{n}{2}+1\quad(n\geq 0) $& Theorem \ref{1423-2143}\\
\hline
321, 3142, 2143& $ \binom{n}{2}+1\quad(n\geq 0) $& Theorem \ref{3142-2143}\\
\hline
321, 2143, 3124 &$\binom{n}{2}+1\quad(n\geq 0)$& Theorem \ref{3142-2143}\\
\hline
321, 2143, 4123 &$ \binom{n}{2}+1\quad(n\geq 0) $& Theorem \ref{2143-4123}\\
\hline
321, 1423, 3124&$F_n+2\quad(n\geq4)$& Theorem \ref{1423-3124}\\
\hline
321, 1423, 4123& $F_{n+1}-1\quad(n\geq1)$& Theorem \ref{1423-4123}\\
\hline
321, 3124, 4123& $F_{n+1}-1\quad(n\geq1)$& Theorem \ref{3124-4123}\\
\hline
\end{tabular}
\caption{Enumeration of $\sigma$-avoiding Fishburn permutations}
\label{tab-1}
\end{center}}
\end{table}
The following lemma shows that all the enumeration results discussed in this paper can be organized into two types according to where the smallest letter 1 is placed.
\begin{lem}\label{lem-12}
Let $\pi\in F_n(321)$. Then either $\pi_1=1$ or $\pi_2=1$.
\end{lem}
\begin{proof}
We shall prove this lemma by contradiction. Suppose the letter 1 does not appear in the first two sites. We assume $\pi_1=x$ and $\pi_2=y$. If $x>y$, then $x,y$, and $1$ would be a copy of 321. If $x<y$, then $x-1$ must appear to the right of $y$. Therefore, $x,y$, and $x-1$ would be a copy of the Fishburn pattern $f$, which is a contradiction. Consequently, we have $\pi_1=1$ or $\pi_2=1$.
\end{proof}
Since the enumerations in this paper all involve 321-avoiding Fishburn permutations, the proofs of the enumeration results can be classified into two types according to the site where 1 appears.
Throughout the paper we shall denote the set $\{\pi\in F_n(321, \sigma_1,\cdots,\sigma_k):\pi_i=1\}$ by $F_n^{(i)}(321, \sigma_1,\cdots,\sigma_k)$ for $i=1$ or 2.
The rest of this paper is organized as follows. In Section~\ref{sec2}, we prove three enumeration results, Theorems \ref{1243}, \ref{2134}, and \ref{1324}, of Fishburn permutations which avoid 321 and a classical pattern of size 4.
In Section~\ref{sec3}, we consider the enumeration of Fishburn permutations that avoid 321 and two classical patterns of size 4.
It is worth mentioning that the last three enumeration results, Theorems \ref{1423-3124}, \ref{1423-4123}, and \ref{3124-4123} involve the famous Fibonacci numbers.
\section{Avoiding 321 and a classical pattern of size 4}
\label{sec2}
\subsection{Enumerating $ F_{n}(321,1243)$}
The main result of this subsection is as follows.
\begin{thm}\label{1243}
For $n\geq 2$, $|F_{n}(321,1243)|=n^{2}-3n+4$.
\end{thm}
The enumeration of $F_{n}(321,1243)$ can be divided into the following two propositions according to the site where 1 appears in the permutation.
\begin{prop}
For $n\geq 3$, $|F_{n}^{(1)}(321,1243)|=n-1$.
\end{prop}
\begin{proof}
For $n=3$, $F_{3}^{(1)}(321,1243)=\{123,132\}$. Thus, $|F_{3}^{(1)}(321,1243)|=2$.
Next consider $n\geq 4$. We classify $F_{n}^{(1)}(321,1243)$ into several cases according to the value of $\pi_{2}$.
\begin{enumerate}[(1)]
\item $\pi_2=2$. Then to avoid 1243, the letters $3,\cdots,n$ must be in increasing order. In this case, $\pi=1\ 2\cdots n$.
\item $3\leq \pi_2 = k\leq n-1$.
First, to avoid 321, the letters $2,\cdots,k-1$ appearing after $k$ must be in increasing order. Let $$\pi^\prime=1\ k\ ^{1}\ 2\ ^{2}\cdots\ ^{k-2}\ k-1\ ^{k-1}$$
be a permutation of $\{1,2,\ldots, k \}$, where we label the sites between two letters for our convenience of discussion.
For any $ x\in \{k+1,\cdots,n\}$, we claim that $x$ must be in site $k-1$.
Otherwise, if $x$ is placed in site 1, then $k,x,k-1$ is a copy of $f$; and if $x$ is in site 2, 3, $\cdots$, or $k-2$, then $1,2,x,k-1$ is a copy of 1243.
Therefore, $k+1,\cdots,n$ must be in site $k-1$.
Next, to avoid 1243, the letters $k+1,\cdots,n$ appearing after $1\ k$ are also in increasing order. Hence $\pi$ must be of the following form
$$\pi=1\ k\ 2\cdots k-1\ k+1\cdots n.$$
Since $k$ has $n-3$ possible values, the number of permutations in this case is $n-3$.
\item $\pi_2=n$. In this case, the letters $2,\cdots,n-1$ appear to the right of $n$. To avoid 321, they must be in increasing order. So $\pi=1\ n\ 2\cdots n-1$.
\end{enumerate}
It follows that $|F_{n}^{(1)}(321,1243)|=1+(n-3)+1=n-1$.
\end{proof}
Given a permutation $\pi\in F_{n-1}(\sigma)$, recall that a site of $\pi$ is \emph{active} if the new permutation of length $n$ obtained by inserting $n$ into this site is also a Fishburn permutation avoiding pattern $\sigma$.
\begin{prop}
For $n\geq 3$, $|F_{n}^{(2)}(321,1243)|=n^{2}-4n+5$.
\end{prop}
\begin{proof}
Suppose $\pi_1=k$.
We classify according to the value of $k$.
\begin{enumerate}[(1)]
\item$k=n$. To avoid 321, the letters $2,\cdots,n-1$ appearing after $1$ must be in increasing order. So $\pi=n\ 1\ 2\cdots n-1$.
\item $k=n-1$. Then to avoid 321, the letters $2,\cdots,n-2$ must be in increasing order. Suppose that $$\pi^{\prime}=n-1\ 1\ 2\cdots n-2$$ and consider the active sites for inserting $n$. We find there are two active sites: one is immediately to the right of 1, and the other is the rightmost site of $\pi^{\prime}$. If $n$ were inserted before 1, the result would contain a copy of 321 or of $f$; if $n$ were inserted into a site strictly between 2 and $n-2$, then $1,2,n,n-2$ would create a copy of 1243. So in this case we have either $\pi=n-1\ 1\ n\ 2\cdots n-2$ or $\pi=n-1\ 1\ 2\cdots n-2\ n$. Clearly, the number of permutations in this case is 2.
\item $3\leq k\leq n-2$. Suppose that $\pi=k\ 1\ \pi_{3}\pi_{4}\cdots \pi_{n}$. Next we proceed to consider the value of $\pi_3$.
\begin{enumerate}[(i)]
\item If $\pi_{3}<k$, then $\pi_{3}=2$. Otherwise, $k,\pi_{3},2$ would be a copy of 321. Moreover, to avoid 1243, the letters to the right of 2 must be in increasing order. So $$\pi=k\ 1\ 2\cdots k-1\ k+1\cdots n.$$
\item $\pi_{3}=k+1$. First, $2,\cdots,k-1$ are in increasing order in the permutation to avoid 321. Second, $k+2,\cdots,n$ are also in increasing order. If not, $1,k+1$ and the decreasing pair would be a copy of 1243. Third, no letters in $\{k+2,\cdots,n\}$ can be inserted into $2\cdots k-1$. If some $x\in \{k+2,\cdots,n\}$ were inserted into a site between the letters $2\cdots k-1$, say between $i$ and $i+1$ ($2\leq i\leq k-2$), then $1,i,x,i+1$ would create a copy of 1243. Now, we just need to consider $2\cdots k-1$ as a whole part and insert it into
$$k\ 1\ k+1\ ^{1}\ k+2\ ^{2}\ k+3\cdots ^{n-k-1}\ n\ ^{n-k}.$$ Indeed, as labeled above, the $n-k$ sites are all available to insert $2\cdots k-1$.
So the number of permutations in this case is $n-k$.
\item $\pi_{3}\geq k+2$. We claim $\pi_{4}=2$. If $2<\pi_{4}<\pi_{3}$, then $\pi_{3},\pi_{4},2$ would be a copy of 321. Whereas if $\pi_{4}>\pi_{3}$, then $\pi_{3},\pi_{4},\pi_{3}-1$ would create a copy of $f$. Similar to the discussion in the above paragraph, $2,\cdots,k-1$ are in increasing order and no letters in $\{k+1,\cdots,\pi_{3}-1,\pi_{3}+1,\cdots,n\}$ can be inserted into them to avoid 1243. So $k+1,\cdots,\pi_{3}-1,\pi_{3}+1,\cdots,n$ are in increasing order to the right of $k-1$. So $$\pi=k\ 1\ \pi_{3}\ 2\cdots k-1\ k+1\cdots \pi_{3}-1\ \pi_{3}+1\cdots n.$$ The number of permutations in this case is $n-k-1$.
\end{enumerate}
For a fixed $k$, the number of permutations in this case is $1+(n-k)+(n-k-1)=2(n-k)$.
\item $k=2$.
Then we shall classify according to the value of $\pi_{3}$.
\begin{enumerate}[(i)]
\item $\pi_{3}=3$. To avoid 1243, the letters $4,\cdots ,n$ appearing after 3 must be in increasing order.
Otherwise, 2, 3, and the decreasing pair would be a copy of 1243. So $\pi=2\ 1\ 3\ 4\cdots n$.
\item $\pi_{3}>3$. Suppose that $\pi=2\ 1\ x\ y\cdots$, where $x>3$. We first claim $y=3$. If $y>x$, then $x,y,x-1$ is a copy of $f$. If $3<y<x$, then $x,y,3$ is a copy of 321. So $y=3$. Next we claim that $4,\cdots,x-1,x+1,\cdots,n$ must be in increasing order. Otherwise, if a larger letter $a$ stays to the left of a smaller $b$, then $1,3,a,b$ is a copy of 1243, which $\pi$ avoids. Thus $$\pi=2\ 1\ x\ 3\ 4\cdots x-1\ x+1\cdots n.$$
\end{enumerate}
\end{enumerate}
Therefore, \[|F_{n}^{(2)}(321,1243)|=1+(n-2)+2+\sum_{k=3}^{n-2} 2(n-k) =n^{2}-4n+5.\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{1243}]
For $n=2$, $F_{n}(321,1243)=\{12,21\}$, so $|F_{2}(321,1243)|=2=2^{2}-3\times 2+4$.
For $n\geq 3$, we have
\[
\begin{aligned}
|F_{n}(321,1243)|=&|F_{n}^{(1)}(321,1243)|+|F_{n}^{(2)}(321,1243)|
\\=&(n-1)+(n^2-4n+5)\\
=&n^{2}-3n+4.
\end{aligned}
\]
\end{proof}
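Although not part of the formal development, the formula of Theorem \ref{1243} is easy to confirm by brute force for small $n$. The following Python sketch assumes the characterization of Fishburn permutations used throughout this paper, namely that $\pi$ contains the bivincular pattern $f$ precisely when some ascent $\pi_{i}<\pi_{i+1}$ is followed, beyond position $i+1$, by the value $\pi_{i}-1$; all helper names are ours.
\begin{verbatim}
from itertools import combinations, permutations

def is_fishburn(p):
    # p contains f iff some ascent p[i] < p[i+1] has the value
    # p[i] - 1 occurring after position i + 1.
    pos = {v: i for i, v in enumerate(p)}
    return not any(p[i] < p[i + 1] and p[i] > 1 and pos[p[i] - 1] > i + 1
                   for i in range(len(p) - 1))

def avoids(p, pat):
    # True iff no subsequence of p is order-isomorphic to pat.
    k = len(pat)
    return not any(all((pat[a] < pat[b]) == (q[a] < q[b])
                       for a in range(k) for b in range(a + 1, k))
                   for q in combinations(p, k))

def count(n, pats):
    # Number of Fishburn permutations of size n avoiding every pattern.
    return sum(1 for p in permutations(range(1, n + 1))
               if is_fishburn(p) and all(avoids(p, t) for t in pats))

for n in range(3, 9):  # Theorem: n^2 - 3n + 4 for n >= 3
    assert count(n, [(3, 2, 1), (1, 2, 4, 3)]) == n * n - 3 * n + 4
\end{verbatim}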
\subsection{Enumerating $F_{n}(321,2134)$}
The main result of this subsection is as follows.
\begin{thm}\label{2134}
For $n\geq 3$, $|F_{n}(321,2134)|=n^{2}-3n+4$.
\end{thm}
\begin{prop}
For $n\geq 3$, $|F_{n}^{(2)}(321,2134)|=2n-4$.
\end{prop}
\begin{proof}
First we show that $\pi_{1}\geq n-2$. Suppose that $\pi_{1}=k\leq n-3$. To avoid 2134, the letters $k+1,\cdots,n$ appearing after $k\ 1$ must be in decreasing order. But $|\{k+1,\cdots,n\}|\geq 3$, so if they are in decreasing order they contain a copy of 321, which $\pi$ avoids. It follows that $k\geq n-2$.
\begin{enumerate}[(1)]
\item $k=n$. In this case, to avoid 321, the letters $2,\cdots,n-1$ must be in increasing order. So $\pi=n\ 1\ 2\cdots n-1$.
\item $k=n-1$. Similarly, $2,\cdots,n-2$ are also in increasing order. Suppose that $\pi^{\prime}=n-1\ 1\ ^{1}\ 2\ ^{2}\cdots\ ^{n-3}\ n-2\ ^{n-2}$, and we just need to insert $n$ to create $\pi$. We find that all the $n-2$ sites are active sites. So the number of permutations in this case is $n-2$.
\item $k=n-2$. Then to avoid 321, the letters $2,\cdots,n-3$ must be in increasing order. Let $$\pi^{\prime}=n-2\ 1\ ^{1}\ 2\ ^{2}\cdots\ ^{n-4}\ n-3\ ^{n-3}.$$ We just need to insert $n-1$ and $n$ into the sites labeled from 1 to $n-3$ to get $\pi$. First, $n$ must appear to the left of $n-1$ to avoid 2134. Second, $n-1$ can not appear in any site labeled from 1 to $n-4$, since $n,n-1,n-3$ would create a copy of 321. So $n-1$ can only be inserted into site $n-3$. Further, let $$\pi^{\prime\prime}=n-2\ 1\ ^{1}\ 2\ ^{2}\cdots\ ^{n-4}\ n-3\ ^{n-3}\ n-1.$$ We should insert $n$ into $\pi^{\prime\prime}$ to obtain $\pi$. Notice that $n$ can be inserted into all the $n-3$ sites. The number of permutations in this case is $n-3$.
\end{enumerate}
Therefore, \[|F_{n}^{(2)}(321,2134)|=1+(n-2)+(n-3)=2n-4.\]
\end{proof}
\begin{prop} \label{2134-p1=1}
For $n\geq 3$, $|F_{n}^{(1)}(321,2134)|=n^{2}-5n+8$.
\end{prop}
\begin{proof}
Let $\pi_{2}=k$. We shall prove $k=2$ or $k\geq n-2$ by contradiction. Suppose $3\leq k\leq n-3$. First, to avoid 321, the letters $2,\cdots,k-1$ must be in increasing order.
We next claim that $\pi_3=2$.
If not, for $\pi_3 \in\{k+1,\cdots,n\}$, then $k, \pi_3,k-1$ would create a copy of $f$.
Furthermore, $k+1,\cdots,n$ must be in decreasing order. Otherwise, $k,2$ and the increasing pair would be a copy of 2134. However, since $|\{k+1,\cdots,n\}|=n-k\geq 3$, the letters $n,n-1,n-2$ will create a copy of 321, which is a contradiction. So $k=2$ or $k\geq n-2$.
We can directly check that the permutation $\pi=1\ 2\cdots n$ belongs to $F_n(321,2134)$. Notice that it contributes 1 to the total enumeration of $|F_n^{(1)}(321,2134)|$.
Next suppose $\pi=1\ 2\cdots j\ x\cdots$, where $1\leq j\leq x-2$ and $x\leq n$. We first prove $x=n$, $n-1$ or $n-2$. Suppose by way of contradiction that $x\leq n-3$. There are at least 4 letters to the right of $x$, among which are $n,n-1,n-2,x-1$. The entry immediately to the right of $x$, say $y$, must be smaller than $x$. If not, $x,y,x-1$ would be a copy of $f$. Furthermore, $n-2,n-1,n$ must be in decreasing order. If $n-2,n-1,n$ contain an ascent, then $x,y$ and the ascending pair would be a copy of 2134. However, if $n-2,n-1,n$ are in decreasing order, then $n,n-1,n-2$ would be a copy of 321, which is a contradiction. So $x\geq n-2$. Next we classify it into three cases according to the value of $x$.
\begin{enumerate}[(1)]
\item$x=n$. Let $\pi=1\cdots j\ n\cdots$. Then $j+1,\cdots,n-1$ must be in increasing order to the right of $n$ to avoid 321. So $\pi=1\cdots j\ n\ j+1\cdots n-1$, in which the value of $j$ ranges from 1 to $ n-2 $. The number of permutations in this case is $n-2$.
\item$x=n-1$. Let $\pi=1\cdots j\ n-1\cdots$. Then $j+1,\cdots,n-2$ must be in increasing order to the right of $n-1$ to avoid 321. Suppose that $$\pi^{\prime}=1\cdots j\ n-1\ ^{1}\ j+1\ ^{2}\cdots\ ^{n-j-2}\ n-2\ ^{n-j-1}$$ and insert $n$ into $\pi^{\prime}$. Since $n-1,n,n-2$ is a copy of $f$, the letter $n$ can not be inserted into site 1. It can be inserted into any site labeled from 2 to $n-j-1$, giving $n-j-2$ active sites in total. Here the value of $j$ ranges from 1 to $n-3$. So the number of permutations in this case is \[
\sum_{j=1}^{n-3}(n-j-2)=\sum_{j=1}^{n-3}j=\frac{1}{2}n^{2}-\frac{5}{2}n+3.
\]
\item$x=n-2$. Let $\pi=1\cdots j\ n-2\cdots$. First, to avoid 321, the letters $j+1,\cdots,n-3$ appearing after $n-2$ must be in increasing order. Suppose that $$\pi^{\prime}=1\cdots\ j\ n-2\ j+1\ ^{1}\cdots\ ^{n-j-4}\ n-3\ ^{n-j-3}.$$ Second, we consider into which sites we can insert $n$ and $n-1$. We claim $n$ and $n-1$ can not appear in the site immediately to the right of $n-2$, since either $n-2,n,n-3$ or $n-2,n-1,n-3$ is a copy of $f$.
Furthermore, $n$ must be to the left of $n-1$. If not, $n-2,j+1,n-1,n$ would be a copy of 2134. Next we claim $n-1$ can only be to the right of $n-3$. Otherwise, $n,n-1,n-3$ would be a copy of 321. Thus $n-1$ must be in site $n-j-3$, and $n$ can be either in a site labeled from 1 to $n-j-4$ or in the site immediately to the left of $n-1$. There are $n-j-3$ ways to place $n$ and $n-1$. Since $j$ ranges from 1 to $n-4$, the number of permutations in this case is
\[
\sum_{j=1}^{n-4}(n-j-3)=\sum_{j=1}^{n-4}j=\frac{1}{2}n^{2}-\frac{7}{2}n+6.
\]
\end{enumerate}
Therefore, \[\begin{aligned}
|F_{n}^{(1)}(321,2134)|=&1+(n-2)+(\frac{1}{2}n^{2}-\frac{5}{2}n+3)+(\frac{1}{2}n^{2}-\frac{7}{2}n+6)\\
=&n^{2}-5n+8.
\end{aligned}
\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{2134}]
For $n=2$, $F_{n}(321,2134)=\{12,21\}$, so $$|F_{2}(321,2134)|=2=2^{2}-3\times 2+4.$$
For $n\geq 3$, we have
\[
\begin{aligned}
|F_{n}(321,2134)|=&|F_{n}^{(1)}(321,2134)|+|F_{n}^{(2)}(321,2134)|
\\=&(n^2-5n+8)+(2n-4)\\
=&n^{2}-3n+4,
\end{aligned}
\]which completes the proof.
\end{proof}
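As before, the count can be confirmed by brute force for small $n$, reusing the helpers from the sketch of the previous subsection:
\begin{verbatim}
for n in range(3, 9):  # Theorem: n^2 - 3n + 4 for n >= 3
    assert count(n, [(3, 2, 1), (2, 1, 3, 4)]) == n * n - 3 * n + 4
\end{verbatim}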
\subsection{Enumerating $F_{n}(321,1324)$}
The main result of this subsection is as follows.
\begin{thm}\label{1324}
For $n\geq 3$, $|F_{n}(321,1324)|=\dfrac{3}{2}n^{2}-\dfrac{13}{2}n+10$.
\end{thm}
\begin{prop}\label{1324-p1=1}
For $n\geq 3$, $|F_{n}^{(1)}(321,1324)|=n-1$.
\end{prop}
\begin{proof}
For $n=3$, $F_{3}^{(1)}(321,1324)=\{123,132\}$, so we have $|F_{3}^{(1)}(321,1324)|=2$.
We claim $2,\cdots,n-1$ must be in increasing order when $\pi_1=1$ and $n\geq 4$. Otherwise, suppose there is a decreasing pair $a,b$. On one hand, if $n$ is to the left of the decreasing pair, then $n,a,b$ would be a copy of 321. On the other hand, if $n$ is to the right of the decreasing pair, then $1,a,b,n$ would be a copy of 1324, which is a contradiction.
Next suppose that $\pi^\prime=1\ ^1\ 2\ ^2\cdots\ ^{n-2}\ n-1\ ^{n-1}$ and insert $n$ into it. We can directly check that all the $n-1$ sites are active sites. Hence the number of permutations in this case is $n-1$.
Therefore, for $n\geq 3$,
\[
|F_{n}^{(1)}(321,1324)|=n-1.
\]
\end{proof}
\begin{prop}
For $n\geq 3$, $|F_{n}^{(2)}(321,1324)|=\dfrac{3}{2}n^{2}-\dfrac{15}{2}n+11$.
\end{prop}
\begin{proof}
Suppose $\pi_1=k$. We classify according to the value of $k$.
\begin{enumerate}[(1)]
\item $k=2$. The form of permutation in this case is similar to the one in Proposition \ref{1324-p1=1}. Suppose that $$\pi^\prime=2\ 1\ ^1\ 3\ ^2\cdots\ ^{n-3}\ n-1\ ^{n-2}$$ and insert $n$. It is easy to check that all the $n-2$ sites are available to insert $n$. Hence the number of permutations in this case is $n-2$.
\item $k=n$. Then $2,\cdots,n-1$ must be in increasing order to the right of 1 to avoid 321. So\[
\pi=n\ 1\ 2\cdots n-1.
\]
\item $k=n-1$. Then $2,\cdots,n-2$ must be in increasing order. Suppose that
\[
\pi^{\prime}=n-1\ 1\ ^{1}\ 2\ ^{2}\cdots ^{n-3}\ n-2\ ^{n-2}.
\] Next we will insert $n$ into $\pi^{\prime}$. We find that all the $n-2$ sites are available to insert $n$. So the number of permutations in this case is $n-2$.
\item $k=n-2$. Suppose that \[
\pi^{\prime}=n-2\ 1\ ^{1}\ 2\ ^{2}\cdots ^{n-4}\ n-3\ ^{n-3}
\] and consider into which sites we can insert $n$ and $n-1$ to obtain $\pi$.
\begin{itemize}
\item If $n$ is to the right of $n-1$, then $n$ must be adjacent to $n-1$. Otherwise we let $x$ be an entry between them. Then $1,n-1,x,n$ would be a copy of 1324, which $\pi$ avoids. Actually, we can insert $n-1\ n$ into all the $n-3$ sites in $\pi^{\prime}$. The number of permutations in this subcase is $n-3$.
\item If $n$ is to the left of $n-1$, then $n-1$ must be inserted into the rightmost site to avoid 321 and $n$ can be inserted into all the $n-3$ sites. The number of permutations in this subcase is $n-3$.
\end{itemize}
It follows that the number of permutations when $k=n-2$ is $2(n-3)$.
\item $3\leq k\leq n-3$. Then $2,\cdots,k-1$ must be in increasing order to the right of $k\ 1$ to avoid 321. Next we consider where $k+1,\cdots,n$ are placed.
\begin{itemize}
\item None of the letters in $\{k+1,\cdots,n\}$ are to the left of $k-1$. Then $k+1,\cdots,n-1$ must be in increasing order. Suppose that\[
\pi^\prime= k\ 1\ 2\cdots k-1\ ^{1}\ k+1\ ^{2} \cdots \ ^{n-k-1}\ n-1\ ^{n-k}.
\]All the $n-k$ sites are active to insert $n$ to obtain $\pi$.
The number of permutations in this case is $n-k$.
\item Exactly one entry of $\{k+1,\cdots,n\}$ is to the left of $k-1$. Then this entry must be $n$. If not, suppose $x$ is to the left of $k-1$ with $k+1\leq x<n$. Thus $1,x,k-1,n$ is a copy of 1324, which $\pi$ avoids. Therefore $k+1,\cdots,n-1$ appearing after $n$ must be in increasing order and $n$ can be inserted into all the sites between 1 and $k-1$ as follows:
\[
\pi^\prime=k\ 1\ ^{1}\ 2\ ^{2}\cdots\ ^{k-2}\ k-1\ k+1\cdots n-1.
\]
The number of permutations in this case is $k-2$.
\item At least two letters in $\{k+1,\cdots,n\}$ are to the left of $k-1$. Suppose that \[
\pi^\prime=k\ 1\ ^{1}\ 2\ ^{2}\cdots\ k-2 \ ^{k-2}\ k-1.
\] We claim that $k+1,\cdots,n$ are all to the left of $k-1$ in increasing order and they are inserted into the same site between 1 and $k-1$.
First, the letters in $\{k+1,\cdots,n\}$ to the left of $k-1$ must be in increasing order. If there is a decreasing pair, say $x,y$, then $x,y,k-1$ would be a copy of 321.
Second, we shall prove $k+1,\cdots,n$ are all to the left of $k-1$. If one or more of these letters are to the right of $k-1$, then they must be smaller than those to the left of $k-1$. Otherwise, we select $x$ from the left of $k-1$ and $y$ from the right of $k-1$ respectively with $x<y$. Then $1,x,k-1,y$ is a copy of 1324, which $\pi$ avoids. Based on this, we let the entry $s$ be the largest one to the right of $k-1$. The letters in $\{k+1,\cdots,n\}$ to the left of $k-1$ are all greater than $s$. Since in this case there are at least two letters to the left of $k-1$, we have $s\leq n-2$. Then $s+1$ and the entry immediately to its right together with $s$ would be a copy of $f$, which $\pi$ avoids. Therefore no letters are to the right of $k-1$.
Third, the letters in $\{k+1,\cdots,n\}$ to the left of $k-1$ must be inserted into the same site. If $x$ and $y$ are inserted into two different sites, $x$ is inserted into site $i$ and $y$ is inserted into site $j$, then $i,x,j,y$ would be a copy of 1324. So the letters in $\{k+1,\cdots,n\}$ that are to the left of $k-1$ must be inserted into the same site.
In this case, $k+1,\cdots,n$ can be inserted in increasing order into any of the $k-2$ sites to the left of $k-1$. The number of permutations in this case is $k-2$.
\end{itemize}
Consequently, the number of permutations in this case is $(n-k)+(k-2)+(k-2)=n+k-4$.
\end{enumerate}
Therefore, \[
\begin{aligned}
|F_{n}^{(2)}(321,1324)|=&(n-2)+1+(n-2)+2(n-3)+\sum_{k=3}^{n-3}(n+k-4)\\
=&\dfrac{3}{2}n^{2}-\dfrac{15}{2}n+11.
\end{aligned}
\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{1324}]
For $n\geq 3$, we have
\[
\begin{aligned}
|F_{n}(321,1324)|=&|F_{n}^{(1)}(321,1324)|+|F_{n}^{(2)}(321,1324)|
\\=&(n-1)+(\dfrac{3}{2}n^{2}-\dfrac{15}{2}n+11)\\
=&\dfrac{3}{2}n^{2}-\dfrac{13}{2}n+10.
\end{aligned}
\]
\end{proof}
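The same brute-force helpers confirm the quadratic for small $n$; multiplying by 2 avoids fractional arithmetic:
\begin{verbatim}
for n in range(3, 9):  # Theorem: (3n^2 - 13n + 20)/2 for n >= 3
    assert 2 * count(n, [(3, 2, 1), (1, 3, 2, 4)]) == 3 * n * n - 13 * n + 20
\end{verbatim}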
\section{Avoiding 321 and two classical patterns of size 4}
\label{sec3}
\subsection{Enumerating $F_{n}(321,1423,2143)$}
The main result of this subsection is as follows.
\begin{thm}\label{1423-2143}
For $n\geq 0$, $|F_{n}(321,1423,2143)|=\binom{n}{2}+1$.
\end{thm}
\begin{prop}
For $n\geq 4$, $$|F_{n}^{(2)}(321,1423,2143)|=\frac{(n-1)(n-2)}{2}+1.$$
\end{prop}
\begin{proof}
Let $\pi_1=k$ with $2\leq k\leq n$.
\begin{enumerate}[(1)]
\item If $k=2$, then $3,4,\cdots,n$ must be in increasing order. If not, let $x$ and $y$ be a decreasing pair. Then $2,1,x,y$ is a copy of 2143. So $\pi=2\ 1\ 3\cdots n$.
\item If $k=3$, then $4,\cdots,n$ must be in increasing order to the right of $3\ 1$ to avoid 2143. Suppose $$\pi^{\prime}=3\ 1\ ^{1}\ 4\ ^{2}\cdots\ ^{n-3}\ n\ ^{n-2}$$ and insert 2 into $\pi^\prime$. We find that all of the sites labeled from 1 to $n-2$ are available to insert 2. So in this case, the number of permutations is $n-2$.
\item If $4\leq k\leq n-1$, then both $2,\cdots,k-1$ and $k+1,\cdots ,n$ are in increasing order to avoid 321 and 2143 respectively. Assume that $$\pi^{\prime}=k\ 1\ ^{1}\ 2\ ^{2}\cdots ^{k-3}\ k-2\ ^{k-2}\ k-1 \ ^{k-1}$$ and we just need to insert $k+1,\cdots,n$ into these sites in increasing order. We find $k+1$ can not be inserted into the site to the left of $k-2$. If $k+1$ is inserted into site labeled from 1 to $k-3$, then $1,k+1,k-2,k-1$ would be a copy of 1423, which $\pi$ avoids. Hence $k+1$ can only be inserted into the sites to the right of $k-2$.
\begin{itemize}
\item If $k+1$ is inserted into site $k-2$, then we assume that there are $j$ letters between $k-2$ and $k-1$ with $1\leq j\leq n-k$ and there are $n-k-j$ letters remaining to be inserted into site $k-1$ in increasing order. So
\[
\pi=k\ 1\ 2 \cdots k-2\ \underbrace{k+1 \cdots k+j}_j\ k-1\ \underbrace{k+j+1 \cdots n}_{n-k-j}.
\]For a fixed $k$, there is only one permutation in terms of the value of $j$. Since $j$ ranges from 1 to $n-k$, the number of permutations in this subcase is $n-k$.
\item If $k+1$ is inserted into site $k-1$, then \[
\pi=k\ 1\ 2\ \cdots k-2\ k-1\ k+1\ \cdots n.
\]
\end{itemize}
So in this case, the number of permutations is $n-k+1$.
\item If $k=n$, then to avoid 321, the letters $2,\cdots,n-1$ must be in increasing order. Thus $\pi=n\ 1\ 2\cdots n-1$.
\end{enumerate}
For $n=4$, we have $F_{4}^{(2)}(321,1423,2143)=\{2134,3124,3142,4123\}$, so \[
|F_{4}^{(2)}(321,1423,2143)|=4=\frac{3\cdot 2}{2}+1.
\]
For $n\geq 5$, we have \[
\begin{aligned}
|F_{n}^{(2)}(321,1423,2143)|
=&1+(n-2)+\sum_{k=4}^{n-1}(n-k+1)+1\\
=&n+\frac{n^2-5n+4}{2}\\
=&\frac{(n-1)(n-2)}{2}+1.
\end{aligned}
\]
Therefore, for $n\geq 4$,
$$|F_{n}^{(2)}(321,1423,2143)|=\frac{(n-1)(n-2)}{2}+1.$$
\end{proof}
\begin{prop}
For $n\geq 4$, $$|F_{n}^{(1)}(321,1423,2143)|=n-1.$$
\end{prop}
\begin{proof}
Let $\pi_2=k.$ We claim $k=2$ or $k=3$. If not, assume that $k\geq 4$. If 2 is to the left of 3, then $1,k,2,3$ is a copy of 1423. If 3 is to the left of 2, then $k,3,2$ is a copy of 321. So $k=2$ or $k=3$.
\begin{itemize}
\item $k=3$. Then 2 must be immediately to the right of 3. Otherwise, 3 and the entry immediately to its right together with 2 would be a copy of $f$, which $\pi$ avoids. Moreover, to avoid 2143, the letters $4,\cdots,n$ appearing after $3\ 2$ must be in increasing order. So\[
\pi=1\ 3\ 2\ 4\cdots n.
\]
\item $k=2$. We assume that $\pi$ is of the form $\pi=1\ 2\cdots j\cdots$. If $j=n$ then $\pi=1\ 2\cdots n$. Next we assume $\pi=1\ 2\cdots j\ x\cdots$ with $2\leq j\leq n-2$ and $x\geq j+2$. We claim $x=j+2$. If $x\geq j+3$, we consider the order of $j+1$ and $j+2$ in $\pi$. If $j+1$ is to the left of $j+2$, then $j,x,j+1,j+2$ is a copy of 1423. If $j+2$ is to the left of $j+1$, then $x,j+2,j+1$ is a copy of 321. So $x=j+2$. Furthermore, $j+1$ must be immediately to the right of $j+2$. If not, $j+2$ and the entry immediately to its right together with $j+1$ would be a copy of $f$. Besides, to avoid 2143, the letters $j+3,\cdots,n$ appearing after $j+2\ j+1$ must be in increasing order. So \[
\pi=1\ 2\cdots j\ j+2\ j+1\ j+3\cdots n.
\]Together with the exceptional permutation $\pi=1\ 2\cdots n$, the number of permutations in this case is $n-3+1=n-2$.
\end{itemize}
Therefore, $$|F_{n}^{(1)}(321,1423,2143)|=1+(n-2)=n-1.$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{1423-2143}]
We can check directly for $n\leq 3$.
For $n\geq 4$, we have
\[
\begin{aligned}
|F_{n}(321,1423,2143)|=&|F_{n}^{(1)}(321,1423,2143)|+|F_{n}^{(2)}(321,1423,2143)|
\\=&(n-1)+\frac{(n-1)(n-2)}{2}+1\\
=&\frac{n^{2}-n}{2}+1\\
=&\binom{n}{2}+1.
\end{aligned}
\]
It follows that $|F_{n}(321,1423,2143)|=\binom{n}{2}+1$ for all $n\geq 0$.
\end{proof}
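A brute-force confirmation for small $n$, reusing the helpers from the sketch in Section~\ref{sec2}:
\begin{verbatim}
from math import comb

for n in range(0, 9):  # Theorem: C(n,2) + 1 for n >= 0
    assert count(n, [(3, 2, 1), (1, 4, 2, 3), (2, 1, 4, 3)]) == comb(n, 2) + 1
\end{verbatim}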
\subsection{Enumerating $F_{n}(321,3142,2143)$ and $F_{n}(321,2143,3124)$}
The main result of this subsection is as follows.
\begin{thm}\label{3142-2143}
For $n\geq 0$, $|F_{n}(321,3142,2143)|=|F_{n}(321,2143,3124)|=\binom{n}{2}+1$.
\end{thm}
\begin{prop}\label{3142-2143-p2=1}
For $n\geq 3$, $$|F_{n}^{(2)}(321,3142,2143)|=n-1.$$
\end{prop}
\begin{proof}
If $n=3$, then $ F_{3}(321,3142,2143)=\{123,132,213,312\} $, so $$|F_{3}^{(2)}(321,3142,2143)|=2=3-1.$$
Now we consider $n\geq4$. Let $\pi_1=k$ with $2\leq k\leq n$.
\begin{itemize}
\item If $k=2$, then to avoid 2143, the letters $3,4,\cdots,n$ must be in increasing order. So $\pi=2\ 1\ 3\cdots n $.
\item If $3\leq k\leq n-1$, then we claim $$\pi=k\ 1\ 2\cdots k-1\ k+1\cdots n.$$ First, to avoid 321, the letters $2,\cdots,k-1$ appearing after $k$ must be in increasing order. Second, to avoid 2143, the letters $k+1,\cdots,n$ appearing after $k$ and 1 must be in increasing order. Third, $k+1$ must be to the right of $k-1$. If $k+1$ appears to the left of $k-1$, then $k,1,k+1,k-1$ would be a copy of 3142, which $\pi$ avoids. This proves our claim.
\item If $k=n$, then to avoid 321, the letters $2,\cdots,n-1$ must be in increasing order. So $\pi=n\ 1\ 2\cdots n-1$.
\end{itemize}
Fixing $k$, we can determine a unique permutation. Since $k$ ranges from 2 to $n$, the number of permutations in this case is $n-1$.
\end{proof}
\begin{prop}\label{3142-2143-p1=1}
For $n\geq 3$, $$|F_{n}^{(1)}(321,3142,2143)|=\frac{(n-1)(n-2)}{2}+1.$$
\end{prop}
\begin{proof}
For $n=3$, we have $ F_{3}(321,3142,2143)=\{123,132,213,312\} $, so $$|F_{3}^{(1)}(321,3142,2143)|=2=\frac{2\times 1}{2}+1.$$
For $n\geq4$, if $\pi$ has no descents, then $\pi=1\ 2\cdots n$. For $\pi$ with at least one descent, we assume $$\pi=1\ 2\cdots k\ x\cdots,$$ where $1\leq k\leq n-2$ and $k+2\leq x\leq n$. Under our assumption, there is at least one entry after $x$. Next we will divide our proof into several cases according to the value of $x$.
\begin{itemize}
\item $x=n$. Then to avoid 321, the letters $k+1,\cdots,n-1$ appearing after $n$ must be in increasing order. So $\pi=1\ 2\cdots k\ n\ k+1\cdots n-1.$
\item $k+3\leq x\leq n-1$. First, to avoid 321, the letters $k+1,\cdots,x-1$ appearing after $x$ must be in increasing order.
Assume that $$\pi^\prime= 1\ 2\cdots k\ x\ ^{1}\ k+1\ ^{2}\cdots\ ^{x-k-1} x-1.$$ We need to insert $x+1,\cdots,n$ into $ \pi^\prime $ to get $\pi$. Note that no letters in $\{x+1,\cdots,n\}$ can be inserted into site 1. If not, let $a$ be the entry inserted into site 1. Then $x,a,x-1$ would be a copy of $f$. So $k+1$ is the entry immediately to the right of $x$. Second, to avoid 2143, the letters $x+1,\cdots,n$ appearing after $x$ and $k+1$ must be in increasing order. Furthermore, we claim that $x+1$ must be to the right of $x-1$. If $x+1$ were inserted into a site labeled from 2 to $x-k-1$, then $x,k+1,x+1,x-1$ would be a copy of 3142. Hence $x+1,\cdots,n$ must appear to the right of $x-1$ in increasing order, and $$\pi=1\ 2\cdots k\ x\ k+1\cdots x-1\ x+1\cdots n.$$
\item $x=k+2$. Let $y$ be the entry immediately to the right of $k+2$. Then $k+1$ must be immediately to the right of $k+2$. If $y\neq k+1$, then $y$ must be greater than $k+2$. Clearly $k+2,y,k+1$ is a copy of $f$, which $\pi$ avoids. So $y=k+1$. Furthermore, to avoid 2143, the letters $k+3,\cdots,n$ appearing after $k+2$ and $k+1$ must be in increasing order. Therefore, $\pi=1\cdots k\ k+2\ k+1\ k+3\cdots n.$
\end{itemize}
Fixing $k$ and $x$, we can determine a unique permutation.
Since $x$ ranges from $k+2$ to $n$ and $k$ ranges from 1 to $n-2$,
the number of permutations is \[
\sum_{k=1}^{n-2}(n-k-1)=\frac{(n-1)(n-2)}{2}.
\] Together with the exceptional case $\pi=1\ 2\cdots n$, we have
$$|F_{n}^{(1)}(321,3142,2143)|=\frac{(n-1)(n-2)}{2}+1.$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{3142-2143}]
We first prove \[
|F_{n}(321,3142,2143)|=\binom{n}{2}+1.
\]
The cases when $n\leq 2$ are easy to check.
For $n\geq 3$, we have
\[
\begin{aligned}
|F_{n}(321,3142,2143)|=&|F_{n}^{(1)}(321,3142,2143)|+|F_{n}^{(2)}(321,3142,2143)|
\\=&[\frac{(n-1)(n-2)}{2}+1]+(n-1)\\
=&\frac{n^{2}-n}{2}+1\\
=&\binom{n}{2}+1.
\end{aligned}
\]
Therefore, for $n\geq 0$ we have
\[
|F_{n}(321,3142,2143)|=\binom{n}{2}+1.
\]
The enumeration of $|F_{n}(321,2143,3124)|$ is nearly identical to that of $|F_{n}(321,3142,2143)|$, except for two differences in the forms of the permutations, which do not affect the total count. The cases for $0\leq n\leq 2$ are easy to check. The two points of difference are as follows.
One is in the proof of Proposition \ref{3142-2143-p2=1}, when $3\leq k\leq n-1$. For the same reason, we have \[
\pi^{\prime}=k\ 1\ ^{1}\ 2\ ^{2}\cdots\ ^{k-2}\ k-1\ ^{k-1},
\] and we need to insert $k+1,\cdots,n$ into $\pi^{\prime}$ in increasing order.
We claim that $k+1,\cdots,n$ should be inserted into site 1. If not, $k,1,2,k+1$ would be a copy of 3124, which $\pi$ avoids. So \[
\pi= k\ 1\ k+1\cdots n\ 2\cdots k-1.
\]
The other is in the proof of Proposition \ref{3142-2143-p1=1}, when $k+3\leq x\leq n-1$. Similarly, we need to insert $x+1,\cdots,n$ into the sites labeled from 2 to $x-k-1$. If we inserted them into any site after site 2, then $x,k+1,k+2,x+1$ would be a copy of 3124. So $x+1,\cdots,n$ must be inserted into site 2, and \[
\pi=1\ 2\cdots k\ x\ k+1\ x+1\cdots n\ k+2\cdots x-1.
\]
Therefore, for $n\geq 0$, we have$$|F_{n}(321,3142,2143)|=|F_{n}(321,2143,3124)|=\binom{n}{2}+1,$$ which completes the proof.
\end{proof}
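Both counts in Theorem \ref{3142-2143} can likewise be confirmed by brute force for small $n$, reusing the helpers (and comb) introduced earlier:
\begin{verbatim}
for pats in ([(3, 1, 4, 2), (2, 1, 4, 3)], [(2, 1, 4, 3), (3, 1, 2, 4)]):
    for n in range(0, 9):  # Theorem: C(n,2) + 1 for n >= 0
        assert count(n, [(3, 2, 1)] + pats) == comb(n, 2) + 1
\end{verbatim}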
\subsection{Enumerating $F_{n}(321,2143,4123)$}
The main result of this subsection is as follows.
\begin{thm}\label{2143-4123}
For $n\geq 0$, $|F_{n}(321,2143,4123)|=\binom{n}{2}+1$.
\end{thm}
\begin{prop}
For $n\geq 3$, $|F_{n}^{(2)}(321,2143,4123)|=n-1$.
\end{prop}
\begin{proof}
If $n=3$, then $ F_{3}(321,2143,4123)=\{123,132,213,312\} $, so $$|F_{3}^{(2)}(321,2143,4123)|=2=3-1.$$
Consider $n\geq4$. Let $\pi_1=k$. We shall prove $k<4$. If $k\geq 4$, then $k-1$ and $k-2$ must be to the right of 1. If they are in increasing order, then $k,1,k-2,k-1$ is a copy of 4123. If they are in decreasing order, then $k,k-1,k-2$ is a copy of 321. So we have $k=2$ or $k=3$.
\begin{itemize}
\item If $k=2$, then to avoid 2143, the letters $3,4,\cdots,n$ must be in increasing order. So $\pi=2\ 1\ 3\cdots n $.
\item If $k=3$, then $4,\cdots,n$ are also in increasing order similar to the above. Let \[
\pi^{\prime}=3\ 1\ ^{1}\ 4\ ^{2}\ 5\cdots n\ ^{n-2}
\] and we should insert 2 into $\pi^{\prime}$ now. It is easy to check that 2 can be inserted into all the $n-2$ sites, so the number of permutations in this case is $n-2$.
\end{itemize}
Therefore,
$$|F_{n}^{(2)}(321,2143,4123)|=1+(n-2)=n-1.$$
\end{proof}
\begin{prop}
For $n\geq 3$, $$|F_{n}^{(1)}(321,2143,4123)|=\frac{(n-1)(n-2)}{2}+1.$$
\end{prop}
\begin{proof}
If $n=3$, then $ F_{3}(321,2143,4123)=\{123,132,213,312\} $, so $$|F_{3}^{(1)}(321,2143,4123)|=2=\frac{2\times 1}{2}+1.$$
If $n\geq4$, then we assume that $\pi$ has the following form $\pi=1\ 2\cdots k\cdots$. If $k=n$ then $\pi=1\ 2\cdots n$. Next we consider $\pi=1\cdots k\ x\cdots$ with $1\leq k\leq n-2$ and $x\geq k+2$. Moreover, we claim $x=k+2$ or $x=k+3$. If not, we assume that $x\geq k+4$ and consider the order of $k+1$, $k+2$ and $k+3$. If they are in increasing order, then $x,k+1,k+2,k+3$ would be a copy of 4123. If $k+1,k+2,k+3$ contain at least one descent, then $x$ and the decreasing pair would be a copy of 321, which is a contradiction. So we have $x=k+2$ or $x=k+3$.
\begin{itemize}
\item $x=k+2$. Then $k+1$ must be immediately to the right of $k+2$ to avoid $f$. To avoid 2143, the letters $k+3,\cdots,n$ appearing after $k+2$ and $k+1$ must be in increasing order. So $\pi=1\ 2\cdots k\ k+2\ k+1\ k+3\cdots n.$
\item $x=k+3$. Then $k+1$ must be immediately to the right of $k+3$. If not, let $y$ be the entry immediately to the right of $k+3$. Note that $y=k+2$ or $y\geq k+4$ now. If $y=k+2$, then $k+3,k+2,k+1$ would be a copy of 321. If $y\geq k+4$, then $k+3,y,k+2$ would be a copy of $f$. So $y=k+1$.
Next we consider the relation of $n$ and $k+3$. If $n=k+3$, then $k+2$ is to the right of $k+1$ and we have determined $\pi=1\ 2\cdots k\ k+3\ k+1\ k+2$. If $n>k+3$, then to avoid 2143, the letters $k+4,\cdots,n$ appearing after $k+3$ and $k+1$ must be in increasing order. Let\[
\pi^{\prime}=1\ 2\cdots k\ k+3\ k+1\ ^{1}\ k+4\ ^{2}\cdots ^{n-k-3} n\ ^{n-k-2}
\] and we insert $k+2$ into it.
We find all the $n-k-2$ sites are available to insert $k+2$. So the number of permutations in this case is $n-k-2$.
\end{itemize}
For a fixed $k$, the number of permutations is $1+(n-k-2)=n-k-1.$
Together with the exceptional case $\pi=1\ 2\cdots n$, we have
$$|F_{n}^{(1)}(321,2143,4123)|=\sum_{k=1}^{n-2}(n-k-1)+1=\frac{(n-1)(n-2)}{2}+1.$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{2143-4123}]
We can easily prove the cases when $n\leq 2$.
For $n\geq 3$, we have
\[
\begin{aligned}
|F_{n}(321,2143,4123)|=&|F_{n}^{(1)}(321,2143,4123)|+|F_{n}^{(2)}(321,2143,4123)|
\\=&[\frac{(n-1)(n-2)}{2}+1]+(n-1)\\
=&\frac{n^{2}-n}{2}+1\\
=&\binom{n}{2}+1.
\end{aligned}
\]
Therefore, for $n\geq 0$ we have
\[
|F_{n}(321,2143,4123)|=\binom{n}{2}+1.
\]
\end{proof}
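The same brute-force check applies here:
\begin{verbatim}
for n in range(0, 9):  # Theorem: C(n,2) + 1 for n >= 0
    assert count(n, [(3, 2, 1), (2, 1, 4, 3), (4, 1, 2, 3)]) == comb(n, 2) + 1
\end{verbatim}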
\subsection{Enumerating $F_{n}(321,1423,3124)$}
In this subsection and the following two subsections, the enumeration involves the famous Fibonacci numbers, see A000045 in OEIS~\cite{oeis}. The Fibonacci numbers, denoted by $F_{n}$, satisfy initial conditions $F_{0}=1, F_1=1$ and recurrence relation $F_{n}=F_{n-1}+F_{n-2}$ for $n\geq 2$.
It is worth mentioning that, in the enumerations involving Fibonacci numbers, whenever the first $k$ letters of $\pi$ in some class of our case analysis form a rearrangement of $1\ 2\cdots k$, we treat them as a whole part playing the role of 1.
In this way, we reduce the enumeration to permutations of smaller length, so that the enumerations proved before can be applied and induction becomes available.
The main result of this subsection is as follows.
\begin{thm}\label{1423-3124}
For $n\geq 4$, $$|F_{n}(321,1423,3124)|=F_{n}+2.$$
\end{thm}
\begin{prop}
For $n\geq 4$, we have $$|F_{n}^{(1)}(321,1423,3124)|=F_{n-1}.$$
\end{prop}
\begin{proof}
We shall prove by induction. For $n=4$ we have \[
F_{4}^{(1)}(321,1423,3124)=\{1234,1243,1324\}.
\] So, $$|F_{4}^{(1)}(321,1423,3124)|=3=F_{3}.$$
Suppose this equality holds for positive integers less than $n$. We shall prove it holds for $n$ as well. We claim $\pi_{2}=2$ or $\pi_{2}=3$. If not, we have $\pi_{2}\geq 4$ and 2,3 must appear to the right of $\pi_{2}$. If 2 is to the left of 3, then $1,\pi_{2},2,3$ is a copy of 1423. If 3 is to the left of 2, then $\pi_{2},3,2$ is a copy of 321. Therefore, $\pi_{2}=2$ or $\pi_{2}=3$. Next we classify $F_{n}^{(1)}(321,1423,3124)$ into three cases.
\begin{enumerate}[(1)]
\item $\pi=1\ 2\cdots n$.
\item $\pi=1\ 2\cdots k\ x\ y\cdots$ with $2\leq k\leq n-2$ and $x\geq k+2$. We claim $x=k+2$ and $y=k+1$. First, we prove $x= k+2$. If $x>k+2$, then we consider the order of $k+1$ and $k+2$. If $k+1$ is to the left of $k+2$, then $k,x,k+1,k+2$ would be a copy of 1423. If $k+2$ is to the left of $k+1$, then $x,k+2,k+1$ would be a copy of 321. So $x=k+2$. Second, we prove $y=k+1$. If $y\neq k+1$, then $k+2,y,k+1$ would be a copy of $f$. Now we have shown that the first $k+2$ letters of $\pi$ are $1,2,\cdots,k,k+2,k+1$.
We can check that no pair or triple of these letters can be part of a copy of 321, 1423, 3124, or $f$.
The first $k+2$ letters, as a whole, can play the role of 1 as the minimum entry in any copy of 321,1423,3124 and $f$. So, by induction, the number of permutations in this case is \[
|F_{n-k-1}^{(1)}(321,1423,3124)|=F_{n-k-2}.
\]
\item $\pi=1\ 3\ x\cdots$. Then $x=2$. If not, $3,x,2$ would be a copy of $f$. Thus the first three letters of $\pi$ are 1,3,2. As in the above case, 1,3,2 can be treated as a whole to play the role of 1. The number of permutations in this case is \[
|F_{n-2}^{(1)}(321,1423,3124)|=F_{n-3}.
\]
\end{enumerate}
Therefore,\[
\begin{aligned}
|F_{n}^{(1)}(321,1423,3124)|=&1+\sum_{k=2}^{n-2}F_{n-k-2}+F_{n-3}\\
=&1+\sum_{k=0}^{n-3}F_{k}\\
=&1+(F_{n-1}-1)\\
=&F_{n-1},
\end{aligned}
\]where the third equality follows from the property of Fibonacci numbers. For more details, see \cite{Sagan2020}.
\end{proof}
\begin{prop}
For $n\geq 4$, $$|F_{n}^{(2)}(321,1423,3124)|=F_{n-2}+2.$$
\end{prop}
\begin{proof}
Suppose $\pi_1=k.$ We claim the value of $k$ can be chosen from $\{2,3,n\}$. If not, we assume that $4\leq k\leq n-1$. Then there are at least two letters less than $k$ to the right of 1 and they can not contain descents to avoid 321. Next suppose $$\pi^{\prime}=k\ 1\ 2\cdots k-1\cdots$$ and consider which site after 1 we can insert $n$ into. If $n$ is to the right of 2, then $k,1,2,n$ would be a copy of 3124. Whereas if $n$ is immediately to the left of 2, then $1,n,2,k-1$ would be a copy of 1423, which is a contradiction. So we have $k=2$, $k=3$, or $k=n$.
\begin{enumerate}[(1)]
\item $k=2$. Then 2,1 can be as a whole to play the role of 1. The number of permutations in this case is \[
|F_{n-1}^{(1)}(321,1423,3124)|=F_{n-2}.
\]
\item $k=3$. First we have $\pi_{n}=2$. If there are letters, say $x$, to the right of 2, then $3,1,2,x$ is a copy of 3124. Second, $4,\cdots,n$ must be in increasing order. If $4,\cdots,n$ contain at least one descent, then the decreasing pair and 2 would form a copy of 321. So\[
\pi=3\ 1\ 4\cdots n\ 2.
\]\item $k=n$. Then to avoid 321, the letters $2,\cdots, n-1$ must be in increasing order. So \[
\pi =n\ 1\ 2\cdots n-1.
\]
\end{enumerate}
Thus, $$|F_{n}^{(2)}(321,1423,3124)|=F_{n-2}+1+1=F_{n-2}+2.$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{1423-3124}]
For $n\geq 4$, we have
\[
\begin{aligned}
|F_{n}(321,1423,3124)|=&|F_{n}^{(1)}(321,1423,3124)|+|F_{n}^{(2)}(321,1423,3124)|
\\=&F_{n-1}+(F_{n-2}+2)\\
=&F_{n}+2.
\end{aligned}
\]
\end{proof}
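With a small helper implementing the paper's convention $F_{0}=F_{1}=1$, the Fibonacci formula can also be confirmed by brute force for small $n$:
\begin{verbatim}
def fib(n):
    # Fibonacci numbers with the convention F_0 = F_1 = 1.
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(4, 9):  # Theorem: F_n + 2 for n >= 4
    assert count(n, [(3, 2, 1), (1, 4, 2, 3), (3, 1, 2, 4)]) == fib(n) + 2
\end{verbatim}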
\subsection{Enumerating $F_{n}(321,1423,4123)$}
The main result of this subsection is as follows.
\begin{thm}\label{1423-4123}
For $n\geq 1$, $$|F_{n}(321,1423,4123)|=F_{n+1}-1.$$
\end{thm}
\begin{prop}
For $n\geq 4$, \begin{equation}\label{(1423,4123)-1}
|F_{n}^{(1)}(321,1423,4123)|=F_{n-1}.
\end{equation}
\end{prop}
\begin{proof}
We first claim $\pi_{2}=2$ or $\pi_{2}=3$. We shall prove it by contradiction.
If $\pi_{2}\geq 4$, then 2 and 3 must be to the right of $\pi_{2}$. If they are in increasing order, then $1,\pi_{2},2,3$ is a copy of 1423. Whereas if 2 and 3 are in decreasing order, then $\pi_{2},3,2$ is a copy of 321, which $\pi$ avoids.
Next we shall prove the equality \eqref{(1423,4123)-1} by induction. For $n=4$ we have \[
F_{4}^{(1)}(321,1423,4123)=\{1234,1243,1324\}.
\] So, $$|F_{4}^{(1)}(321,1423,4123)|=3=F_{3}.$$
Suppose that the equality \eqref{(1423,4123)-1} holds for positive integers less than $n$. We shall prove it holds for $n$ as well. We classify $F_{n}^{(1)}(321,1423,4123)$ into two cases according to the value of $\pi_2$.
If $\pi_{2}=2$, then 1,2 can be as a whole to play the role of 1. So the number of permutations in this case can be reduced to \[
|F_{n-1}^{(1)}(321,1423,4123)|=F_{n-2}.
\]
If $\pi_{2}=3$ then $\pi_{3}=2$. If not, $3,\pi_{3},2$ would be a copy of $f$. Thus the first three letters are 1,3,2 and they can also be as a whole to play the role of 1. The number of permutations in this case is
\[
|F_{n-2}^{(1)}(321,1423,4123)|=F_{n-3}.
\]
Therefore,\[
|F_{n}^{(1)}(321,1423,4123)|=F_{n-2}+F_{n-3}=F_{n-1}.
\]
\end{proof}
\begin{prop}
For $n\geq 4$, $$|F_{n}^{(2)}(321,1423,4123)|=F_{n}-1.$$
\end{prop}
\begin{proof}
We claim $\pi_{1}=2$ or $\pi_{1}=3$. If not, suppose that $\pi_{1}\geq 4$. Then 2,3 must be to the right of 1. If they are in increasing order, then $\pi_{1},1,2,3$ would be a copy of 4123. Whereas if they are in decreasing order, then $\pi_{1},3,2$ would be a copy of 321. Next we classify $ F_{n}^{(2)}(321,1423,4123) $ into two cases according to the value of $\pi_1$.
If $\pi_{1}=2$, then 2,1 can be as a whole to play the role of 1. The number of permutations in this case is \[
|F_{n-1}^{(1)}(321,1423,4123)|=F_{n-2}.
\]
If $\pi_{1}=3$, then we consider the position of 2.
\begin{itemize}
\item $\pi_3=2$. Then 3,1,2 can be as a whole to play the role of 1. Using the equality \eqref{(1423,4123)-1}, we have that the number of permutations in this case is \[
|F_{n-2}^{(1)}(321,1423,4123)|=F_{n-3}.
\]
\item $\pi_3\neq 2$. We claim \[
\pi=3\ 1\ 4\cdots k\ 2\cdots
\]with $4\leq k\leq n$. First we shall prove $\pi_3=4$ by contradiction. We assume that $\pi_3>4$. On one hand, if $\pi_4=2$, then 4 is to the right of 2 and $1,\pi_3,2,4$ would be a copy of 1423. On the other hand, if $\pi_4\neq 2$, then to avoid 321, $\pi_3$ and $\pi_4$ must be in increasing order since 2 is to the right of them. Under the assumption of $\pi_3>4$, we have $\pi_4\neq 4$. Hereby, $\pi_3-1$ must be to the right of $\pi_4$. Thus $\pi_3,\pi_4,\pi_3-1$ would be a copy of $f$, which is a contradiction.
Second, suppose that there are $\ell$ letters between 1 and 2. We denote them by $a_1,a_2,\cdots,a_\ell$ with $a_1=4$. Next we will show that $a_{i+1}=a_{i}+1$ for $1\leq i\leq \ell-1$. Due to $a_1,a_2,\cdots,a_\ell$ being to the left of 2, they must be in increasing order to avoid 321. Furthermore, if there is an index $j$ with $2\leq j\leq \ell$ such that $a_{j}\neq a_{j-1}+1$, then $a_{j}-1$ is to the right of 2. Thus $1,a_{j},2,a_{j}-1$ is a copy of 1423. Write $k=a_\ell$. Then we can substitute $a_1,a_2,\cdots,a_\ell$ with $4,5,\cdots,k$, where $k$ ranges from 4 to $n$.
The first $k$ letters $3,1,4,\cdots,k,2$ can be as a whole to play the role of 1. So the number of permutations for each such $k$ is \[
|F_{n-(k-1)}^{(1)}(321,1423,4123)|=F_{n-k}.
\]
\end{itemize}
Therefore, \[\begin{aligned}
|F_{n}^{(2)}(321,1423,4123)|=&F_{n-2}+F_{n-3}+\sum_{k=4}^{n}F_{n-k}\\
=&\sum_{k=0}^{n-2}F_{k}
\\=&F_{n}-1.
\end{aligned}
\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{1423-4123}]
We can check directly for $n\leq 3$.
For $n\geq 4$, we have
\[
\begin{aligned}
|F_{n}(321,1423,4123)|=&|F_{n}^{(1)}(321,1423,4123)|+|F_{n}^{(2)}(321,1423,4123)|
\\=&F_{n-1}+(F_{n}-1)\\
=&F_{n+1}-1.
\end{aligned}
\]
\end{proof}
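Again by brute force, reusing count and fib from the earlier sketches:
\begin{verbatim}
for n in range(1, 9):  # Theorem: F_{n+1} - 1 for n >= 1
    assert count(n, [(3, 2, 1), (1, 4, 2, 3), (4, 1, 2, 3)]) == fib(n + 1) - 1
\end{verbatim}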
\subsection{Enumerating $F_{n}(321,3124,4123)$}
The main result of this subsection is as follows.
\begin{thm}\label{3124-4123}
For $n\geq 1$, $$|F_{n}(321,3124,4123)|=F_{n+1}-1.$$
\end{thm}
\begin{prop}
For $n\geq 4$, \begin{equation}\label{(3124,4123)-1}
|F_{n}^{(1)}(321,3124,4123)|=F_{n}-1.
\end{equation}
\end{prop}
\begin{proof}
We claim $\pi_{2}\leq 4$.
If $\pi_{2}\geq 5$, then 2,3 and 4 must be to the right of $\pi_{2}$. If they are in increasing order, then $\pi_{2},2,3,4$ would be a copy of 4123. Whereas if there is a descent, then $\pi_{2}$ and the decreasing pair would be a copy of 321, which $\pi$ avoids. So $\pi_2\leq 4$.
We shall prove \eqref{(3124,4123)-1} by induction.
For $n=4$ we have \[
F_{4}^{(1)}(321,3124,4123)=\{1234,1243,1324,1423\}.
\] So, $$|F_{4}^{(1)}(321,3124,4123)|=4=F_{4}-1.$$
Suppose this equality holds for positive integers less than $n$, and we prove it holds for $n$ as well. The set $F_{n}^{(1)}(321,3124,4123)$ can be classified into three cases according to the value of $\pi_{2}$.
\begin{enumerate}[(1)]
\item $\pi_{2}=2$. Then 1,2 can be as a whole to play the role of 1. So by induction, the number of permutations in this case can be reduced to \[
|F_{n-1}^{(1)}(321,3124,4123)|=F_{n-1}-1.
\]
\item $\pi_{2}=3$. Then $\pi_{3}=2$. If not, 2 must be to the right of $\pi_3$ and $3,\pi_{3},2$ would be a copy of $f$. So the first three letters of $\pi$ are 1,3,2. They can also be treated as a whole to play the role of 1. Thus the number of permutations in this case is \[
|F_{n-2}^{(1)}(321,3124,4123)|=F_{n-2}-1.
\]
\item $\pi_{2}=4$. Then we have $\pi_{3}=2$. We shall prove this by contradiction. Suppose that $\pi_3\geq 3$. On one hand, if $\pi_{3}=3$, then 4,3,2 create a copy of 321. On the other hand, if $\pi_{3}>3$, actually $\pi_{3}\geq 5$, then $4,\pi_{3},3$ would be a copy of $f$. So $\pi_{3}=2$. Furthermore, we have $\pi_{n}=3$. If not, $4,2,3,\pi_{n}$ is a copy of 3124 since 3 appears after 2 and before $\pi_n$. It remains to determine the positions of $5,\cdots,n$ for $n\geq 5$. They are to the left of 3, so to avoid 321, they must be in increasing order. Hereby, we have\[
\pi=1\ 4\ 2\ 5\cdots n\ 3.
\]
\end{enumerate}
Thus,\[
|F_{n}^{(1)}(321,3124,4123)|=(F_{n-1}-1)+(F_{n-2}-1)+1=F_{n}-1.
\]
\end{proof}
\begin{prop}
For $n\geq 4$, \begin{equation}
|F_{n}^{(2)}(321,3124,4123)|=F_{n-1}.
\end{equation}
\end{prop}
\begin{proof}
We claim $2\leq \pi_{1}\leq 3$. If not, suppose $\pi_{1}\geq 4$. Then 2,3 must be to the right of 1. If they are in increasing order, then $\pi_{1},1,2,3$ is a copy of 4123. Whereas if they are in decreasing order, then $\pi_{1},3,2$ is a copy of 321. So $\pi_1=2$ or $\pi_1=3$. Thus $ F_{n}^{(2)}(321,3124,4123) $ can be classified into two cases according to the value of $\pi_1$.
If $\pi_{1}=2$, then 2,1 can be as a whole to play the role of 1. The number of permutations in this case can be reduced to \[
|F_{n-1}^{(1)}(321,3124,4123)|=F_{n-1}-1.
\]
If $\pi_{1}=3$, then we have $\pi=3\ 1\ 4\cdots n\ 2$. First we shall show that $\pi_{n}=2$. If $\pi_{n}\neq 2$, then $3,1,2,\pi_{n}$ would be a copy of 3124. Second, to avoid 321, the letters $4,\cdots,n$ appearing before 2 must be in increasing order.
Therefore,
\[
|F_{n}^{(2)}(321,3124,4123)|=(F_{n-1}-1)+1=F_{n-1}.
\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{3124-4123}]
We can check directly for $n\leq 3$.
For $n\geq 4$, we have
\[
\begin{aligned}
|F_{n}(321,3124,4123)|=&|F_{n}^{(1)}(321,3124,4123)|+|F_{n}^{(2)}(321,3124,4123)|
\\=&(F_{n}-1)+F_{n-1}\\
=&F_{n+1}-1.
\end{aligned}
\]
\end{proof}
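The final Fibonacci formula admits the same brute-force confirmation:
\begin{verbatim}
for n in range(1, 9):  # Theorem: F_{n+1} - 1 for n >= 1
    assert count(n, [(3, 2, 1), (3, 1, 2, 4), (4, 1, 2, 3)]) == fib(n + 1) - 1
\end{verbatim}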
|
{
"arxiv_id": "2302.13656",
"language": "en",
"timestamp": "2023-03-02T02:12:46",
"url": "https://arxiv.org/abs/2302.13656",
"yymm": "2302"
} |
\section{Introduction} \label{SECT:intro}
The goal of this paper is to evaluate the irrationality level of all possible choice behaviors on a finite set of alternatives.
We perform this task in three successive steps:
\begin{itemize}
\item[(1)] establish a benchmark of rational choice behavior;
\item[(2)] endow the set of all choice behaviors with a highly discerning metric;
\item[(3)] compute the distance of any behavior from the benchmark of rationality.
\end{itemize}
The output of this process is a \textsl{rational degree of irrationality} of any deterministic choice behavior.
(The use of the term `rational' is motivated by the fact that we compute a distance from rationality in order to measure irrationality.)
Before addressing in detail each step of this approach, let us discuss the general domain of our analysis.
Classically, the literature on choice theory is exclusively concentrated on `decisive' choice behaviors, intended as situations in which the decision maker (DM) selects at least one item from any nonempty subset of the ground set: see, among a large amount of relevant contributions, the seminal papers by \cite{Samuelson1938}, \cite{Arrow1959}, and \cite{Sen1971}.
In other words, the domain of analysis is classically restricted to \textsl{choice correspondences}, which are functions mapping nonempty sets into nonempty subsets.
In addition, most of the recent models of `bounded rationality in choice' typically deal with the even more restricted case of \textsl{choice functions}, which are single-valued choices correspondences (i.e., a unique item is selected from any nonempty menu): see, among several papers on the topic, \cite{ManziniMariotti2007} and \cite{MasatliogluNakajimaOzbay2012}.\footnote{See \cite{GiaPetWat2022a} for a list of many models of bounded rationality in choice and a common analysis of their features by a unified approach.}
Despite the great abundance of literature on choice functions and choice correspondences, it appears more realistic to consider the general case of \textsl{quasi-choices}, which model the behavior of possibly indecisive DMs: in this situation, the agent is allowed to select all, some, or none of the items available in any menu.
To justify the potential interest in this approach, very recently \cite{Costa-GomezCuevaGerasimouTejiscak2022} mention some compelling experiments, which suggest that choice models rejecting decisiveness may offer a powerful tool to study revealed preferences.\footnote{See also Chapter~1 of the advanced textbook on microeconomic theory by \citet{Kreps2013}, as well as the arguments presented in Section~1 of the recent paper by \citet{AlcantudCantoneGiarlottaWatson2022}.}
That is why in this paper we evaluate the rationality level of any type of choice behavior, may it be decisive or not.
Now we describe the three stages of our approach.
\begin{description}
\item[(1)] The first step consists of the selection of the benchmark of rationality ---the `zero'--- from which deviations ought to be measured.
We select the most natural candidate, namely the family of all quasi-choices over the given set that are considered `rational' according to \textsl{revealed preference theory} \citep{Samuelson1938}.
Technically, these are the quasi-choices that can be explained by the maximization of a binary relation.\footnote{A more restrictive benchmark of rationality may be the family of quasi-choices rationalizable by binary relations satisfying some desirable properties.}
\item[(2)] The selection of a metric is the key step: this distance should accurately discriminate among different types of choice behavior in an economically significant way.
A possible candidate for this goal is the distance on quasi-choices proposed by \cite{Klamler2008}, which is computed by summing the cardinalities of all symmetric differences between pairs of choice sets.
However, due to its decomposability into trivial metrics, Klamler's distance has a low discerning power, and so it is not well-suited for our goals.
Using a notion of local rationalization, we design a refinement of this metric, which displays a sharp level of discrimination among different choices.
\item[(3)] To finally establish the degree of irrationality for any deterministic choice behavior, we use the metric selected at step 2 to compute the minimum distance of a quasi-choice from a rationalizable one.
In this way, all quasi-choices belonging to the benchmark of rationality have a degree of irrationality equal to zero, whereas all the others display a degree with a strictly positive value.
Moreover, the more irrational a choice behavior is, the higher the value of the index becomes.
We also describe a weighted version of this approach.
Formally, since each rationalizable choice is explained by the maximization of a unique asymmetric preference ---the \textsl{strict revealed preference} \citep{Samuelson1938}---, we measure the subjective desirability of each rational behavior by the `level of transitivity' of this binary relation:\footnote{Both \citet{Mas-ColellWhinstonGreen1995} and \cite{Kreps2013} consider transitivity and completeness the basic tenets of economic rationality.} the more this preference is close to being fully transitive,\footnote{By `fully transitive' we mean that both strict preference and associated incomparability are transitive.} the higher the desirability of the choice becomes.
Once subjective desirability is encoded, we measure the degree of irrationality of any behavior by taking a weighted distance from rational behavior.
\end{description}
Finally, we suggest a probabilistic extension of our approach, which applies to stochastic choice functions.
Recall that a stochastic choice function assigns a real number to each pair formed by a menu and an item in it, evaluating the likelihood of that item being selected from that menu.
Choice functions are special stochastic choices in which this likelihood is one for exactly one item in a menu and zero for all the others.
The steps to measure the irrationality of a stochastic choice behavior are, however, different from the ones of the deterministic setting.
Specifically, the first step is again the selection of a benchmark of rationality, for which we take the family of all stochastic choice functions satisfying the \textsl{random utility model} \textsl{(RUM)} \citep{Block_Marschak1960}.
On the other hand, since the second and the third step of the deterministic approach are hardly adaptable,
we employ a different procedure.
In fact, we take advantage of the characterization established by \cite{Falmagne1978}, who shows that a stochastic choice function satisfies RUM if and only if all its \textsl{Block-Marschak polynomials} are non-negative.
Therefore, any choice that fails to satisfy RUM must have at least one negative Block-Marschak polynomial.
Upon summing up all these negative polynomials for each element in the ground set, we obtain a \textsl{negativity vector}, which provides a discerning measure of the irrationality of a stochastic choice behavior.
The comparison of these vectors is then performed by a permutation-invariant Pareto ordering, which in turn yields a partial classification of all stochastic choices according to their degree of irrationality.
\smallskip
The paper is organized as follows.
Section~\ref{SECT:measuring_irrationality} collects preliminary notions and presents a review of the related literature on deterministic choices.
In Section~\ref{SECT:metrics_on_choices} we describe the metric introduced by Klamler, and then a highly discerning variation of it.
In Section~\ref{SECT:rational_index} we formally define two distance-based degrees of irrationality of a choice behavior, and show the soundness of the novel metric for this task.
In Section~\ref{SECT:stochastic_setting} we suggest an extension of our approach to a stochastic environment.
\section{Measures of deterministic irrationality} \label{SECT:measuring_irrationality}
First we recall some preliminary notions in choice theory.
Then we suggest several ways to measure the irrationality of a deterministic choice behavior, and present a quick review of recent literature on the topic.
\subsection{Preliminaries}
A finite set $X$ of $n \geq 2$ alternatives is fixed throughout.
We use $\XX$ to denote the family of all nonempty subsets of $X$.
A \textsl{quasi-choice correspondence} over $X$ is a function $C \colon \XX \cup \{\varnothing\} \to \XX \cup \{\varnothing\}$ such that $C(A) \subseteq A$ for all $A \in \XX \cup \{\varnothing\}$.
A \textsl{choice correspondence} over $X$ is a quasi-choice that is never empty-valued on nonempty sets, that is, a function $c \colon \XX \cup \{\varnothing\} \to \XX \cup \{\varnothing\}$ such that $\varnothing \neq c(A) \subseteq A$ for all $A \in \XX$.\footnote{To emphasize decisiveness, we shall use upper case letters ($C$, $C'$, etc.) to denote possibly indecisive choice behaviors, that is, quasi-choice correspondences. On the other hand, lower case letters ($c$, $c'$, etc.) will be employed to denote decisive choice behaviors, that is, choice correspondences.}
Sets in $\XX \cup \{\varnothing\}$ are \textsl{menus}, elements of a nonempty menu are \textsl{items}, and the set $C(A)$ (or $c(A)$) is the \textsl{choice set} of the menu $A$.
Unless confusion may arise, hereafter we speak of \textsl{quasi-choices} and \textsl{choices}, respectively.
Moreover, $\textsf{Choice}(X)$ (resp.\ $\textsf{choice}(X)$) denotes the family of all quasi-choices (resp.\ choices) over $X$.
\smallskip
A binary relation $\succ$ over $X$ is a subset of $X \times X$, which is:
\begin{itemize}
\item \textsl{asymmetric} if $x \succ y$ implies $\neg(y \succ x)$ for all $x,y \in X$;
\item \textsl{irreflexive} if $x \succ x$ holds for no $x \in X$;
\item \textsl{acyclic} if $x_1 \succ x_2 \succ \ldots \succ x_k \succ x_1$ holds for no $x_1,x_2,\ldots,x_k \in X$ ($k \geqslant 3$);
\item \textsl{transitive} if $x \succ y \succ z$ implies $x \succ z$ for all $x,y,z \in X$;
\item \textsl{negatively transitive} if $\neg(x \succ y) \wedge \neg(y \succ z)$ implies $\neg(x \succ z)$ for all $x,y,z \in X$.
\end{itemize}
\noindent Note that (i) asymmetry implies irreflexivity, (ii) transitivity and asymmetry imply acyclicity, and (iii) asymmetry and negative transitivity imply transitivity.
We will often refer to an asymmetric binary relation as a \textsl{(strict) preference}.
\smallskip
Choices and preferences are closely related to each other.
In fact, since the seminal work of \cite{Samuelson1938}, the `rationality' of a decisive choice behavior is classically modeled by the notion of `binary rationalizability', that is, the possibility to explain it by maximizing a suitable binary relation.
Formally, a choice $c \colon \XX \cup \{\varnothing\}\to \XX \cup \{\varnothing\}$ is \textsl{rationalizable} if there is an asymmetric binary relation $\succ$ over $X$ such that for any nonempty menu $A$, the equality\vspace{-0.1cm}
$$
c(A) = \max (A, \succ) = \{x \in A : a \succ x \text{ for no } a \in A\}\vspace{-0.1cm}
$$
holds.
The binary relation $\succ$ is called the \textsl{(strict) preference revealed by $c$}.
Note that $\succ$ must also be acyclic in order to rationalize the choice $c$.
Moreover, the asymmetric relation of revealed preference is unique for any rationalizable choice.\footnote{Here, we purposely avoid mentioning the symmetric part of the relation of revealed preference, because it is irrelevant to detect the rationalizability of a choice.}
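For concreteness, rationalizability can be tested mechanically on small ground sets. The following Python sketch relies on the standard fact that a choice correspondence is rationalizable if and only if it is rationalized by its base revealed preference, defined on pairs by $x \succ y$ iff $c(\{x,y\})=\{x\}$; the encoding of choices as dictionaries and all helper names are ours.
\begin{verbatim}
from itertools import chain, combinations

def menus(X):
    # All nonempty menus of the ground set X.
    return chain.from_iterable(combinations(X, r)
                               for r in range(1, len(X) + 1))

def is_rationalizable(X, c):
    # Base revealed preference: x beats y iff c({x,y}) = {x}.
    beats = {(x, y) for x in X for y in X
             if x != y and c[frozenset({x, y})] == {x}}
    # c is rationalizable iff every choice set is the set of maximal items.
    return all(c[frozenset(A)] ==
               {x for x in A if all((a, x) not in beats for a in A)}
               for A in menus(X))

# A choice with cyclic pairwise behavior on {1,2,3} is not rationalizable.
X = {1, 2, 3}
c = {frozenset({1}): {1}, frozenset({2}): {2}, frozenset({3}): {3},
     frozenset({1, 2}): {1}, frozenset({2, 3}): {2}, frozenset({1, 3}): {3},
     frozenset({1, 2, 3}): {1, 2, 3}}
print(is_rationalizable(X, c))  # False
\end{verbatim}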
\subsection{Related literature}
In view of our goal to distinguish choice behaviors by their consistency features, the notion of rationalizability is the most popular in the literature.
This notion was first introduced for \textsl{choice functions} (that is, single-valued choice correspondences), and then extended to choice correspondences.
However, rationalizability can be naturally generalized to quasi-choices, provided that the rationalizing preference is allowed not to be irreflexive, asymmetric, or acyclic.
Formally, we call a quasi-choice $C \colon \XX \cup \{\varnothing\} \to \XX \cup \{\varnothing\}$ \textsl{rationalizable} if there is an arbitrary binary relation over $X$ ---here denoted by `$\to$' to emphasize its arbitrariness--- such that the equality\vspace{-0.1cm}
$$
C(A) = \max(A,\to) = \{x \in A : a \to x \text{ for no } a \in A\}\vspace{-0.1cm}
$$
holds for all menus $A \in \XX$.
Here the key fact is the possible lack of properties of $\to$, which follows from the necessity to model indecisive choice behaviors.
For instance, since asymmetry is not guaranteed, we may have $x \to y \to x$ for some distinct elements $x$ and $y$, in which case $C(\{x,y\})$ is empty.\footnote{To justify such a situation, imagine a political ballot in which the two remaining candidates are extremists, and my moderate political view suggests me to abstain from voting.}
Similarly, the possible lack of irreflexivity of $\to$ permits situations of the type $x \to x$, which in turn yields $C(\{x\}) = \varnothing$.\footnote{For instance, if a restaurant only offers a chocolate cake as dessert and I am allergic to chocolate, then I shall avoid taking dessert.}
Note also that, contrary to the case of choices, the rationalizable preference ---which is called a \textsl{voter} by \cite{AlcantudCantoneGiarlottaWatson2022}--- need not be unique for the general case of quasi-choices.\footnote{On this point, see Section~2 in \cite{AlcantudCantoneGiarlottaWatson2022}. Here the authors extensively dwell on the reasons motivating the more general use of quasi-choices instead of choices, and the use of arbitrary binary relations to justify choice behavior.}
All in all, according to this classical paradigm, any (decisive or indecisive) choice behavior is regarded as \textsl{irrational} if it fails to be rationalizable.
This yields a simple dichotomy \textit{rational/irrational} or, equivalently, \textit{rationalizable/non-rationalizable}.
However, this dichotomy is not very satisfactory in practice, because rationalizability fails to explain the overwhelming majority of observed choice behaviors.\footnote{For a precise computation of the fraction of rationalizable choices over a set of fixed size, see \citet[Lemma~6]{GiaPetWat2022a}.}
Recently, following the inspiring analysis of~\cite{Simon1955}, the notion of rationalizability has been amended by several forms of \textsl{bounded rationality}, which aim to explain a larger portion of choice behaviors by means of more flexible paradigms.
As evidence of this trend, tens of models of bounded rationality in choice have been proposed in the last twenty years: see \cite{GiaPetWat2022a} for a vast account of them.
The dichotomy \textit{boundedly rational/boundedly irrational} is certainly more satisfactory than the rational/irrational one, allowing one to identify choice behaviors that obey some more relaxed (but still justifiable) constraints.\footnote{The fraction of boundedly rational choice functions is definitively larger than that of rationalizable choices: compare Lemma~6 with Theorem~3 in \cite{GiaPetWat2022a}.}
However, this bounded rationality approach does not apply to most choice behaviors: in fact, it has essentially been proposed exclusively for choice functions, with very few cases of choice correspondences, and it leaves out the case of quasi-choices entirely.
A conceptually different modelization of rationality does not distinguish between (bounded) rationality and (bounded) irrationality.
Rather, it creates a partition of the family of choices in several classes, each of which is assigned a degree of rationality.
A seminal approach in this direction is the \textsl{rationalization by multiple rationales (RMR)} of \cite{KalaiRubinsteinSpiegler2002}.
The RMR model yields a partition of the family of all choice functions over a set with $n$ items into $n-1$ equivalence classes of rationality, which are determined by the minimum number of linear orders that are necessary to explain decisive choice behavior: the larger this number, the less rational the behavior.\footnote{Very recently, a structured version of the RMR model, called \textsl{choice by salience}, has been proposed by \cite{GiarlottaPetraliaWatson2022b}.}
Rationalizable choice functions obviously belong to the first class of rationality, since a unique linear order suffices.
On the other side of the scale of rationality, we find those choice functions that require the maximum number of rationales (namely $n-1$) to be justified.
Despite its conceptually appealing motivation, the RMR model displays some drawbacks: (i) the family of rationalizing linear orders only provides a `non-structured' explanation of choice behavior; (ii) the class of maximally irrational choices (i.e., the ones requiring $n -1$ rationales) essentially collects all choices, even for very small sets of alternatives; and (iii) this model only applies to choice functions (but it could be naturally extended to choice correspondences).\footnote{The choice model based on salience \citep{GiarlottaPetraliaWatson2022b} creates a partition into $n$ classes of rationality, and positively addresses the first two issues of the RMR approach. Specifically, concerning (i), a binary relation of salience restricts the application of rationales to those indexed by the maximally salient items of a menu. Concerning (ii), the smallest choice function in the last class of rationality that the authors are able to exhibit is defined on a set of 39 elements.}
Another approach devoted to identifying the degree of irrationality of a deterministic choice function is due to \cite{AmbrusRozen2014}.
Like the RMR model, this approach is based on a counting technique.
Specifically, the authors use a classical property of choice consistency ---namely \textsl{Independence of Irrelevant Alternatives} \citep{Arrow1950}, which is equivalent to \textsl{Axiom~$\alpha$} \citep{Chernoff1954} for choice functions--- to establish the degree of irrationality of a choice.
They count the number of violations of Axiom~$\alpha$ that a choice behavior exhibits: the larger this number, the less rational the behavior.
In particular, they introduce a notion of violations of Axiom~$\alpha$, and accordingly define the \textsl{index of irrationality} of a choice by counting all menus that violate Axiom~$\alpha$.
The abstract idea of their approach is appealing: it amounts to measuring irrationality by counting deviations from rationality according to an axiomatic parameter (Axiom~$\alpha$).
As we shall see, the approach developed in this paper measures the irrationality of choice behaviors in a way inspired by \cite{AmbrusRozen2014}.
In fact, similarly to them, we analyze deviations from rationality according to axiomatic parameters, namely Axioms~$\alpha$ and $\gamma$ \citep{Sen1971}, which are equivalent to the rationalizability of a quasi-choice.
However, contrary to \cite{AmbrusRozen2014}, we do not \textit{directly} count violations of properties of choice consistency.
Instead, we use an \textit{indirect} approach: first we establish a theoretical way to measure violations, that is, a metric, and only then we count deviations from rationality using this metric.
Of course, the soundness of such a procedure boils down to the selection of a metric that is both economically significant and highly discerning.
The next three sections will extensively address this issue.
\section{Metrics on quasi-choices} \label{SECT:metrics_on_choices}
This section is devoted to presenting ways to endow the family of all possible choice behaviors with metrics.
Specifically, we first recall a metric due to \cite{Klamler2008}, and then describe a variation of it, which exhibits a rather sharp discernibility power.
In Section~\ref{SECT:rational_index} we shall employ this novel metric as the measuring stick to evaluate deterministic deviations from rational behavior.
To start, we recall the notion of distance between quasi-choices.
\begin{definition} \rm \label{DEF:distance on choices}
A \textsl{metric} on $\textsf{Choice}(X)$ is a map $d \colon \textsf{Choice}(X) \times \textsf{Choice}(X) \to {\mathbb R}$ such that for all $C,C',C'' \in \textsf{Choice}(X)$, the following properties hold:\vspace{-0,1cm}
\begin{itemize}
\item[\textsf{[A0.1]}] $d(C,C') \geq 0$, and equality holds if and only if $C = C'$;
\item[\textsf{[A0.2]}] $d(C,C') = d(C',C)$;
\item[\textsf{[A0.3]}] $d(C,C') + d(C',C'') \geq d(C,C'')$.
\end{itemize}
Property \textsf{A0.1} is \textsl{non-negativity}, property \textsf{A0.2} is \textsl{symmetry}, and property \textsf{A0.3} is the \textsl{triangle inequality}.
\end{definition}
\subsection{Klamler's metric}
The symmetric difference $\Delta$ of sets \citep{Kemeny1959} induces a metric on quasi-choices:
\begin{definition}[\citealp{Klamler2008}] \rm \label{DEF:distance by symmetric difference}
Let $d_\Delta \colon \textsf{Choice}(X) \times \textsf{Choice}(X) \to {\mathbb R}$ be the function defined as follows for all $C,C' \in \textsf{Choice}(X)$:\vspace{-0,1cm}
$$
d_\Delta(C,C') \coloneqq \sum_{S \in \XX} \big\vert C(S) \, \Delta \, C'(S) \big\vert \,.\vspace{-0,1cm}
$$
\end{definition}
By Definition~\ref{DEF:distance by symmetric difference}, the distance between two quasi-choices over the same set of alternatives is obtained by a simple and intuitive procedure: first count the number of items in a menu that are in one choice set but not in the other one, and then take the sum of these numbers over all menus.
Being simple and intuitive is usually regarded as a good feature of a notion.
Unfortunately, here it translates into an oversimplified evaluation of the distance between two behaviors, which totally neglects their structural features.
Specifically, by only looking at the `size' of the disagreement of two quasi-choices over menus, Definition~\ref{DEF:distance by symmetric difference} fails to consider the `semantics' of this disagreement, which lies in the very nature of the items selected by exactly one of them.
This in turn produces some important shortcomings of this metric in the process of detecting deviations from rational behavior.
The next two examples provide striking instances of this kind.
\begin{example} \rm \label{EX:motivating}
Consider the following three choice functions on $X=\{x,y,z\}$ (the unique item selected from each menu is underlined):\vspace{-0,1cm}
\begin{align*}
(c_1) & \qquad \underline{x}y \,,\quad \underline{x}z \,,\quad \underline{y}z \,,\quad \underline{x}yz\,, \\
(c_2) & \qquad \underline{x}y \,,\quad \underline{x}z \,,\quad \underline{y}z \,,\quad x\underline{y}z\,, \\
(c_3) & \qquad \underline{x}y \,,\quad \underline{x}z \,,\quad \underline{y}z \,,\quad xy\underline{z}\,.
\end{align*}
The choices $c_1, c_2, c_3$ are equal on pairs of items but differ on the full menu $X$.
On pairs, the selection process is reproduced by maximizing the linear order $x \succ y \succ z$.
However, $c_1$ is rationalizable by $\succ$, whereas $c_2$ and $c_3$ are not.
Intuition suggests that $c_3$ should be further than $c_2$ from the rational choice $c_1$.
For instance, if we use the linear order $\succ$ to rationalize pairs of items, then $c_2$ selects the second-best item from $X$, whereas $c_3$ ends up selecting the worst item of the three.\footnote{Of course, one may always consider different scenarios, in which $c_3$ is regarded more rational than $c_2$. However, these scenarios appear to be less likely to happen.}
On the other hand, the metric $d_\Delta$ does regard $c_2$ and $c_3$ as equally distant from $c_1$, because we have\vspace{-0,1cm}\vs
$$
d_\Delta(c_1,c_2) = \vert c_1(X) \Delta c_2(X) \vert = 2 \quad \text{ and } \quad d_\Delta(c_1,c_3) = \vert c_1(X) \Delta c_3(X) \vert = 2.
$$
\end{example}
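The computation of $d_\Delta$ is immediate to mechanize. The following minimal sketch (illustrative only, with a hypothetical string-based encoding of menus) reproduces the distances of Example~\ref{EX:motivating}, confirming that $d_\Delta$ cannot separate $c_2$ from $c_3$:
\begin{verbatim}
def d_delta(C1, C2):
    # Klamler's metric: sum over all menus S of
    # the size of the symmetric difference C1(S) ^ C2(S).
    return sum(len(C1[S] ^ C2[S]) for S in C1)

def choice(rows):
    # Build a choice function from strings, e.g. {'xy': 'x'}.
    return {frozenset(m): frozenset(c) for m, c in rows.items()}

base = {'x': 'x', 'y': 'y', 'z': 'z', 'xy': 'x', 'xz': 'x', 'yz': 'y'}
c1 = choice({**base, 'xyz': 'x'})
c2 = choice({**base, 'xyz': 'y'})
c3 = choice({**base, 'xyz': 'z'})
print(d_delta(c1, c2), d_delta(c1, c3))   # 2 2
\end{verbatim}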
\begin{example}\label{EX:metric_new} \rm
Let $c_1,c_2,c_3,c_4$ be four choice correspondences over $X=\{x,y,z,w\}$, which are defined exactly in the same way for all menus distinct from $\{x,y,z\}$ and $X$, namely\vspace{-0,1cm}\vs
$$
\underline{x}\underline{y} \,,\; \underline{x}z \,,\; \underline{x}w \,,\; \underline{y}\underline{z} \,,\; \underline{y}w \,,\; \underline{z}w \,,\; \underline{x}\underline{y}w \,,\; \underline{x}zw \,,\; \underline{y}\underline{z}w\,.\vspace{-0,1cm}
$$
However, $c_1,c_2,c_3,c_4$ select different elements from the two menus $\{x,y,z\}$ and $X$, namely\vspace{-0,1cm}
\begin{itemize}
\item[$(c_1)$] $\underline{x}\underline{y}z\,,\; \underline{x}\underline{y}zw\,$,\vspace{-0,1cm}
\item[$(c_2)$] $\underline{x}y\underline{z}\,,\; \underline{x}y\underline{z}w\,$,\vspace{-0,1cm}
\item[$(c_3)$] $x\underline{y}\underline{z}\,,\; x\underline{y}z\underline{w}\,$,\vspace{-0,1cm}
\item[$(c_4)$] $x\underline{y}z\,,\; xyz\underline{w}\,$.\vspace{-0,1cm}
\end{itemize}
Note that $c_1$ is rationalizable by the relation $\succ$ over $X$ such that $x \succ z$, $x \succ w$, $y \succ w$, and $z \succ w$.
On the other hand, the three choices $c_2,c_3,c_4$ fail to be rationalizable, but they have exactly the same distance from the rationalizable choice $c_1$:
\begin{align*}
d_\Delta(c_1,c_2) \;& =\; \big\vert c_1(X) \,\Delta\, c_2(X) \big\vert + \big\vert c_1(\{x,y,z\}) \,\Delta\, c_2(\{x,y,z\}) \big\vert = 2 + 2 = 4 \,, \\
d_\Delta(c_1,c_3) \; & =\; \big\vert c_1(X) \,\Delta\, c_3(X) \big\vert + \big\vert c_1(\{x,y,z\}) \,\Delta\, c_3(\{x,y,z\}) \big\vert = 2 + 2= 4\,,\\
d_\Delta(c_1,c_4) \; & =\; \big\vert c_1(X) \,\Delta\, c_4(X) \big\vert + \big\vert c_1(\{x,y,z\}) \,\Delta\, c_4(\{x,y,z\}) \big\vert = 3 + 1= 4\,.\vspace{-0,1cm}
\end{align*}
However, similarly to Example~\ref{EX:motivating}, it is reasonable to assume that $c_2$ is `semantically' closer to $c_1$ than $c_3$ is: in fact, $c_2$ selects from the menus $\{x,y,z\}$ and $X$ some items that are better ranked (by $\succ$) than those selected by $c_3$.
There are also solid arguments for the view that $c_4$ should be the farthest choice from $c_1$.
\end{example}
The low discernibility power of $d_\Delta$ is due to (some of) the properties it satisfies.
\cite{CarGiaWat2023} ---slightly correcting the findings of \cite{Klamler2008}--- prove that the following properties characterize $d_\Delta$ (a universal quantification is implicit):
\begin{itemize}
\item[\textsf{[A1]}] $d(C,C') + d(C',C'') = d(C,C'')$ if and only if $C'$ is between $C$ and $C''$;\footnote{The notion of `betweenness' is due to \cite{AlabayrakAleskerov2000}: $C'$ is \textsl{between} $C$ and $C''$ if $C(S) \cap C''(S) \subseteq C'(S) \subseteq C(S) \cup C''(S)$ holds for any $S \in \XX$.}
\item[\textsf{[A2]}] if $\widetilde{C}$ and $\widetilde{C'}$ result from, respectively, $C$ and $C'$ by the same permutation of alternatives, then $d(C,C') = d(\widetilde{C},\widetilde{C'})$;
\item[\textsf{[A3]}] if $C$ and $C'$ agree on all (nonempty) menus in $\XX$ except for a subfamily $\XX' \subseteq \XX$, then the distance $d(C,C')$ is determined exclusively by the choice sets over $\XX'$;
\item[\textsf{[A4']}] if $C,C',\widetilde{C},\widetilde{C'}$ only disagree on a menu $T \in \XX$ such that $C(T) = \widetilde{C}(T) \Delta S$ and $C'(T) = \widetilde{C'}(T) \Delta S$ for some $S \subseteq T$, then $d(C,C') = d(\widetilde{C},\widetilde{C'})$;
\item[\textsf{[A5']}] for all $C \in \textsf{Choice}(X)$ and $A \in \XX$, there is $C' \in \textsf{Choice}(X)$ with the property that $\vert C(A) \Delta C'(A) \vert = 1 $, $C(B) = C'(B)$ for all $B \neq A$, and $d(C,C')=1$.
\end{itemize}
Axioms \textsf{A1}, \textsf{A2}, \textsf{A3}, \textsf{A4'}, and \textsf{A5'} are rather intuitive requirements for a metric on the family of quasi-choices.
In fact, Axiom~\textsf{A1} strengthens the triangle inequality \textsf{A0.3} by requiring that equality holds exactly for cases of betweenness.
Axiom~\textsf{A2} states a condition of invariance under permutations.
Axiom~\textsf{A3} is a separability property, whereas Axiom~\textsf{A4'} is a condition of translation-invariance.
The first four properties produce a unique metric, up to some multiplicative coefficients that only depend on the size of the menu: Axiom~\textsf{A5'} forces these coefficients to be unique.
As announced, we have:
\begin{theorem}[\citealp{CarGiaWat2023}]\label{THM:main_theorem}
The unique metric on $\textsf{Choice}(X)$ that satisfies Axioms \textsf{A1},\,\textsf{A2},\,\textsf{A3},\,\textsf{A4'},\,\textsf{A5'} is $d_\Delta$.
\end{theorem}
Unfortunately, the discernibility power of $d_\Delta$ among different choice behaviors is rather low, which is essentially due to the satisfaction of Axioms \textsf{A1} and \textsf{A3}.
To illustrate this fact, below we summarize some of the findings in \cite{CarGiaWat2023}.
\begin{definition} \rm \label{DEF:elementary_choices}
A quasi-choice $C$ on $X$ is \textsl{elementary} if there is at most one menu $S \in \XX$ such that $C(S) \neq \varnothing$.
For any $S,T \in \XX \cup \{\varnothing\}$ such that $T \subseteq S$, we denote by $C_{S \mapsto T}$ the elementary quasi-choice over $X$ defined as follows:\vspace{-0,1cm}\vs
$$
C_{S \mapsto T}(A) \coloneqq
\left\{
\begin{array}{lll}
T & \text{if } A =S \\
\varnothing & \text{otherwise.}
\end{array}
\right.
$$
\end{definition}
\begin{definition} \rm \label{DEF:characteristic_metric_on_sets}
Let $d$ be a metric on $\textsf{Choice}(X)$, and $S \in \XX$.
Denote by $\XX_S$ the family of all nonempty subsets of $S$.
Define a metric $d_S \colon \left(\XX_S \cup \{\varnothing\}\right) \times \left(\XX_S\cup \{\varnothing\}\right) \to {\mathbb R}$ by\vspace{-0,1cm}
$$
d_S(A,B) \coloneqq d\left(C_{S \mapsto A},C_{S \mapsto B} \right)\vspace{-0,1cm}
$$
for all $A,B \in \XX_S \cup \{\varnothing\}$.
We call $d_S$ the \textsl{characteristic metric induced by $d$ on $\XX_S \cup \{\varnothing\}$}.
\end{definition}
Any metric on $\textsf{Choice}(X)$ satisfying \textsf{A1} and \textsf{A3} ---hence, in particular, $d_\Delta$--- is a sum of characteristic metrics:
\begin{lemma}[Elementary Decomposability] \label{LEMMA:semantics of A3}
Let $d$ be a metric on $\textsf{Choice}(X)$ satisfying Axioms \textsf{A1} and \textsf{A3}.
For all $C,C' \in \textsf{Choice}(X)$,\vspace{-0,1cm}
$$
\displaystyle d(C,C') = \sum_{S \in \XX} d_S(C(S),C'(S)).
$$
\end{lemma}
The metric defined in the next section satisfies neither \textsf{A1} nor \textsf{A3}.
\subsection{A rational variation of Klamler's metric} \label{SECT:rational_metric}
We design a novel metric by suitably modifying Klamler's distance.
This variation is inspired by \cite{AmbrusRozen2014}, because we employ two axioms of choice consistency ---in place of one--- to guide its construction:\vspace{-0,1cm}
\begin{description}
\item[Axiom $\alpha\,$:]
for all $A,B \subseteq X$ and $x \in X$, if $x \in A \subseteq B$ and $x \in C(B)$, then $x \in C(A)$;\vspace{-0,1cm}
\item[Axiom $\gamma\,$:]
for all $A,B \subseteq X$ and $x \in X$, if $x \in C(A)$ and $x \in C(B)$, then $x \in C(A \cup B)$.\vspace{-0,1cm}
\end{description}
Axiom~$\alpha$ is due to \cite{Chernoff1954}.
In words, if an item is selected from a menu, then it is also chosen from any smaller menu containing it.
This property is often referred to as \textsl{Standard Contraction Consistency}.
Its role in abstract theories of individual and social choice is central.
\citet[p.\,407]{Nehring1997} even calls Axiom~$\alpha$ ``\textit{the mother of all choice consistency conditions}''.
Axiom~$\gamma$, often referred to as \textsl{Standard Expansion Consistency}, is due to \cite{Sen1971}.
It says that if an item is selected from two menus, then it is also chosen from the larger menu obtained as their union.
The connection between these two properties and rational behavior is well-known:
\begin{theorem}[\citealp{Sen1971}]
A choice correspondence is rationalizable if and only if both $\alpha$ and $\gamma$ hold.\footnote{This characterization readily extends to quasi-choices: see \citet[Theorem 2.5]{AizermanAleskerov1995}. For a proof of this generalization, see \citet[Theorem 2.8]{AleMon2002}.}
\end{theorem}
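Sen's characterization also yields an immediate computational test for rationalizability. The sketch below (illustrative only; it presupposes the dictionary encoding of the earlier sketches, with every nonempty menu as a key) checks Axioms~$\alpha$ and~$\gamma$ directly:
\begin{verbatim}
def satisfies_alpha(C):
    # If x is in C(B) and x is in A, a subset of B, then x is in C(A).
    return all(x in C[A]
               for B in C for x in C[B]
               for A in C if A <= B and x in A)

def satisfies_gamma(C):
    # If x is in C(A) and in C(B), then x is in C(A union B).
    return all(x in C[A | B]
               for A in C for B in C for x in C[A] & C[B])

def is_rationalizable(C):
    # By Sen's theorem (extended to quasi-choices as in the footnote).
    return satisfies_alpha(C) and satisfies_gamma(C)
\end{verbatim}
With the encoding of the previous sketch, \texttt{is\_rationalizable(c1)} returns \texttt{True}, whereas \texttt{c2} and \texttt{c3} both fail Axiom~$\alpha$ on the full menu.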
We now proceed to define a suitable refinement of $d_\Delta$, which takes into account all `locally rational approximations' of the original quasi-choice.
Specifically, we consider all restrictions of the given correspondence to all subsets of any given menu, and modify them in order to obtain quasi-choices that locally satisfy Axioms~$\alpha$ and~$\gamma$.
Finally, we sum up all differences of these rational modifications.
\begin{definition}\rm \label{DEF:alpha_closure_of_point}
Let $C \colon \XX \cup \{\varnothing\} \to \XX \cup \{\varnothing\}$ be a quasi-choice over $X$, and $A \in \XX$ a nonempty menu.
Define a quasi-choice $C_A \colon \mathscr{A} \cup \{\varnothing\} \to \mathscr{A} \cup \{\varnothing\}$ over $A$, where $\mathscr{A}$ is the subfamily of $\XX$ comprising all nonempty subsets of $A$, as follows for each $B \in \mathscr{A} \cup \{\varnothing\}$:
$$
C_A(B) \coloneqq
\left\{
\begin{array}{ll}
C(A)\cap B & \text{ if } C(A)\cap B \neq \varnothing, \\
C(B) & \text{ otherwise.}
\end{array}\vspace{-0,1cm}
\right.
$$
We call $C_A$ the \textsl{rational localization of $C$ at $A$}.
Then, for all $C,C' \in \textsf{Choice}(X)$, the \textsl{rational distance} between $C$ and $C'$ is defined by\vspace{-0,1cm}
$$
d_{\operatorname{rat}}(C,C')\coloneqq \sum_{A \in \XX} d^A_\Delta(C_A,C'_A),\vspace{-0,1cm}\vs
$$
where $d^A_\Delta$ denotes the restriction $d_\Delta \!\!\upharpoonright_{\textsf{Choice}(A) \times \textsf{Choice}(A)}$.
\end{definition}
Definition~\ref{DEF:alpha_closure_of_point} employs $(\vert 2^X \vert -1)$-many restrictions of the given metric $d_\Delta$ to compare standard modifications of two given quasi-choices: these modifications are a sort of rational closures of a given choice on a given menu.
Note that the terminology of `local rationalization' used for $C_A$ is motivated by the fact that any element $x \in C(A)$ is never responsible for a violation of Axioms~$\alpha$ or~$\gamma$ by $C_A$.\footnote{More formally, considering Axiom~$\alpha$, this means that if $y \in S \subseteq T \subseteq A$, $y \in C_A(T)$, and $y \notin C_A(S)$, then $y \notin C(A)$. Similarly, for Axiom~$\gamma$, if $S,T \subseteq A$, $y \in C_A(S) \cap C_A(T)$, and $y \notin C_A(S \cup T)$, then $y \notin C(A)$.}
\begin{example} \label{EX:computation C_A} \rm
We illustrate how Definition~\ref{DEF:alpha_closure_of_point} works in a very simple case.
Consider the two choice functions $c_2$ and $c_2^\prime$ over $X= \{x,y,z\}$ defined by\vspace{-0,1cm}\footnote{The choice function $c_2$ has already been considered in Example~\ref{EX:motivating}.}
\begin{align*}
(c_2) & \qquad \underline{x}y \,,\quad \underline{x}z \,,\quad \underline{y}z \,,\quad x\underline{y}z\,, \\
(c_2^\prime) & \qquad x\underline{y} \,,\quad \underline{x}z \,,\quad \underline{y}z \,,\quad x\underline{y}z\,.
\end{align*}
Note that $c_2$ and $c_2^\prime$ are equal except on the menu $\{x,y\}$, and $c_2^\prime$ is rationalizable by the linear order $y \succ x \succ z$.
To determine $d_{\operatorname{rat}}(c_2,c_2^\prime)$, we first compute their rational localizations $(c_2)_A$ and $(c_2^\prime)_A$ at any nonempty subset $A$ of $X$ having size at least two:
\begin{table}[h!]
\begin{center} \small
\begin{tabular}{|l||*{2}{c|}}
\hline
\backslashbox{menu}{localization}
&\makebox[2em]{$(c_2)_A \in \textsf{choice}(A)$}&\makebox[2em]{$(c_2^\prime)_A \in \textsf{choice}(A)$} \\
\hline\hline
\small $A =\{x,y\}$ & $\underline{x}\,,\;\underline{y}\,,\;\underline{x}y$ & $\underline{x}\,,\;\underline{y}\,,\;x\underline{y}$ \\
\hline
\small $A =\{x,z\}$ & $\underline{x}\,,\;\underline{z}\,,\;\underline{x}z$ & $\underline{x}\,,\;\underline{z}\,,\;\underline{x}z$ \\
\hline
\small $A =\{y,z\}$ & $\underline{y}\,,\;\underline{z}\,,\;\underline{y}z$ & $\underline{y}\,,\;\underline{z}\,,\;\underline{y}z$ \\
\hline
\small $A =\{x,y,z\}$ & $\underline{x}\,,\;\underline{y}\,,\;\underline{z}\,,\;x\underline{y}\,,\;\underline{x}z\,,\;\underline{y}z\,,\;x\underline{y}z$ & $\underline{x}\,,\;\underline{y}\,,\;\underline{z}\,,\;x\underline{y}\,,\;\underline{x}z\,,\;\underline{y}z\,,\;x\underline{y}z$ \\
\hline
\end{tabular}
\end{center}
\vspace{-0,1cm}\vs\vspace{-0,1cm}\vs\vspace{-0,1cm}
\end{table}
\noindent
(Note that all rational localizations at singletons are trivial choice functions in this particular case.)
Since $(c_2)_A=(c^\prime_2)_A$ for each $A\neq \{x,y\}$, whereas $(c_2)_{\{x,y\}}(\{x,y\})=\{x\}$ and $(c_2^\prime)_{\{x,y\}}(\{x,y\})=\{y\}$, we conclude\vspace{-0,1cm}\vs
$$
d_{\operatorname{rat}}(c_2,c_{2}^\prime)=d^{\{x,y\}}_{\Delta}\big((c_2)_{\{x,y\}},(c_2^\prime)_{\{x,y\}} \big)=2\,.
$$
\end{example}
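The rational localizations and the resulting distance are likewise easy to compute mechanically. The following sketch (illustrative only, using the same dictionary encoding as the earlier sketches) reproduces the computation of Example~\ref{EX:computation C_A}:
\begin{verbatim}
def localization(C, A):
    # Rational localization C_A: for each B contained in A,
    # C_A(B) = C(A) & B if this intersection is nonempty, else C(B).
    return {B: (C[A] & B) or C[B] for B in C if B <= A}

def d_rat(C1, C2):
    # Sum of the Klamler distances between localizations at every menu.
    return sum(len(localization(C1, A)[B] ^ localization(C2, A)[B])
               for A in C1 for B in C1 if B <= A)

def choice(rows):
    return {frozenset(m): frozenset(c) for m, c in rows.items()}

base = {'x': 'x', 'y': 'y', 'z': 'z', 'xz': 'x', 'yz': 'y', 'xyz': 'y'}
c2  = choice({**base, 'xy': 'x'})
c2p = choice({**base, 'xy': 'y'})
print(d_rat(c2, c2p))   # 2, matching the table above
\end{verbatim}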
\smallskip
As possibly expected, Definition~\ref{DEF:alpha_closure_of_point} is sound:
\begin{lemma} \label{LEMMA:new_metric_is_sound}
The function $d_{\operatorname{rat}}$ is a metric on $\textsf{Choice}(X)$.
\end{lemma}
\begin{proof}
For \textsf{A0.1}, clearly $d_{\operatorname{rat}}(C,C')$ is nonnegative.
If $d_{\operatorname{rat}}(C,C')=0$, then $d^A_\Delta(C_A,C'_A)=0$ for all $A \in \XX$.
It follows that $C_A(B)=C'_A(B)$ for all $B \subseteq A$, and so $C(A)=C_A(A)=C'_A(A)=C'(A)$.
This proves \textsf{A0.1}.
Axiom \textsf{A0.2} is obvious.
For \textsf{A0.3}, observe that $d^A_\Delta(C_A,C''_A) \leq d^A_\Delta(C_A,C'_A) + d^A_\Delta(C'_A,C''_A)$, because $d^A_\Delta$ is the restriction of a metric.
Thus the claim follows from summing over all $A \in \XX$.
\hfill $\Box$ \\
\end{proof}
The next remark shows that, despite being derived from $d_\Delta$, the rational metric $d_{\operatorname{rat}}$ does not satisfy several properties considered by Klamler; in particular, neither of the two axioms responsible for elementary decomposability ---namely \textsf{A1} and \textsf{A3}--- holds for $d_{\operatorname{rat}}$.
\begin{remark} \rm \label{REM:nontriviality_new_metric}
We prove that $d_{\operatorname{rat}}$ satisfies neither \textsf{A1} nor \textsf{A3} nor \textsf{A4'}.
All counterexamples will be quasi-choices over the set $X= \{x,y,z\}$.
Since in all cases the choice set of any singleton is nonempty (hence equal to the singleton itself), we only define them on menus having size two or three.
\medskip
To prove the failure of \textsf{A1}, define $C,C',C'' \in \textsf{Choice}(X)$ by\vspace{-0,1cm}\vs
\begin{itemize}
\item[\rm $(C)$] $\underline{x}y \,,\; \underline{x}\underline{z} \,,\; yz \,,\; \underline{x}y\underline{z}\,$;\vspace{-0,1cm}
\item[\rm $(C')$] $\underline{x}\underline{y} \,,\; \underline{x}\underline{z} \,,\; \underline{y}z \,,\; \underline{x}\underline{y}z\,$;\vspace{-0,1cm}
\item[\rm $(C'')$] $\underline{x}\underline{y} \,,\; \underline{x}\underline{z} \,,\; \underline{y}\underline{z} \,,\; x\underline{y}z\,$.\vspace{-0,1cm}
\end{itemize}
Clearly, $C'$ is between $C$ and $C''$.
However, $d_{\operatorname{rat}}(C,C'')=10$ is different from $d_{\operatorname{rat}}(C,C') + d_{\operatorname{rat}}(C',C'') = 8+4 =12$.
\medskip
For the failure of \textsf{A3}, define $C,C',D,D' \in \textsf{Choice}(X)$ by\vspace{-0,1cm}\vs
\begin{itemize}
\item[\rm $(C)$] $\underline{x}y \,,\; xz \,,\; y\underline{z} \,,\; \underline{x}\underline{y}z\,$;\vspace{-0,1cm}
\item[\rm $(C')$] $xy \,,\; xz \,,\; y\underline{z} \,,\; \underline{x}\underline{y}z\,$;\vspace{-0,1cm}
\item[\rm $(D)$] $\underline{x}y \,,\; xz \,,\; yz \,,\; xy\underline{z}\,$;\vspace{-0,1cm}
\item[\rm $(D')$] $xy \,,\; xz \,,\; yz \,,\; xy\underline{z}\,$.\vspace{-0,1cm}
\end{itemize}
Let $\XX'=\{\{x,y\},\{x,z\}\}$, and observe that $C \!\!\upharpoonright_{\XX \setminus \XX'}=C'\!\!\upharpoonright_{\XX \setminus \XX'}$, $D \!\!\upharpoonright_{\XX \setminus \XX'}=D'\!\!\upharpoonright_{\XX \setminus \XX'}$, $C \!\!\upharpoonright_{ \XX'}=D \!\!\upharpoonright_{ \XX'}$, and $C'\!\!\upharpoonright_{ \XX'}=D'\!\!\upharpoonright_{ \XX'}$.
However, $d_{\operatorname{rat}}(C,C')=1$ whereas $d_{\operatorname{rat}}(D,D')=2$.
\medskip
For the failure of \textsf{A4'}, define $C,\widetilde{C},C',\widetilde{C'} \in \textsf{Choice}(X)$ by\vspace{-0,1cm}\vs
\begin{itemize}
\item[\rm $(C)$] $\underline{x}y \,,\; \underline{x}z \,,\; \underline{y}z \,,\;\underline{x}yz\,$;\vspace{-0,1cm}
\item[\rm $(\widetilde{C})$] $\underline{x}y \,,\; \underline{x}z \,,\; \underline{y}z \,,\; \underline{x}\underline{y}\underline{z}\,$;\vspace{-0,1cm}
\item[\rm $(C')$] $\underline{x}y \,,\; \underline{x}z \,,\; \underline{y}z \,,\; x\underline{y}\underline{z}\,$;\vspace{-0,1cm}
\item[\rm $(\widetilde{C'})$] $\underline{x}y \,,\; \underline{x}z \,,\; \underline{y}z \,,\; xyz\,$.\vspace{-0,1cm}
\end{itemize}
The four quasi-choices over $X$ agree on every menu, except on $X$.
For $S=\{y,z\}$, we have $C(X)=\widetilde{C}(X)\Delta S$ and $C'(X)=\widetilde{C'}(X)\Delta S$, and yet $d_{\operatorname{rat}}(C,C')=8 \neq 6 = d_{\operatorname{rat}}(\widetilde{C},\widetilde{C'})$.
\end{remark}
It would be interesting to axiomatically characterize the rational metric $d_{\operatorname{rat}}$: we leave this as an open problem.
\section{Distance-based degrees of irrationality} \label{SECT:rational_index}
We finally give a formal definition of the measure of irrationality of a deterministic choice behavior with respect to a given metric, where the family of rationalizable quasi-choices acts as the benchmark of rationality.
We provide two versions of it: (1) simple, and (2) weighted.
The first applies to all quasi-choices, whereas the second is only designed for choice correspondences.
\subsection{A simple degree of irrationality}
\begin{definition} \rm \label{DEF:irrationality_degree}
Let $\rho \colon \textsf{Choice}(X) \times \textsf{Choice}(X) \to {\mathbb R}$ be a metric.
Denote by $\textsf{Choice}_{\mathrm{rat}}(X)$ the subfamily of $\textsf{Choice}(X)$ comprising all quasi-choices that are rationalizable.
For any quasi-choice $C$ over $X$, the $\rho$\textsl{-degree of irrationality} of $C$ is the nonnegative number defined by\vspace{-0,1cm}
$$
\irr_\rho (C) \coloneqq \min \{\rho(C,D) : D \in \textsf{Choice}_{\mathrm{rat}}(X)\}.\vspace{-0,1cm}
$$
(This degree is well-defined, because $X$ is finite.)
\end{definition}
Given a metric $\rho$ on $\textsf{Choice}(X)$, the larger the $\rho$-degree of irrationality of a quasi-choice $C$ is, the more irrational $C$ is considered \textit{from the point of view of $\rho$}.
Note that if a quasi-choice is rationalizable, then its $\rho$-degree of irrationality is zero for any metric $\rho$.
For instance, the choice function $c_1$ defined in Example~\ref{EX:motivating} has a $d_\Delta$-degree of irrationality equal to zero, whereas $c_2$ and $c_3$ have a $d_\Delta$-degree of irrationality equal to two.
As already pointed out, the soundness of Definition~\ref{DEF:irrationality_degree} depends on the economic significance and the discernibility power of the metric used to determine the degree of irrationality.
In this respect, the rational metric $d_{\operatorname{rat}}$ appears to be better suited than Klamler's distance $d_\Delta$.
The next two examples witness this claim.
\begin{example} \rm \label{EX:3_choice_continued}
Consider the three choice functions $c_1,c_2,c_3$ defined in Example~\ref{EX:motivating}.
It is easy to show that\vspace{-0,1cm}
$$
\irr_{d_\Delta}(c_1) = 0\,,\quad \irr_{d_\Delta}(c_2)= 2\,,\quad \irr_{d_\Delta}(c_3)=2\,.\vspace{-0,1cm}
$$
On the other hand, below we show that\vspace{-0,1cm}
$$
\irr_{d_{\operatorname{rat}}}(c_1) = 0\,,\quad \irr_{d_{\operatorname{rat}}}(c_2)= 2\,,\quad \irr_{d_{\operatorname{rat}}}(c_3)=3\,.\vspace{-0,1cm}
$$
%
\begin{description}
\item[$\bullet$ $\irr_{d_{\operatorname{rat}}}(c_1)= 0$:] This is obvious, because $c_1$ is rationalizable.
\item[$\bullet$ $\irr_{d_{\operatorname{rat}}}(c_2)= 2$:] As noted in Example~\ref{EX:computation C_A}, the choice function $c_2^\prime$ defined by\vspace{-0,1cm}
$$
\qquad x\underline{y} \,,\quad \underline{x}z \,,\quad \underline{y}z \,,\quad x\underline{y}z\vspace{-0,1cm}
$$
is rationalizable by the linear order $\succ$, with $y \succ x \succ z$.
We know that $d_{\operatorname{rat}}(c_2,c_{2}^\prime)=2$.
Therefore, to prove the claim, we show that $d_{\operatorname{rat}}(c_2,D) \geq 2$ for all $D \in \textsf{Choice}_{\mathrm{rat}}(X)$.
Hereafter we shall employ a simplified notation, which is also used ---\textit{mutatis mutandis}--- in the proof of the equality $\irr_{d_{\operatorname{rat}}} (c_3) =3$.
Specifically, for all $A \in \XX$, we denote $d^{A}_\Delta((c_2)_{A},D_A)$ by the less cumbersome $d^{A}_\Delta$.
Moreover, we drop brackets and set separators whenever clear from context, using $D(xz)=x$ instead of $D(\{x,z\})=\{x\}$, $d^{xz}_\Delta$ instead of $d^{\{x,z\}}_\Delta$, etc.
Now fix $D \in \textsf{Choice}_{\mathrm{rat}}(X)$.
Then, either (1) $y \in D(xy)$, or (2) $y \notin D(xy)$. \vspace{-0,1cm}
\begin{description}
\item [\rm(1)] If $y \in D(xy)$, then we separately consider two cases.
\begin{description}
\item [\rm(1A)] If $x \notin D(xy)$, then $D_{xy}(xy)=y$.
Since $(c_2)_{xy}(xy)=x$, we obtain $d_{\Delta}^{xy}\geq 2$, hence $d_{\operatorname{rat}}(c_2,D)\geq 2$.
\item [\rm(1B)] If $x \in D(xy)$, then we split the analysis in two subcases.
\begin{description}
\item [\rm(1B1)] If $x \notin D(xz)$, then $x \notin D_{xz}(xz)$, while $x \in (c_2)_{xz}(xz)$.
It follows that $d_{\Delta}^{xz} \geq 1$.
Note that $y \in D_{xy}(xy)$ and $y \notin (c_2)_{xy}(xy)$ imply $d_{\Delta}^{xy} \geq 1$.
We conclude $d_{\operatorname{rat}}(c_2,D) \geq 2$.
\item [\rm(1B2)] If $x \in D(xz)$, then $x \in D(xyz)$ by Axiom~$\gamma$, and so $x \in D_{xyz}(xyz)$.
Since $x \notin (c_2)_{xyz}(xyz)$, we get $d_{\Delta}^{xyz} \geq 1$.
As before, $y \in D_{xy}(xy)$ and $y \notin (c_2)_{xy}(xy)$ imply $d_{\Delta}^{xy} \geq 1$.
We conclude $d_{\operatorname{rat}}(c_2,D) \geq 2$.
\end{description}
\end{description}
\item [\rm(2)] If $y \notin D(xy)$, then $y \notin D(xyz)$ by Axiom~$\alpha$.
Thus $y \notin D_{xyz}(xyz) \cup D_{xyz}(xy)$ and $y \in (c_2)_{xyz}(xyz) \cap (c_2)_{xyz}(xy)$.
We conclude $d_{\Delta}^{xyz} \geq 2$, hence $d_{\operatorname{rat}}(c_2,D) \geq 2$.
\end{description}
\item[$\bullet$ $\irr_{d_{\operatorname{rat}}}(c_3)= 3$:]
Let $c_3^\prime$ be the choice rationalizable by the relation $\succ$ on $X$ defined by $z\succ x$ and $x \succ y$, that is,\vspace{-0,1cm}
$$
\underline{x}y \,,\; x\underline{z} \,,\; \underline{y}\underline{z} \,, \, xy\underline{z}\,.
$$
(Note that $\succ$ is not transitive, because $y$ and $z$ are incomparable.)
It is easy to check that $d_\Delta^{xz}((c_3)_{xz},(c_3^\prime)_{xz})=2$ and $d_\Delta^{yz}((c_3)_{yz},(c_3^\prime)_{yz})=1$, whereas all other rational localizations of $c_3$ and $c_3^\prime$ coincide.
It follows that $d_{\operatorname{rat}}(c_3,c_3')=3$.
To complete the proof, we show that $d_{\operatorname{rat}}(c_3,D) \geq 3$ for any $D \in \textsf{Choice}_{\mathrm{rat}}(X)$.\vspace{-0,1cm}
%
\begin{description}
\item [\rm (1)] If $z \notin D(xyz)$, then either (1A) $z \notin D(xz)$, or (1B) $z \notin D(yz)$, using Axiom~$\gamma$.
\begin{description}
\item [\rm(1A)] If $z \notin D(xz)$, then we split the analysis into two subcases.
\begin{description}
\item [\rm(1A1)] If $x \in D(xyz)$, then $x \in D_{xyz}(xyz)$, $z \notin D_{xyz}(xyz)$, and $D_{xyz}(xz) = x$.
Therefore, from $z \in (c_3)_{xyz}(xyz) \cap (c_3)_{xyz}(xz)$ and $x \notin (c_3)_{xyz}(xz)$, we derive $d_\Delta^{xyz} \geq 3$. We conclude $d_{\operatorname{rat}}(c_3,D) \geq 3$.
\item [\rm(1A2)] If $x \notin D(xyz)$, then by Axiom~$\gamma$, $x \notin D(xy)$ or $x \notin D(xz)$, and so $x \notin D_{xy}(xy)$ or $x \notin D_{xz}(xz)$.
Since $x \in (c_3)_{xy}(xy) \cap (c_3)_{xz}(xz)$, we obtain $d_{\Delta}^{xy} \geq 1$ or $d_{\Delta}^{xz} \geq 1$.
Finally, since $z \in (c_3)_{xyz}(xyz) \cap (c_3)_{xyz}(xz)$ and $z \notin D_{xyz}(xyz) \cup D_{xyz}(xz)$, we get $d_{\Delta}^{xyz}\geq 2$, and so $d_{\operatorname{rat}}(c_3,D) \geq 3$.
\end{description}
%
\item [\rm(1B)] Suppose $z \notin D(yz)$. We can assume that $z \in D(xz)$, otherwise we would be done by case (1A).
It follows that $z \in D_{xz}(xz)$ while $z \notin (c_3)_{xz}(xz)$, hence $d_{\Delta}^{xz}\geq 1$.
Since $z \notin D_{xyz}(xyz)$ and $z \notin D_{xyz}(yz)$, we conclude $d_{\Delta}^{xyz} \geq 2$, and so $d_{{\operatorname{rat}}}(c_3,D) \geq 3$.
\end{description}
%
\item [\rm(2)] If $z \in D(xyz)$, then $z \in D(xz)$ and $z \in D(yz)$ by Axiom~$\alpha$.
\begin{description}
\item [\rm(2A)] If $x \notin D(xy)$, then $d_\Delta^{xy}\geq 1$, $d_\Delta^{xz}\geq 1$, $d_\Delta^{yz}\geq 1$, and so $d_{\operatorname{rat}}(c_3,D) \geq 3$.
%
\item [\rm(2B)] If $x \in D(xy)$, we consider two subcases.
\begin{description}
\item [\rm(2B1)] If $x \notin D(xyz)$, then $x \notin D(xz)$ by Axiom~$\gamma$, hence $d_\Delta^{xz}\geq 2$ and $d_\Delta^{yz}\geq 1$. We conclude $d_{\operatorname{rat}}(c_3,D) \geq 3$.
\item [\rm(2B2)] If $x \in D(xyz)$, then $x \in D_{xyz}(xz)$ and $x \in D(xz)$ by Axiom~$\alpha$, hence $d_\Delta^{xyz} \geq 2$ and $d_\Delta^{xz}\geq 1$.
Again, we conclude $d_{\operatorname{rat}}(c_3,D) \geq 3$.
\end{description}
\end{description}
\end{description}
%
\end{description}
\end{example}
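Case analyses of this kind can be verified by brute force on small ground sets: every rationalizable quasi-choice arises from some arbitrary binary relation, and there are only $2^{n^2}$ such relations when $\vert X \vert = n$. The sketch below (illustrative only; it reuses \texttt{rationalize} and \texttt{d\_rat} from the earlier sketches) computes the $d_{\operatorname{rat}}$-degree of irrationality by direct minimization over the benchmark of rationality:
\begin{verbatim}
from itertools import product

def all_rationalizable(X):
    # Enumerate every rationalizable quasi-choice over X by generating
    # all 2^(n^2) binary relations and discarding duplicates.
    pairs = [(a, b) for a in X for b in X]
    seen, out = set(), []
    for bits in product([0, 1], repeat=len(pairs)):
        D = rationalize(X, {p for p, b in zip(pairs, bits) if b})
        key = frozenset((A, D[A]) for A in D)
        if key not in seen:
            seen.add(key)
            out.append(D)
    return out

def irr_d_rat(C, X):
    return min(d_rat(C, D) for D in all_rationalizable(X))
\end{verbatim}
Run on the choice functions of Example~\ref{EX:motivating}, this brute-force minimization should return the values $0$, $2$, and $3$ established above.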
\begin{example}\label{EX:metric_new_continued} \rm
Let $c_1,c_2,c_3,c_4$ be the four choice correspondences over $X=\{x,y,z,w\}$ defined in Example~\ref{EX:metric_new}.
One can easily show that the three non-rationalizable choices $c_2,c_3,c_4$ have the same $d_\Delta$-degree of irrationality, namely\vspace{-0,1cm}
\begin{equation*}
\irr_{d_\Delta}(c_2) =
\irr_{d_\Delta}(c_3) =
\irr_{d_\Delta}(c_4) = 4\,.\vspace{-0,1cm}
\end{equation*}
On the contrary, the metric $d_{\operatorname{rat}}$ agrees with the perception that $c_2$ is less irrational than $c_3$, and $c_4$ is the most irrational of all, namely\vspace{-0,1cm}
\begin{equation} \label{EQ:irrationality_of_choices_on_4_elements}
\irr_{d_{\operatorname{rat}}} (c_1) = 0\,,\;\; \irr_{d_{\operatorname{rat}}} (c_2) = 6\,,\;\; \irr_{d_{\operatorname{rat}}}(c_3) = 16\,,\;\; \irr_{d_{\operatorname{rat}}} (c_4) = 19\,.\vspace{-0,1cm}
\end{equation}
The related computations are extremely long and tedious, so we omit them.\footnote{However, they are available upon request.}
\end{example}
\subsection{A weighted degree of irrationality}
The evaluation of the degree of irrationality of choice behavior described above can be refined, as long as the DM is able to provide additional pieces of information.
Here we illustrate a possible refinement of it, which applies to the family of choice correspondences; in other words, we consider the special case of a decisive DM.
In a preliminary step, the DM is required to provide additional information about the `subjective desirability' of all rational choice behaviors.
Operationally, this is obtained by assigning weights to each rationalizable choice correspondence.
According to intuition, very desirable rational behaviors should be given a weight less than or equal to one, because this may have the effect of contracting the rational distance of all choices close to them.
On the contrary, less appealing rational behaviors should be given a weight greater than or equal to one, in order to possibly dilate the distance from rationality.
Once desirability is assessed, the degree of irrationality of a decisive choice behavior is then computed as the minimum weighted distance from the benchmark of rationality.
In the process of designing the weighting procedure, we adhere to some natural rules of conduct.
We select the transitivity of the relation of revealed preference as our guiding parameter: the more transitive this relation is, the more desirable the associated behavior becomes, and the lower the corresponding weight must be.
From this point of view, the most desirable choices will be those rationalized by \textsl{weak orders} (asymmetric and negatively transitive, hence transitive), which will be assigned the lowest weight among all rational behaviors.
Less desirable levels are those of choices rationalized by \textsl{semiorders} (asymmetric, Ferrers, and semitransitive) \citep{Luce1956}, and by \textsl{interval orders} (asymmetric and Ferrers) \citep{Fishburn1970,Fishburn1985}.
At an even lower desirability level lie all choices rationalized by transitive asymmetric relations that fail to be interval orders.
At the bottom of the scale, we find those choices that are rationalized by asymmetric, acyclic and intransitive binary relations, which will be given the highest weight of all.
An even finer tuning of the weighting procedure can be achieved by employing the so-called \textsl{strict and weak $(m,n)$-Ferrers properties} \citep{GiaWat2014Ferrers,GiaWat2018}, which provide a classification of all asymmetric and acyclic binary relations on a set according to their discrete level of transitivity.\footnote{On the point, see also \cite{CanGiaGreWat2016} for a classification of all rationalizable choices on the basis of the so-called \textsl{axioms of $(m,n)$-replacement consistency}.}
The next definition provides a simplified version of these properties, which is however sufficient for our goal.
\begin{definition}[\citealp{GiaWat2014Ferrers}] \rm \label{DEF:Ferrers_properties}
Let $\succ$ be an asymmetric and acyclic binary relation over $X$.
Denote by $\succsim$ the \textsl{canonical completion} of $\succ$, obtained by adding all $\succ$-incomparable pairs to $\succ$.\footnote{Two (not necessarily distinct) elements $x,y \in X$ are \textsl{$\succ$-incomparable} if neither $x \succ y$ nor $y \succ x$ holds. Technically, the canonical completion $\succsim$ is the extension of $\succ$ in which incomparability is transformed into indifference. In particular, the canonical completion $\succsim$ of $\succ$ is both \textsl{reflexive} (i.e., $x \succsim x$ for all $x \in X$) and \textsl{complete} (i.e., $x \succsim y$ or $y \succsim x$ for all distinct $x,y \in X$).}
For any integers $m \geq n \geq 1$, we say that $\succ$ is \textit{$(m,n)$-Ferrers} if the joint satisfaction of $(x_1 \succsim \ldots \succsim x_m)$ and $(y_1 \succsim \ldots \succsim y_n)$ implies either $x_1 \succsim y_n$ or $y_1 \succsim x_m$, for all (not necessarily distinct) $x_1, \ldots, x_m, y_1, \ldots, y_n \in X$.
\end{definition}
It is easy to show that $(m,n)$-Ferrers implies $(m',n')$-Ferrers for any $1 \leqslant m' \leqslant m$ and $1 \leqslant n' \leqslant n$ \citep[Lemma~2.6]{GiaWat2014Ferrers}.
Furthermore, $(3,3)$-Ferrers implies $(m,n)$-Ferrers for any $m\geqslant n \geqslant 1$ \citep[Theorem~3.1(v)]{GiaWat2014Ferrers}.
Note also that $(3,3)$-Ferrers relations are weak orders, relations that are both $(3,1)$- and $(2,2)$-Ferrers are semiorders, $(2,2)$-Ferrers relations are interval orders, $(2,1)$-Ferrers relations are transitive, and relations that are merely $(1,1)$-Ferrers are acyclic but possibly intransitive.
Consequently, all asymmetric and acyclic binary relations on a given set of alternatives can be partitioned according to a lattice structure, which is induced by the satisfaction of $(m,n)$-Ferrers properties.
This lattice is composed of 14 pairwise disjoint sets, which in turn can be arranged into 9 desirability classes according to their discrete degree of transitivity: see Figure~\ref{FIG:combinations_weak_Ferrers}.\footnote{This figure is a simple elaboration of Figure~6 in \cite{Gia2019}. See also \cite{Gia2014}, where the typical form of \textsl{strong semiorders} and \textsl{strong interval orders} is displayed in Figure~5.}
For instance, the most desirable class is that of weak orders, whereas the least desirable class comprises all intransitive preferences.
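Checking an $(m,n)$-Ferrers property is a finite task, since Definition~\ref{DEF:Ferrers_properties} quantifies over $\succsim$-sequences of bounded length. The sketch below (illustrative only; the naive enumeration is adequate for small sets) implements the test:
\begin{verbatim}
from itertools import product

def completion(X, strict):
    # Canonical completion: x >= y iff x > y or x, y are >-incomparable.
    return {(x, y) for x in X for y in X
            if (x, y) in strict
            or ((x, y) not in strict and (y, x) not in strict)}

def chains(X, geq, k):
    # All tuples (x1, ..., xk) with x1 >= ... >= xk (repeats allowed).
    return [c for c in product(X, repeat=k)
            if all((c[i], c[i + 1]) in geq for i in range(k - 1))]

def is_mn_ferrers(X, strict, m, n):
    geq = completion(X, strict)
    return all((xs[0], ys[-1]) in geq or (ys[0], xs[-1]) in geq
               for xs in chains(X, geq, m) for ys in chains(X, geq, n))

X = {'x', 'y', 'z'}
print(is_mn_ferrers(X, {('x','y'), ('y','z'), ('x','z')}, 3, 3))  # True
print(is_mn_ferrers(X, {('x','y'), ('y','z')}, 2, 1))             # False
\end{verbatim}
The two test cases mirror the classification above: the linear order $x \succ y \succ z$ is $(3,3)$-Ferrers (a weak order), whereas dropping the pair $x \succ z$ destroys transitivity, that is, the $(2,1)$-Ferrers property.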
We can finally define a weighted variation of the degree of irrationality.
\begin{figure}[h]
\begin{center}
\psset{xunit=2.2cm} \psset{yunit=2.3cm}
\begin{pspicture}[showgrid=false](0,0.5)(4,10)
\psset{linewidth=0.5pt}
\rput(-1,9.75){\textbf{Desirability class}}
\rput(2,9.75){\textbf{Asymmetric and acyclic relations}}
\rput(-1,9){\large \textbf{(1)}}
\rput(-1,8){\large \textbf{(2)}}
\rput(-1,7){\large \textbf{(3)}}
\rput(-1,6){\large \textbf{(4)}}
\rput(-1,5){\large \textbf{(5)}}
\rput(-1,4){\large \textbf{(6)}}
\rput(-1,3){\large \textbf{(7)}}
\rput(-1,2){\large \textbf{(8)}}
\rput(-1,1){\large \textbf{(9)}}
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(2,8.8)(2,8.21)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(2,7.8)(2,7.24)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(1.8,6.75)(1.15,6.23)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(2.2,6.75)(2.85,6.23)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(0.68,5.75)(0.02,5.21)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(3.3,5.75)(3.96,5.21)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(1.57,5.78)(1.89,5.21)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(2.43,5.78)(2.12,5.21)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(1.57,4.8)(0.28,4.19)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(2.43,4.8)(3.44,4.22)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(0,4.8)(0,4.21)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(4,4.8)(4,4.24)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(0.08,3.79)(0.85,3.2)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(3.92,3.77)(3.15,3.2)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(3.42,3.79)(1.27,3.18)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(1.13,2.79)(1.87,2.2)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(2.87,2.79)(2.13,2.2)
\psline[linecolor=darkgray,linewidth=0.07,arrowsize=0.3]{->}(2,1.8)(2,1.21)
\psline[fillstyle=solid,fillcolor=lightgray](1.75,8.83)(2.25,8.83)(2.25,9.17)
(1.75,9.17)(1.75,8.83)
\rput(2,9){$(3,3)$}
\rput(2.73,9){\small \emph{weak order}}
\psline[fillstyle=solid,fillcolor=lightgray](1.75,7.83)(2.25,7.83)(2.25,8.17)
(1.75,8.17)(1.75,7.83)
\rput(2,8){$(4,2)$}
\psline[fillstyle=solid,fillcolor=lightgray](1.45,6.8)(2.55,6.8)(2.55,7.2)
(1.45,7.2)(1.45,6.8)
\rput(2,7){$(5,1) \:\&\: (3,2)$}
\psline[fillstyle=solid,fillcolor=lightgray](0.45,5.8)(1.55,5.8)(1.55,6.2)
(0.45,6.2)(0.45,5.8)
\rput(1,6){$(5,1) \:\&\: (2,2)$}
\psline[fillstyle=solid,fillcolor=lightgray](2.45,5.8)(3.55,5.8)(3.55,6.2)
(2.45,6.2)(2.45,5.8)
\rput(3,6){$(4,1) \:\&\: (3,2)$}
\rput(4.27,6){\small \emph{strong semiorder}}
\psline[fillstyle=solid,fillcolor=lightgray](-0.25,4.83)(0.25,4.83)(0.25,5.17)
(-0.25,5.17)(-0.25,4.83)
\rput(0,5){$(5,1)$}
\psline[fillstyle=solid,fillcolor=lightgray](3.75,4.83)(4.25,4.83)(4.25,5.17)
(3.75,5.17)(3.75,4.83)
\rput(4,5){$(3,2)$}
\rput(5.1,5){\small \emph{strong interval order}}
\psline[fillstyle=solid,fillcolor=lightgray](1.45,4.83)(2.55,4.83)(2.55,5.17)
(1.45,5.17)(1.45,4.83)
\rput(2,5){$(4,1) \:\&\: (2,2)$}
\psline[fillstyle=solid,fillcolor=lightgray](-0.25,3.83)(0.25,3.83)(0.25,4.17)
(-0.25,4.17)(-0.25,3.83)
\rput(0,4){$(4,1)$}
\psline[fillstyle=solid,fillcolor=lightgray](3.45,3.8)(4.55,3.8)(4.55,4.2)
(3.45,4.2)(3.45,3.8)
\rput(4,4){$(3,1) \:\&\: (2,2)$}
\rput(5,4){\small \emph{semiorder}}
\psline[fillstyle=solid,fillcolor=lightgray](0.75,2.83)(1.25,2.83)(1.25,3.17)
(0.75,3.16)(0.75,2.83)
\rput(1,3){$(3,1)$}
\psline[fillstyle=solid,fillcolor=lightgray](2.75,2.83)(3.25,2.83)(3.25,3.17)
(2.75,3.17)(2.75,2.83)
\rput(3,3){$(2,2)$}
\rput(3.84,3){\small \emph{interval order}}
\psline[fillstyle=solid,fillcolor=lightgray](1.75,1.83)(2.25,1.83)(2.25,2.17)
(1.75,2.17)(1.75,1.83)
\rput(2,2){$(2,1)$}
\rput(2.68,2){\small \emph{transitive}}
\psline[fillstyle=solid,fillcolor=lightgray](1.75,0.83)(2.25,0.83)(2.25,1.17)
(1.75,1.17)(1.75,0.83)
\rput(2,1){$(1,1)$}
\rput(2.75,1){\small \emph{intransitive}}
\end{pspicture}
\end{center}
\caption{\label{FIG:combinations_weak_Ferrers}
Ranking of desirability according to implications of $(m,n)$-Ferrers properties}
\end{figure}
\begin{definition}\label{DEF:distribution_metric} \rm
A \textsl{feasible weighting map} is a function $w \colon \{1,2, \ldots,9\} \to (0,2)$, which assigns a positive weight to each desirability class in such a way that
\begin{description}
\item[\rm \textsl{(monotonicity)}] $i \leqslant j$ implies $w(i) \leqslant w(j)$ for all $i,j \in \{1,2,\ldots, 9\}$, and
\item[\rm \textsl{(average property)}] $\sum_{i = 1}^9 \frac{w(i)}{9} \in \left[1-\varepsilon, 1 + \varepsilon \right]$ for some $0 \leqslant \varepsilon < 1$,
\end{description}
where $\varepsilon$ is a discrimination threshold determined \textit{a priori} by the DM.\footnote{Here we do not dwell on the procedure to assess the discrimination threshold $\varepsilon$. In fact, the sole purpose of this section is to illustrate a simple variant of our approach.}
Given a rationalizable choice $r$ over $X$, denote by $i_r$ the desirability class of its relation of revealed preference $\succ_r$.
Then, for any $c \in \textsf{choice}(X)$, the \textsl{irrationality index of $c$ induced by $w$} is\vspace{-0,1cm}
\begin{equation*} \label{EQ:weighted_degree}
\irr_{d_{{\operatorname{rat}}}}^{w} (c) :=\min\big\{w(i_r) \!\cdot\!d_{{\operatorname{rat}}}(c,r) : r \in \textsf{choice}_{\mathrm{rat}}(X) \big\}.
\end{equation*}
\end{definition}
Definition~\ref{DEF:distribution_metric} can be motivated as follows.
The property of monotonicity ensures that the weight of rational choices decreases as the level of transitivity of the corresponding revealed preference increases.
As a consequence, if, for instance, a choice behavior is close to a highly desirable rational choice, then its degree of irrationality will be accordingly contracted.
Furthermore, the average property guarantees that the average weight given to a rational choice behavior belongs to a close neighborhood of $1$ according to a threshold established by the DM.
In the simplest case, all weights are the same, and the discrimination threshold is equal to $0$.
This implies that the weighting function $w$ assigns weight equal to $1$ to all asymmetric and acyclic binary relations over $X$.
However, even in this very special case, it may happen that $\irr_{d_{\operatorname{rat}}}(c) \neq \irr^{w}_{d_{\operatorname{rat}}} (c)$ for some choice $c$.
The reason is that the weighted variant of our approach only applies to decisive choice behaviors, and so the computation of the minimum distance from the benchmark of rationality may give different results.
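Operationally, the weighted index only changes the objective being minimized. A minimal sketch (assuming a user-supplied classifier \texttt{desirability\_class}, a hypothetical helper mapping the revealed preference of a rationalizable choice to its class $1,\ldots,9$, together with \texttt{d\_rat} from the earlier sketches) reads:
\begin{verbatim}
def irr_weighted(c, rational_choices, w, desirability_class):
    # Minimum of w(class of r) * d_rat(c, r), where r ranges over the
    # rationalizable choice correspondences (the decisive benchmark).
    return min(w(desirability_class(r)) * d_rat(c, r)
               for r in rational_choices)

# The first weighting map used in the example below:
w = lambda i: 1 + 0.1 * (i - 5)
\end{verbatim}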
We conclude this section with an example, which showcases how a weighting procedure of rational choices yields a fine tuning of the results obtained in Example~\ref{EX:metric_new}.
\begin{example} \rm \label{EX:stochastic_measure_of_deterministic_choice}
Let $c_1,c_2,c_3,c_4$ be the four choice correspondences over $X = \{x,y,z,w\}$ defined in Example~\ref{EX:metric_new} (and further analyzed in Example~\ref{EX:metric_new_continued}).
On a four-element set, the phenomenology of $(m,n)$-Ferrers properties is quite poor, that is, many equivalence classes of the partition are empty.
In fact, it suffices to assign weights to the following classes: (1) weak orders, (6) semiorders, (7) interval orders and semitransitive relations, and (9) intransitive relations.
For the sake of illustration, first set $w(i) := 1 + 0.1 (i - 5)$ for $i = 1,\ldots , 9$.
Clearly, $w$ is a feasible weighting map for any $0 \leqslant \varepsilon < 1$.
A computer-aided computation yields the following $d_{\operatorname{rat}}$-degrees of irrationality induced by $w$:\vspace{-0,1cm}
\begin{equation*} \label{EQ:irrationality_of_choices_on_4_elements_with_weight}
\irr^{w}_{d_{\operatorname{rat}}} (c_1) = 0\,,\;\; \irr^{w}_{d_{\operatorname{rat}}} (c_2) = 5.4\,,\;\; \irr^{w}_{d_{\operatorname{rat}}}(c_3) = 12\,,\;\; \irr^{w}_{d_{\operatorname{rat}}} (c_4) = 13.2\,.
\end{equation*}
Now define $w' \colon \{1,2,\ldots , 9\} \to (0,2)$ by $w'(i) := 0.8$ for $1 \leqslant i \leqslant 4$, $w'(5) = 0.9$, and $w'(i) := 1.1$ for $6 \leqslant i \leqslant 9$.
Again, $w'$ is a feasible weighting map for any $0.2 \leqslant \varepsilon < 1$.
The $d_{\operatorname{rat}}$-degrees of irrationality induced by $w'$ now become\vspace{-0,1cm}
\begin{equation*}
\irr^{w'}_{d_{\operatorname{rat}}} (c_1) = 0\,,\;\; \irr^{w'}_{d_{\operatorname{rat}}} (c_2) = 6.6\,,\;\; \irr^{w'}_{d_{\operatorname{rat}}}(c_3) = 16\,,\;\; \irr^{w'}_{d_{\operatorname{rat}}} (c_4) = 17.6\,.
\end{equation*}
A sensitivity analysis connected to the weighting procedure and the threshold of discrimination may provide further insight into the DM's preference system.
\end{example}
\section{Measures of stochastic irrationality} \label{SECT:stochastic_setting}
In this last section we suggest how to adapt our approach to a stochastic environment.
The underlying idea is to transform the search for a measure of irrationality into the formulation of a geometric problem (concerning polytopes).
For simplicity, we shall only consider the case of stochastic choice functions,\footnote{Our approach can be extended to stochastic choice \textsl{correspondences}, too.} as defined below.
\begin{definition}\label{DEF:SCF} \rm
A \textsl{stochastic choice function} over $X$ is a map $p \colon X \times \left(2^X\setminus\{\varnothing\}\right) \to [0,1]$ such that for all $a \in X$ and $A \in 2^X \setminus \{\varnothing\}$, the following conditions hold:\vspace{-0,1cm}
\begin{itemize}
\item $\sum_{a\in A}\, p(a,A)=1$,\vspace{-0,1cm}
\item $a \notin A$ implies $p(a,A)=0$.
\end{itemize}
We denote by $\textsf{choice}^*(X)$ the family of all stochastic choice functions over $X$.
\end{definition}
As in the deterministic case, the first step in determining the degree of irrationality of a stochastic behavior consists of fixing a benchmark of rationality.
In their interesting approach, \cite{ApesteguiaBallester2015} essentially consider the \textit{finite} family of \textit{deterministic} rationalizable choices (which are in a one-to-one correspondence with linear orders) as the benchmark of rationality.
Roughly speaking, the authors associate to a suitable stochastic choice behavior ---a \textsl{collection of observations}--- what they call a \textsl{swap index}, computed by using probabilities to weigh swaps in linear orders.
Our selection of the benchmark is instead an \textit{infinite} family of \textit{stochastic} choices, namely those that satisfy the following well-known model of rational behavior:
\begin{definition}[\citealp{Block_Marschak1960}] \rm \label{DEF:RUM}
A stochastic choice function $p$ over $X$ satisfies the \textsl{random utility model} (for brevity, it is a \textsl{RUM} function) if there is a probability distribution $Pr$ on the set $\textsf{LO}(X)$ of all linear orders over $X$ such that for each $A \in 2^X \setminus \{\varnothing\}$ and $a \in A$,\vspace{-0,1cm}
$$
p(a,A) \;=\; Pr\big( \{\rhd \in \textsf{LO}(X) : (\forall \,x \in A\setminus\{a\}) \;a \rhd x\} \big).\vspace{-0,1cm}
$$
Hereafter, any RUM function will be called \textsl{rational}; accordingly, we shall denote by $\textsf{choice}^*_{\operatorname{rat}}(X)$ the family of all RUM functions over $X$.
\end{definition}
The selection of RUM as a prototype of stochastic rationality is statistically robust: see, among several related contributions, \cite{MarleyRegenwetter2017} for a review of random utility models, \cite{McCausland2019} for a direct Bayesian testing of RUM, and \citet[Section~8]{Davis-Stober2009} for an application to axiomatic measurement theory.
Now an attempt to fully adapt our deterministic approach to a stochastic setting poses serious challenges.
In fact, we need an economically significant metric ---or, alternatively, a function that satisfies weaker properties, such as a `divergence'--- which enables us to discern different levels of irrationality for different types of stochastic choice behaviors.
However, none of the metrics/divergences considered in the literature appears to be a good fit for our goal,\footnote{Some examples in Subsection~\ref{SUBSECT:examples} illustrate how different types of stochastic choice behaviors are not adequately distinguished by some well-known distances/divergences.} and it seems not simple to design new metrics that do the job.
In view of the difficulties illustrated above, here we choose a different path to evaluate the level of irrationality of a stochastic choice behavior.
Specifically, we take advantage of a known characterization of the RUM model to attach a vector with $\vert X \vert$-many components to each stochastic behavior: the higher the entries in the vector, the more irrational the choice behavior.
Then, to compare irrationality levels, we use a permutation-invariant version of the classical Pareto ordering of these vectors, which arranges all irrational stochastic choices into a preordered set (ties and incomparability being allowed).
As a preliminary step, we recall the known characterization of the RUM model.
\begin{definition}[\citealp{Block_Marschak1960,Falmagne1978}]\label{DEF:BM_inequalities} \rm
Let $p$ be a stochastic choice function over $X$.
For any $T \in 2^X \setminus \{\varnothing\}$ and $a \in T$, define
$$
q_{a,T}:=\sum\limits_{T\subseteq U \subseteq X} (-1)^{|U\setminus T|}p(a,U).
$$
The $q_{a,T}$'s are the \textsl{Block--Marschak polynomials} (\textsl{BM polynomials}, for brevity)\footnote{`Polynomial' is the usual term, although $q_{a,T}$ is a linear expression in the $p(a,U)$'s.} of $p$.
\end{definition}
\cite{Block_Marschak1960} show that having $q_{a,T}\ge 0$ for suitable menus $T \subseteq X$ is a necessary condition for having a RUM function.
However, the general definition of the BM polynomials and the complete characterization of the random utility model came almost twenty years later:\footnote{See \cite{Fiorini2004} for an elegant and very short proof of this result, which involves M\"{o}bius inversion and network flow.}
\begin{theorem}[\citealp{Falmagne1978}]\label{THM:Falmagne}
A stochastic choice function is RUM if and only if all its Block--Marschak polynomials are nonnegative.
\end{theorem}
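Falmagne's characterization is directly implementable: the BM polynomials are finite alternating sums, and the RUM test reduces to checking their signs. A minimal sketch (illustrative only; a stochastic choice function is encoded as a dictionary from pairs $(a,A)$ to probabilities) follows:
\begin{verbatim}
from itertools import combinations

def supersets(T, X):
    # All U with T contained in U and U contained in X.
    rest = list(X - T)
    return [T | frozenset(c) for r in range(len(rest) + 1)
            for c in combinations(rest, r)]

def bm(p, a, T, X):
    # q_{a,T}: alternating sum of p(a, U) over supersets U of T.
    return sum((-1) ** len(U - T) * p.get((a, U), 0.0)
               for U in supersets(T, X))

def is_rum(p, X):
    # Falmagne's test, up to a floating-point tolerance.
    return all(bm(p, a, T, X) >= -1e-9
               for T in supersets(frozenset(), X) if T for a in T)
\end{verbatim}
Applied to the stochastic choice function $p_1$ of Table~\ref{TABLE:p1} below, \texttt{bm} should reproduce the values $q_{x,\{x\}} = 0.5$ and $q_{x,\{x,y\}} = -0.4$ computed in Example~\ref{EX:negativity_vector}.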
Theorem~\ref{THM:Falmagne} allows us to derive a measure of irrationality for stochastic choices.
\begin{definition} \rm \label{DEF:stochastic_irrationality_vector}
Let $p$ be a stochastic choice function over $X = \{x_1,\ldots,x_n\}$, where $n \geqslant 2$.
For each $x_i \in X$, let\vspace{-0,1cm}\vs
$$
v_p(x_i) :=
\left\{
\begin{array}{ll}
\left\vert \sum_{T \,:\, q_{x_i,T} < 0} q_{x_i,T} \right\vert & \text{ if } q_{x_i,T} < 0 \text{ for some } T \in 2^X \setminus \{\varnothing\},\\
0 & \text{ otherwise.}
\end{array}\vspace{-0,1cm}
\right.\vspace{-0,1cm}
$$
The $n$-tuple $v_p = \big(v_p(x_1),\ldots,v_p(x_n)\big) \in {\mathbb R}^n_+$ is the \textsl{negativity vector} of $p$.
\end{definition}
Clearly, the larger the entries in the negativity vector, the more irrational the corresponding stochastic choice.
By Definition~\ref{DEF:stochastic_irrationality_vector} and Theorem~\ref{THM:Falmagne}, all RUM functions ---and, in particular, all deterministic rationalizable choices--- have $(0,\ldots,0)$ as negativity vector.
For all non-RUM functions, the next definition establishes a way to compare their (strictly positive) irrationality.
\begin{definition} \rm \label{DEF:preorder_of_irrationality}
Denote by $\mathscr{S}(X)$ the family of all permutations of $X$.
Define a binary relation $\precsim^*$ over $\textsf{choice}^*(X)$ as follows:\vspace{-0,1cm}
$$
p \precsim^* p' \quad \iff \quad (\exists \sigma \in \mathscr{S}(X))\: (\forall x \in X) \; v_p(x) \leqslant v_{p'}(\sigma(x))\vspace{-0,1cm}
$$
for any $p,p' \in \textsf{choice}^*(X)$.
Then, we say that\vspace{-0,1cm}
\begin{itemize}
\item $p$ and $p'$ are \textsl{equally irrational} if $p \sim^* p'$ (i.e., $p \precsim^* p'$ and $p' \precsim^* p$),\vspace{-0,1cm}
\item $p$ is \textsl{less irrational} than $p'$ if $p \prec^* p'$ (i.e., $p \precsim^* p'$ and $\neg(p' \precsim^* p)$), and\vspace{-0,1cm}
\item $p$ and $p'$ are \textsl{incomparably irrational} if $p \perp^* p'$ (i.e., $\neg(p \precsim^* p')$ and $\neg(p' \precsim^* p)$).
\end{itemize}
The pair $\left(\textsf{choice}^*(X),\precsim^* \right)$ is a preordered set,\footnote{Recall that a \textsl{preorder} is a reflexive and transitive (but possibly incomplete) binary relation.} having the RUM functions as its minimum elements.
\end{definition}
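Both the negativity vector and the comparison $\precsim^*$ admit a direct implementation. The sketch below (illustrative only; it reuses \texttt{bm} and \texttt{supersets} from the previous sketch, and brute-forces the permutations, which is adequate for small $X$) computes $v_p$ and tests $p \precsim^* p'$:
\begin{verbatim}
from itertools import permutations

def negativity_vector(p, X):
    # v_p(a) = |sum of the negative BM polynomials q_{a,T}|.
    return {a: -sum(q for T in supersets(frozenset(), X) if a in T
                    for q in [bm(p, a, T, X)] if q < 0)
            for a in X}

def less_irrational_or_equal(p1, p2, X):
    # p1 <=* p2 iff some permutation sigma of X satisfies
    # v_p1(x) <= v_p2(sigma(x)) for every x in X.
    v1, v2 = negativity_vector(p1, X), negativity_vector(p2, X)
    xs = sorted(X)
    return any(all(v1[x] <= v2[s] for x, s in zip(xs, sigma))
               for sigma in permutations(xs))
\end{verbatim}
Equivalently, since $\precsim^*$ is permutation-invariant, one may simply sort the two negativity vectors and compare them componentwise.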
The next example presents two stochastic choice functions over a set of size four.
We shall compute all related BM polynomials and the two associated negativity vectors, to finally conclude that one function is more irrational than the other.
\begin{example} \label{EX:negativity_vector} \rm
Set $X = \{x,y,z,w\}$.
Let $p_1$ be the stochastic choice function over $X$ defined in Table~\ref{TABLE:p1}.
For the sake of illustration, we explicitly compute the first two BM polynomials of $p_1$ associated with the item $x$:\vspace{-0,1cm}
\begin{align*}
q_{x,\{x\}} = & \quad p(x,\{x\}) - \big( p(x,\{x,y\}) + p(x,\{x,z\}) + p(x,\{x,w\}) \big) + \\
& \quad \big( p(x,\{x,y,z\}) + p(x,\{x,y,w\}) + p(x,\{x,z,w\}) \big) - p(x,X) \\
= & \quad 1 - \big( 0.5 + 0.4 + 0.9 \big) + \big( 0.6 + 0.7 + 0.4 \big) - 0.4 \\
= & \quad 0.5\,,\\
q_{x,\{x,y\}} = & \quad p(x,\{x,y\}) - \big( p(x,\{x,y,z\}) + p(x,\{x,y,w\}) \big) + p(x,X) \\
= & \quad 0.5 - \big( 0.6 + 0.7 \big) + 0.4 \\
= & \quad - 0.4.
\end{align*}
By Definition~\ref{DEF:stochastic_irrationality_vector}, taking, for each of the last four columns of Table~\ref{TABLE:p1}, the absolute value of the sum of its negative entries yields the negativity vector of $p_1$, which is $v_{p_1} = (0.6,0.2,0.2,0.1)$.
\begin{table}[h!] \footnotesize
\begin{center}
\begin{tabular}{|l||||*{4}{c|}*{1}{|}*{4}{c|}}
\hline
\backslashbox{\footnotesize menus}{\footnotesize items}
&\makebox[2em]{\footnotesize$x$} & \makebox[2em]{\footnotesize $y$} & \makebox[2em]{\footnotesize $z$} & \makebox[2em]{\footnotesize $w$} & \makebox[2em]{\footnotesize $q_{x,.}$} & \makebox[2em]{\footnotesize $q_{y,.}$} & \makebox[2em]{\footnotesize $q_{z,.}$} & \makebox[2em]{\footnotesize $q_{w,.}$} \\
\hline\hline \hline \hline
\footnotesize $\{x\}$ & $1$ & & & & $0.5$ & & & \\
\hline
\footnotesize $\{y\}$ & & $1$ & & & & {\red $-0.1$} & & \\
\hline
\footnotesize $\{z\}$ & & & $1$ & & & & $0.1$ & \\
\hline
\footnotesize $\{w\}$ & & & & $1$ & & & & $0.5$ \\
\hline \hline
\footnotesize $\{x,y\}$ & $0.5$ & $0.5$ & & & {\red $-0.4$} & $0.3$ & & \\
\hline
\footnotesize $\{x,z\}$ & $0.4$ & & $0.6$ & & {\red $-0.2$} & & $0.2$ & \\
\hline
\footnotesize $\{x,w\}$ & $0.9$ & & & $0.1$ & $0.2$ & & & $0$ \\
\hline
\footnotesize $\{y,z\}$ & & $0.5$ & $0.5$ & & & $0$ & $0.2$ & \\
\hline
\footnotesize $\{y,w\}$ & & $0.7$ & & $0.3$ & & $0.4$ & & $0.1$ \\
\hline
\footnotesize $\{z,w\}$ & & & $0.6$ & $0.4$ & & & {\red $-0.1$} & $0.3$ \\
\hline \hline
\footnotesize $\{x,y,z\}$ & $0.6$ & $0.3$ & $0.1$ & & $0.2$ & $0.1$ & {\red $-0.1$} & \\
\hline
\footnotesize $\{x,y,w\}$ & $0.7$ & $0.1$ & & $0.2$ & $0.3$ & {\red $-0.1$} & & $0$ \\
\hline
\footnotesize $\{x,z,w\}$ & $0.4$ & & $0.5$ & $0.1$ & $0$ & & $0.3$ & {\red $-0.1$} \\
\hline
\footnotesize $\{y,z,w\}$ & & $0.4$ & $0.4$ & $0.2$ & & $0.2$ & $0.2$ & $0$ \\
\hline
\footnotesize $\{x,y,z,w\}$ & $0.4$ & $0.2$ & $0.2$ & $0.2$ & $0.4$ & $0.2$ & $0.2$ & $0.2$ \\
\hline
\end{tabular}\vspace{-0,1cm}\vs\vspace{-0,1cm}
\end{center}
\caption{\footnotesize The stochastic choice function $p_1$ and its BM polynomials: the entries in columns 1--4 give the probability that an item is chosen in a menu containing it, whereas the entries in columns 5--8 are the respective BM polynomials. (All empty entries stand for $0$.)} \label{TABLE:p1}
\end{table}
A different stochastic choice function $p_2$ over $X$ is given in Table~\ref{TABLE:p2}.
\begin{table}[h!] \footnotesize
\begin{center}
\begin{tabular}{|l||||*{4}{c|}*{1}{|}*{4}{c|}}
\hline
\backslashbox{\footnotesize menus}{\footnotesize items}
&\makebox[2em]{\footnotesize $x$} & \makebox[2em]{\footnotesize $y$} & \makebox[2em]{\footnotesize $z$} & \makebox[2em]{\footnotesize $w$} & \makebox[2em]{\footnotesize $q_{x,.}$} & \makebox[2em]{\footnotesize $q_{y,.}$} & \makebox[2em]{\footnotesize $q_{z,.}$} & \makebox[2em]{\footnotesize $q_{w,.}$} \\
\hline\hline \hline
\footnotesize $\{x\}$ & $1$ & & & & $0.2$ & & & \\
\hline
\footnotesize $\{y\}$ & & $1$ & & & & $0.2$ & & \\
\hline
\footnotesize $\{z\}$ & & & $1$ & & & & $0.1$ & \\
\hline
\footnotesize $\{w\}$ & & & & $1$ & & & & $0.5$ \\
\hline
\hline
\footnotesize $\{x,y\}$ & $0.6$ & $0.4$ & & & $0$ & $0.1$ & & \\
\hline
\footnotesize $\{x,z\}$ & $0.5$ & & $0.5$ & & $0$ & & $0.1$ & \\
\hline
\footnotesize $\{x,w\}$ & $0.8$ & & & $0.2$ & $0.2$ & & & $0$ \\
\hline
\footnotesize $\{y,z\}$ & & $0.4$ & $0.6$ & & & {\red $-0.1$} & $0.2$ & \\
\hline
\footnotesize $\{y,w\}$ & & $0.7$ & & $0.3$ & & $0.3$ & & $0$ \\
\hline
\footnotesize $\{z,w\}$ & & & $0.6$ & $0.4$ & & & $0$ & $0.2$ \\
\hline
\hline
\footnotesize $\{x,y,z\}$ & $0.5$ & $0.3$ & $0.2$ & & $0$ & $0.1$ & $0$ & \\
\hline
\footnotesize $\{x,y,w\}$ & $0.6$ & $0.2$ & & $0.2$ & $0.1$ & $0$ & & $0.1$ \\
\hline
\footnotesize $\{x,z,w\}$ & $0.5$ & & $0.4$ & $0.1$ & $0$ & & $0.2$ & $0$ \\
\hline
\footnotesize $\{y,z,w\}$ & & $0.4$ & $0.4$ & $0.2$ & & $0.2$ & $0.2$ & $0.1$ \\
\hline
\hline
\footnotesize $\{x,y,z,w\}$ & $0.5$ & $0.2$ & $0.2$ & $0.1$ & $0.5$ & $0.2$ & $0.2$ & $0.1$ \\
\hline
\end{tabular}\vspace{-0,1cm}\vs\vspace{-0,1cm}
\end{center}
\caption{\footnotesize The stochastic choice function $p_2$ and its BM polynomials: all entries have the same meaning as in Table~\ref{TABLE:p1}.} \label{TABLE:p2}
\end{table}
Note that $p_2$ provides a minimal example showing that the property of \textsl{monotonicity}\footnote{A stochastic choice function $p$ over $X$ is \textsl{monotonic} (or \textsl{regular}) if for all $x \in X$ and $A,B \in 2^X$, $A \subseteq B$ implies $p(x,B) \leqslant p(x,A)$: see \cite{Block_Marschak1960}.} does not characterize the random utility model: in fact, $p_2$ is monotonic but not RUM.
Since the negativity vector of $p_2$ is $v_{p_2} = (0,0.1,0,0)$, taking $\sigma$ to be the identity yields $v_{p_2}(a) \leqslant v_{p_1}(a)$ for all $a \in X$, whereas no permutation gives the reverse comparison; we conclude that $p_2 \prec^* p_1$.
\end{example}
As possibly expected, isomorphic stochastic choice functions ---in the sense clarified below--- are equally irrational.
\begin{definition} \rm \label{DEF:isomorphic_stochastic_choices}
Two stochastic choice functions $p,p'$ over $X$ are \textsl{isomorphic} if there is a permutation $\sigma \colon X \to X$ such that\vspace{-0,1cm}
$$
p(x,A) = p'(\sigma(x), \sigma(A))\vspace{-0,1cm}
$$
for all $x \in X$ and $A \in 2^X$.
The bijection $\sigma$ is called an \textsl{isomorphism} between $p$ and $p'$.
\end{definition}
The next result shows that our measure of stochastic irrationality is independent of the names of alternatives.
\begin{lemma}
For any stochastic choices $p,p'$ over $X$, if $\sigma$ is an isomorphism between $p$ and $p'$, then $v_p(x) = v_{p'}(\sigma(x))$ for all $x \in X$.
Thus, isomorphic stochastic choice functions always have the same level of irrationality.
\end{lemma}
\begin{proof}
Observe that
$$
\sum\limits_{T\subseteq U \subseteq X} \!\!\!(-1)^{|U\setminus T|}p(x,U) = \!\!\!\sum\limits_{T\subseteq U \subseteq X}\!\!\! (-1)^{|\sigma(U)\setminus \sigma(T)|}p'(\sigma(x),\sigma(U))=\!\!\!\!\!\!\sum\limits_{\sigma(T)\subseteq U' \subseteq X}\!\!\!\!\!\!\! (-1)^{|U'\setminus \sigma(T)|}p'(\sigma(x),U'),
$$
where the last equality is given by the fact that there is a one-to-one correspondence between the family of all menus $U$ containing $T$ and the family of all menus $U'$ containing $\sigma(T)$.
We conclude that the BM polynomial $q_{x,T}$ of $p$ is equal to the BM polynomial $q_{\sigma(x),\sigma(T)}$ of $p'$.
The claim follows.
\hfill $\Box$ \\
\end{proof}
The implementation of a geometric approach (based on polytopes) to measuring the irrationality of stochastic choice behavior is currently under study.
\subsection{Some related literature} \label{SUBSECT:examples}
Here we review some existing metrics/divergences that apply to stochastic choices, and point out some possible drawbacks in detecting different levels of irrationality.
\begin{definition}\rm \label{DEF:total_variation_distance}
Let $\delta \colon \textsf{choice}^*(X) \times \textsf{choice}^*(X) \to {\mathbb R}$ be the map defined by\vspace{-0,1cm}
$$
\delta(p,p')= \sup_{(a,A)}\vert p(a,A)-p'(a,A) \vert \vspace{-0,1cm}
$$
$$
for all $p,p' \in \textsf{choice}^*(X)$.
Then $\delta$ is a metric, called the \textit{total variation distance}.\footnote{This name originates from the process of considering all differences between two objects (stochastic functions, in this case) and taking either the sum or the supremum (the maximum, in this case).}
\end{definition}
The metric $\delta$ may not be a good fit for our purpose, as the next example shows.
\begin{example}\label{EX:total_variation_distance} \rm
Define a stochastic choice function $p$ over $X=\{x,y,z\}$ by $p(a,A):=\frac{1}{\vert A \vert}$ for all $A \in 2^X \setminus \{\varnothing\}$ and $a \in A$.
Clearly, $p$ is a RUM function.
Next, we define two additional stochastic choice functions $p_1$ and $p_2$ over $X$ as follows:\vspace{-0,1cm}
\begin{itemize}
\item $p_1(a,A):=p(a,A)$ for all $A \in 2^X \setminus \{\varnothing, X\}$ and $a \in A$,\vspace{-0,1cm}
\item $p_1(x,X):=0.6$, $p_1(y,X):=0.2$, and $p_1(z,X):=0.2$;\vspace{-0,1cm}
\item $p_2(x,\{x,y\}):=0.7$ and $p_2(y,\{x,y\}):=0.3$,\vspace{-0,1cm}
\item $p_2(x,\{x,z\}):=0.3$ and $p_2(z,\{x,z\}):=0.7$,\vspace{-0,1cm}
\item $p_2(y,\{y,z\}):=0.5$ and $p_2(z,\{y,z\}):=0.5$,\vspace{-0,1cm}
\item $p_2(x,X):=0.6$, $p_2(y,X):=0.3$, and $p_2(z,X):=0.1$.
\end{itemize}
It can be checked that $\delta(p,p_1)=\delta(p,p_2)= \frac{4}{15}$,
that is, the metric $\delta$ puts $p_1$ and $p_2$ at the same total variation distance from the rational function $p$.
However, $v_{p_1} =(0.2,0,0)$ and $v_{p_2} = (0.3,0,0.1)$, and so $p_1 \prec^* p_2$ by Definition~\ref{DEF:preorder_of_irrationality}.
\end{example}
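In the same dictionary representation as the earlier sketches (our illustration; menus on which a function is determined by definition, such as singletons, must be filled in explicitly), the total variation distance is a one-liner:
\begin{verbatim}
def total_variation(p, q):
    # sup over all (item, menu) pairs of |p(a,A) - q(a,A)|
    return max(abs(p.get(k, 0.0) - q.get(k, 0.0))
               for k in set(p) | set(q))
\end{verbatim}
For the functions of Example~\ref{EX:total_variation_distance} it should return $4/15$ for both pairs.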
Next, we consider a weaker type of distance, namely a `divergence', which only satisfies the non-negativity property \textsf{A0.1} of a metric, but not necessarily symmetry \textsf{A0.2} or the triangle inequality \textsf{A0.3}.
\begin{definition}[\citealp{KulLeb1951}]\label{DEF:Kullback_Leibler_divergence} \rm
Let $D_{\mathrm{KL}} \colon \textsf{choice}^*(X) \times \textsf{choice}^*(X) \to {\mathbb R}$ be the function defined by\vspace{-0,1cm}
$$
D_{\mathrm{KL}}(p \vert \vert p')= \sum_{(a,A)}p(a,A) \,\log\frac{p(a,A)}{p'(a,A)}\vspace{-0,1cm}
$$
for all $p,p' \in \textsf{choice}^*(X)$.
The map $D_{\mathrm{KL}}$ is called the \textsl{Kullback-Leibler divergence}.
\end{definition}
\begin{example}\label{EX:Kullback_Leibler_divergence} \rm
Let $X$, $p$, $p_1$, and $p_2$ be exactly as in Example~\ref{EX:total_variation_distance}.
Define $p_3$ as follows:\vspace{-0,1cm}
\begin{itemize}
\item $p_3(x,\{x,y\})=0.7$ and $p_3(y,\{x,y\})=0.3$,\vspace{-0,1cm}
\item $p_3(x,\{x,z\})=0.3$ and $p_3(z,\{x,z\})=0.7$,\vspace{-0,1cm}
\item $p_3(y,\{y,z\})=0.7$ and $p_3(z,\{y,z\})=0.3$,\vspace{-0,1cm}
\item $p_3(x,X)=\frac{1}{3}$, $p_3(y,X)=\frac{1}{3}$, and $p_3(z,X)=\frac{1}{3}$.
\end{itemize}
Note that the negativity vector of $p_3$ is $\left(\tfrac{1}{30},\tfrac{1}{30},\tfrac{1}{30}\right) \approx (0.033,0.033,0.033)$.
One can check that $D_{\mathrm{KL}}(p_1 \vert \vert p) < D_{\mathrm{KL}}(p_3 \vert \vert p) <D_{\mathrm{KL}}(p_2 \vert \vert p)$.
On the other hand, according to Definition~\ref{DEF:preorder_of_irrationality}, we have $p_3 \perp^* p_1$ and $p_3 \perp^* p_2$.
\end{example}
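The Kullback-Leibler divergence admits an equally short sketch (again our illustration; it requires $p'(a,A)>0$ wherever $p(a,A)>0$):
\begin{verbatim}
import math

def kl_divergence(p, q):
    # D_KL(p || q), summing over pairs (a,A) with p(a,A) > 0;
    # requires q(a,A) > 0 on those pairs (absolute continuity)
    return sum(v * math.log(v / q[k]) for k, v in p.items() if v > 0)
\end{verbatim}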
Examples~\ref{EX:total_variation_distance} and~\ref{EX:Kullback_Leibler_divergence} show that both the total variation distance and the Kullback-Leibler divergence may fail to capture some features of irrationality.
Although one may argue that both examples only deal with one rational function ---possibly the most emblematic---, a similar pathology is still present when calculating distances from other rational functions.
These issues suggest that Definition~\ref{DEF:preorder_of_irrationality} may provide a more adequate tool in assigning levels of irrationality to stochastic choices.
|
{
"arxiv_id": "2302.13581",
"language": "en",
"timestamp": "2023-02-28T02:24:37",
"url": "https://arxiv.org/abs/2302.13581",
"yymm": "2302"
} | \section{Introduction}
In today's modern world of communication, the number of applications and processes in which machines and devices communicate with each other has increased tremendously.
Such communication is often referred to as machine-to-machine~(M2M) communication, taking mostly place in Internet of things~(IoT) scenarios.
From this rise, it follows that suitable image and video compression schemes are necessary.
MPEG has targeted this since 2019~\cite{zhang2019}, referring to this special type of video or image coding as \textit{video coding for machines~(VCM)}.
This work focuses on improving the image coding performance for instance segmentation networks as information sink following the compress-then-analyze paradigm~\cite{redondi2016}.
For such scenarios, previous approaches~\cite{galteri2018, choi2018, fischer2021_ICASSP} for standard hybrid codecs mainly utilized saliency coding by separating the image into salient and non-salient areas.
The latter are coded at a reduced bitrate without harming the accuracy of the analysis network.
More recent work~\cite{chamain2021,le2021_ICASSP, fischer2022_journal} employed neural image compression networks~(NCNs) as codecs, which allows for an end-to-end training of the whole VCM framework with the analysis network as discriminator.
By that, the NCNs are adapted towards the characteristics of the analysis network and the input data, which eventually results in superior coding performance, outperforming VVC~\cite{bross2021_VVC} for the tasks of object detection and segmentation, as shown in \cite{le2021_ICASSP} and \cite{fischer2022_journal}.
\begin{figure}[t]
\centering
\includegraphics[width=0.83\linewidth]{drawings/coding_framework.pdf}
\vspace{-2mm}
\caption{Proposed hierarchical neural image coding framework based on the RDONet structure~\cite{brand2021_CVPR, brand2022} for VCM scenarios.
The input image $\ensuremath{\bm{x}}$ is separated into salient (green) and non-salient (red) areas that are transmitted in different latent spaces $\ensuremath{\bm{y}}_n$.
White latent elements correspond to a value of zero since these latents are masked out.
Dotted lines indicate that this information is transmitted via the channel.
$\downarrow s$ and $\uparrow s$ indicate down- and upscaling by a factor of $s$, respectively.
$C$ denotes the number of latent space channels.
Please note that $\ensuremath{\bm{x}}$ and $\ensuremath{\bm{y}}_n$ are not in scale for better visualization.
}
\label{fig:coding framework}
\vspace{-5mm}
\end{figure}
However, the methodology in~\cite{chamain2021,le2021_ICASSP} has the shortcoming that the network has to implicitly distinguish between salient areas, where a high quality is required, and non-salient areas, where no potential objects are located. Due to the limited field of view of the NCNs, this decision has to be drawn from a rather small number of pixels.
Alleviation to this was made in our previous work~\cite{fischer2022_journal}, where we introduced a latent space masking network to mask possibly non-salient areas in the latent space in order to reduce the required bitrate.
Nevertheless, this information was still only derived implicitly from the features of the analysis network.
To overcome these issues, we propose to employ a hierarchical NCN, as it is depicted in Fig.~\ref{fig:coding framework}, which utilizes multiple latent spaces $\ensuremath{\bm{y}}_n$.
These are used to compress different areas of the image with different quality.
By that, the latent spaces are specialized for either transmitting salient or non-salient areas.
In Fig.~\ref{fig:coding framework}, this means that areas including objects of interest, e.g. cars, are transmitted with higher spatial resolution and thus a higher quality, whereas the non-salient areas, e.g. trees, are transmitted with less spatial resolution requiring less bitrate.
In our approach, the saliency information is explicitly derived from an external object detection network and used to steer the NCN.
We propose to employ our existing rate-distortion optimization network~(RDONet)~\cite{brand2021_CVPR, brand2022} as the core network structure.
This NCN allows for hierarchical coding by transmitting the image data in three latent spaces of different spatial resolutions.
To the best of our knowledge, we are the first to propose a learned compression framework exploiting VCM-based saliency information.
All in all, our paper provides the following contributions:
First, we show that the hierarchical RDONet published in~\cite{brand2022} outperforms the latest standard hybrid video codec VVC and a comparable NCN architecture with only one latent space, when being trained in an end-to-end manner similar to~\cite{chamain2021,le2021_ICASSP, fischer2022_journal}.
Second, we further improve the coding performance of RDONet for VCM scenarios by proposing a new RDONet masking criterion, which allows saliency information to be added explicitly to the coding process during inference.
Third, we show that the overall coding performance is further improved by adding saliency information to the training process to specialize the different latent spaces in coding salient or non-salient areas.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{drawings/LSU.pdf}
\vspace{-2mm}
\caption{$n$-th LSU structure with attendant convolutional layers based on RDONet~\cite{brand2021_CVPR}.
The encoder and decoder part (ref. Fig~\ref{fig:coding framework}) are framed in blue and purple color, respectively.
Conv $C/k/s$ denotes a convolutional layer with $C$ output channels, a kernel size of $k\times k$, and a subsampling factor of $s$. TConv denotes an analogous transposed convolutional layer.}
\label{fig:lsu}
\vspace{-4mm}
\end{figure}
\section{Hierarchical Neural Image Compression}
Today, end-to-end trained neural image compression networks are mainly based on the pioneering work by Ball\' e et al.~\cite{balle2017endtoend}.
There, the authors proposed a variational autoencoder \ensuremath{f}\ that transforms the input image \ensuremath{\bm{x}}\ by an encoder network into a latent space \ensuremath{\bm{y}}\ of reduced spatial dimensionality that is quantized and losslessly transmitted to the decoder side.
The corresponding decoder network reconstructs the image from the transmitted latent space resulting in the deteriorated output image \ensuremath{\bm{\hat{\inputImage}}}.
To train the network weights \ensuremath{\bm{\theta}}, a loss $\ensuremath{\loss_\mathrm{HVS}}$ combining the required rate \ensuremath{R}\ and the distortion \ensuremath{D}\ between the input \ensuremath{\bm{x}}\ and the output $\ensuremath{\bm{\hat{\inputImage}}}$ is utilized
\begin{equation}
\ensuremath{\loss_\mathrm{HVS}} = \ensuremath{D}(\ensuremath{\bm{x}}, \ensuremath{\bm{\hat{\inputImage}}}) + \ensuremath{\lambda} \cdot \ensuremath{R}(\ensuremath{f_\mathrm{enc}}(\ensuremath{\bm{x}}| \ensuremath{\bm{\theta}})),
\end{equation}
where \ensuremath{\lambda}\ steers between the two competing goals of a low bitrate and a low distortion.
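As a minimal sketch of this objective (our illustration, with MSE standing in for the distortion term; the rate estimate would come from the entropy model):
\begin{verbatim}
import torch

def rd_loss(x, x_hat, rate_bits, lam):
    # L = D(x, x_hat) + lambda * R; MSE chosen here as example distortion
    distortion = torch.mean((x - x_hat) ** 2)
    return distortion + lam * rate_bits
\end{verbatim}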
In order to provide additional transmission options, RDONet~\cite{brand2021_CVPR,brand2022} adds extra latent spaces $\ensuremath{\bm{y}}_n$ to the NCN structure proposed in~\cite{minnen2018_mbt2018}, which allows compressing different areas of \ensuremath{\bm{x}}\ at different spatial resolutions.
The spatial resolution is halved with every deeper latent space such that one element in the latent space covers more pixels in the image domain. Thus, more bitrate can be saved and the weights can be adapted correspondingly.
Thereby, the external mask \ensuremath{\bm{m}}\ steers which image area is coded by which latent space.
Each area of \ensuremath{\bm{x}}\ is transmitted by exactly one latent space.
The non-selected areas are zeroed-out (white latents in Fig.~\ref{fig:coding framework}).
The RDONet coding order is that the deepest latent space $\ensuremath{\bm{y}}_3$ is coded first, and the coding process of each latent space $\ensuremath{\bm{y}}_n$ is conditioned on the previously transmitted latent space $\ensuremath{\bm{y}}_{n+1}$.
First, the image $\ensuremath{\bm{x}}$ with size $\ensuremath{H}\times \ensuremath{W}$ is fed into $\ensuremath{f_\mathrm{enc}}$ consisting of four convolutional layers with a stride of two in order to reduce the spatial resolution.
This results in the intermediate latent space $\ensuremath{\bm{u}}_1$.
The three latent spaces $\ensuremath{\bm{y}}_n$ are generated from feeding $\ensuremath{\bm{u}}_1$ into three cascaded latent space units~(LSUs)~\cite{brand2021_CVPR}.
The structure of the $n$-th LSU is depicted in Fig.~\ref{fig:lsu}.
First, the incoming data $\ensuremath{\bm{u}}_n$ is spatially downscaled by a convolution with a stride of two.
The resulting features are concatenated with the output of the deeper $\mathrm{LSU}_{n+1}$ $\ensuremath{\bm{v}}_{n+1}$ and fed into a convolutional layer to obtain the latent space $\ensuremath{\bm{y}}_n$.
Subsequently, elements of $\ensuremath{\bm{y}}_n$ that are not transmitted in this latent space are zeroed-out depending on \ensuremath{\bm{m}}.
The channel to transmit the masked latents $\ensuremath{\bm{y}}_{\mathrm{m},n}$ is similar to~\cite{minnen2018_mbt2018}.
$\ensuremath{\bm{y}}_{\mathrm{m},n}$ is quantized, coded with the help of a conditional hyperprior including a context model, and transmitted to the decoder side.
There, the received latent space $\hat{\ensuremath{\bm{y}}}_{\mathrm{m},n}$ is concatenated with the result from the deeper $\mathrm{LSU}_{n+1}$ $\ensuremath{\bm{v}}_{n+1}$, spatially upscaled by a transposed convolution, and fed forward into the next LSU.
After the last latent space $\ensuremath{\bm{y}}_1$ has been transmitted, the decoder network \ensuremath{f_\mathrm{dec}}\ reconstructs the output image \ensuremath{\bm{\hat{\inputImage}}}\ from the output of $\mathrm{LSU}_1$ $\ensuremath{\bm{v}}_1$.
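The following PyTorch sketch summarizes the data flow of one LSU (our illustration: layer sizes are chosen for readability, straight-through rounding stands in for quantization, and the hyperprior/context entropy model is omitted):
\begin{verbatim}
import torch
import torch.nn as nn

class LSU(nn.Module):
    """Sketch of one latent space unit; sizes are illustrative."""
    def __init__(self, C):
        super().__init__()
        self.down = nn.Conv2d(C, C, kernel_size=5, stride=2, padding=2)
        self.fuse = nn.Conv2d(2 * C, C, kernel_size=3, padding=1)
        self.up = nn.ConvTranspose2d(2 * C, C, kernel_size=5, stride=2,
                                     padding=2, output_padding=1)

    def forward(self, u_n, v_deeper, mask_n):
        h = self.down(u_n)                 # halve spatial resolution
        y_n = self.fuse(torch.cat([h, v_deeper], dim=1))
        y_m = y_n * mask_n                 # zero latents not sent here;
                                           # mask broadcasts over channels
        y_hat = y_m + (torch.round(y_m) - y_m).detach()  # quant. proxy
        v_n = self.up(torch.cat([y_hat, v_deeper], dim=1))
        return v_n, y_hat
\end{verbatim}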
\section{Optimizing Hierarchical NCN Framework for VCM Scenarios}
This section discusses the adaptations proposed to tailor RDONet to the compression of images in the VCM context. As analysis network at the decoder side, the state-of-the-art instance segmentation network Mask R-CNN~\cite{he2017} is chosen.
\subsection{End-to-end Training for Analysis Network}
In general, NCNs are optimized for the task of coding for the human visual system~(HVS).
Originally, the weights \ensuremath{\bm{\theta}}\ of RDONet are thus trained on a distortion $\ensuremath{D}_\mathrm{HVS}$ mixing MS-SSIM and MSE~\cite{brand2021_CVPR}:
\begin{equation}
\ensuremath{D}_\mathrm{HVS} = D_\mathrm{MSE} + 0.1 \cdot D_\mathrm{MS-SSIM}.
\label{eq:loss HVS}
\end{equation}
To optimally adapt RDONet to the Mask R-CNN as information sink, we end-to-end train its weights with the analysis network as discriminator in the training loop similar to the work in \cite{chamain2021,le2021_ICASSP,fischer2022_journal}.
Therefore, we substitute $D_\mathrm{HVS}$ by the Mask R-CNN task loss $\ensuremath{\mathcal{L}}_\mathrm{MRCNN}$~\cite{he2017} to obtain the following VCM-optimized loss:
\begin{equation}
\ensuremath{\loss_\mathrm{VCM}} = \ensuremath{\mathcal{L}}_\mathrm{MRCNN}(\ensuremath{\bm{\hat{\inputImage}}}) + \ensuremath{\lambda} \cdot \ensuremath{R}(\ensuremath{f}(\ensuremath{\bm{x}}| \ensuremath{\bm{\theta}})).
\end{equation}
The analysis network weights are not adapted during this training.
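A minimal training-step sketch (our illustration; \texttt{codec} denotes a hypothetical RDONet-style module returning the reconstruction and a differentiable rate estimate, and a torchvision-style Mask R-CNN is assumed):
\begin{verbatim}
def vcm_training_step(codec, mask_rcnn, optimizer, images, targets, lam):
    # keep the analysis network in train mode so that a torchvision-style
    # Mask R-CNN returns its loss dict, but freeze its weights
    mask_rcnn.train()
    for p in mask_rcnn.parameters():
        p.requires_grad_(False)
    x_hat, rate = codec(images)               # hypothetical codec interface
    losses = mask_rcnn(list(x_hat), targets)  # dict of task losses
    loss = sum(losses.values()) + lam * rate
    optimizer.zero_grad()
    loss.backward()                           # gradients reach codec only
    optimizer.step()
    return float(loss)
\end{verbatim}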
\subsection{VCM-Optimized Mask Generation for Inference}
\label{subsec:VCM optimized mask generation}
When applying RDONet for the HVS, \cite{brand2022} showed that deriving the masks from the variance of each block is a decent compromise between rate-distortion performance and runtime.
Fig.~\ref{fig:exemplary masks}c shows such a variance-based mask.
From this, it is easy to see that the generated mask is sub-optimal: a lot of highly structured content, such as the trees or road markings, would still be encoded in the first latent space $\ensuremath{\bm{y}}_1$, requiring a lot of bitrate despite being irrelevant for the analysis network at the decoder side.
In order to obtain optimal masks for inference when coding for the task of instance segmentation, we propose to apply an object detection network to the input data \ensuremath{\bm{x}}\ inspired by our previous work in~\cite{fischer2021_ICASSP}.
There, YOLO~\cite{redmon2016_ieee} is applied to the input image to derive the salient objects, to ultimately reduce the bitrate in non-salient coding units of VVC.
This successful criterion is transferred to our RDONet approach by transmitting all image areas that are covered by the bounding box of a YOLO detection in $\ensuremath{\bm{y}}_1$.
All remaining areas are transmitted in $\ensuremath{\bm{y}}_3$, since \cite{fischer2021_ICASSP} revealed that the best coding performance is achieved when the non-salient areas are transmitted at the lowest possible quality.
Thus, $\ensuremath{\bm{y}}_2$ is not utilized with this mask.
Since signaling the mask to the decoder is very cheap in RDONet, the bitrate overhead of keeping this second latent representation in the framework can be neglected.
The mask generated from the YOLO detections is depicted in Fig.~\ref{fig:exemplary masks}d and shows that only the areas containing relevant objects such as cars and pedestrians are transmitted with the best quality.
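A minimal sketch of this mask criterion (our illustration; the block size is a placeholder for the spatial granularity of the latent masks):
\begin{verbatim}
import numpy as np

def yolo_to_mask(boxes, H, W, block=64):
    # label 1 -> finest latent y_1, label 3 -> coarsest latent y_3;
    # y_2 is left unused by this criterion; block size is illustrative
    mask = np.full((H // block, W // block), 3, dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:          # pixel-coordinate detections
        r0, r1 = int(y1) // block, int(y2) // block
        c0, c1 = int(x1) // block, int(x2) // block
        mask[r0:r1 + 1, c0:c1 + 1] = 1
    return mask
\end{verbatim}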
\begin{figure}[!t]%
\centering
\begin{tabular}{p{0.45\linewidth}p{0.45\linewidth}}
\includegraphics[width=\linewidth]{images_for_comparison/Yolo_Det_CS.png} &
\includegraphics[width=\linewidth]{images_for_comparison/frankfurt_000000_001236_gtFine_color.png} \\
\footnotesize{\makecell[t]{a) Input image with \\ YOLO detections}} &
\footnotesize{\makecell[t]{b) Annotated ground \\ truth~(GT) data}} \\
\includegraphics[width=\linewidth]{images_for_comparison/Var_colored.png} &
\includegraphics[width=\linewidth]{images_for_comparison/yolo_CS_colored.png} \\
\footnotesize{\makecell[t]{c) Mask generated by variance \\ criterion (var mask)~\cite{brand2022}} } &
\footnotesize{\makecell[t]{d) Mask generated by proposed \\YOLO criterion (YOLO mask)}}
\end{tabular}
\caption{Exemplary masks \ensuremath{\bm{m}}\ for Cityscapes input image \textit{frankfurt\_000000\_001236\_leftImg8bit}.
The used color mapping is green~$\rightarrow \ensuremath{\bm{y}}_1$, yellow~$\rightarrow \ensuremath{\bm{y}}_2$, and red $\rightarrow \ensuremath{\bm{y}}_3$.
}
\vspace{-6mm}
\label{fig:exemplary masks}%
\end{figure}
\subsection{VCM-Adapted Training with Ground-Truth Data}
\label{subsec: improved training with GT data}
With the proposed mask criterion during inference, a discrepancy arises between the mask generation in training, i.e. variance based as proposed in~\cite{brand2022}, and inference, i.e. VCM-optimized.
Hence, the codec cannot optimally adapt its weights to the different tasks of delivering a high quality for salient areas transmitted by $\ensuremath{\bm{y}}_1$, and reducing the rate in non-salient areas transmitted in $\ensuremath{\bm{y}}_3$.
To mitigate this discrepancy, we propose to utilize the ground truth data, which is commonly available when training with the task loss of the analysis network.
Analogous to the masking criterion in inference with YOLO detections as presented in the previous Section~\ref{subsec:VCM optimized mask generation}, we generate a mask based on the ground truth objects.
If a pixel of $\ensuremath{\bm{x}}$ is located inside an annotated object, the corresponding block is coded in $\ensuremath{\bm{y}}_1$.
All other areas are coded in latent space $\ensuremath{\bm{y}}_3$.
Thereby, the network learns that the information transmitted in $\ensuremath{\bm{y}}_3$ does not influence the Mask R-CNN task loss $\ensuremath{\mathcal{L}}_\mathrm{MRCNN}$, and thus it reduces the bitrate in such regions as far as possible.
\vspace{-3mm}
\section{Analytical Methods}
\subsection{Training Procedure}
When training our NCN models, we selected four $\ensuremath{\lambda}$ parameters such that the coding results lie in a bitrate range comparable to that of the reference VVC test model~(VTM-10.0)~\cite{chen2020vtm10} with the four quantization parameter~(QP) values of 22, 27, 32, and 37.
We trained the models on the Cityscapes training dataset, cropped to $512\times1024$ patches, with a batch size of eight.
As optimizer, we used Adam with a learning rate of $0.0001$.
First, we trained a reference RDONet model on $\ensuremath{\loss_\mathrm{HVS}}$ as in \eqref{eq:loss HVS} with variance masks for 1500 epochs as described in~\cite{brand2022}.
These weights were taken as initialization to further train the models with the proposed VCM optimizations, i.e. the training with $\ensuremath{\loss_\mathrm{VCM}}$ and the training with the GT-based masks, for another 1000 epochs.
To generate the VCM-optimized masks for inference, we trained a YOLO-v5 network~\cite{jocher2020YoloLibrary} on the Cityscapes training data for 600 epochs with the standard configuration from~\cite{jocher2020YoloLibrary}.
\subsection{Evaluation Setup}
To evaluate our proposed methods, we built a coding framework similar to~\cite{fischer2020_ICIP}, in line with the common testing conditions for VCM proposed by MPEG~\cite{liu2020_VCM_CTC}.
As dataset, we compressed the 500 uncompressed Cityscapes~\cite{cordts2016} validation images.
The compressed images were taken as input for the Mask R-CNN~\cite{he2017} instance segmentation network with ResNet50 backbone.
Its weights trained on the Cityscapes training data were taken off the shelf from the Detectron2 library~\cite{wu2019detectron2}.
The Mask R-CNN accuracy is measured by the average precision~(AP), which is the standard metric to evaluate instance segmentation networks.
To alleviate class imbalances, we calculate the weighted AP~(wAP) as in~\cite{fischer2020_ICIP}.
The resulting rate-wAP curves are quantified by the Bj\o ntegaard delta rate~(BDR) metric~\cite{bjontegaard2001_new}, which measures the bitrate savings of a codec over an anchor codec at the same detection accuracy.
As a reference, we compare our methods against the VTM-10.0 and the NCN from~\cite{fischer2022_journal} with a similar codec structure but only one transmitted latent space.
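For reference, the BDR figures below follow the usual cubic-fit formulation, sketched here with wAP as the quality axis (our illustration, not the exact reference implementation):
\begin{verbatim}
import numpy as np

def bd_rate(rate_anchor, q_anchor, rate_test, q_test):
    # Bjontegaard delta rate: mean bitrate change (%) at equal quality,
    # from cubic fits of log-rate over the overlapping quality range
    pa = np.polyfit(q_anchor, np.log(rate_anchor), 3)
    pt = np.polyfit(q_test, np.log(rate_test), 3)
    lo = max(min(q_anchor), min(q_test))
    hi = min(max(q_anchor), max(q_test))
    ia = np.diff(np.polyval(np.polyint(pa), [lo, hi]))[0]
    it = np.diff(np.polyval(np.polyint(pt), [lo, hi]))[0]
    return (np.exp((it - ia) / (hi - lo)) - 1.0) * 100.0
\end{verbatim}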
\vspace{-3mm}
\section{Experimental Results}
\subsection{Influence of End-to-end Training with Analysis Network}
Fig.~\ref{fig:wAP-rate curves} shows the coding efficiency of the tested coding methods.
The RDONet model trained for the human visual system~\cite{brand2022} (orange) performs worse than VTM-10.0 in terms of wAP-rate performance.
The reference NCN~\cite{fischer2022_journal} with one latent space trained on \ensuremath{\loss_\mathrm{VCM}}\ (blue) outperforms the reference VTM-10.0 codec by 41.4\,\% BDR savings (cf. Tab.~\ref{tab:BD results}).
The proposed approach of coding the data with the hierarchical RDONet structure, steered with the masks derived from the basic variance criterion (green), results in even better coding performance with 52.7\,\% BDR savings.
\subsection{Influence of Advanced Mask Generation for Inference}
Next, Fig.~\ref{fig:comparison different mask generation methods} compares the coding performance employing different masking criteria to obtain $\ensuremath{\bm{m}}$ for the inference.
Here, the trained model remains the same and only the masks are changed during the inference.
The reference case (green) is the mask generated by the variance criterion from~\cite{brand2022}, which is not optimized for VCM scenarios.
When deriving the mask from the YOLO detections (red) as proposed in Sec.~\ref{subsec:VCM optimized mask generation}, the required bitrate can further be reduced compared to the reference case while even increasing the detection accuracy.
As an oracle test, we also conducted experiments with optimal inference masks derived from the GT data (purple) resulting in a slightly higher detection accuracy at the same bitrate than the VCM-optimized masks.
This is due to the fact that YOLO does not perfectly find all salient objects in the Cityscapes dataset.
Thus, those missed objects are transmitted with a worse quality, which ultimately leads to missed detections by the Mask R-CNN that is applied to the coded images.
From this it follows that the detection accuracy of the network used to generate the masks is vital, as missed detections can have a severe impact on the whole framework.
Despite those possible misses, the Mask R-CNN detection accuracy is still higher than for VVC-coded images at all investigated bitrates.
All in all, utilizing the proposed VCM-optimized mask generation method results in 66.2\,\% bitrate savings over VTM-10.0. With an optimal mask generator, 70.0\,\% of bitrate could be saved.
\begin{figure}
\centering
\includegraphics[height=4cm]{plots/plot_weighted_AP_bitrate.pdf}
\vspace{-3mm}
\caption{wAP-rate curves averaged over the 500 Cityscapes validation images. NCN denotes the reference network with only one latent space. The dotted line shows the accuracy when applying Mask R-CNN to uncompressed images.}
\label{fig:wAP-rate curves}
\vspace{-5mm}
\end{figure}
\begin{table}[]
\centering
\footnotesize
\caption{Bj\o ntegaard delta values with VTM-10.0 as anchor.}
{
\begin{tabular}{lllll|r}
\toprule
\makecell[lb]{\\Marker} & \makecell[lb]{Codec} & \makecell[lb]{Train\\ loss} & \makecell[lb]{Train\\ mask} & \makecell[lb]{Inf. \\ mask} & \makecell[lb]{BDR \\ wAP} \\ \midrule
\showmark{color5}{Mercedes star} & VTM-10.0 & - & - & YOLO & -62.0\,\%\\
\showmark{color0}{*} & NCN~\cite{fischer2022_journal} & \ensuremath{\loss_\mathrm{VCM}} & - & - & -41.4\,\%\\
\showmark{color1}{square*} & RDONet & \ensuremath{\loss_\mathrm{HVS}} & var & var & 21.5\,\% \\
\showmark{color2}{diamond*} & RDONet & \ensuremath{\loss_\mathrm{VCM}} & var & var & -52.7\,\% \\
\showmark{color3}{asterisk} & RDONet & \ensuremath{\loss_\mathrm{VCM}} & var & YOLO & -66.2\,\% \\
\showmark{color4}{pentagon*} & RDONet & \ensuremath{\loss_\mathrm{VCM}} & var & GT & -70.0\,\% \\
\showmarkDashed{color3}{asterisk} & RDONet & \ensuremath{\loss_\mathrm{VCM}} & GT & YOLO & -77.1\,\% \\
\showmarkDashed{color4}{pentagon*} & RDONet & \ensuremath{\loss_\mathrm{VCM}} & GT & GT & -79.5\,\% \\
\bottomrule
\end{tabular}
}
\label{tab:BD results}
\vspace{-3mm}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[height=4cm]{plots/plot_weighted_AP_bitrate_VCM_mask.pdf}
\vspace{-3mm}
\caption{Comparison of the RDONet model trained with $\ensuremath{\loss_\mathrm{VCM}}$ depending on the used masks during inference coding the 500 Cityscapes validation images. Solid and dashed lines symbolize that the RDONet model was trained with variance and GT-based masks, respectively.}
\label{fig:comparison different mask generation methods}
\vspace{-3mm}
\end{figure}
\def0.50{0.50}
\begin{figure}[!t]%
\centering
\begin{tabular}{p{0.45\linewidth}p{0.45\linewidth}}
\includegraphics[width=\linewidth]{images_for_comparison/Bildvergleich/VTM10/frankfurt_000000_001236_leftImg8bit_qp_37_result.png} &
\includegraphics[width=\linewidth]{images_for_comparison/Bildvergleich/004_Mask_var/Det_004_Mask_var.png} \\
\footnotesize{\makecell[t]{a) VTM-10.0 (QP=37)\\ @ 0.057 bits per pixel}} &
\footnotesize{\makecell[t]{b) Train: var mask, Inf.: var mask \\@ 0.078 bits per pixel}} \\
\includegraphics[width=\linewidth]{images_for_comparison/Bildvergleich/004_Mask_CS/004_Mask_CS.png} &
\includegraphics[width=\linewidth]{images_for_comparison/Bildvergleich/004_Mask_GTTraining_CS/004_Mask_GT_CS.png} \\
\footnotesize{\makecell[t]{c) Train: var mask, Inf.: YOLO \\ mask @ 0.059 bits per pixel}} &
\footnotesize{\makecell[t]{d) Train: GT mask, Inf.: YOLO \\ mask @ 0.047 bits per pixel}} \\
\end{tabular}
\vspace{-3mm}
\caption{Visual results for coding the exemplary Cityscapes image \textit{frankfurt\_000000\_001236\_leftImg8bit} with different RDONet models and the corresponding Mask R-CNN detections. All models were trained with \ensuremath{\loss_\mathrm{VCM}}\ on the same $\ensuremath{\lambda}$ value. Corresponding masks are depicted in Fig.~\ref{fig:exemplary masks}. Best to be viewed enlarged on a screen.}%
\label{fig:visual results}%
\vspace{-5mm}
\end{figure}
\subsection{Influence of Improved Training with GT Masks}
For the previously shown results, the networks were all trained with the variance-based masks as proposed in~\cite{brand2022}.
The dashed lines in Fig.~\ref{fig:comparison different mask generation methods} represent the coding behavior when the models are trained with masks derived from the GT data as proposed in Sec.~\ref{subsec: improved training with GT data}.
The curves show that training the models with the VCM-optimal masks further increases the coding efficiency by reducing the bitrate in non-salient areas.
In terms of BDR, the model trained on GT-based masks and executed with the YOLO mask during inference achieves bitrate savings of 77.1\,\% over VTM-10.0.
By that, the proposed framework with RDONet achieves 15.1 percentage points more BDR savings than applying the saliency-driven method proposed in~\cite{fischer2021_ICASSP}, with YOLO as saliency detector, to VTM-10.0 (brown).
Our method also clearly outperforms the network proposed in~\cite{le2021_ICASSP} (-33.7\,\% BDR over VTM-8.2), and our previous LSMnet method~\cite{fischer2022_journal}, which adds implicit saliency information derived from the Mask R-CNN features to the coding process (-54.3\,\% BDR over VTM-10.0).
\subsection{Visual Results}
Fig.~\ref{fig:visual results} gives a visual comparison.
When coding with the proposed YOLO mask (cf. Fig.~\ref{fig:visual results}c), non-relevant details such as street markers or trees are coded in $\ensuremath{\bm{y}}_3$, and thus with lower quality requiring less rate.
If the model is also trained with explicit saliency information (cf. Fig.~\ref{fig:visual results}d), the quality is drastically reduced in the non-salient areas.
The relevant objects in the image are transmitted with high quality and can still be detected by the analysis network.
These visual results also show that the high VCM coding efficiency comes at a price.
Consider a scenario where, e.g., a human supervisor is supposed to verify the detections of the analysis network on the transmitted image.
This would be possible in the salient areas, but the supervisor could not tell whether objects were missed in areas that have been classified as non-salient during mask generation at the encoder.
Therefore, the proposed method can be regarded as an intermediate step between image and feature coding for machines.
Future research might add a HVS-based regularization term to $\ensuremath{\loss_\mathrm{VCM}}$, in order to not let the quality in non-salient regions drop below a certain visual quality depending on the practical use case.
\vspace{-3mm}
\section{Conclusion}
In this paper we proposed to employ a hierarchical neural image compression network for the VCM context, which transmits the information in multiple latent spaces.
To adapt this network to a coding framework with an instance segmentation network as information sink, we end-to-end trained the NCN with the analysis network as discriminator, proposed a VCM-optimized saliency mask generation, and also utilized a mask derived from the GT data to optimally adapt the different latent spaces during training.
With all our proposed optimizations, our RDONet model is able to save 77.1\,\% of bitrate over VTM-10.0 at the same detection accuracy.
Thereby, RDONet also clearly outperforms existing NCN approaches with one latent space, as well as the reference case of applying the same YOLO-based saliency criterion to VTM-10.0.
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.13642",
"language": "en",
"timestamp": "2023-02-28T02:26:15",
"url": "https://arxiv.org/abs/2302.13642",
"yymm": "2302"
} | \section{Introduction and Main Results}
We consider Abel equations
\begin{equation}\label{eq:Abel}
\frac{dx}{dt}=x'=A(t)x^3+B(t)x^2,
\end{equation}
with $A,B$ smooth functions defined on $[0,T]$. Let $u(t,x)$ denote
the solution of \eqref{eq:Abel} determined by $u(0,x)=x$. We say
$u(t,x)$ is closed or periodic if $u(T,x)=x$. Let $u(t,x)$ be
closed. It is singular or multiple if $u_x(T,x)=1$, otherwise it is
simple or hyperbolic. Isolated closed solutions are also called limit
cycles. A singular closed solution such that $u_{xx}(T,x)\ne0$ is
called a double closed solution or a semistable limit cycle. The
problem of determining the maximum number of closed solutions of
\eqref{eq:Abel} is the ``Pugh problem'' mentioned by Smale~\cite{S}.
Notice that $x=0$ is always a closed solution of
\eqref{eq:Abel}. Therefore the number of closed solutions in regions
$x>0$ and $x<0$ can be studied separately. Since one region can be
translated to the other with the transformation $x\rightarrow -x$, we
shall restrict attention to the region $x>0$.
There are several results for uniqueness of closed solutions
of~\eqref{eq:Abel} on $x>0$. The best known impose that one of the
functions $A$ or $B$ does not change sign (see \cite{GG,GL,Ll,Panov,
Pliss}). Other conditions, allowing $A$ and $B$ to change sign, are
considered for instance in \cite{AGG,BFG}. In all these results, the
condition of a definite sign is imposed on a certain derivative of the
return map or on the initial conditions corresponding
to positive closed solutions. Applying these results
to~\eqref{eq:Abel}, one determines families for which there is at most
one positive closed solution.
A different approach is taken in \cite{BFG2} where, in order to obtain
two positive closed solutions as upper bound, the Abel equation is
considered to be a member of a one-parameter family,
\begin{equation}\label{eq:Abelparametric}
x'=A(t,\lambda)x^3+B(t,\lambda)x^2, \quad \lambda\in\mathbb{R}
\end{equation}
where $F(t,x,\lambda):=A(t,\lambda)x^3+B(t,\lambda)x^2$ satisfies
$F_\lambda(t,x,\lambda)> 0$ for $x>0$. Thus, $\lambda \mapsto
F(t,x,\lambda)$ is strictly increasing for all $t\in\mathbb{R}$ and
$x>0$; such a family is termed monotonic with respect to $\lambda$.
Notice that the above definition of monotonic with respect to
$\lambda$ for families of Abel equations is an adaptation of the
setting of the so-called rotated families of planar vector fields
introduced by G.F.D. Duff, see \cite{Duff} or
\cite[Sec. 4.6]{Perko}. For these families of vector fields, the
control of bifurcations of double closed solutions is crucial to
understanding their global bifurcation diagram of closed solutions.
\medskip
We consider simple Abel equations for which there is no uniqueness of
positive closed solutions, and study their number by controlling the
nature of the double closed solutions. In~\cite{BFG2}, we studied the
case where $A$ has two simple zeros of which one is at $t=0$, and $B$ has
one simple zero in $[0,T]$. In the present work, we consider the case
where $B$ has two simple zeros in $[0,T]$. Our main result provides
sufficient conditions to determine the stability of positive singular
closed solutions.
Throughout this communication we shall write
\begin{equation}\label{eq:upper}
P(t)=4(B (t)A '(t)-B '(t)A (t)) - B ^3(t)
\end{equation}
and
\begin{equation}\label{ecu:v}
v(t,x)=B(t)(2A(t)x+B(t))^2+P(t).
\end{equation}
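For concrete families, $P$ and $v$ can be generated symbolically from \eqref{eq:upper} and \eqref{ecu:v}; a sketch for the family \eqref{eq:Abelquadquad} introduced below (our illustration):
\begin{verbatim}
import sympy as sp

t, x, tA, tB = sp.symbols('t x t_A t_B', real=True)
A = t * (t - tA)                 # example family, introduced below
B = (t - tB) * (t - 1)
P = sp.expand(4 * (B * A.diff(t) - B.diff(t) * A) - B**3)
v = sp.expand(B * (2 * A * x + B)**2 + P)
\end{verbatim}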
\begin{theo}\label{theo:bif}
If
\begin{itemize}
\item[$(C_1)$] $A(0)=0$, $A(t)$ has a simple zero $t_A \in (0,T)$ and $B(t)$ has two simple zeros $ t_{B_1},t_{B_2}\in [0,T]$ with $0<t_{B_1}<t_A < t_{B_2}\leq T,$
\end{itemize}
and for any positive singular closed solution $\tilde u(t):=u(t,\tilde x)$ of Abel equation \eqref{eq:Abel}
\begin{itemize}
\item[$(C_2)$] the function $2A(t)\tilde u(t)+B(t)$ has at most a simple zero in
each of the intervals $[0,t_A]$ and $[t_A,T]$,
\item[$(C_3)$] $\operatorname{sgn}\left(v(t,\tilde u(t))\right) = \operatorname{sgn}(A'(0)B(0)),$ for all $t\in[0,T],$
\end{itemize}
then $\operatorname{sgn}\left( u_{xx}(T,\tilde x)\right)=\operatorname{sgn}(A'(0)B(0))$.
\end{theo}
\begin{rema}\label{rem:signos}
For the sake of simplicity of exposition, we assume $A'(0)<0$ and
$B(0)>0$. So condition $(C_3)$ becomes $v(t,\tilde u(t)) < 0$ for
all $t \in [0,T]$, and the conclusion is $ u_{xx}(T,\tilde
x)<0$. The other cases are proved similarly.
\end{rema}
A difficulty in applying the above result is to verify whether
hypotheses $(C_2)$ and $(C_3)$ hold, since they involve the
unknown singular closed solution. Nevertheless, in
Propositions~\ref{prop:upper} and \ref{PropC3} and
Corollary~\ref{CorC3} below, we shall give sufficient algebraic
conditions for them to be checked computationally.
As a motivating example, consider the family of Abel equations
\begin{equation} \label{eq:Abelquadquad}
x'=t(t-t_A)x^3+ (t-t_{B})(t-1)x^2,
\quad t_A, t_{B} \in\mathbb{R},
\end{equation}
where $t\in[0,1]$. Upper bounds on the number of positive closed
solutions of \eqref{eq:Abelquadquad} have been obtained for some
cases, as will be detailed in Section~\ref{Sect4}. As a consequence of
Theorem \ref{theo:bif} above, we prove:
\begin{theo}\label{theo:example}
Abel equation \eqref{eq:Abelquadquad} has at most two positive closed solutions,
taking into account their multiplicities, and this upper bound is sharp.
\end{theo}
As we shall see, the existence of two positive closed solutions is due
to the fact that for $t_A=2/3$ and $t_B=1/3$ the multiplicity of the
closed solution $x=0$ is four, while generically it is two. Hence a
Hopf-like codimension-two bifurcation appears, and two positive closed
solutions bifurcate from the origin.
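This bifurcation can be observed numerically with the displacement function sketched in the introduction; the parameter values and grid below are illustrative.
\begin{verbatim}
tA, tB = 0.67, 0.33              # slightly off the point (2/3, 1/3)
A = lambda t: t * (t - tA)
B = lambda t: (t - tB) * (t - 1)
xs = np.linspace(1e-3, 1.0, 400)
d = [displacement(A, B, 1.0, x) for x in xs]
brackets = [(xs[i], xs[i + 1]) for i in range(len(xs) - 1)
            if d[i] * d[i + 1] < 0]  # sign changes bracket closed solutions
print(brackets)                      # expect at most two brackets
\end{verbatim}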
\medskip
The main motivation for this paper was Problem~6 of \cite{G},
i.e., to obtain the maximum number of limit cycles of
the Abel equation
\begin{equation}\label{eq:Abel_trigonometric_linear}
x'= \left(a_1+a_2\sin\,t+a_3\cos\,t\right)x^3+\left(b_1+b_2\sin\,t+b_3\cos\,t\right)x^2.
\end{equation}
We address this problem in Section~\ref{Sect5}. For this equation,
the functions $A,B$ have at most two simple zeros, and a Hopf-like
codimension-two bifurcation at $a_1=b_1=0$ proves the existence of at
least two positive limit cycles. If $A$ and $B$ have at most one
simple zero, or the simple zeros of $A$ and $B$ do not alternate, the
problem is solved in~\cite{AGG} and \cite{BFG}. We prove that
Theorem~\ref{theo:bif} explains the upper bound of two positive limit
cycles in a region where two positive limit cycles bifurcate from the
origin, giving a partial answer to Problem~6 of \cite{G}. We also
discuss the limitations of Theorem~\ref{theo:bif} in this case.
\section{Stability of Singular Closed Solutions}
In this section we prove the main result, but first we shall explain
how the stability of the singular closed solutions determines the maximum
number of limit cycles for \eqref{eq:Abelparametric} assuming
that this number is known for certain values of the parameter.
Assume that the family \eqref{eq:Abelparametric} satisfies $F_\lambda(t,x,\lambda)>0$
for all $t\in(0,T)$, $x>0$, and $\lambda\in (\lambda_1,\lambda_2)$. If $u(t,x,\lambda)$
denotes the solution of~\eqref{eq:Abelparametric} determined by $u(0,x,\lambda)=x$ then
$u(t)$ is a closed solution if and only if $u(T,u(0),\lambda)=u(0)$.
As the monotonicity of $F(t,x,\lambda)$ with respect to $\lambda$ implies that of $u(t,x,\lambda)$ when the latter is positive, the Implicit Function Theorem guarantees the existence of a $\mathcal{C}^1$ function $\Lambda$ defined by
\[
u\left(T,x,\Lambda(x)\right)=x.
\]
Therefore, for every fixed $\lambda$, the number of positive
closed solutions, $N(\lambda)$, is the number of solutions of $\Lambda(x)=\lambda$. Note that \[\Lambda'(x)=\frac{1-u_{x}(T,x,\Lambda(x))}{u_{\lambda}(T,x,\Lambda(x))},\]
where $u_{\lambda}(T,x,\Lambda(x))>0$, and if $\Lambda'(x)=0$ then
\[
\Lambda''(x)=-\frac{u_{xx}(T,x,\Lambda(x))
}{u_{\lambda}(T,x,\Lambda(x))}.
\]
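Numerically, $\Lambda$ can be evaluated pointwise by solving $u(T,x,\lambda)=x$ for $\lambda$ with bracketing root finding (a sketch reusing the displacement function above; the signature $A(t,\lambda)$, $B(t,\lambda)$ is an assumption):
\begin{verbatim}
from scipy.optimize import brentq

def Lambda(x0, lam_lo, lam_hi, A, B, T):
    # solve u(T, x0, lambda) = x0; monotonicity in lambda gives uniqueness
    g = lambda lam: displacement(lambda t: A(t, lam),
                                 lambda t: B(t, lam), T, x0)
    return brentq(g, lam_lo, lam_hi)  # assumes a sign change is bracketed
\end{verbatim}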
\medskip
The following result is an adaptation of \cite[Theorem 1.3]{BFG2}.
It states that if the number of closed solutions for a certain value of $\lambda$, e.g. $\lambda_2$, is known
and the graph of $\Lambda$ has only minima, then the number of closed solutions
cannot increase for lower values of the parameter, except maybe for two closed solutions corresponding to a bifurcation of the origin and a bifurcation of infinity. A similar
conclusion holds when the graph of $\Lambda$ has only maxima.
\begin{theo}\label{theo:bifrotated}
Assume that the family \eqref{eq:Abelparametric} satisfies $F_\lambda(t,x,\lambda)>0$ for every $\lambda \in (\lambda_1,\lambda_2)$, $t\in(0,T)$ and $x>0$, and that $u_{xx}(T,\tilde x ,\lambda)<0$ ($u_{xx}(T,\tilde x,\lambda)>0$) for every positive singular closed solution $u(t,\tilde x,\lambda)$ with $\lambda\in[\lambda_1,\lambda_2]$.
Then
\[
N(\lambda)\le N(\lambda_2) + 2 \ (N(\lambda)\le N(\lambda_1) + 2)\quad \text{for every } \lambda \in(\lambda_1,\lambda_2 ).
\]
Moreover, the two possible additional closed solutions correspond to
a Hopf bifurcation of the origin or a Hopf bifurcation of infinity.
\end{theo}
\begin{proof}
In each of the intervals of the domain of definition of $\Lambda$ there is at most one extremum point, which is a minimum, since otherwise there are two consecutive zeros $x_1<x_2$ of $\Lambda'$ satisfying $\Lambda''(x_1) \Lambda''(x_2)\leq0$, in contradiction with the hypothesis. Hence, on each such interval $\Lambda$ is either monotonic or has a unique minimum, being decreasing and then increasing in the latter case, since two consecutive hyperbolic closed solutions have opposite stability.
\medskip
{\em Claim 1.} If $0<x_1<x_2$ then $u(t,x_1,\Lambda(x_1))<u(t,x_2,\Lambda(x_2))$.
If $\Lambda(x_1)=\Lambda(x_2)$, the conclusion follows by the uniqueness of solutions of the initial value problem. If $\Lambda(x_1)<\Lambda(x_2)$ (resp.~$\Lambda(x_1)>\Lambda(x_2)$) then $u(t,x_1,\Lambda(x_1))$ is a lower (resp.~upper) solution of $x'=F(t,x,\Lambda(x_2))$. The conclusion holds since closed solutions cannot cross lower or upper closed solutions.
\medskip
{\em Claim 2.} If $\Lambda(x)$ is defined in the interval $(\bar{x}, \tilde{x}]$ with $\bar{x}>0$ then it is also defined at $\bar{x}$.
By claim 1, $x\to u(t,x,\Lambda(x))$ is strictly increasing. Also, there exists $\epsilon>0$ such that $\Lambda$ is
monotonic and continuous in $(\bar x,\bar x+\epsilon)$, so that,
denoting $\bar{\lambda}= \lim_{x\to \bar{x}}\Lambda(x)$,
\[
u(t,\bar x,\bar \lambda)=\lim_{x\to \bar{x}} u(t,x,\Lambda(x)).
\]
Note that the limit exists since $x\to u(t,x,\Lambda(x))$ is an
increasing function bounded below by $0$. Moreover, $u(t,\bar x,\bar
\lambda)$ is periodic in $t$ since the functions $t\to u(t,x,\Lambda(x))$ are. Thus
$\Lambda(\bar x)=\bar \lambda$.
\medskip
{\em Claim 3.} If $\Lambda$ is defined in a set $[x_1,x_2) \cup \{x_3\}$, where $x_2<x_3$, then it is also defined at $x_2$.
By claim 1, $u(t,x,\Lambda(x))<u(t,x_3,\Lambda(x_3))$ for all $x<x_3$, so that if we
denote $\lambda_2= \lim_{x\to x_2}\Lambda(x)$ then
\[
u(t,x_2,\lambda_2)=\lim_{x\to x_2} u(t,x,\Lambda(x)),
\]
and we conclude analogously.
\medskip
As a consequence of claims 1, 2, and 3, if $D$ is the domain of definition of $\Lambda$ then
$$
D= (0,x_1] \cup [x_2,x_3] \cup \cdots \cup [x_{n-1},x_{n}],
$$
or
$$
D= (0,x_1] \cup [x_2,x_3] \cup \cdots \cup [x_{n},x_{n+1}),
$$
or
$$
D= [x_1,x_2] \cup [x_3,x_4]\cup \cdots \cup [x_{n-1},x_{n}],
$$
or
$$
D= [x_1,x_2] \cup [x_3,x_4]\cup \cdots \cup [x_{n},x_{n+1}),
$$
where $x_{n+1}\leq \infty$, $\Lambda(x_i) \in\{ \lambda_1, \lambda_2\}$, for $i=1,\ldots,n$,
and if $x_{n+1}<\infty$ and
\[
\bar \lambda = \lim_{x\to x_{n+1}} \Lambda(x) <+\infty
\]
then the solution $u(t,x_{n+1},\bar \lambda)$ is unbounded.
As $\Lambda$ has only local minima, for every $\lambda\in[\lambda_1,\lambda_2]$ the number of
solutions of $\Lambda(x) = \lambda$ with $x\in (x_i,x_{i+1})$ is less than or
equal to the number of solutions of $\Lambda(x) = \lambda_2$ with $x\in (x_i,x_{i+1})$, while
the number of
solutions of $\Lambda(x) = \lambda$ with $x\in (0,x_1)$ or $x\in (x_n,x_{n+1})$ is less than or
equal to the number of solutions of $\Lambda(x) = \lambda_2$ with $x\in (0,x_1)$ or $x\in (x_n,x_{n+1})$
plus one, where the extra solution corresponds to a Hopf bifurcation of the origin or infinity, respectively.
\end{proof}
By the change $\lambda\to-\lambda$, a similar result holds if $F_\lambda(t,x,\lambda)<0$.
\medskip
Now we shall prove Theorem~\ref{theo:bif}, which determines the stability of
the singular positive closed solutions.
We divide the proof into various propositions.
In the following, we assume that $\tilde u(t):=u(t,\tilde x)$ is a singular positive closed solution, and that $(C_1),(C_2)$, and $(C_3)$ hold.
\begin{prop}[\cite{BFG2}]\label{prop:bif0}
For any $\alpha,\beta\in\mathbb{R}$,
\begin{equation}\label{exp}
\operatorname{sgn}\left( u_{xx}(T,\tilde x)\right)= \operatorname{sgn}\left(\int_{0}^{T} F(t,\alpha)G(t,\beta)\,dt \right),
\end{equation}
where
\begin{align*}
F(t,\alpha)&:=(2-\alpha)B (t)+ 2(3-\alpha) A (t) \tilde u(t),\\
G(t,\beta)&:= u_x(t,\tilde x) - \beta \tilde u(t).
\end{align*}
\end{prop}
With equation~\eqref{exp} in mind, the idea for proving Theorem~\ref{theo:bif} is to choose
$\alpha$ and $\beta$ such that the changes of sign of the corresponding $F(t,\alpha)$ and $G(t,\beta)$ coincide, and consequently $F(t,\alpha)G(t,\beta)$ does not change sign.
The first step is to determine the changes of sign of $F(t,\alpha)$ and $G(t,\beta)$, which is done in the following two propositions, where we have taken into account Remark \ref{rem:signos}.
By $(C_2)$, $2A(t)\tilde u(t)+B(t)$ has at most one simple zero in each of the intervals $[0,t_A]$, $[t_A,T]$.
In view of the signs of $A$ and $B$, it can be proved that there are at least two simple zeros, denoted by $t_1$ and $t_2$, such
that
\[0<t_1<t_{B_1}<t_A<t_2<t_{B_2}\leq T.\]
Moreover, regarding $(C_1)$ and Remark \ref{rem:signos}, $2A(t)\tilde u(t)+B(t)$ is positive in $[0,t_1)\cup(t_2,T]$ and negative in $(t_1,t_2)$.
\begin{prop}\label{prop:alpha}
For every $\alpha \in \mathbb{R}$, $F(t,\alpha)$ has at most two changes of sign in $(0,T)$.
More precisely,
\begin{enumerate}
\item $F(t,\alpha)=0$ is the graph of a smooth function $\alpha(t)$ defined for
every $t\neq t_1,t_2$,
\item $F(t,\alpha) > 0$ for $\alpha < \alpha(t),\ t \in [0,t_1) \cup (t_2,T]$, and for $\alpha > \alpha(t),\ t \in [t_1,t_2]$,
\item $F(t,\alpha) < 0$ for $\alpha > \alpha(t),\ t \in [0,t_1) \cup (t_2,T]$, and for $\alpha < \alpha(t),\ t \in [t_1,t_2]$,
\item $\alpha(t)$ is strictly decreasing in its domain of definition,
\item for every $i=1,2$,
\[
\lim_{t\to t_i^{\pm}} \alpha(t)=\pm\infty,
\]
\item $\alpha(T)\geq \alpha(0)=2$.
\end{enumerate}
\end{prop}
\begin{proof}
First, we observe that $F(t,\alpha)=0$ if and only if $\alpha=\alpha(t)$ where
\[ \alpha(t)= 2+\dfrac{2A(t)\tilde u(t)}{2A(t)\tilde u(t)+B(t)}.\]
Then, by $(C_2)$, the function $\alpha(t)$ is smooth, so that $(1)$ holds. Moreover, since $2A(t)\tilde u(t)+B(t)$
changes sign at $t_1,t_2$, and $F(t_A,3)=-B(t_A)>0$, then $(2)$ and $(3)$ readily follow (see Fig.~\ref{fig:1}).
To prove $(4)$, it suffices to observe that $\operatorname{sgn} (\alpha'(t))=\operatorname{sgn}\left(v(t,\tilde u(t))\right)$ and that $v(t,\tilde u(t)) < 0$ by $(C_3)$ and Remark~\ref{rem:signos}. Moreover, as $A(t_i)\neq 0,\ i = 1,2$, we have $(5)$.
By $(C_1)$ and Remark \ref{rem:signos}, $A(T)$ and $B(T)$ are non-negative and not simultaneously zero, so that
\[
\dfrac{2A(T)\tilde u(T)}{2A(T)\tilde u(T)+B(T)}\geq 0.
\]
Hence, $\alpha(T)\geq 2=\alpha(0)$, and $(6)$ follows.
Finally, as $F(t,\alpha)=0$ is the graph of $\alpha(t)$,
then $(1)-(6)$ imply that, for every fixed $\alpha$, the function $t\to F(t,\alpha)$ has at most two changes of sign in $(0,T)$.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1]
\clip (-4.5,-2) rectangle (4.5,2);
\draw[-] (-3.5,0) -- (3.5,0);
\draw[-] (-3.5,-2) -- (-3.5,2);
\draw[-] (3.5,-2) -- (3.5,2);
\draw[dashed] (-1,-2) -- (-1,2);
\draw[dashed] (1,-2) -- (1,2);
\draw[dashed] (-3.5,.19) -- (3.5,.19);
\draw[dashed] (-3.5,.81) -- (3.5,.81);
\filldraw (-1,0) circle (2pt);\node[fill=white] at (-1-0.10,-0.55) {$t_1$};
\filldraw ( 1,0) circle (2pt);\node[fill=white] at ( 1+0.10,-0.55) {$t_2$};
\filldraw ( 0.2,0) circle (2pt);\node[fill=white] at ( 0+0.10,-0.55) {$t_A$};
\node[fill=white] at ( -2.3+0.10,1.55) {${\scriptstyle F(t,\alpha)>0}$};
\node[fill=white] at ( 2.3+0.10,-1.55) {${\scriptstyle F(t,\alpha)<0}$};
\node[fill=white] at ( -2.6+0.10,-1.55) {${\scriptstyle F(t,\alpha)<0}$};
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=-3.5:-1.05,samples=200] ({\x},{0.5+\x/(\x*\x-1)});
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=-0.95:0.95,samples=200] ({\x},{0.5+\x/(\x*\x-1)});
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=1.05:3.5,samples=200] ({\x},{0.5+\x/(\x*\x-1)});
\node at (2.5,1.5) {$\alpha(t)$};
\end{tikzpicture}
\end{center}
\caption{Sketch of $\alpha(t)$.}\label{fig:1}
\end{figure}
\end{proof}
Note that $F(t,\alpha)=0$ always defines
the graph of a function, so that hypotheses of Theorem~\ref{theo:bif}
are imposed to determine its properties. In
particular, $(C_2)$ implies that it has two asymptotes,
and $(C_3)$ implies the monotonicity of $\alpha(t)$.
A similar result holds for the zeros of $G(t,\beta)$, which
are determined by the zeros of a given function whose number of extrema and their nature are determined by $(C_2)$.
\begin{prop}\label{prop:beta}
There exist $\beta_0,\beta_1,\beta_2$ such that $G(t,\beta)$ has two changes of sign in $(0,T)$ for
every $\beta\in(\beta_1,\beta_2)$, $\beta\neq \beta_0$, and no zeros for
$\beta\not\in[\beta_1,\beta_2]$. More precisely,
\begin{enumerate}
\item $G(t,\beta)=0$ is the graph of a positive closed smooth function $\beta(t)$ defined for $t\in[0,T]$,
\item $G(t,\beta)>0$ for $\beta<\beta(t)$ and $G(t,\beta)<0$ for $\beta>\beta(t)$,
\item $\beta(t)$ has exactly two extrema: a maximum at $t_1$ and a minimum at $t_2$.
\end{enumerate}
\end{prop}
\begin{proof}
First, we notice that $G(t,\beta) = 0$ if and only if $\beta = \beta(t)$, where
\begin{equation}\label{eq:beta}
\beta(t)=\frac{ u_x(t,\tilde x)}{\tilde u(t)}.
\end{equation}
As $\tilde u(t) > 0$, we have that $\beta(t)$ is defined for all $t\in\mathbb{R}$. Furthermore, differentiating \eqref{eq:Abel} with respect to $x$ and using that $ u_x(0,x)=1$, we obtain that \[ u_x(t,\tilde x) = \exp\left(\int_0^t \left(3 A(s) \tilde u^2(s) + 2 B(s) \tilde u(s)\right) ds\right),\] and therefore $ u_x(t,\tilde x) > 0$. Thus, $\beta(t) > 0$. Since $\tilde u(t)$ is a singular closed solution, both $\tilde u(t)$ and $ u_x(t,\tilde x)$ are closed, and then $\beta(0)=\beta(T)=:\beta_0$. Hence we conclude $(1)$.
Since $G_{\beta}(t,\beta)=-u(t,x)<0$, then $(2)$ follows.
To prove $(3)$, we first note that
\begin{equation}\label{ecu:beta_prima}
\beta'(t)=\left(2A(t)\tilde u^2(t)+B(t)\tilde u(t) \right)\beta(t).
\end{equation}
Since $\beta(t) > 0$, we conclude that $\beta'(t) = 0$ if and only if $t = t_1$ or $t = t_2$. Moreover, as $2A(t)\tilde u^2(t)+B(t)\tilde u(t) > 0$ for $t \in [0,t_1) \cup (t_2,T]$, we have that $\beta(t)$ has a maximum at $t_1$ and a minimum at $t_2$.
Finally, writing $\beta_1 = \beta(t_1)$ and $\beta_2 = \beta(t_2)$, we obtain that $G(t,\beta)$ has two changes of sign in $(0,T)$ for every $\beta\in(\beta_2,\beta_1),\ \beta\neq \beta_0$, and no zeros for $\beta\not\in[\beta_2,\beta_1]$.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1]
\clip (-4.5,-1) rectangle (4.5,2);
\draw[-] (-3.5,0) -- (3.5,0);
\draw[-] (-3.5,-1) -- (-3.5,2);
\draw[-] (3.5,-1) -- (3.5,2);
\draw[dashed] (-1,-1) -- (-1,2);
\draw[dashed] (1,-1) -- (1,2);
\draw[dashed] (-3.5,1) -- (3.5,1);
\filldraw (-1,0) circle (2pt);\node[fill=white] at (-1-0.10,-0.55) {$t_1$};
\filldraw ( 1,0) circle (2pt);\node[fill=white] at ( 1+0.10,-0.55) {$t_2$};
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=-3.5:-1,samples=200] ({\x},{-2*(\x+1)*(\x+1)/2.5/2.5/3+2/3+1});
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=-1:1,samples=200] ({\x},{(\x*\x*\x/3-\x)+1});
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=1:3.5,samples=200] ({\x},{2*(\x-1)*(\x-1)/2.5/2.5/3-2/3+1});
\node at (0.2,1.5) {$\beta(t)$};
\node at (-3.5-0.4,1) {$\beta_0$};
\end{tikzpicture}
\end{center}
\caption{Sketch of $\beta(t)$.}\label{fig:beta}
\end{figure}
\end{proof}
Recall that, by Proposition~\ref{prop:bif0},
\[\operatorname{sgn}( u_{xx}(T,\tilde x))= \operatorname{sgn}\left(\int_{0}^{T} F(t,\alpha)G(t,\beta)\,dt \right).\]
Therefore, in order to complete the proof of Theorem~\ref{theo:bif}, it only remains to prove that there exist $\alpha,\beta$ such that the
changes of sign of $F(t,\alpha)$ and $G(t,\beta)$ coincide.
\begin{prop}\label{prop:bif3}
There exist $\alpha,\beta$ such that the changes of sign of $F(t,\alpha)$ and $G(t,\beta)$ coincide. Moreover, for these $\alpha$ and $\beta$,
\[ \int_0^T F(t,\alpha)G(t,\beta)\,dt < 0.\]
\end{prop}
\begin{proof}
Let $\beta(t)$ be the closed smooth function defined
by~\eqref{eq:beta} and write $\beta_0 = \beta(0)$. By
Proposition~\ref{prop:beta}, $\beta(t)$ has a maximum at $t_1$ and a
minimum at $t_2$. Set $\beta_1 = \beta(t_1)$ and $\beta_2 =
\beta(t_2)$, and let $t_0$ be the unique value in $(0,T)$ with
$\beta(t_0) = \beta_0$.
\medskip
We distinguish three cases in accordance with the relative
position of $\alpha(0)=2$, $\alpha(t_0)$, and $\alpha(T) \geq 2$.
\begin{itemize}
\item If $\alpha(t_0) \in (\alpha(0),\alpha(T))$, then take $\alpha =\alpha(t_0)$, $\beta=\beta_0$. Then $F(t,\alpha)$ and $G(t,\beta)$ only change sign in $(0,T)$ at $t_0$.
By Proposition~\ref{prop:alpha}, $F(t,\alpha) < 0$ for $t \in [0,t_0)$ and $F(t,\alpha) > 0$ for $t \in (t_0,T]$. Moreover,
by Proposition~\ref{prop:beta}, $G(t,\beta) > 0$ for $t \in [0,t_0)$ and $G(t,\beta) < 0$ for $t \in (t_0,T]$. Consequently, $F(t,\alpha)G(t,\beta) < 0$
for all $t \in [0,t_0) \cup (t_0,T]$, and the result holds.
\item $\alpha(t_0)<\alpha(0)$. Since $\beta(t)$ has a maximum at $t_1$, and
is strictly monotonic for $t\in(0,t_1) \cup (t_1, t_0)$, there exist two continuous monotonic functions
$T_1,T_2$, defined in $(\beta_0,\beta_1)$ such that $T_1(\beta(t))=t,\ t \in (0,t_1)$ and $T_2(\beta(t))=t,\ t \in (t_1,t_0)$. Notice that
$0 < T_1(\beta)<t_1<T_2(\beta)<t_0,\ \beta \in (\beta_0, \beta_1)$, and that $T_1, T_2$ have opposite monotonicity.
Now, let us define the
continuous function
\[
d(\beta)=\alpha(T_1(\beta))-\alpha(T_2(\beta)),\quad \beta\in (\beta_0,\beta_1).
\]
Since $\alpha(t)\to\pm\infty$ as $t\to t_1^{\pm}$, we get $\lim_{\beta\to\beta_1}d(\beta)=-\infty$.
On the other hand, $d(\beta_0)=\alpha(0)-\alpha(t_0)>0$. By continuity, there exists $\bar \beta$ such that $d(\bar \beta)=0$. For $\alpha=\alpha(T_1(\bar \beta))$ and $\beta=\bar \beta$, $F,G$
have the same changes of sign: exactly two, and both in $(0,t_0)$.
On the one hand, $\alpha=\alpha(T_1(\bar \beta))<\alpha(0)=2$, so that $F(t,\alpha)>0$ for $t$ close to zero by Proposition \ref{prop:alpha}. On the other hand, for $t$ close to zero, $\beta(t)<\beta=\bar \beta$, so that $G(t,\beta)<0$ by Proposition \ref{prop:beta}. Consequently, $F(t,\alpha)G(t,\beta)\leq 0$, and
\[ \int_0^T F(t,\alpha)G(t,\beta)\,dt <0.\]
\item $\alpha(t_0)>\alpha(T)$. Since $\beta(t)$ has a minimum at $t_2$, and
is strictly monotonic for $t \in(t_0,t_2) \cup (t_2,T)$, then there exist two continuous monotonic functions $T_1,T_2$, defined in $(\beta_2,\beta_0)$,
such that $T_1(\beta(t))=t,\ t \in (t_0,t_2)$ and $T_2(\beta(t))=t,\ t \in (t_2,T)$. Notice that
$t_0 < T_1(\beta)<t_2<T_2(\beta)<T,\ \beta \in (\beta_2, \beta_0)$, and that $T_1,T_2$ have opposite monotonicity. Now, if $d(\beta) = \alpha(T_1(\beta))-\alpha(T_2(\beta)),\ \beta \in (\beta_2,\beta_0)$, then $\lim_{\beta\to\beta_2} d(\beta)=-\infty$ and $d(\beta_0)=\alpha(t_0)-\alpha(T)>0$, and we conclude as in the previous case.
\end{itemize}
\end{proof}
\section{Sufficient Criteria}
The following results establish sufficient conditions for $(C_2)$ and $(C_3)$ to be satisfied without assuming knowledge of the positive singular closed solutions of \eqref{eq:Abel}.
The first result is an adaptation of Proposition 5 of \cite{BFG2} to the case $\gamma=1$. In order to obtain $(C_2)$, we define
\[
\phi(t)=-B(t)/(2A(t)).
\]
By condition $(C_1)$ and Remark \ref{rem:signos}, $\phi(t)\geq 0$ if and only if $t\in [0,t_{B_1}]\cup [t_{A},t_{B_2}]$.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=3]
\clip (-0.5,-1) rectangle (2.5,1);
\draw[-] (0,0) -- (2,0);
\draw[-] (0,-1) -- (0,1);
\draw[-] (2,-1) -- (2,1);
\draw[dashed] (.3,-1) -- (.3,1); \node[fill=white] at (.3+0.05,-0.2) {\small $t_{B_1}$};
\draw[dashed] (.8,-1) -- (.8,1); \node[fill=white] at (0.8+0.05,-0.2) {\small $t_{A}$};
\draw[dashed] (1.4,-1) -- (1.4,1); \node[fill=white] at (1.4+0.05,-0.2) {\small $t_{B_2}$};
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=0.01:0.79,samples=200] ({\x},{-((\x-1.4)*(\x-0.3))/(20*(\x-0.8)*\x)});
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=0.81:2,samples=200] ({\x},{-((\x-1.4)*(\x-0.3))/(20*(\x-0.8)*\x)});
\node at (1.75,0.75) {\small $\phi(t)$};
\filldraw (.3,0) circle (1pt);
\filldraw (.8,0) circle (1pt);
\filldraw (1.4,0) circle (1pt);
\end{tikzpicture}
\end{center}
\caption{Sketch of $\phi(t)$.}\label{fig:2}
\end{figure}
Now, we impose some sufficient conditions in order that any positive bounded solution $u(t,x)$ crosses the graph of $\phi$ in at most two points. These conditions are quite restrictive, but can be verified computationally.
\begin{prop}[\cite{BFG2}]\label{prop:upper}
Let $u(t,x)$ be a positive singular closed solution of \eqref{eq:Abel} and suppose that $(C_1)$ holds.
Let $J_1=(0,t_{B_1})$ and $J_2=(t_{A},t_{B_2})$. If the function $P$, defined in \eqref{eq:upper}, has at most one zero in each $J_i,\ i = 1,2$, then $u(t,x)-\phi(t)$ has a unique simple zero in each $J_i, i=1,2$. I.e., condition $(C_2)$ holds.
\end{prop}
\begin{proof}
Firstly, we observe that
\begin{align*}
\phi' - B \phi^2 - A \phi^3 & = \frac{B A '-B 'A }{2A ^2} - \frac{ B ^3}{4A ^2} + \frac{ B ^3}{8A ^2} \\ & = \frac{4(B A '-B 'A ) - B ^3 }{8A ^2} = \frac{P}{8 A^2}.
\end{align*}
If $\phi' - B \phi^2 - A \phi^3$ has no zeros in $J_i$, then $\phi$ is an upper or a lower solution of \eqref{eq:Abel}, and therefore the graphs of $u(t,x)$ and $\phi$ intersect in at most one point.
If $\phi' - B \phi^2 - A \phi^3$ has one zero in $J_i$, then $ \phi$ changes from an upper (resp.~lower) solution to a lower (resp.~upper) solution of \eqref{eq:Abel} in that interval. In any case, since $u(t,x)$ is bounded and the graph of $\phi$ in $J_i$ goes from zero to infinity, $u(t,x)$ intersects $\phi$ at one point in $J_i$.
Therefore, $2 A(t) u(t,x) + B(t)$ has at most one zero in $(0,t_{B_1})$ and at most one zero in $(t_A,t_{B_2})$.
Finally, since \[2 A(0) u(0,x) + B(0) = B(0) > 0,\] \[2 A(t_{B_1}) u(t_{B_1},x) + B(t_{B_1}) = A(t_{B_1}) u(t_{B_1},x) < 0,\] we have that $2 A(t) u(t,x) + B(t)$ has at least one zero in $(0,t_{B_1}),$ and since
\[2 A(t_A) u(t_A,x) + B(t_A) = B(t_A) < 0,\] \[2 A(t_{B_2}) u(t_{B_2},x) + B(t_{B_2}) = A(t_{B_2}) u(t_{B_2},x) > 0,\] we have that $2 A(t) u(t,x) + B(t)$ has at least one zero in $(t_A,t_{B_2}).$
\end{proof}
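The computation at the beginning of the proof can be checked symbolically. The following sketch (ours, using \texttt{sympy}; it is illustrative and not part of the argument) verifies the identity $\phi' - B \phi^2 - A \phi^3 = P/(8A^2)$ for arbitrary smooth $A,B$:
\begin{verbatim}
# Sketch: symbolic check of phi' - B*phi**2 - A*phi**3 = P/(8*A**2)
# for phi = -B/(2*A) and P = 4*(B*A' - B'*A) - B**3.
import sympy as sp

t = sp.symbols('t')
A, B = sp.Function('A')(t), sp.Function('B')(t)

phi = -B / (2 * A)
P = 4 * (B * sp.diff(A, t) - sp.diff(B, t) * A) - B**3
lhs = sp.diff(phi, t) - B * phi**2 - A * phi**3
assert sp.simplify(lhs - P / (8 * A**2)) == 0  # the identity holds
\end{verbatim}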
We now obtain a sufficient condition for $(C_3)$ to hold, which can be
computed. Note that $(C_3)$ is equivalent to imposing that the graph
of every singular positive closed solution $\tilde u$ is
contained in the region $v(t,x)< 0$, where $v(t,x)$ is the function
defined in \eqref{ecu:v}. In order to control the intersections of the
solutions with the complementary region $v(t,x)\geq 0$, consider the
derivative of the solutions with respect to the vector field
\eqref{eq:Abel}, i.e.,
\[
\dot v(t,x)= v_t(t,x)+v_x(t,x)(A(t)x^3+B(t)x^2).
\]
Controlling the common zeros of $v$ and $\dot v$, we obtain a sufficient condition for $(C_3)$ to hold.
Let
\[
\begin{split}
v^{-1}(0)&=\{(t,x): 0<t<T, x>0, v(t,x)=0 \},\\
\dot v^{-1}(0)&= \{(t,x): 0<t<T, x>0, \dot v(t,x)=0 \},
\end{split}\]
and denote $S = [0,T]\times [0,\infty)$.
\begin{prop}\label{PropC3}
If $v(t,0)=4\left(A'(t)B(t)-A(t)B'(t)\right)<0$ for all $t\in[0,T]$, $v(0,x)<0$ and $v(T,x)<0$ for all $x\geq 0$, and $v^{-1}(0) \cap \dot v^{-1}(0)=\emptyset$, then condition $(C_3)$ holds.
\end{prop}
\begin{proof}
Since $v^{-1}(0) \cap \dot v^{-1}(0)=\emptyset$, the set $v^{-1}(0)$ has no
singular points, so it consists of regular curves. Compactifying the region $[0,T]\times [0,+\infty)$ with a point at infinity, we may assume that these curves are closed, so that, by the Jordan curve theorem, each of them
divides the space into two regions.
Since $v(t,0),v(0,x),v(T,x)<0$ for all $t\in[0,T]$ and $x\geq 0$, there
is a connected region $W$ contained in $\{v(t,x)<0\}$ containing the points of the form
$(t,0),(0,x),(T,x)$, for all $t\in[0,T]$ and $x\geq 0$.
From the hypothesis, we have that
$$
\dot v(t,x)=\langle (v_t(t,x),v_x(t,x)), (1,A(t)x^3+B(t)x^2) \rangle
$$
has definite sign on $v^{-1}(0)$,
where $\langle\cdot,\cdot\rangle$ is the ordinary scalar product in $\mathbb{R}^2$ and $(1,A(t)x^3+B(t)x^2)$ is the vector field defined by \eqref{eq:Abel}. Hence the vector field crosses each regular curve contained in $v^{-1}(0)$ in the same direction at every point, so that one of the regions into which the curve divides the space is positively invariant and the other one is negatively invariant.
In any case, since any bounded solution $u(t,x)$ of \eqref{eq:Abel} starts and ends in the connected region $W$, its graph does not intersect $v^{-1}(0)$, and consequently condition $(C_3)$ holds.
\end{proof}
The following result provides a simple sufficient condition that implies $v^{-1}(0) \cap \dot v^{-1}(0)=\emptyset$, which will be used in the examples.
\begin{coro}\label{CorC3}
Let
\begin{equation}\label{eq:elim}
Q(t)=B(t) (A(t) B''(t)-B(t)A''(t)) + 3 B'(t)(B(t)A'(t) - A(t)B'(t)).
\end{equation}
If $Q(t)$ has no zeros in $(0,T)$ or $v(\bar t,x)=0$ does not have positive solutions for each zero $\bar t$ of $Q(t)$ in $(0,T)$
then $v^{-1}(0) \cap \dot v^{-1}(0)=\emptyset$.
\end{coro}
\begin{proof}
Observe that
\[
4\,Q(t)=\left(2 A(t) B(t) x^2+B^2(t) x+3 B'(t)\right) v(t,x)-B(t)\, \dot v(t,x).
\]
Thus, if $v(t,x) = \dot v(t,x) = 0$ then $Q(t) = 0$.
So, $v^{-1}(0) \cap \dot v^{-1}(0)=\emptyset$, since otherwise there would exist $0<\bar t<T$ and $\bar x>0$ such that $v(\bar t,\bar x)=\dot v(\bar t,\bar x)=Q(\bar t)=0$, contradicting the hypothesis.
\end{proof}
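The constant in the identity above can be confirmed symbolically; the following sketch (ours, assuming the expression $v(t,x)=B(t)(2A(t)x+B(t))^2+P(t)$ recalled in Section~\ref{Sect4}) checks it for arbitrary smooth $A,B$:
\begin{verbatim}
# Sketch: symbolic check of
#   4*Q = (2*A*B*x**2 + B**2*x + 3*B')*v - B*v_dot,
# with v = B*(2*A*x + B)**2 + P and v_dot = v_t + v_x*(A*x**3 + B*x**2).
import sympy as sp

t, x = sp.symbols('t x')
A, B = sp.Function('A')(t), sp.Function('B')(t)
Ap, Bp = sp.diff(A, t), sp.diff(B, t)

P = 4 * (Ap * B - A * Bp) - B**3
v = B * (2 * A * x + B)**2 + P
v_dot = sp.diff(v, t) + sp.diff(v, x) * (A * x**3 + B * x**2)

Q = B * (A * sp.diff(B, t, 2) - B * sp.diff(A, t, 2)) \
    + 3 * Bp * (B * Ap - A * Bp)
lhs = (2 * A * B * x**2 + B**2 * x + 3 * Bp) * v - B * v_dot
assert sp.expand(lhs - 4 * Q) == 0
\end{verbatim}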
\section{Example of Application}\label{Sect4}
In this section, we prove Theorem~\ref{theo:example}, i.e., that \eqref{eq:Abel} has at most two positive closed solutions
when $T=1$ and \[A(t) = t(t-t_A)\quad \text{and}\quad B(t) = (t-t_B)(t-1),\quad t_A,t_B\in\mathbb{R}.\]
In either of the following cases, known methods show that
\eqref{eq:Abelquadquad} has at most one simple positive closed solution:
\begin{enumerate}
\item\label{Pl} $t_A\not\in(0,1)$ or $t_{B}\not\in(0,1)$.
\item\label{AGG} $t_A\in(0,1), t_{B}\in(0,1)$, and $t_A \in (0,t_{B})$.
\end{enumerate}
In case \eqref{Pl}, either $A$ or $B$ has no zeros in $(0,1)$:
if $A$ has no zeros, it was proved in \cite{Pliss} that \eqref{eq:Abelquadquad} has at most one positive closed solution,
while if $B$ has no zeros, the proof was given in \cite{GL}. In case \eqref{AGG}, it was proved in
\cite{AGG} that if for some $\alpha,\beta\in\mathbb{R}$ the function $\alpha A+\beta B$ does not
vanish identically and does not change sign in $(0,1)$ then the Abel equation has at most one
positive closed solution. Hence, consider a linear combination of the form $\alpha A(t)+B(t)$, and let $d(\alpha)$ denote its discriminant as a quadratic polynomial in $t$; then $d(\alpha)$ is a degree-two polynomial in $\alpha$ with leading coefficient $t_A^2$. Therefore, there exists
$\alpha$ such that $d(\alpha)\leq0$ (and so $\alpha A(t)+B(t)$ does not change sign) if and only if the discriminant of $d$ is non-negative. But, up to a positive constant factor, this discriminant is
\[
\operatorname{Disc}(d)=-(1-t_A)t_B(t_A-t_B),
\]
which is non-negative if and only if $t_A \in (0,t_B]$, and the result follows.
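This discriminant computation is easy to reproduce; the following sketch (ours) recovers it with \texttt{sympy}, where the computed discriminant differs from the displayed one only by the positive factor $16$:
\begin{verbatim}
# Sketch: discriminant argument for case (2); only signs matter here.
import sympy as sp

t, tA, tB, alpha = sp.symbols('t t_A t_B alpha')
A = t * (t - tA)
B = (t - tB) * (t - 1)

d = sp.discriminant(alpha * A + B, t)              # quadratic in alpha
assert sp.expand(d.coeff(alpha, 2) - tA**2) == 0   # leading coefficient t_A**2
D = sp.discriminant(d, alpha)
assert sp.expand(D - 16 * tB * (tA - 1) * (tA - tB)) == 0
# D = -16*(1 - t_A)*t_B*(t_A - t_B): non-negative iff t_A <= t_B.
\end{verbatim}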
\medskip
Hence, to prove Theorem~\ref{theo:example} we may assume that $0 < t_B < t_A < 1$.
We shall divide the proof into two parts, first proving that \eqref{eq:Abel} satisfies the hypotheses of Theorem~\ref{theo:bif}, and
then using Theorem~\ref{theo:bifrotated} to show
that there are at most two positive closed solutions.
\subsection{Semistability of the singular solutions} As it is immediate to check that $(C_1)$ holds, it only remains to verify that $(C_2)$ and $(C_3)$ hold to apply Theorem~\ref{theo:bif}.
Let us see that $(C_2)$ holds. By Proposition \ref{prop:upper}, it suffices to prove
that $P(t)=4(B (t)A '(t)-B '(t)A (t)) - B ^3(t)$ has at most one zero
in each of the intervals $J_1=(0,t_{B})$, $J_2=(t_{A},1)$. First we need the following lemma.
\begin{lema}\label{lema3}
The polynomial $A'(t)B(t)-A(t)B'(t)$ is negative for every $t$.
\end{lema}
\begin{proof}
$A'(t)B(t)-A(t)B'(t)$ is a quadratic polynomial in $t$ with coefficients in $\mathbb{R}[t_A,t_B]$ whose discriminant and leading coefficient are $-(1-t_A)t_B(t_A-t_B) < 0$ and $t_A-t_B-1 < 0$, respectively.
\end{proof}
\begin{prop}\label{prop1}
The function $P(t)$ is negative in $(0,t_B) \cup (t_A,1)$. In particular, condition $(C_2)$ holds.
\end{prop}
\begin{proof}
By hypothesis, $B(t) > 0$ for all $t \in (0,t_B)$, and, by Lemma \ref{lema3}, we have that $A'(t)B(t)-A(t)B'(t) < 0$ for all $t$. Thus, \[P(t) = 4(A'(t)B(t)-A(t)B'(t)) - B^3(t) < 0\] for all $t \in (0,t_B)$.
\medskip
Let us assume that $t_A < t < 1$. Observe that
\[P(t) = -(t^3(t-t_A)^3 + (1-t)(2t-t_A)(t-t_B) + t(t-t_A)).\]
Since $t^3(t-t_A)^3 > 0, (1-t)(2t-t_A)(t-t_B) > 0$, and $t(t-t_A) > 0$ for all $t \in (t_A,1)$, we conclude that $P(t)$ is negative for all $t \in (t_A,1)$.
Finally, since $P(t)$ has no zeros in $(0,t_B) \cup (t_A, 1)$, condition $(C_2)$ holds by Proposition \ref{prop:upper}.
\end{proof}
Now let us prove that $(C_3)$ is fulfilled by using Proposition~\ref{PropC3} and Corollary~\ref{CorC3}.
Recall that $v(t,x)=B(t)(2A(t)x+B(t))^2+P(t)$.
By Lemma~\ref{lema3}, we have that $v(0,x)<0$ and $v(1,x)<0$ for all $x\geq 0$, and $v(t,0)<0$ for all $t\in[0,1]$.
So it suffices to show that $v(\bar{t},x)=0$ has no positive solution for each zero $\bar{t}$ of $Q(t)$ in the interval $(0,1)$.
\medskip
In our setting, the function $Q(t)$ in \eqref{eq:elim} is the following cubic polynomial in $t$ with coefficients in $\mathbb{R}[t_A,t_B]$:
\begin{align*}
Q(t) = & -4(1-t_A+t_B)\, t^3+ ((t_B^2+12t_B+1)-(1+t_B)t_A)\, t^2\\ & -2t_B (t_B+4t_A+1)\, t+ t_B(3t_A(t_B+1)-2t_B).
\end{align*}
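The displayed cubic can be recovered mechanically from \eqref{eq:elim}; a short \texttt{sympy} sketch (ours):
\begin{verbatim}
# Sketch: substituting A = t*(t - t_A), B = (t - t_B)*(t - 1) into Q.
import sympy as sp

t, tA, tB = sp.symbols('t t_A t_B')
A = t * (t - tA)
B = (t - tB) * (t - 1)

Q = B * (A * sp.diff(B, t, 2) - B * sp.diff(A, t, 2)) \
    + 3 * sp.diff(B, t) * (B * sp.diff(A, t) - A * sp.diff(B, t))
Q_displayed = (-4 * (1 - tA + tB) * t**3
               + ((tB**2 + 12 * tB + 1) - (1 + tB) * tA) * t**2
               - 2 * tB * (tB + 4 * tA + 1) * t
               + tB * (3 * tA * (tB + 1) - 2 * tB))
assert sp.expand(Q - Q_displayed) == 0
\end{verbatim}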
We claim that $Q$ has exactly one zero in $(0,1)$. To prove this, we shall apply Sturm's theorem (\cite[Theorem 2.50]{ARAG}), but first we need to introduce some additional notation.
Let $\mathcal{S}$ be the following four-term sequence:
\[\mathcal{S}_0 = Q(t),\]
\[\mathcal{S}_1 = Q'(t),\]
\[\mathcal{S}_2 = -\operatorname{Rem}(\mathcal{S}_0,\mathcal{S}_1),\]
\[\mathcal{S}_3 = -\operatorname{Rem}(\mathcal{S}_1,\mathcal{S}_2),\]
where $\operatorname{Rem}(\mathcal{S}_i,\mathcal{S}_{i+1})$ is the remainder of dividing $\mathcal{S}_i$ by $\mathcal{S}_{i+1}$ as polynomials in $t$.
The sequence $\mathcal{S}$ is the so-called signed remainder sequence of $Q(t)$ and $Q'(t)$ (see \cite[Definition 1.7]{ARAG}).
\begin{lema}\label{lema2}
With the above notation, $\mathcal{S}_3 > 0$.
\end{lema}
\begin{proof}
A direct computation shows that $\mathcal{S}_3$ is equal to \[36(1-t_A)(1-t_B)^2 t_B(t_A-t_B)(1-t_A+t_B)\, \frac{f(t_A,t_B)}{g^2(t_A,t_B)}\]
where
\begin{align*}
f(t_A ,t_B)
=\ & 3t_B^4-6t_At_B^3+3t_A^2t_B^2+506t_At_B^2-506t_B^2 -\\ & -506t_A^2t_B+506t_At_B+3t_A^2-6t_A+3
\end{align*}
and
\begin{align*}
g(t_A ,t_B) =\ & t_B^4 - 2t_At_B^3 + t_A^2t_B^2 - 98t_At_B^2 + 98t_B^2 + \\ & + 98t_A^2t_B - 98t_At_B + t_A^2 - 2t_A+1 .
\end{align*}
Since $0 < t_B < t_A < 1$, we have that the sign of $\mathcal{S}_3$ is the same as the sign of $f(t_A,t_B)$. Now, it suffices to observe that
\begin{align*}
f(t_A,t_B) =\ & (3t_B(t_A-t_B)+506(1-t_A))t_B(t_A-t_B)+3(1-t_A)^2 > 0
\end{align*}
to get the desired result.
\end{proof}
\begin{prop}\label{prop2}
The cubic polynomial $Q(t)$ has exactly one zero in $(0,1)$. Moreover, this zero lies in $(t_B,1)$.
\end{prop}
\begin{proof}
Let us see that $Q(t)$ has exactly one root in $(t_B,1)$.
On the one hand, the number of sign variations of $\mathcal{S}$ at $t_B$, $\operatorname{Var}(\mathcal{S},t_B)$, is $2$. Indeed, when evaluating $\mathcal{S}$ at $t_B$ one has
\[\mathcal{S}_0 = Q(t_B) = 3(1-t_B)^2t_B(t_A-t_B) > 0,\]
\[\mathcal{S}_1 = Q'(t_B) = -10(1-t_B)t_B(t_A-t_B) < 0,\]
\[\mathcal{S}_3 > 0,\]
where the last inequality follows by Lemma \ref{lema2}. Notice that
the number of sign variations at $t_B$ is equal to two regardless of
the sign of $\mathcal{S}_2$ at $t_B$. On the other hand, the number of
sign variations of $\mathcal{S}$ at $1$,
$\operatorname{Var}(\mathcal{S},1)$, is $1$. Indeed, when evaluating
$\mathcal{S}$ at $1$ one has
\[\mathcal{S}_0 = Q(1) = -3(1-t_A)(1-t_B)^2 < 0,\]
\[\mathcal{S}_1 = Q'(1) = -10(1-t_A)(1-t_B) < 0,\]
\[\mathcal{S}_3 > 0,\]
where the last inequality follows by Lemma \ref{lema2}. Again notice
that the number of sign variations at $1$ is equal to one regardless of the sign
of $\mathcal{S}_2$ at $1$.
Now, by Sturm's theorem (\cite[Theorem 2.50]{ARAG}) we conclude that the number of real roots of $Q(t)$ in $(t_B,1)$ is equal to
\[\operatorname{Var}(\mathcal{S},t_B) - \operatorname{Var}(\mathcal{S},1) = 1.\]
\medskip
Next we prove that $Q(t)$ has no roots in $(0,t_B]$ by using the
Budan-Fourier theorem (\cite[Theorem 2.35]{ARAG}). Let
$\operatorname{Der}(Q)$ be the list $Q(t), Q'(t), Q''(t),$
$Q'''(t)$. Let us compute the number of sign variations of
$\operatorname{Der}(Q)$ at the borders of the intervals.
\begin{itemize}
\item $\operatorname{Var}(\operatorname{Der}(Q),0)=3$. Indeed, since $0 < t_B < t_A < 1$, we have that
\[Q(0)=t_B(t_A(3t_B+1)+2(t_A-t_B)) > 0,\]
\[Q'(0) = -2t_B(4t_A+t_B+1) < 0,\]
\[
Q''(0) = (2t_B+2)(1-t_A)+2t_B(t_B+11) > 0,
\]
\[Q'''(0) = -24(1+t_B-t_A) < 0.\]
\item $\operatorname{Var}(\operatorname{Der}(Q),t_B)=3$. Indeed, since $0 < t_B < t_A < 1$ and $t_B<1/2$, we have that
\[Q(t_B) > 0,\]
\[Q'(t_B) < 0,\]
\[
Q''(t_B) = 22t_B(t_A-t_B)+2(1-t_A) > 0
\]
\[Q'''(t_B) = Q'''(0) < 0.\]
\end{itemize}
Therefore, by the Budan-Fourier theorem, we obtain that the number of
roots of $Q$ in $(0,t_B]$ is less than or equal
to \[\operatorname{Var}(\operatorname{Der}(Q),0) -
\operatorname{Var}(\operatorname{Der}(Q),t_B) = 0,\] i.e.,
$Q(t)$ has no roots in $(0,t_B]$, so we conclude.
\end{proof}
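The root count is also easy to confirm numerically; the following sketch (ours, at two arbitrarily chosen samples with $0 < t_B < t_A < 1$) uses \texttt{sympy}'s exact real-root isolation:
\begin{verbatim}
# Sketch: Q has exactly one root in (0,1), located in (t_B,1),
# at two sample parameter choices.
import sympy as sp

t = sp.symbols('t')
for tA, tB in [(sp.Rational(3, 4), sp.Rational(1, 4)),
               (sp.Rational(9, 10), sp.Rational(2, 5))]:
    Q = (-4 * (1 - tA + tB) * t**3
         + ((tB**2 + 12 * tB + 1) - (1 + tB) * tA) * t**2
         - 2 * tB * (tB + 4 * tA + 1) * t
         + tB * (3 * tA * (tB + 1) - 2 * tB))
    roots = [float(r) for r in sp.real_roots(sp.Poly(Q, t))]
    inside = [r for r in roots if 0 < r < 1]
    assert len(inside) == 1 and inside[0] > float(tB)
\end{verbatim}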
\begin{prop}\label{prop3}
Let $\bar t \in (t_B,1)$ be the unique real root of $Q(t)$ in $(0,1)$.
Then \[v(\bar t, x) = 0\] has no positive solutions. In particular, $(C_3)$ holds.
\end{prop}
\begin{proof}
First we note that
\[
v(\bar t, x) = 4 A^2(\bar t)B(\bar t)x^2+4 A(\bar t)B^2(\bar t)x+4 (A'(\bar t)B(\bar t)-A(\bar t)B'(\bar t)).
\]
We distinguish three cases:
\begin{enumerate}
\item If $\bar t \in (t_B,t_A)$ then $A^2(\bar t) B(\bar t) < 0$, $A(\bar t) B^2(\bar t) < 0$, and $A'(\bar t) B(\bar t)-B'(\bar t) A(\bar t) < 0$ by Lemma \ref{lema3}. So all the coefficients in $x$ of $v(\bar t,x)$ are negative, and we conclude that $v(\bar t, x)$ has no positive roots.
\item If $\bar t = t_A$ then $v(\bar t, x) = 4A'(t_A)B(t_A)\neq0$.
\item If $\bar t \in (t_A,1)$ then $P(\bar t) < 0$ by Proposition \ref{prop1}. Now we only need to observe that the discriminant of $v(\bar t,x)$ is equal to $- A^2(\bar t) B(\bar t) P(\bar t) < 0$ to conclude that $v(\bar t, x)$ has no real roots.
\end{enumerate}
By Lemma~\ref{lema3}, $A'(t)B(t)-A(t)B'(t)<0$ for all $t\in(0,1)$. Also, $v(0,x) = -4t_A t_B <0$, $v(1,x) = -4(1-t_A)(1-t_B)<0$, and $v^{-1}(0) \cap \dot v^{-1}(0)=\emptyset$ by Corollary \ref{CorC3}. So, by Proposition \ref{PropC3}, we are done.
\end{proof}
Since $(C_1)$, $(C_2)$, $(C_3)$ hold, Theorem~\ref{theo:bif} implies
that if $u(t,\tilde x)$ is a singular solution of \eqref{eq:Abel} then $u_{xx}(T,\tilde x)<0$.
\subsection{Number of limit cycles}
Developing $u(t,x)$ in power series with respect to $x$ and evaluating at $t=1$ (see, e.g., \cite{ABF} or \cite{BFG}),
\[
\begin{split}
u(1,x) = & x + \left(\int_0^1 B(t)\,dt\right) x^2 + \left(\int_0^1 A(t)\,dt\right) x^3
\\ & + \left(\int_0^1 A(t)\int_0^t B(s)\,ds\,dt\right) x^4 + \mathcal{O}\\
= & x + \frac{3 t_B-1}{6} x^2 + \frac{2 - 3 t_A}{6} x^3 + \frac{ -16 + 21 t_A + 54 t_B - 75 t_A t_B}{360} x^4 + \mathcal{O},
\end{split}
\]
where $\mathcal{O}$ denotes higher order terms in $x$, $t_B-1/3$, and $t_A-2/3$.
In particular, if $t_A=2/3$ and $t_B=1/3$ then $u(1,x)-x=-x^4/540+\mathcal{O}(x^5)$, while
the signs of the coefficients of $x^2$ and $x^3$ depend on $t_B$ and $t_A$ respectively. Hence, there is a double Hopf bifurcation of the origin giving rise to two positive closed solutions for $t_A<2/3$ and $t_B<1/3$.
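The coefficients of the expansion are straightforward iterated integrals; the following sketch (ours) verifies them, together with the value $-1/540$ at the critical parameters:
\begin{verbatim}
# Sketch: checking the displayed Taylor coefficients of the return map.
import sympy as sp

t, s, tA, tB = sp.symbols('t s t_A t_B')
A = t * (t - tA)
B = (t - tB) * (t - 1)

c2 = sp.integrate(B, (t, 0, 1))
c3 = sp.integrate(A, (t, 0, 1))
c4 = sp.integrate(A * sp.integrate(B.subs(t, s), (s, 0, t)), (t, 0, 1))

assert sp.simplify(c2 - (3 * tB - 1) / 6) == 0
assert sp.simplify(c3 - (2 - 3 * tA) / 6) == 0
assert sp.simplify(c4 - (-16 + 21*tA + 54*tB - 75*tA*tB) / 360) == 0
crit = {tA: sp.Rational(2, 3), tB: sp.Rational(1, 3)}
assert c4.subs(crit) == sp.Rational(-1, 540)
\end{verbatim}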
To prove that the maximum number of positive closed solutions is two,
we shall apply Theorem~\ref{theo:bifrotated}. To this end, let us think of $-t_A$ as parameter $\lambda \in (\lambda_1, \lambda_2)$, with $\lambda_1 = -1$ and $\lambda_2 = -t_B$, so that
\[F(t,x,\lambda)=t(t+\lambda)x^3+(t-t_B)(t-1)x^2,\] and
$F_\lambda(t,x,\lambda) = t x^3>0,$ for all $t\in(0,1)$ and $x>0$.
For $\lambda=\lambda_2$, as was mentioned at the beginning of the section,
\eqref{eq:Abelquadquad} has at most one positive closed solution.
Since for $t_B\neq 1/3$ the stability of the origin does not change, Theorem~\ref{theo:bifrotated} implies that
\eqref{eq:Abelquadquad} has at most two positive closed solutions for every $t_B\neq 1/3$ and $t_B<t_A<1$.
To conclude, note that $F(t,x,\lambda)$ is monotonic with respect to $t_B$,
and that singular positive closed solutions have the semistability given by Theorem~\ref{theo:bif},
so that if equation \eqref{eq:Abelquadquad} had more than two positive closed solutions for $t_B=1/3$ and some $t_B<t_A<1$, then
a small perturbation of $t_B$ would keep or increase that number of
positive closed solutions, contradicting the upper bound of two positive closed solutions
for $t_B\neq 1/3$.
\section{Linear Trigonometric Coefficients}\label{Sect5}
Consider the Abel equation \eqref{eq:Abel_trigonometric_linear}, i.e.,
$x'=A(t)x^3+B(t)x^2$, where
\[
A(t)=a_0+a_1\sin t+a_2\cos t\quad \text{and}\quad B(t)=b_0+b_1\sin t+b_2\cos t,
\]
with $a_i,b_i\in\mathbb{R},\ i=0,1,2$. We prove that
Theorem~\ref{theo:bif} holds in a region where two positive limit cycles bifurcate from the origin,
obtaining an upper bound of two positive limit cycles.
\medskip
Equation \eqref{eq:Abel_trigonometric_linear} has at most one simple positive limit cycle when $A$ or $B$
has definite sign~\cite{Pliss,GL},
when there is a linear combination of $A,B$ having definite sign~\cite{AGG},
or when the coefficients $a_i,b_i$, $i=0,1,2$, belong to certain regions~\cite[Theorem 1.2]{BFG}, in particular
when $a_0 b_0=0$.
Note that the first condition corresponds to $A$ or $B$ having at most one zero in $[0,2\pi)$. Let us check that
the second condition holds whenever
$A$ or $B$ has at most one zero in $[0,2\pi)$, or when there is no zero of $A$ (resp.~$B$)
between the two zeros of $B$ (resp.~$A$). Therefore, we may assume that $A,B$ have exactly two simple zeros in $[0,2\pi)$
which are interleaved.
\begin{prop}
Assume that $A$ or $B$ has at most one zero in $[0,2\pi)$, or that there is no zero of $A$ (resp.~$B$)
between the two zeros of $B$ (resp.~$A$). Then there exist $\alpha,\beta\in\mathbb{R}$
such that $\alpha A(t)+\beta B(t)\geq 0$ for all $t\in\mathbb{R}$.
\end{prop}
\begin{proof}
If $A$ has at most one zero in $[0,2\pi)$ then $A(t)$ does not change sign in $[0,2\pi)$,
so the result follows by choosing $\alpha=\pm 1$ and $\beta =0$. The same argument applies if $B$ has at most one zero in $[0,2\pi)$.
Assume now that $A,B$ have two zeros in $(0,2\pi)$, and that there is no zero of $B$ between the zeros of $A$
(the other case is analogous). The change of variables $z = \tan(t/2)$ transforms $A,B$
into rational functions with denominator $1+z^2$ and numerator a second-degree polynomial. Moreover, the
relative positions of the roots of $A,B$ are preserved.
So we may assume that, after the change of variables, $A$ and $B$ become
$\bar A(z)=a(z-z_{A_1})(z-z_{A_2})$ and $\bar B(z)=b(z-z_{B_1})(z-z_{B_2})$, respectively. Then $\alpha \bar A+\bar B$ has definite sign if and only if $d(\alpha):=\operatorname{Disc}(\alpha \bar A+\bar B)\leq0$. Since $d(\alpha)$ is a degree two polynomial in $\alpha$ with positive leading coefficient, there exists $\alpha$ such that $d(\alpha)\leq0$ if and only if $\operatorname{Disc}(d(\alpha))\geq0$. From
$$
\operatorname{Disc}(d(\alpha))=16 a^2 b^2 (z_{A_1} - z_{B_1}) (z_{A_2} - z_{B_1}) (z_{A_1} - z_{B_2}) (z_{A_2} - z_{B_2}),
$$
we conclude.
\end{proof}
From Remark \ref{rem:signos}, we may assume that $A(0)=0$, $A'(0) <0 $, and $B(0) > 0$, since the remaining cases can be studied similarly. Moreover, rescaling $x$, it is not restrictive to assume $A'(0)=-1$. Hence, in what follows we shall consider the equation
\begin{equation}\label{eq:Abellinear2}
x'=\left(a_0-\sin t-a_0\cos t\right)x^3 + \left(b_0+b_1\sin t+b_2\cos t \right)x^2,
\end{equation}
where $b_0+b_2>0$.
Developing the solution of \eqref{eq:Abellinear2} in power series, we obtain
\begin{equation}\label{expandsolution}
\begin{split}
u(2\pi,x)=&x+2 b_0 \pi x^2
+ (2 a_0 \pi + 4 b_0^2 \pi^2) x^3 \\
&+ \pi (3 a_0 b_1 - b_2 + 8 b_0^3 \pi^2 + 2 b_0 (1 + 5 a_0 \pi)) x^4+\mathcal{O}(x^5).
\end{split}
\end{equation}
In particular, there is a change of stability when $b_0=0$ or $a_0=0$,
whereas when $a_0=b_0=0$ we have that $u(2\pi,x)<x$ for $x>0$ close to the origin, which implies
that at least two limit cycles bifurcate from the origin for $a_0>0$ and $b_0<0$ small.
Actually, when $a_0=b_0=0$, $u(2\pi,x)<x$ for $x>0$ whenever $u(2\pi,x)$ is defined, as the following result establishes.
\begin{prop}{\cite[Theorem~2.4]{BT}}\label{prop:nosolutions}
For $a_0=b_0=0$, the Abel equation~\eqref{eq:Abellinear2}
has no positive limit cycles. Moreover, $u(2\pi,x)<x$
for any $x>0$ such that $u(t,x)$ is defined for $t\in[0,2\pi]$.
\end{prop}
Let $u(t,x,a_0,b_0)$ be the solution of~\eqref{eq:Abel_trigonometric_linear}
determined by $u(0,x,a_0,b_0)=x$.
Note that the family \eqref{eq:Abellinear2} is monotonic with respect to both $a_0$ and $b_0$, so that the same holds for $u(t,x,a_0,b_0)$. In particular, we obtain the following result.
\begin{coro}
If $a_0,b_0<0$ then \eqref{eq:Abellinear2} has no positive
limit cycles.
\end{coro}
Now we verify that \eqref{eq:Abel_trigonometric_linear} satisfies the hypotheses of Theorem~\ref{theo:bif}
for $a_0,b_0$ close to zero.
\medskip
In the next subsection, we will show that, while Proposition~\ref{prop:upper} can be used to show that $(C_2)$ holds for certain values of $b_1,b_2$, the hypotheses of Proposition \ref{PropC3} do not hold completely in this case. The reason is that both of these propositions must be verified on $[0,2\pi]\times [0,+\infty)$, while conditions $(C_2),(C_3)$ only need to be satisfied along the singular closed solutions. To avoid this problem, we use continuity arguments, studying the behaviour of the solutions at infinity in order to bound the region where there might be singular closed solutions.
\begin{prop}
For each $b_1,b_2$ with $b_2>0$ there exists a neighbourhood of $(a_0,b_0)=(0,0)$ such that \eqref{eq:Abellinear2} satisfies $(C_1)$, $(C_2)$, and $(C_3)$ for any singular positive closed solution.
\end{prop}
\begin{proof}
Fix a neighbourhood $U=[-\epsilon_1,\epsilon_1]\times [-\epsilon_2,\epsilon_2]$ of $(a_0,b_0)=(0,0)$ in which the functions $A$ and $B$ have two zeros in $[0,2 \pi)$ and are interleaved. So, for any $b_1,b_2\neq 0$ and any $a_0,b_0$ in that neighbourhood, $(C_1)$ holds.
\medskip
To prove $(C_2)$,
note that $B$ has a simple zero in each of the intervals $(0,t_A)$ and $(t_A,2\pi]$. Thus, we can choose $\delta_0>0$
so that,
for any positive smooth function $w$ satisfying
\[
|w(t)|,|w'(t)| < \delta_0\quad \text{for all }
t\in[0,2\pi],
\]
the function $2Aw+B$ also has a simple zero in each of the intervals $(0,t_A)$ and $(t_A,2\pi]$.
Denote
\[
u_\infty(t,a_0,b_0)=\sup \{u(t,x,a_0,b_0)\colon u(\cdot,x,a_0,b_0)\text{ is bounded in }[0,2\pi]\}.\]
Then $u_\infty(\cdot,a_0,b_0)$ is well-defined
except for certain values of $t$ where the supremum is infinite. Moreover, it is a solution of~\eqref{eq:Abel_trigonometric_linear} in each interval
where it is defined.
We will show in Appendix~A that $u_\infty(t,a_0,b_0)$ is defined and is continuous for $t > 0$ and $(a_0,b_0)$
in a neighbourhood of $(0,0)$, and that
\[
\lim_{t\to\infty} u_\infty(t,0,0)=0.
\]
Therefore, there exist $n\in\mathbb{N}$ and a neighbourhood $U$ of $(0,0)$ such that $u_\infty(t+2\pi n,a_0,b_0)<\delta_0$ for all $t\in[0,2\pi]$ and $(a_0,b_0)\in U$, and
\[
|\left(a_0-\sin t-a_0\cos t\right)x^3 + \left(b_0+b_1\sin t+b_2\cos t \right)x^2|<\delta_0
\]
for all $t\in[0,2\pi]$, $x\in[0,u_\infty(t+2\pi n,a_0,b_0)]$, and $(a_0,b_0)\in U$.
\medskip
Let $u(t,\tilde x,a_0,b_0)$ be any singular positive closed solution of~\eqref{eq:Abel_trigonometric_linear} with $(a_0,b_0)\in U$. Then
$ u(t,\tilde x,a_0,b_0)= u(t+2\pi n,\tilde x,a_0,b_0)\leq u_\infty(t+2\pi n,a_0,b_0)<\delta_0$, and condition $(C_2)$ holds.
\medskip
The last step is to prove that $(C_3)$ holds. It suffices to show that, for each $b_1,b_2$ with $b_2>0$, there exists a neighbourhood of $(a_0,b_0)=(0,0)$ such that
the graph of any singular positive closed solution $\tilde u$ is disjoint from $v^{-1}(0)$. Then the sign of $v(t,\tilde u(t))$ does not change
and is the same as the sign of $A'(0)B(0)$ (negative in this case).
Since $v(t,0) = 4(B (t)A '(t)-B '(t)A (t))$, we have $v(t,0)= -4b_2<0$ for $a_0=b_0=0$.
Making $\delta_0$ and $U$ smaller if necessary, we have
$v(t,x)<0$ for any $0<x\leq \delta_0$ and $(a_0,b_0)\in U$. To conclude,
it suffices to prove that every
singular positive closed solution $\tilde u(t,x)$ satisfies $\tilde u(t,x)\leq \delta_0$.
But that holds by the previous discussion, so there exists a neighbourhood of $a_0=b_0=0$ such that $(C_3)$ holds.
\end{proof}
Let us prove that the maximum number of positive closed solutions is two in a neighbourhood of $(a_0,b_0)=(0,0)$ in the quadrant where the double Hopf bifurcation occurs.
\begin{theo}\label{theo:doslineal}
Assume there exists $\epsilon>0$ such that for every $-\epsilon<b_0<0<a_0<\epsilon$, \eqref{eq:Abellinear2} satisfies $(C_1)$, $(C_2)$, and $(C_3)$ for any singular positive closed solution.
Then \eqref{eq:Abellinear2} has at most two positive closed solutions for every $-\epsilon<b_0<0<a_0<\epsilon$.
\end{theo}
\begin{proof}
By Theorem~\ref{theo:bif}, $ u_{xx}(t,\tilde x)<0$ for every singular positive closed solution~$ u(t,\tilde x)$.
Let $$F(t,x)=\left(a_0-\sin t-a_0\cos t\right)x^3 + \left(b_0+b_1\sin t+b_2\cos t \right)x^2.$$
The derivative of the function $F$ with respect to $b_0$ is strictly positive for all $t\in(0,2\pi)$ and $x>0$,
and there is at most one simple positive closed solution of \eqref{eq:Abellinear2} for any $a_0 > 0$ and $b_0=0$ (see \cite[Theorem 1.2]{BFG}).
Fix $0<a_0<\epsilon$. Then
Theorem~\ref{theo:bifrotated} implies that, for every $-\epsilon<b_0$,
\eqref{eq:Abel_trigonometric_linear} has at most three positive closed solutions. Moreover,
if there are three positive limit cycles, one of them corresponds to a Hopf bifurcation
at the origin and another to a Hopf bifurcation at infinity.
From \eqref{expandsolution}, we have that the origin is unstable for $b_0=0$. If there is a positive closed solution for $b_0=0$, it is
stable, so that infinity is unstable; therefore, for $b_0<0$, infinity
remains unstable, so that there is no bifurcation at infinity and the maximum number of positive closed solutions is two.
\end{proof}
If we could prove that \eqref{eq:Abellinear2} satisfies $(C_1)$,
$(C_2)$, and $(C_3)$ for any singular positive closed solution whenever
$A,B$ have two interleaved zeros, then a similar argument would prove
the upper bound of two positive closed solutions for
\eqref{eq:Abellinear2} with no additional conditions.
The main drawback of Theorem~\ref{theo:doslineal} is that there is no
clear way to estimate the value of $\epsilon$ for which the theorem
holds. A possible way of doing that would be to bound the homoclinic
connection at infinity, following~\cite{GGT} for instance, or
improving the results in Section~3. In the following, we shall
illustrate the limitations of these results.
\subsection{Algebraic computation of the conditions}
To conclude the section, we explore the applicability of Propositions~\ref{prop:upper} and \ref{PropC3} to the Abel equation \eqref{eq:Abellinear2} with $b_0 + b_2 > 0$.
\medskip
Proposition~\ref{prop:upper} applies in a certain region of the parameter space.
\begin{prop}\label{prop:C2trig}
Assume condition $(C_1)$ holds and let
\[q = \frac{\sqrt{b_1^2+b_2^2}}{b_2^{1/3}}-2^{2/3}.\]
If $\vert a_0 \vert$ and $\vert b_0 \vert$ are small enough and either $q < 0$ or $b_2 > 2$, then condition $(C_2)$ holds. \end{prop}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1.5]
\clip (-3,-.5) rectangle (3,2.5);
\draw[help lines, color=gray!30, dashed] (-2.5,-.5) grid (2.5,2.5);
\draw[->,thick] (-2.5,0)--(2.5,0) node[below left]{$b_1$};
\draw[->,thick] (0,-.5)--(0,2.5) node[below left]{$b_2$};
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=0:2,samples=200]({sqrt(2^(4/3)*\x^(2/3)-\x^2)},\x);
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=0:2,samples=200](-{sqrt(2^(4/3)*\x^(2/3)-\x^2)},\x);
\draw[smooth=0.7,ultra thick,oro!50!rojo] plot[domain=-.1:.1,samples=200](\x,{2});
\end{tikzpicture}
\end{center}
\caption{Graph of $q=0$.}\label{fig:q}
\end{figure}
\medskip
Before proving Proposition~\ref{prop:C2trig}, we need some preliminary results.
By the change of variable $t=2\operatorname{atan}(z)+\pi$, we obtain that
\[\bar A(z) := A(2\operatorname{atan}(z)+\pi) = \frac{2(z+a_0)}{z^2+1}\] and \[\bar B(z) := B(2\operatorname{atan}(z)+\pi)= \frac{(b_0+b_2) z^2 -2 b_1 z + b_0 - b_2}{z^2+1}.\] Notice that $\bar A(z)$ has only the zero $-a_0$ and that $\bar B(z)$ has the two zeros \[z^\pm_B := \frac{b_1 \pm \sqrt{b_1^2-(b_0^2-b_2^2)}}{b_0+b_2},\] and, since $2\operatorname{atan}(z)+\pi$ is strictly increasing, $b_0+b_2 > 0$, and the zeros of $A$ and $B$ are interleaved, we have that \[\bar B(-a_0) = \frac{(b_0+b_2)a_0^2 + 2b_1a_0+b_0-b_2}{a_0^2+1} < 0.\]
Moreover, one can easily check that this change of variable transforms
the function $P(t)$ defined in \eqref{eq:upper} into the rational
function $\bar P(z)$ such that $(z^2+1)^3 \bar P(z)$ is a polynomial
of degree six in $z$.
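The substitution is routine to verify; a \texttt{sympy} sketch (ours):
\begin{verbatim}
# Sketch: half-angle substitution t = 2*atan(z) + pi applied to the
# trigonometric coefficients of the equation.
import sympy as sp

z, a0, b0, b1, b2 = sp.symbols('z a_0 b_0 b_1 b_2')
t = 2 * sp.atan(z) + sp.pi

A = a0 - sp.sin(t) - a0 * sp.cos(t)
B = b0 + b1 * sp.sin(t) + b2 * sp.cos(t)

assert sp.simplify(A - 2 * (z + a0) / (z**2 + 1)) == 0
assert sp.simplify(
    B - ((b0 + b2) * z**2 - 2 * b1 * z + b0 - b2) / (z**2 + 1)) == 0
\end{verbatim}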
\medskip
Let $a_0 = b_0 = 0$. In this case, set $z_B := z_B^+ > 0$, so that \[b_1 = b_2 ( z_B^2-1)/(2 z_B),\quad z_B^- = -1/z_B.\] With this notation, $(z^2+1)^3 \bar P(z)_{\vert a_0=b_0 = 0} = (b_2/z_B^3)\, p(z)$ where
\[
\begin{split}
p(z) = & (b_2^2-4) z_B^3 + 3 b_2^2 z_B^2 (z_B^2-1) z + 3 (-4 z_B^3 +
b_2^2 (z_B - 3 z_B^3 + z_B^5)) z^2
\\ & +b_2^2 (z_B^2-1)(z_B^4-8 z_B^2+1) z^3 -3 (4 z_B^3 + b_2^2 (z_B - 3 z_B^3 + z_B^5)) z^4
\\& + 3 b_2^2 z_B^2 (z_B^2-1)z^5 -((b_2^2+4) z_B^3)z^6
\end{split}
\]
The discriminant of $p(z)$ is \begin{equation}\label{discp}\Delta := 186624\, b_2^8(b_2^2+4)z_B^{15}(z_B^2+1)^{12} \big( b_2^4(z_B^2+1)^6-2^{10}\, z_B^6 \big).\end{equation} Since $z_B, b_2 > 0$, the sign of $\Delta$ is equal to the sign of \[b_2^{2/3}(z_B^2+1)-2^{5/3}\, z_B = 2 z_B \left(\frac{b_2^{2/3}(z_B^2+1)}{2 z_B} - 2^{2/3}\right)\] whose sign agrees with the sign of
\[
\frac{b_2^{2/3}(z_B^2+1)}{2 z_B} - 2^{2/3} = \frac{\sqrt{b_1^2+b_2^2}}{b_2^{1/3}}-2^{2/3}=q.\]
Therefore the number of roots of $p(z)$ remains constant in each of the two connected regions determined by $q = 0$.
\begin{itemize}
\item For $b_1=0$ and $b_2=1$ we have that $q < 0$, and in this case $p(z) = -5z^6-9z^4-15z^2-3$ has no real roots. Hence $\bar P(z)_{\vert a_0=b_0 = 0}$ has no real zeros when $q < 0$, and the same holds for $\bar P(z)$ for $\vert a_0\vert, \vert b_0 \vert$ small enough and $q < 0$.
\item In the region where $q > 0,$ the polynomial $p(z)$ has exactly two real roots (take, for example, $b_1=1$ and $b_2 = 2$). Hence $\bar P(z)_{\vert a_0=b_0 = 0}$ has two real zeros when $q > 0$, and the same holds for $\bar P(z)$ for $\vert a_0\vert, \vert b_0 \vert$ small enough and $q > 0$.
\end{itemize}
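Both sample computations are easy to reproduce; the following sketch (ours) enters $p(z)$ through its displayed coefficients and counts its real roots, numerically in the second case:
\begin{verbatim}
# Sketch: root counts of p(z) in the two regions separated by q = 0.
import sympy as sp

z = sp.symbols('z')

def p(b2, zB):
    return ((b2**2 - 4) * zB**3
            + 3 * b2**2 * zB**2 * (zB**2 - 1) * z
            + 3 * (-4 * zB**3 + b2**2 * (zB - 3 * zB**3 + zB**5)) * z**2
            + b2**2 * (zB**2 - 1) * (zB**4 - 8 * zB**2 + 1) * z**3
            - 3 * (4 * zB**3 + b2**2 * (zB - 3 * zB**3 + zB**5)) * z**4
            + 3 * b2**2 * zB**2 * (zB**2 - 1) * z**5
            - (b2**2 + 4) * zB**3 * z**6)

# b_1 = 0, b_2 = 1 (so z_B = 1, q < 0): no real roots.
assert sp.expand(p(1, 1) + 5*z**6 + 9*z**4 + 15*z**2 + 3) == 0
assert sp.Poly(p(1, 1), z).count_roots() == 0

# b_1 = 1, b_2 = 2 (so z_B = (1 + sqrt(5))/2, q > 0): two real roots.
zB = (1 + sp.sqrt(5)) / 2
roots = sp.nroots(sp.expand(p(2, zB)))
assert sum(1 for r in roots if abs(sp.im(r)) < 1e-8) == 2
\end{verbatim}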
\begin{lema}\label{lema4}
If $2 a_0 b_1 + (a_0^2+1) (b_0 + 2) < (1 - a_0^2) b_2$ and $\bar P(z)$ has two real zeros then they are separated by the zero of $\bar A(z)$.
\end{lema}
\begin{proof}
Since $\bar B(-a_0) < 0$ and \[\bar
P(-a_0) = \bar B(-a_0)(4-\bar B(-a_0)^2),\] we have that $\bar
P(-a_0) > 0$ if $2+\bar B(-a_0) < 0$, or equivalently if $2 a_0 b_1 +
(a_0^2+1) (b_0 + 2) < (1 - a_0^2) b_2$. Moreover, the leading coefficient of $(z^2+1)^3
\bar P(z)$ is equal to $-(b_0+b_2)((b_0+b_2)^2+4) < 0$. Therefore, if $\bar P(z)$ has
two real zeros and $2 a_0 b_1 + (a_0^2+1) (b_0 + 2) < (1 - a_0^2)
b_2$, then the two zeros of $\bar P(z)$ are separated by $-a_0$, which
is the zero of $\bar A(z)$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:C2trig}]
On the one hand, if $q<0$ then the function $P$ has no zeros in $(0,t_{B_1})$ and $(t_A,t_{ B_2})$ for $a_0=b_0=0$; by continuity, the same holds for $|a_0|,|b_0|$ small enough.
On the other hand, since
\[
\lim_{(a_0,b_0)\to(0,0)}\frac{2 a_0 b_1 + (a_0^2+1)(b_0 + 2)}{1-a_0^2} = 2,
\]
if $b_2 > 2$ then $q > 0$ and there exist $a_0,b_0$ small enough such that $2 a_0 b_1 + (a_0^2+1) (b_0 + 2) < (1 - a_0^2) b_2$. In this case, by Lemma \ref{lema4}, $P$ has at most one zero in each of the intervals $(0,t_{B_1})$ and $(t_A,t_{ B_2})$.
In both cases, by Proposition \ref{prop:upper}, we conclude that $(C_2)$ is fulfilled.
\end{proof}
\medskip
Finally, let us show that Proposition \ref{PropC3} does not apply to the Abel equation \eqref{eq:Abellinear2} with $b_0 + b_2 > 0$.
\begin{prop}\label{prop:noC3}
For $a_0=b_0=0$ and every $b_1, b_2$, the set $v^{-1}(0) \cap \dot v^{-1}(0)$ is not empty.
\end{prop}
\begin{proof}
In this case, the function $Q(t)$ in Corollary \ref{CorC3} is equal to
\[
3 b_2 (b_2 \sin(t) - b_1 \cos(t)).
\]
Thus $Q(t)$ has exactly two roots $t_1 < t_2 = t_1+\pi$ in $[0,2\pi)$. In particular, $t_1 \in [0,\pi)$ and $t_2 \in [\pi, 2\pi).$
Now, if we replace $b_1$ by $b_2 \tan(t_i)$ in $v(t_i,x),\ i = 1,2$, and solve the resulting quadratic equations in $x$, we obtain the following solutions: \[x_i^\pm = \frac{b_2 \pm \sqrt{b_2^2+4 \cos(t_i)^3}}{\sin(2 t_i)},\ i = 1,2.\]
Let us show that at least one of them is positive. For this, we distinguish two cases:
\begin{itemize}
\item If $t_1 \in (0,\pi/2)$ then $x_1^+ > 0$.
\item If $t_1 \in (\pi/2,\pi)$ then $t_2 \in (3\pi/2, 2 \pi)$ and $x_2^->0$.
\end{itemize}
Hence, we conclude that $t_1$ or $t_2$ determines a real positive solution of $v(t_1, x) = 0$ or of $v(t_2,x) = 0$, respectively.
Now, substituting $b_1 =b_2 \tan(t_i),\ i = 1,2$, in both $v(t,x)$ and $\dot v(t,x)$, we obtain that \[\dot v(t_i,x) = x \left(\frac{b_2-\sin(2 t_i) x}{\cos(t_i)} \right) v(t_i,x),\ i = 1,2.\] Thus, by the previous argument, $v^{-1}(0) \cap \dot v^{-1}(0) \neq \emptyset$.
\end{proof}
{
"arxiv_id": "2302.13768",
"language": "en",
"timestamp": "2023-02-28T02:29:45",
"url": "https://arxiv.org/abs/2302.13768",
"yymm": "2302"
}
\section{Introduction}
Integrated electro-optic modulators are key components in systems such as classical and quantum optical communications, photonics-based quantum computing and sensing.
These systems target high-speed and low power consumption optical modulators.
Silicon (Si) modulators, which rely primarily on the plasma dispersion \mbox{effect \cite{rahim_taking_2021}}, are intrinsically limited in speed due to their high RC \mbox{constant \cite{sinatkas_electro-optic_2021}.}
Si modulators relying on the Pockels effect could overcome these limitations to produce a fast and pure phase modulation.
Since silicon does not have a natural $\chi^{(2)}$ due to its centrosymmetric structure, such modulation cannot be achieved directly, except by straining the crystal \mbox{lattice \cite{berciano_fast_2018}}, which leads to a low resulting Pockels coefficient.
The integration of \mbox{high-$\chi^{(2)}$} materials on the Si platform has been widely considered.
These include doped polymers, Barium Titanate (BTO) \cite{he_high_performance_2019}, Lead Zirconate Titanate (PZT) \cite{he_high_performance_2019} or lithium niobate \mbox{(LN) \cite{he_high_performance_2019}}.
These approaches require the development of hybrid or heterogeneous integration processes which increase the technology complexity.
An electro-optic modulation in Si can also be achieved through the DC Kerr effect, which electrically induces an \mbox{effective $\chi^{(2)}$} that can hence be exploited to vary the refractive index by applying an electrical modulation superimposed on a static field.
DC Kerr effect has been studied in bulk silica \cite{liu_measurement_2001}, bulk silicon \cite{chen_pockels_2008, zhu_investigation_2012}, silicon interface \cite{bodrov_terahertz-field-induced_2022}, bulk antiferromagnetic NiO \cite{chefonov_study_2022} and
in integrated platforms including \mbox{silicon-organic hybrid \cite{steglich_electric_2020}} silicon-rich nitride \cite{friedman_demonstration_2021}, silicon rich carbide \cite{chang_demonstration_2022} and in silicon \mbox{nitride \cite{zabelich_linear_2022}.}
It has also been studied in the silicon platform for electric field-induced (EFI) second-harmonic generation (EFISHG) \cite{timurdogan_electric_2017}, electro-optic (EO) modulation (EOM) \cite{bottenfield_silicon_2019, Jain_2019}, slow light regime \cite{xia_high_2022} and in cryogenic experiments \cite{chakraborty_cryogenic_2020}.
However, the high-speed EOM \mbox{in \cite{bottenfield_silicon_2019, Jain_2019, xia_high_2022}} using PN junctions led to a plasma dispersion contribution to the modulation larger than that of the DC Kerr effect.
While the DC Kerr effect has been well studied in the DC regime, to our knowledge no assessment discriminating the contributions of the DC Kerr effect and of plasma dispersion in the dynamic regime has been reported.
This paper presents a comprehensive analysis of the DC Kerr effect induced in a PIN diode inserted in a silicon Mach-Zehnder Interferometer (MZI) in both static and dynamic regimes. Data transmission has been analyzed up to 100 Gbits/s in Non-Return-to-Zero (NRZ) format.
An experimental method has been developed to separate the contribution of plasma dispersion from that of the Kerr effect in the dynamic regime.
The DC Kerr effect, also known as electric field-induced Pockels effect, originates from the third-order nonlinear susceptibility tensor \textbf{$\chi^{(3)}$} in presence of a static electric field.
The refractive index change induced by Kerr effect when a static electric field $F_{DC}$ and an RF field $F_{RF} \cos{\Omega t}$ are applied to the PIN junction is given by \cite{steglich_electric_2020}:
\begin{equation}
\begin{split}
\Delta & n(t) = \\ & \frac{3\chi^{(3)}}{2n_{si}} ( F_{DC}^2 + \frac{1}{2} F_{RF}^2 + 2F_{DC}F_{RF} \cos{\Omega t} + \frac{1}{2}F_{RF}^2 \cos{2\Omega t} )
\end{split}
\label{eq:delta_n_dc_kerr}
\end{equation}
with \mbox{$\Omega = 2\pi f$}, $f$ the RF frequency, \mbox{$n_{si}=3.48$} the silicon refractive index and \mbox{$\chi^{(3)}=2.8 \times 10^{-19}$ m$^2$.V$^{-2}$} at \mbox{$\lambda=1.55 \: \upmu$m}, for a silicon waveguide with a cross-section oriented along the crystallographic \mbox{axis [110] \cite{han_third-order-coef_2011,zhang-anisotropic-nonlinear-2007}}.
\mbox{Eq$.$ (\ref{eq:delta_n_dc_kerr})} exhibits three kinds of dependencies.
The first one corresponds to a static refractive index change growing with the square of the field amplitudes; its $F_{DC}$ part will be called the DC Kerr effect in the following.
The second one is an index modulation at angular \mbox{frequency $\Omega$} whose amplitude grows with the product of the DC and RF field amplitudes. It will be called the electric field-induced (EFI) linear EO effect in the following.
The last one is an index modulation at \mbox{$2\Omega$} whose amplitude grows with the square of the RF field amplitude alone. It will be called the quadratic EO effect in the following.
\section{Results and discussions}
Static and dynamic studies are conducted to distinguish the Kerr effect from the plasma dispersion effect in the index variation of three different unbalanced Mach-Zehnder modulators (MZMs).
They consist of either PN or PIN junctions, named PN, PIN2, and PIN3, whose respective intrinsic region widths are w=0, w=0.33, and w=1.05 $\upmu$m (\mbox{Fig$.$ \ref{fig:junction_scheme-delta_n_dc}}).
Each junction waveguide has the same cross-sectional design with a 450 nm width, a 220 nm height, and a 100 nm slab thickness, suitable for the propagation of a single TE polarization mode.
The unbalancing of the MZMs is realized by a length difference \mbox{$\Delta L = 200 \: \upmu$m} between the arms leading to a passive phase shift \mbox{$\Delta\theta = 2\pi / \lambda n_g \Delta L$} with \mbox{$n_g=3.6$}, the group index of our waveguide.
The operating point of the MZM can thus be adjusted at the quadrature \mbox{($\Delta \theta = \pi/2$)} without the need of heaters by only tuning the laser wavelength around \mbox{1550 nm}.
\subsection{Measurement of the DC Kerr modulation}
\begin{figure*}
\centering\includegraphics[width=\linewidth]{figure_dc.pdf}
\caption{(a) Depiction of PN junction, (b) PIN with intrinsic region width \mbox{w = 0.33 $\upmu$m} (PIN2), and (c) PIN with \mbox{w = 1.05 $\upmu$m} (PIN3).
\mbox{(d) Effective} refractive index changes of PN, (e) PIN2, and (f) PIN3 junctions versus the applied reverse DC bias voltage with respective MZM arm lengths of 2, 6 and 6 mm.
Dots are the experimental measurements and lines correspond to the respective simulations of the whole modulation, of the DC Kerr and carrier modulations.}
\label{fig:junction_scheme-delta_n_dc}
\end{figure*}
The first experiments focus on the comparison between the three junctions in MZMs under a DC bias voltage only.
The variation of the effective index of the guided mode ($\Delta n_{DC}$) as a function of the reverse DC voltage ($V_{DC}$) applied to the junction is obtained by measuring the shift of the resonance wavelength $\Delta \lambda_r$:
\begin{equation}
\Delta n_{DC}(V_{DC}) = \frac{\lambda_r \Delta \lambda_r(V_{DC})}{FSR(\lambda_r) L}
\label{eq:delta_n_dc}
\end{equation}
with $\lambda_r$ the resonance wavelength, $FSR(\lambda_r)$ the free spectral range of the MZM and $L$ the length of the electrodes all along the junctions.
See Supplement 1 section S1.
Optical and electro-optic simulations taking into account the DC Kerr and plasma dispersion effects were performed to design the three different PN/PIN waveguides.
The measured and simulated variations of the effective index of the three junctions are presented in \mbox{Fig$.$ \ref{fig:junction_scheme-delta_n_dc}}.
Total refractive index modulations are in good agreement with the simulations.
By increasing the width of the intrinsic region of the junction to 1.05 $\upmu$m, the contribution of the plasma dispersion effect is significantly reduced and becomes minor compared to the DC Kerr effect, whereas it is dominant for the PN junction waveguide.
The DC Kerr effect can thus contribute up to 82\% of the total index change in the PIN3 junction waveguide.
\subsection{Measurement of the EFI linear EO effect}
\label{EFI_section}
The study of the electric field-induced (EFI) linear EO effect in the $\Omega$ angular frequency modulation focuses on the PIN3 junction, which shows a dominant contribution of the DC Kerr effect in the effective index change (four times greater than the contribution from plasma dispersion).
A common DC bias voltage is applied to both arms of the MZM and a sinusoidal RF signal \mbox{($f = 5$ GHz)} is split with two opposite phases to be applied in push-pull configuration.
The optical wavelength is chosen to operate at the quadrature point.
A simplified schematic view of the experimental setup to characterize the EOM is provided in \mbox{Fig$.$ \ref{fig:setup-m_modulation}(a)}.
It is worthwhile to notice that driving the MZM in \mbox{push-pull} configuration allows the measured index variation versus voltage to be read as the efficiency of a single arm, because the measured index variation is twice that of each arm, while the considered voltage is twice the voltage applied to each arm.
Moreover, the RF analysis in push-pull configuration cancels the DC shift terms of \mbox{Eq$.$ (\ref{eq:delta_n_dc_kerr})} in the MZM output measurements, because the shift is the same in each arm.
The transfer function of the MZM as a function of the phase shift $\Delta\phi(t)$ is:
\begin{equation}
\frac{P(t)}{P_0} = \frac{1}{2} \left \{1 + \cos[\Delta \phi(t) + \Delta \theta] \right \}
\label{eq:TF}
\end{equation}
with $P_0$ the maximum output power of the MZM.
The EOM response at the $\Omega$ angular frequency can be approximated at the quadrature \mbox{point ($\Delta\theta = \pi/2$)} as \mbox{$P_{\Omega}(t) = $ \textonehalf $P_0 \Delta \phi(t)$} with \mbox{$\Delta \phi(t) = m_{\Omega} \cos \Omega t$}, $m_{\Omega}$ the modulation index, $m_k$ the EFI linear EO modulation index and $m_c$ the carrier modulation index:
\begin{equation}
m_{\Omega} = m_k + m_c
\label{eq:m}
\end{equation}
\begin{equation}
m_k = \Gamma \frac{2 \pi}{\lambda} L_{eff1} \frac{3\chi^{(3)}}{n_{si}} F_{DC} F_{RF}
\label{eq:mk}
\end{equation}
with the mode overlap \mbox{$\Gamma=0.87$} in the Si waveguide, the effective length \mbox{$L_{eff1} = [1-exp(-\alpha_{RF}L)] / \alpha_{RF}$} and the RF field loss \mbox{$\alpha_{RF}=4.3 \: $dB.cm$^{-1}$}.
See Supplement 1 section S2 and S3 for more details.
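As an order-of-magnitude illustration, the sketch below (ours) evaluates \mbox{Eq$.$ (\ref{eq:mk})} with the constants quoted above; the bias values and the dB-to-neper conversion are our assumptions, not measured data:
\begin{verbatim}
# Sketch: rough estimate of m_k from Eq. (5); illustrative values only.
import numpy as np

Gamma, lam, n_si = 0.87, 1.55e-6, 3.48
chi3 = 2.8e-19                        # m^2 V^-2, literature value
L = 6e-3                              # m, electrode length
alpha = 4.3 * np.log(10) / 20 * 100   # 4.3 dB/cm -> Np/m (field amplitude)
L_eff1 = (1 - np.exp(-alpha * L)) / alpha

w = 1.05e-6                           # m, PIN3 intrinsic width
F_DC = 6.0 / w                        # ~6 V reverse bias (assumed)
F_RF = 1.4 / w                        # 1.4 V RF peak (assumed)
m_k = Gamma * 2 * np.pi / lam * L_eff1 * 3 * chi3 / n_si * F_DC * F_RF
print(f"m_k ~ {m_k:.3f} rad")         # a few times 1e-2 rad
\end{verbatim}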
Both the EFI linear EO effect and the plasma dispersion effect are expected to increase linearly with the RF amplitude.
Only the EFI linear EO effect is expected to increase with the applied reverse DC bias following the \mbox{Eq$.$ (\ref{eq:mk})}.
The dynamic carrier modulation is expected to decrease with $V_{DC}$ considering a small signal approximation on its static response.
For a 6 mm long junction, a linear behavior of the effective index change \mbox{$\Delta n_{\Omega} = m_{\Omega} \lambda / ( 2 \pi L_{eff1})$}
as a function of the applied reverse DC bias and RF amplitude is observed in \mbox{Fig$.$ \ref{fig:setup-m_modulation}(b)} and \mbox{Fig$.$ \ref{fig:setup-m_modulation}(c)}, respectively.
This is a clear signature of the EFI linear EO effect.
In \mbox{Fig$.$ \ref{fig:setup-m_modulation}(b)}, the non-zero intersection of $\Delta n_{\Omega}$ at $V_{DC}=0$ V indicates that carriers also contributed to the modulation in addition to the EFI linear EO effect at low reverse DC voltages.
The slope of the curve allows to determine the $\chi^{(3)}$ coefficient ($\chi^{(3)}=1.0 \times 10^{-19} \: $m$^2$.V$^{-2}$).
See Supplement 1 section S4 and S5 for more information.
This value is slightly underestimated (Supplement 1 section S5) due to the carriers contribution having a negative evolution with $V_{DC}$.
However, it remains relatively close to the $\chi^{(3)}$ values found in the literature.
\begin{figure*}
\centering\includegraphics[width=\linewidth]{figure_rf.pdf}
\caption{(a) Schematic view of the experimental setup used to measure the EOM from the MZM.
DC voltage is applied to both arms; RF is either applied in single-drive or push-pull configuration.
(EDFA: erbium-doped fiber amplifier).
\mbox{(b) Effective} index variations measured in push-pull configuration versus the reverse DC bias for a fixed RF peak amplitude of 1.4V, (c) versus the RF amplitude for three reverse DC biases.}
\label{fig:setup-m_modulation}
\end{figure*}
\subsection{Measurement of the quadratic EO effect}
The quadratic EO effect at the angular frequency of 2$\Omega$ can only be observed in a single-drive configuration, as it is proportional to the square of the electric field.
We studied the transfer function at angular frequencies of $\Omega$ and 2$\Omega$ to separate the modulation behavior resulting from the distortion produced by the nonlinear transfer function of the MZM (\mbox{Eq$.$ (\ref{eq:TF})}) and the quadratic EO effect.
A bandpass RF filter centered at $\Omega$ was placed at the signal generator output, ensuring a very high rejection at 2$\Omega$.
We considered the PIN3 junction where distortion due to the carrier absorption modulation is negligible.
The phase shift induced by the plasma dispersion and the Kerr effects can then be written as:
\begin{equation}
\Delta \phi (t) = m_{\Omega} \cos \Omega t + m_{2\Omega} \cos 2\Omega t
\label{eq:phase_m2}
\end{equation}
where $m_{2\Omega}$ is the modulation index associated with the quadratic EO effect:
\begin{equation}
m_{2\Omega} = \Gamma \frac{2 \pi}{\lambda} L_{eff2} \frac{3\chi^{(3)}}{4n_{si}} F_{RF}^2
\label{eq:m2}
\end{equation}
and \mbox{$L_{eff2} = [1-exp(-2\alpha_{RF}L)] / (2\alpha_{RF})$} is the effective length for the 2$\Omega$ component.
After inserting the phase shift $\Delta \phi (t)$ \mbox{(Eq$.$ (\ref{eq:phase_m2}))} into the MZM transfer function $P(t) / P_0$ \mbox{(Eq$.$ (\ref{eq:TF}))}, performing a Jacobi-Anger expansion, and neglecting intermodulations, the $\Omega$ and 2$\Omega$ components of the MZI spectral response can be written as follows:
\begin{equation}
\frac{P_\Omega (t)}{P_0} = \sin (\Delta \theta) J_1 (m_{\Omega}) \cos \Omega t
\label{eq:omega_modulation}
\end{equation}
\begin{equation}
\begin{split}
\frac{P_{2\Omega} (t)}{P_0} = [& - \cos(\Delta \theta) J_2 (m_{\Omega}) \\ & + \sin (\Delta \theta) J_0 (m_{\Omega}) J_1 (m_{2\Omega}) ] \cos 2\Omega t
\end{split}
\label{eq:2omega_modulation}
\end{equation}
where $J_n (m_{\Omega})$ are the Bessel functions of the first kind.
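These amplitudes can be cross-checked numerically; the sketch below (ours, with illustrative modulation indices and quadrature bias) compares the Fourier coefficients of the exact transfer function with the Bessel-function expressions, up to overall sign conventions for $\Delta\phi$:
\begin{verbatim}
# Sketch: numerical check of Eqs. (8)-(9) at the quadrature point.
import numpy as np
from scipy.special import jv

m1, m2, dtheta = 0.3, 0.05, np.pi / 2      # illustrative values
N = 4096
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)   # Omega = 1
P = 0.5 * (1.0 + np.cos(m1 * np.cos(t) + m2 * np.cos(2.0 * t) + dtheta))

c1 = 2.0 * np.mean(P * np.cos(t))          # cos(Omega*t) coefficient
c2 = 2.0 * np.mean(P * np.cos(2.0 * t))    # cos(2*Omega*t) coefficient

assert np.isclose(abs(c1), jv(1, m1) * jv(0, m2), atol=1e-4)
pred2 = -np.cos(dtheta) * jv(2, m1) + np.sin(dtheta) * jv(0, m1) * jv(1, m2)
assert np.isclose(abs(c2), abs(pred2), atol=1e-4)
\end{verbatim}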
The modulation indices $m_{\Omega}$ and $m_{2\Omega}$ are determined by fitting the DC transmission and the spectral responses using \mbox{Eq$.$ (\ref{eq:omega_modulation})} and \mbox{Eq$.$ (\ref{eq:2omega_modulation})} at fixed reverse DC and RF voltages. See Supplement 1 section S6.
The measurements performed for a 5 mm long PIN3 junction (\mbox{Fig$.$ \ref{fig:TF_m_m2_fit_m2_contrib_mk}(a)}) show that the $2\Omega$ component is induced by the quadratic EO effect and not the signal distortion (the modulation operates at quadrature).
Then, we can extract the corresponding modulation index $m_{2\Omega}$ from the response of the PIN3 junction.
Note, however, that it is not possible to extract the $m_{2\Omega}$ modulation index from the responses of the PN and PIN2 junctions because the distortion induced by carriers is too large.
See Supplement 1 section S6.
The modulation indices $m_{\Omega}$ and $m_{2\Omega}$ are accurately extracted at different reverse DC and RF bias voltages for the PIN3 junction using this method.
Experimental results are compiled in Supplementary Table S1.
\begin{figure*}
\centering\includegraphics[width=\linewidth]{figure_2omega.pdf}
\caption{The dots and the lines represent respectively the measurements and the corresponding fit or simulations.
(a) Optical MZM transfer function for three electrical spectral components, excluding intrinsic losses, with $P_0$ the maximum output power, $P_{DC}$ the static power, $P_{\Omega}$ the modulation power at angular frequency $\Omega$ and $P_{2\Omega}$ at angular frequency $2\Omega$, for the PIN3 junction under reverse $V_{DC}=6$ V and $V_{RF}=2.0$ V.
(b) Amplitude of the refractive index modulation at angular frequency 2$\Omega$ versus the applied voltage $V_{RF}$ at frequency $\Omega$ for reverse DC biases from 0 to 15 V.
$\Delta n_{2\Omega}$ is independent of the value of $V_{DC}$.
(c) Respective relative contribution of index variation in the $\Omega$ component from EFI linear EOM and from carrier modulation versus the applied reverse DC bias voltage.}
\label{fig:TF_m_m2_fit_m2_contrib_mk}
\end{figure*}
\mbox{Fig$.$ \ref{fig:TF_m_m2_fit_m2_contrib_mk}(b)} shows the linear variation of the refractive index change \mbox{$\Delta n_{2\Omega} = m_{2\Omega} \lambda / (2 \pi L_{eff2}$)} as a function of the squared RF voltage (i.e. \mbox{$\Delta n_{2\Omega}$} increases quadratically with the RF voltage).
This variation is independent of the applied reverse DC voltage, as expected with a quadratic EO effect.
In addition, a linear fit of $\Delta n_{2\Omega}$ with respect to $F_{RF}^2$ is performed to extract \mbox{the $\chi^{(3)}$} coefficient \mbox{($\chi^{(3)}=1.5 \times 10^{-19} \: $m$^2$.V$^{-2}$)}.
This value is close to the average value from the literature and is consistent with the value found in the previous section.
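As an illustration of this extraction (not the analysis code used for this work), the linear fit can be sketched in a few lines of Python; the wavelength $\lambda = 1550$ nm and the confinement factor $\Gamma = 0.8$ are assumed values here, and the data points are taken from rows of Supplementary Table S1:
\begin{verbatim}
import numpy as np

# Minimal sketch of the chi^(3) extraction (not the analysis code of
# this work). Assumed values: lambda = 1550 nm, Gamma = 0.8, n_si = 3.48.
lam, Gamma, n_si = 1.55e-6, 0.8, 3.48
alpha_rf, L = 50.0, 5e-3                    # RF loss (1/m), length (m)
L_eff2 = (1 - np.exp(-2 * alpha_rf * L)) / (2 * alpha_rf)

# Rows of Supplementary Table S1 at V_DC = 6 V for the PIN3 junction.
F_RF = np.array([8.3e5, 1.2e6, 1.8e6, 2.6e6])       # V/m
m_2w = np.array([3.1e-4, 6.4e-4, 1.5e-3, 2.9e-3])   # fitted m_2Omega
dn_2w = m_2w * lam / (2 * np.pi * L_eff2)           # Delta n_2Omega

# Slope of Delta n_2Omega versus F_RF^2 equals Gamma*3*chi3/(4*n_si).
slope = np.polyfit(F_RF**2, dn_2w, 1)[0]
chi3 = 4 * n_si * slope / (3 * Gamma)
print(f"chi3 ~ {chi3:.2e} m^2/V^2")                 # of order 1e-19
\end{verbatim}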
Moreover, the measurements of the $\Omega$ and $2\Omega$ components of the spectral response can be used to calculate the EFI linear EOM contribution to the modulation at $\Omega$ using \mbox{Eq$.$ (\ref{eq:mk})} and \mbox{Eq$.$ (\ref{eq:m2})}:
\begin{equation}
m_k = 4\frac{F_{DC}L_{eff1}}{F_{RF}L_{eff2}} m_{2\Omega}
\label{eq:mk_from_m2}
\end{equation}
The DC electric field inside the PIN junction is estimated using \mbox{$F_{DC} = (V_{DC}+V_{bi}) / w$} with $V_{bi}$ the built-in voltage and $w$ the width of the intrinsic region \cite{Jain_2019}.
See Supplement 1 \mbox{section S4}.
The RF field is estimated from the small signal approximation \mbox{$F_{RF} \approx V_{RF} dF_{DC} / dV_{DC}$}.
The contributions of the EFI linear EOM ($m_k/m_\Omega$) and of the carrier modulation ($(m_\Omega-m_k)/m_\Omega$) to the $\Omega$ spectral response are reported in \mbox{Fig$.$ \ref{fig:TF_m_m2_fit_m2_contrib_mk}(c)}. Above \mbox{$V_{DC} = 5$ V}, at a modulation frequency of 5 GHz, the EFI linear EO effect contributes more to the modulation than the carriers, exceeding them by more than a factor of 3 at 15 V.
A good agreement with simulations from \mbox{Fig$.$ \ref{fig:junction_scheme-delta_n_dc}(f)} is obtained.
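As a worked example, taking the row of Supplementary Table S1 with $V_{DC}=6$ V and $V_{RF}=2.0$ V ($F_{DC}=6.2\times10^{6}$ V/m, $F_{RF}=1.8\times10^{6}$ V/m, $m_{2\Omega}=1.5\times10^{-3}$), together with $L_{eff1} \approx 4.42$ mm and $L_{eff2} \approx 3.93$ mm obtained from $\alpha_{RF} = 50$ m$^{-1}$ for the 5 mm long device (Supplement 1, Section S3), \mbox{Eq$.$ (\ref{eq:mk_from_m2})} gives
\[ m_k = 4 \times \frac{6.2\times10^{6} \times 4.42}{1.8\times10^{6} \times 3.93} \times 1.5\times10^{-3} \approx 0.023, \]
in agreement with the tabulated value.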
\subsection{Eye diagram experiments}
\begin{figure*}
\centering\includegraphics[width=\linewidth]{figure_eye_ER_SNR.pdf}
\caption{(a) Set-up used to acquire eye diagrams.
(PPG: pulse pattern generator, DC: reverse DC bias, EDFA: erbium-doped fiber amplifier, DCA: digital communications analyzer).
(b) Extinction ratio and signal to noise ratio at 10 Gbits/s by applying dual 4 V$_{pp}$ DATA/$\overline{\textnormal{DATA}}$ driving in push-pull versus the applied reverse DC bias.
Eye diagrams for reverse DC bias of 2 and 30 V are embedded.}
\label{fig:eye-diagrams1}
\end{figure*}
The data transmission characteristics of EO modulators based on the DC Kerr effect have been analyzed using the PIN3 diode.
The DATA and $\overline{\textnormal{DATA}}$ signals from a SHF bit pattern generator were amplified and transmitted to the respective arms of the MZM in push-pull configuration.
\mbox{A schematic} view of the setup is shown in \mbox{Fig$.$ \ref{fig:eye-diagrams1}(a)}.
First, optical eye diagrams were acquired at 10 Gbits/s on a digital communication analyzer (DCA) from a 6 mm long modulator with each arm driven at 4 V$_{pp}$ and at different reverse DC bias voltages.
The extinction ratio (ER) and the signal-to-noise ratio (SNR) of the modulated optical signal were computed by the DCA.
ER is greatly improved by reverse biasing $V_{DC}$ (\mbox{Fig$.$ \ref{fig:eye-diagrams1}(b)}).
Indeed, for a $V_{DC}$ varying from 2 V to 30 V, the measured ER increases from 1.5 dB to 3.7 dB, and the SNR increases from 8.9 to 15.6.
More eye diagrams as a function of $V_{DC}$ are presented in \mbox{Supplement} 1 \mbox{Fig$.$ S3}.
At higher data rate, the DC Kerr effect improves the transmission capability, reaching a maximum data rate of \mbox{40 Gbits/s} for the same \mbox{6 mm} long PIN3 modulator with each arm driven at \mbox{4 V$_{pp}$} (\mbox{Supplement 1} \mbox{Fig$.$ S4(b)}).
\mbox{Its speed} is limited by the RF electrodes bandwidth which can be further improved by redesigning the traveling wave electrodes to achieve an expected electro-optic bandwidth of about 40 GHz for 1 cm propagation length \cite{Yang:14}.
Then, the bandwidth limitation of the DC Kerr effect for higher-speed optical modulation has been investigated on a shorter, 1 mm long PIN3 modulator with each arm driven at 2 V$_{pp}$.
The eye diagram closes around 80 Gbits/s (\mbox{Fig$.$ \ref{fig:eye-diagrams2}(a)}), matching the speed limit achieved by a 1 mm long conventional depletion modulator under the same test setup \cite{li2022112g}.
At 100 Gbits/s, a numerical \mbox{6-tap} feed-forward equalization (FFE) opens the eye diagram (\mbox{Fig$.$ \ref{fig:eye-diagrams2}(b)}), showing that such a DC Kerr modulator, combined with the proper equalizing equipment, is promising for very high speed modulation.
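As an aside, the numerical FFE used here is, in essence, a short FIR filter; a minimal sketch (with hypothetical tap values, not those used in the measurement) reads:
\begin{verbatim}
import numpy as np

# Minimal FFE sketch: a 6-tap FIR filter applied to one-sample-per-
# symbol data. Tap values are hypothetical placeholders; in practice
# they come from a least-squares fit minimizing inter-symbol
# interference on a training sequence.
taps = np.array([-0.05, 0.15, 1.0, -0.35, 0.12, -0.04])

def ffe(samples, taps):
    return np.convolve(samples, taps, mode="same")

# Toy example: NRZ sequence distorted by a low-pass channel.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 1000)
channel = np.array([0.2, 0.6, 0.2])           # toy ISI channel
rx = np.convolve(2.0 * bits - 1.0, channel, mode="same")
eq = ffe(rx, taps)                            # equalized samples
\end{verbatim}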
\begin{figure}
\centering\includegraphics[width=\linewidth]{figure_eye_diagrams.pdf}
\caption{Optical eye diagram display from \mbox{1 mm} long PIN3 modulator by applying dual \mbox{2 V$_{pp}$} DATA/$\overline{\textnormal{DATA}}$ driving in push-pull and reverse $V_{DC}=30$ V measured at (a) 80 Gbits/s and (b) 100 Gbits/s with a numerical \mbox{6 taps} feed-forward equalization (FFE).}
\label{fig:eye-diagrams2}
\end{figure}
\section{Conclusion}
The electric field-induced Pockels effect (i.e. DC Kerr effect) has been observed in a Si PIN junction-based Mach-Zehnder modulator (MZM).
The refractive index variations as a function of both reverse DC bias voltage and RF amplitude have been measured in the dynamic regime showing a linear response with the DC bias voltage at a fixed RF amplitude.
The refractive index modulations at angular frequencies $\Omega$ and 2$\Omega$ resulting from an applied RF signal at the angular frequency $\Omega$ have been extracted to quantify the EFI linear EO effect contribution to the modulation.
We have shown that, above 5 V DC bias, the DC Kerr effect, rather than the plasma dispersion effect, is the main contribution to the high-speed modulation.
Furthermore, optical modulation has been demonstrated up to 100 Gbits/s for a 1 mm long Mach-Zehnder modulator.
Silicon modulators based on the electric field-induced linear EO modulation show promising characteristics for high-speed optical communications but also for applications requiring low loss and pure phase modulation.
\section{Methods}
\subsection{Sample fabrication}
The silicon MZI modulators are fabricated through the silicon photonics foundry CORNERSTONE \cite{littlejohns2020cornerstone}, which provides detailed fabrication steps, based on 8-inch 220 nm SOI wafers, as well as doping information.
The passive waveguides were etched with a 250 nm thick patterned PECVD oxide hard mask.
The hard mask also protects the silicon core during the n-type implantation process.
The junctions are optimized through the self-aligned doping steps of \cite{littlejohns2020cornerstone} for the studied PN and PIN junctions.
\subsection{Set-up for dynamic measurements}
A T100S-HP tunable laser is used to inject light into the device via the grating couplers.
A polarization controller is used to ensure a TE-mode injection.
A 90/10 splitter is used to separate the output power.
10\% goes into a CT400 optical components tester to measure the DC optical power and 90\% goes to a Keopsys KPS prebooster set to output a constant 3 dBm power.
The amplified modulated optical signal is collected using an Agilent 83440D photodiode and fed to an Anritsu MS2830A signal analyzer set to monitor either the $\Omega$ or 2$\Omega$ component of the spectral response.
A Keithley 2401 is used to bias the PIN junctions.
The RF signals are generated using an Anritsu MG3694C signal generator.
The signal is then coupled with the DC bias voltage using an Anritsu V251 bias-T.
For push-pull experiments, the RF signal is split in half using an Anritsu V241C power splitter and a phase delay is introduced on one arm using a Waka 02X0518-00 phase shifter.
ACP 50 GHz GSGSG RF probes are used to apply the DC and RF bias voltages to the travelling-wave electrodes.
Measurements are done at the quadrature point by tuning the laser wavelength.
\subsection{Eye diagrams experimental set-up}
The MZI modulators were differentially driven with combined $V_{RF}$ and $V_{DC}$ by using two high-voltage bias tees (SHF BT45R – HV100).
The high-speed signals were generated from an SHF bit pattern generator and amplified to 4 V$_{pp}$ on each arm for modulation rates below 50 Gbits/s and to 2 V$_{pp}$ for higher rates up to 100 Gbits/s.
NRZ signals are sent to the MZI modulators via 67 GHz GSGSG probes and terminated with DC blocks and 50 $\Omega$ resistors.
Measurements are done at the quadrature point.
Eye diagrams are displayed using the averaging function of the DCA to reduce optical noise from EDFA.
\section*{Funding}
EP/N013247/1, EP/T019697/1, UF150325
\section*{Acknowledgment}
The authors acknowledge CORNERSTONE team of University of Southampton for the device fabrication.
\mbox{J. Peltier} acknowledges Victor Turpaud for fruitful discussions, and Quentin Chateiller and Bruno Garbin for the development of the Python package Autolab used in his experiments.
This work was supported by funding from EPSRC Platform Grant (EP/N013247/1) and EPSRC Strategic Equipment Grant (EP/T019697/1).
D. J. Thomson acknowledges funding from the Royal Society for his University Research Fellowship (UF150325).
\section*{Disclosures}
The authors declare no conflicts of interest.
\section*{Data Availability}
Data underlying the results presented in this paper are available from the corresponding authors upon reasonable request.
\section*{Supplemental Document}
See Supplement 1 for supporting content.
\section{Static measurement}
Fig$.$ \ref{fig:suppl-DC_shift} shows the optical transfer function of the Mach-Zehnder interferometer (MZI) exhibiting the resonance wavelength $\lambda_r$, the free spectral range $FSR(\lambda_r)$ and the wavelength shift $\Delta \lambda_r$ \mbox{at 0 V} and \mbox{30 V} reverse DC bias.
\begin{figure}
\centering\includegraphics[width=0.5\linewidth]{figure_dc_shift_suppl.pdf}
\caption{Optical transmission of an unbalanced MZI for an applied reverse bias of 0 and 30 V.
\mbox{$\lambda_r$ is} the resonance wavelength, $FSR(\lambda_r)$ the free spectral range, and $\Delta \lambda_r$ the wavelength shift for a bias voltage variation from 0 V to 30 V.}
\label{fig:suppl-DC_shift}
\end{figure}
\section{Effective index and confinement factor}
The confinement factor of the TE mode needs to be considered to correctly estimate the experimental value of the $\chi^{(3)}$ coefficient.
The effective refractive index variation is defined as:
\begin{equation}
\Delta n_{eff} = \frac{2 n_{si}}{N} \iint_{wg}|E|^2 \Delta n dxdy
\label{eq:n_eff}
\end{equation}
with:
\begin{equation}
N = \frac{1}{c \epsilon_0} \iint_{\infty}(\textbf{E} \times \textbf{H}^*+\textbf{E}^*\times \textbf{H}).\hat{\textbf{z}}dxdy
\label{eq:N}
\end{equation}
The confinement factor can be generally defined as:
\begin{equation}
\Gamma = \frac{2 n_{si}}{N} \iint_{wg}|E|^2 dxdy
\label{eq:Gamma}
\end{equation}
resulting in an effective refractive index:
\begin{equation}
\Delta n_{eff} = \Gamma \frac{\iint_{wg}|E|^2 \Delta n dxdy}{\iint_{wg}|E|^2 dxdy}
\label{eq:n_neff_general}
\end{equation}
In the case of Kerr modulations, the refractive index becomes:
\begin{equation}
\Delta n_{eff} = \Gamma \frac{3\chi^{(3)}}{2n_{si}} \frac{\iint_{wg}|E|^2 F^2(t) dxdy}{\iint_{wg}|E|^2 dxdy}
\label{eq:n_neff_general2}
\end{equation}
Assuming a constant field $F(t)$ inside the waveguide, the effective refractive index modulation becomes:
\begin{equation}
\Delta n_{eff} = \Gamma \frac{3\chi^{(3)}}{2n_{si}} F^2(t)
\label{eq:n_neff_general3}
\end{equation}
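Writing the total field as $F(t) = F_{DC} + F_{RF} \cos \Omega t$ and expanding the square makes the origin of the two modulation components explicit:
\[ F^2(t) = F_{DC}^2 + \frac{F_{RF}^2}{2} + 2 F_{DC} F_{RF} \cos \Omega t + \frac{F_{RF}^2}{2} \cos 2\Omega t. \]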
Therefore, the effective modulation of the $\Omega$ and $2\Omega$ components of the spectral response can be written as:
\begin{equation}
\Delta n_{\Omega} = \Gamma \frac{3\chi^{(3)}}{n_{si}} F_{DC}F_{RF} \cos \Omega t+ \Delta n_{carrier}
\label{eq:n_neff_omega}
\end{equation}
\begin{equation}
\Delta n_{2\Omega} = \Gamma \frac{3\chi^{(3)}}{4n_{si}} F_{RF}^2 \cos 2\Omega t
\label{eq:n_neff_2omega}
\end{equation}
\section{Effective length and RF losses}
\label{seq:Leff_rfloss}
RF losses from our set-up (RF filter, cables, splitter, phase shifter and RF probes) are measured to accurately estimate the values of the RF amplitude $V_{RF}$ applied to the PIN junction. These losses are taken into account in the values of $V_{RF}$ given in Table S1 for the corresponding output power $P_{RF}$ displayed by the generator.
The RF signal loss at position $z$ along the line is calculated using:
\begin{equation}
F_{RF}(z) = F_{RF}(0) \exp(-\alpha_{RF} z)
\label{eq:F_rf_loss}
\end{equation}
The propagation loss of the RF line, \mbox{$\alpha_{RF}=4.3 \: dB.cm^{-1} = 50 \: m^{-1}$}, was extracted from RF transmissions at different RF line lengths.
These losses need to be taken into account in the phase shift equation to define the effective lengths.
The phase variation is given by:
\begin{equation}
\Delta \phi = \frac{2\pi}{\lambda} \int_0^L \Delta n dz
\label{eq:delta_phi_delta_n}
\end{equation}
The modulation index of the electric field-induced (EFI) linear electro-optic (EO) effect at the $\Omega$ spectral component is calculated using Eq$.$ (\ref{eq:n_neff_omega}) and Eq$.$ (\ref{eq:delta_phi_delta_n}):
\begin{equation}
m_k = \Gamma \frac{2\pi}{\lambda} \int_0^L \frac{3\chi^{(3)}}{n_{si}} F_{DC}F_{RF} \exp(-\alpha_{RF} z) dz
\label{eq:m_k_int}
\end{equation}
\begin{equation}
m_k = \Gamma \frac{2\pi}{\lambda} L_{eff,1} \frac{3\chi^{(3)}}{n_{si}} F_{DC}F_{RF}
\label{eq:m_k}
\end{equation}
The effective length for this $\Omega$ component is defined as:
\begin{equation}
L_{eff,1} = \frac{1-\exp(-\alpha_{RF} L)}{\alpha_{RF}}
\label{eq:L_eff1}
\end{equation}
The modulation index of the quadratic EO effect at the $2\Omega$ component is calculated using \mbox{Eq$.$ (\ref{eq:n_neff_2omega})} and \mbox{Eq$.$ (\ref{eq:delta_phi_delta_n})}:
\begin{equation}
m_{2\Omega} = \Gamma \frac{2\pi}{\lambda} \int_0^L \frac{3\chi^{(3)}}{4n_{si}} F_{RF}^2 \exp(-2\alpha_{RF} z) dz
\label{eq:m_2_int}
\end{equation}
\begin{equation}
m_{2\Omega} = \Gamma \frac{2\pi}{\lambda} L_{eff,2} \frac{3\chi^{(3)}}{4n_{si}} F_{RF}^2
\label{eq:m_2}
\end{equation}
The effective length for this $2\Omega$ component is defined as:
\begin{equation}
L_{eff,2} = \frac{1-\exp(-2\alpha_{RF} L)}{2\alpha_{RF}}
\label{eq:L_eff2}
\end{equation}
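As a numerical illustration, for the 5 mm long junction with \mbox{$\alpha_{RF} = 50 \: m^{-1}$}, these expressions give
\[ L_{eff,1} = \frac{1-e^{-0.25}}{50\:\mathrm{m}^{-1}} \approx 4.42\:\mathrm{mm}, \qquad L_{eff,2} = \frac{1-e^{-0.5}}{100\:\mathrm{m}^{-1}} \approx 3.93\:\mathrm{mm}, \]
both noticeably shorter than the physical length $L = 5$ mm.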
\section{Field inside the junction}
\label{seq:fields}
The DC electric field inside the PIN junction is estimated to be:
\begin{equation}
F_{DC} = \frac{V_{DC}+V_{bi}}{w}
\label{eq:F_DC}
\end{equation}
with the built-in voltage $V_{bi}$ being:
\begin{equation}
V_{bi} = \frac{kT}{q} \ln{ \left(\frac{N_A N_D}{n_i^2}\right)}
\label{eq:V_bi}
\end{equation}
and the intrinsic region $w$:
\begin{equation}
w = w_i + \sqrt{\frac{2\epsilon_0 \epsilon_{Si}}{e} \frac{N_A+N_D}{N_A N_D}} \sqrt{V_{bi} + V_{DC}}
\label{eq:w}
\end{equation}
For the PIN3 junction, the doping levels of Boron ($N_A$) in the P region and of Phosphorus ($N_D$) in the N region are \mbox{$N_A = N_D = 10^{20}$ cm$^{-3}$}.
The intrinsic region has a residual Boron doping level of $n_i = 10^{15}$ cm$^{-3}$, and $\epsilon_{Si}=11.9$ is the relative permittivity of silicon.
This results in $V_{bi}=0.6$ V and \mbox{$w=1050+5\sqrt{V_{DC}+0.6}$ nm.}
The RF field is estimated from the small signal approximation:
\begin{equation}
F_{RF} \approx \frac{dF_{DC}}{dV_{DC}}V_{RF}.
\label{eq:F_RF}
\end{equation}
Note that this approximation is particularly relevant for the PIN3 junction, even for high RF voltages, due to the small variation of the intrinsic width with the applied reverse DC bias \mbox{($dF_{DC}/dV_{DC} \approx 1/w_i$)}.
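As a minimal numerical check of these expressions (not the authors' script; $kT/q = 0.0259$ V at room temperature is assumed):
\begin{verbatim}
import numpy as np

# Minimal numerical check of the equations above (not the authors'
# script); kT/q = 0.0259 V at room temperature is assumed.
kT_q   = 0.0259                  # V
eps0   = 8.854e-12               # F/m
eps_si = 11.9
e      = 1.602e-19               # C
N_A = N_D = 1e26                 # 1e20 cm^-3 in m^-3
n_i    = 1e21                    # 1e15 cm^-3 in m^-3
w_i    = 1050e-9                 # m

V_bi = kT_q * np.log(N_A * N_D / n_i**2)    # ~0.6 V
coef = np.sqrt(2 * eps0 * eps_si / e * (N_A + N_D) / (N_A * N_D))
w_30V = w_i + coef * np.sqrt(V_bi + 30.0)   # width at V_DC = 30 V
print(V_bi, coef * 1e9)                     # 0.6 V and ~5.1 nm/sqrt(V)
\end{verbatim}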
\section{Determination of the EFI linear EO effect}
In the push-pull experiment, the amplitude of the EFI linear EO modulation (the $\Omega$ component) is estimated from the slope of the DC sweep using Eq$.$ (\ref{eq:n_neff_omega}):
\begin{equation}
\frac{d\Delta n_{\Omega}}{dF_{DC}} = \Gamma \frac{3\chi^{(3)}}{n_{si}} F_{RF} + \Gamma \frac{3\chi^{(3)}}{n_{si}} F_{DC} \frac{dF_{RF}}{dF_{DC}} + \frac{d\Delta n_{carrier}}{dF_{DC}}
\label{eq:n_neff_omega_slope}
\end{equation}
For the PIN3 junction, the variation of the RF field with the DC field is small, due to the small variation of the intrinsic region width with the applied DC bias, and can be neglected ($dF_{RF}/dF_{DC} \propto d^2F_{DC}/dV_{DC}^2 \approx 0$).
The carrier variation is however not negligible and is expected to be negative ($d\Delta n_{carrier}/dF_{DC} < 0$), resulting in an underestimation of the $\chi^{(3)}$ coefficient when using the slope of the measurement:
\begin{equation}
\chi^{(3)} \ge \frac{d\Delta n_{\Omega}} {dF_{DC}} \frac{n_{si} }{3 \Gamma F_{RF}}
\label{eq:n_neff_omega_slope_approx}
\end{equation}
\section{Fitting the spectral components}
\label{seq:fits}
The output DC optical power $P_{DC}$ of the MZI and the $\Omega$ and $2\Omega$ components of the spectral response are measured as a function of the wavelength for a fixed DC bias and RF modulation.
Their respective noise backgrounds are subtracted.
The expected components are:
\begin{equation}
\frac{P_{DC}}{P_0} = \frac{1}{2} [1 + \cos (\Delta \theta) J_0 (m_{\Omega}) J_0 (m_{2\Omega})]
\label{eq:dc_modulation}
\end{equation}
\begin{equation}
\frac{P_\Omega (t)}{P_0} = \left [-\cos (\Delta \theta) J_1(m_{\Omega}) J_1(m_{2\Omega}) + \sin (\Delta \theta) J_0(m_{2\Omega}) J_1 (m_{\Omega}) \right ] \cos \Omega t
\label{eq:omega_modulation}
\end{equation}
\begin{equation}
\frac{P_{2\Omega} (t)}{P_0} = \left \{ -\cos(\Delta \theta) [J_0 (m_{2\Omega}) J_2 (m_{\Omega}) - J_2(m_{\Omega}) J_2(m_{2\Omega})] + \sin (\Delta \theta) J_0 (m_{\Omega}) J_1 (m_{2\Omega}) \right \} \cos 2\Omega t
\label{eq:2omega_modulation}
\end{equation}
In the main article, these equations are approximated assuming a small modulation index $m_{2\Omega}$, resulting in $J_0(m_{2\Omega}) \approx 1$ and $J_1(m_{2\Omega}) \approx J_2(m_{2\Omega}) \approx 0$.
First, we fit the static phase variation $\Delta\theta$ of the DC curve using Eq$.$ (\ref{eq:dc_modulation}) to extract the period of the MZM, assuming no dynamic modulation.
This value is then used in Eq$.$ (\ref{eq:omega_modulation}) to fit the modulation index $m_{\Omega}$ of the $\Omega$ spectral component assuming \mbox{$m_{2\Omega}$ = 0}.
Then we find the modulation index $m_{2\Omega}$ of the $2\Omega$ spectral component by fitting Eq$.$ (\ref{eq:2omega_modulation}), using the previously found parameters as initial guesses.
Fig$.$ \ref{fig:suppl-three_junctions_components} shows one of the measurements done for each of the three studied junctions.
Only the PIN3 junction \mbox{(Fig$.$ \ref{fig:suppl-three_junctions_components}(c))} has a $2\Omega$ component whose modulation is higher than the MZM distortion, resulting in a $2\Omega$ component aligned with the $\Omega$ component.
For the two other junctions \mbox{(Fig$.$ \ref{fig:suppl-three_junctions_components}(a) and \ref{fig:suppl-three_junctions_components}(b))}, the $2\Omega$ component comes from the distortion and the $m_{2\Omega}$ coefficient cannot be extracted.
Moreover, their spectral components do not fit Eq$.$ (\ref{eq:2omega_modulation}) as well as those of the PIN3 junction.
One possible reason could be that, for those modulators, the carrier absorption introduces a chirp effect, which has not been taken into account in the analysis.
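A minimal Python sketch of this three-step procedure is given below; it fits synthetic data generated from the equations above, with placeholder values for the MZM period, wavelength offset and modulation indices (the synthetic DC curve neglects the small $J_0$ corrections):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0, j1, jv

def theta(wl, period, wl0):
    return 2 * np.pi * (wl - wl0) / period       # static MZI phase

def f_dc(wl, period, wl0):                       # DC component, m -> 0
    return 0.5 * (1 + np.cos(theta(wl, period, wl0)))

def f_w(wl, period, wl0, m_w):                   # Omega comp., m_2w -> 0
    return np.abs(np.sin(theta(wl, period, wl0)) * j1(m_w))

def f_2w(wl, period, wl0, m_w, m_2w):            # 2*Omega component
    t = theta(wl, period, wl0)
    return np.abs(-np.cos(t) * (j0(m_2w) * jv(2, m_w)
                                - jv(2, m_w) * jv(2, m_2w))
                  + np.sin(t) * j0(m_w) * j1(m_2w))

wl = np.linspace(1549e-9, 1551e-9, 400)          # synthetic sweep
P_dc = f_dc(wl, 0.8e-9, 1549.2e-9)
P_w  = f_w(wl, 0.8e-9, 1549.2e-9, 0.036)
P_2w = f_2w(wl, 0.8e-9, 1549.2e-9, 0.036, 1.5e-3)

# Step 1: period and offset from the DC curve (guesses read off the
# raw transmission). Step 2: m_w with m_2w = 0. Step 3: m_2w, using
# the previously found parameters as initial guesses.
(per, wl0), _ = curve_fit(f_dc, wl, P_dc, p0=[0.8e-9, 1549.2e-9])
(m_w,), _ = curve_fit(lambda x, m: f_w(x, per, wl0, m), wl, P_w,
                      p0=[0.01])
(m_2w,), _ = curve_fit(lambda x, m: f_2w(x, per, wl0, m_w, m), wl,
                       P_2w, p0=[1e-4])
print(m_w, m_2w)                                 # ~0.036, ~1.5e-3
\end{verbatim}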
\begin{figure}
\centering\includegraphics[width=\linewidth]{figure_three_junctions_suppl.pdf}
\caption{Measurements (dots) of $P_{DC}$ the MZM output DC optical power, $P_{\Omega}$ the modulation at angular frequency $\Omega$ and $P_{2\Omega}$ the modulation at angular frequency $2\Omega$ as a function of wavelength for
(a) the 2 mm long PN junction using $V_{DC}=2$ V, $V_{RF}=0.51$ V,
(b) the 5 mm long PIN2 junction using $V_{DC}=4$ V, $V_{RF}=1.6$ V,
(c) the 5 mm long PIN3 junction using $V_{DC}=6$ V, $V_{RF}=2.0$ V.
The dashed lines represent the corresponding fit.}
\label{fig:suppl-three_junctions_components}
\end{figure}
Table \ref{tab:experimental_results} compiles the fitted modulation indices $m_\Omega$ and $m_{2\Omega}$ for given reverse DC bias and RF voltages.
$P_{RF}$ is the output power of the RF generator, $V_{RF}$ is the RF amplitude estimated at the beginning of the traveling-wave electrode, and $F_{DC}$ and $F_{RF}$ are the corresponding DC and RF fields (see Section \ref{seq:fields}).
\begin{table}
\centering
\caption{\bf Experimental results for the 5 mm long PIN3 junction}
\begin{tabular}{cccccccc}
\hline
$V_{DC}$ (V) & $P_{RF}$ (dBm) & $V_{RF}$ (V) & $F_{DC}$ (V/m) & $F_{RF}$ (V/m) & $m_{\Omega}$ & $m_{2\Omega}$ & $m_k$ \\
\hline
0 & 5 & 0.29 & 5.5e+05 & 2.7e+05 & 0.0030 & 1.7e-05 & 0.00015 \\
1 & 10 & 0.51 & 1.5e+06 & 4.8e+05 & 0.0055 & 6.8e-05 & 0.00095 \\
2 & 16 & 0.99 & 2.4e+06 & 9.4e+05 & 0.013 & 0.00035 & 0.0041 \\
3 & 19 & 1.4 & 3.4e+06 & 1.3e+06 & 0.020 & 0.00075 & 0.0087 \\
4 & 22 & 2.0 & 4.3e+06 & 1.8e+06 & 0.030 & 0.0015 & 0.016 \\
5 & 24 & 2.4 & 5.3e+06 & 2.3e+06 & 0.041 & 0.0023 & 0.024 \\
6 & 15 & 0.89 & 6.2e+06 & 8.3e+05 & 0.017 & 0.00031 & 0.011 \\
6 & 18 & 1.2 & 6.2e+06 & 1.2e+06 & 0.024 & 0.00064 & 0.015 \\
6 & 22 & 2.0 & 6.2e+06 & 1.8e+06 & 0.036 & 0.0015 & 0.023 \\
6 & 25 & 2.7 & 6.2e+06 & 2.6e+06 & 0.050 & 0.0029 & 0.031 \\
10 & 15 & 0.89 & 9.9e+06 & 8.3e+05 & 0.021 & 0.00027 & 0.015 \\
15 & 15 & 0.89 & 1.5e+07 & 8.2e+05 & 0.030 & 0.00028 & 0.022 \\
\hline
\end{tabular}
\label{tab:experimental_results}
\end{table}
\section{Eye diagrams experiments}
Fig$.$ \ref{fig:suppl-eye-diagram} shows some of the eye diagrams used to plot the evolution of the ER and SNR with $V_{DC}$ in the main article, measured at 2, 10, 18 and 30 V reverse bias for a 10 Gbits/s modulation by applying dual 4 V$_{pp}$ DATA/$\overline{\textnormal{DATA}}$ driving in push-pull.
The displays use the averaging function of the DCA to reduce optical noise from EDFA.
\begin{figure*}
\centering\includegraphics[width=0.85\linewidth]{figure_eye_diagram_10Gbps_suppl.pdf}
\caption{Eye diagram respectively measured at 2/10/18/30 V reverse DC bias at 10 Gbits/s using 4 V$_{pp}$ on each arm corresponding to ER 1.5/2.4/3.3/3.7 and SNR 8.9/12.8/14.3/15.6.}
\label{fig:suppl-eye-diagram}
\end{figure*}
Fig$.$ \ref{fig:suppl-eye-diagram_32_40}(a) shows a data rate transfer of 32 Gbits/s and Fig$.$ \ref{fig:suppl-eye-diagram_32_40}(b) shows a maximum data rate transfer of 40 Gbits/s achieved with the 6 mm long PIN3 based MZI.
\begin{figure}
\centering\includegraphics[width=0.85\linewidth]{figure_eye_diagram_32_40_suppl.pdf}
\caption{Optical eye diagram of 6 mm long PIN3 modulator measured at a data rate of 32 and 40 Gbits/s using 4 V$_{pp}$ on each arm and reverse $V_{DC}=30$ V, with ER 2.7 dB and 2.3 dB, respectively.}
\label{fig:suppl-eye-diagram_32_40}
\end{figure}
\end{document} |
{
"arxiv_id": "2302.13622",
"language": "en",
"timestamp": "2023-02-28T02:25:37",
"url": "https://arxiv.org/abs/2302.13622",
"yymm": "2302"
} | \section*{Introduction}
The past nearly two decades have witnessed tremendous advancements in symmetry-protected topological phases and topological materials \cite{MT-SM-RMP-Kane,MT-SM-RMP-Qi,MT-RMP-Chiu,MT-SM-TCI-Ando,MT-SM-RMP-AV}, reshaping our fundamental understanding of the electronic properties of solids. In the well-established paradigm of studying topological phases in condensed matter, one usually utilizes symmetry to classify the protected topological phases. Very recently, topological quantum chemistry (TQC) \cite{MT-SM-TQC,MT-SM-M-TQC} and symmetry-indicators (SIs) \cite{MT-SM-SI,MT-SM-M-SI} have been applied to discover topological nonmagnetic \cite{MT-SM-N-1,MT-SM-N-2,MT-SM-N-3,MT-SM-AllTopo}/magnetic \cite{MT-SM-N-4}/superconducting \cite{MT-SM-Tang-TNSC} materials routinely and efficiently, using first-principles calculated high-symmetry point (HSP) symmetry-data with respect to the space groups/magnetic space groups (MSGs) of the pristine materials and revealing that band topology protected by MSG symmetry is ubiquitous. Following this conventional paradigm, finding or designing more symmetries beyond MSG symmetry, such as spin-space group symmetry \cite{MT-SSG}, dual symmetry \cite{MT-Dual-Nature} and gauge-field-induced projective symmetry \cite{MT-gauge}, is anticipated to unveil more topological phases, as an on-going research theme in the field of topological materials.
On the other hand, symmetry-breaking can also be utilized to create new topological phases. For example, a higher-order band topology is induced by applying strain to SnTe, breaking the mirror symmetry which originally protects a mirror-Chern topological insulator phase \cite{MT-SM-H-Topo-2}. The quantum anomalous Hall effect was realized by gapping the time-reversal-symmetry-protected Dirac fermion with magnetic dopants \cite{MT-RuiYu,MT-SM-QAH-Bi2Te3-film}. Unconventional fermions can emerge in crystals which break the Poincar$\acute{\mathrm{e}}$ symmetry \cite{MT-SM-NewFermions}, and so on. Hence, symmetry-breaking can introduce a rich variety of topological phases and can act as a versatile knob to control topological states, since it can be easily manipulated.
Furthermore, advanced techniques of ultrafast light and high magnetic fields have recently assisted experimentalists in stimulating pristine crystals to tune their topological states. The topological state can be drastically modified, leading to giant quantum responses which have been observed in realistic materials, such as the colossal angular magnetoresistance in ferrimagnetic Mn$_3$Si$_2$Te$_6$ \cite{MT-SM-Mn2Si3Te6-N, MT-SM-Mn2Si3Te6-N-2,MT-SM-Mn2Si3Te6-PRB} and the giant anomalous Hall conductivity in Heusler Co$_2$MnAl \cite{MT-SM-CoMnAl}, where the topological states are controlled by rotating the magnetic moments. Noticing that the magnetic structure can be manipulated by magnetic fields to lower the symmetry \cite{MT-tokura-review,MT-SM-Nature-Review-BAB}, here we focus on the magnetic materials listed in MAGNDATA \cite{MT-SM-magndata}, which have been experimentally synthesized and whose magnetic structures have already been deduced from neutron-scattering experiments, and aim at revealing all topological states induced by symmetry-breaking in these materials using SIs \cite{MT-SM-SI, MT-SM-Tang-NP, MT-SM-M-SI}. The results are expected to guide experimentalists in realizing more magnetic topological materials with unprecedentedly fascinating properties driven by the change of topological phase.
Note that although concrete symmetry-breakings cannot be exhausted, we can classify symmetry-breakings by the resulting subgroups, so that the various concrete symmetry-breakings corresponding to one subgroup are anticipated to share a common topological diagnosis, since only the transformation properties of the HSP wavefunctions matter \cite{MT-SM-SI,MT-SM-Tang-NP,MT-SM-M-SI}. In this work, we first construct three datasets based on the 1651 MSGs and their subgroups, suitable for incorporating symmetry-breaking into the conventional SI-based diagnostic scheme of band topology, and then apply them to 1267 magnetic materials in MAGNDATA \cite{MT-SM-magndata} combined with first-principles calculations. The three datasets are described along with the work flow below.
\section*{Work flow}
The work flow is schematically shown in Fig. 1A. The first step consists of first-principles calculations for the material structures in the database \cite{MT-SM-magndata}, and the second step considers the symmetry-breaking. These two steps utilize two datasets we constructed: dataset 1, named ``All\_t-subgroups'', and dataset 2, named ``All\_atomic\_insulator\_basis\_sets''. In dataset 1, we exhaust all translationengleiche subgroups (t-subgroups) for each of the 1651 MSGs, which characterize symmetry-breaking in a general sense. Here, t-subgroup means that the translation symmetry is still preserved and only the point group symmetry is broken. Rotating the magnetic moments of a magnetic material is one feasible way to realize t-subgroups, a strategy described in the SM using Mn$_3$Si$_2$Te$_6$ \cite{MT-SM-Mn2Si3Te6-N, MT-SM-Mn2Si3Te6-N-2,MT-SM-Mn2Si3Te6-PRB} as an example. The exhaustive list of all t-subgroups is expected to find applications in various fields, not limited to the topological materials considered here. For example, by dataset 1, we can find the supergroups of a given MSG. To realize a material with a small but nonvanishing band splitting, which could induce giant Berry curvatures in some MSGs, we can first target materials with a higher MSG symmetry (namely, a supergroup) that enforces band degeneracy (for which the band splitting vanishes) and then apply symmetry-breaking to materials crystallized in such a supergroup; this introduces a gap on the originally degenerate bands, proportional to the strength of the symmetry-breaking, which can be controlled. By dataset 1, all subgroups associated with a given translation symmetry-breaking can also be identified once the maximal subgroup for the translation symmetry-breaking is identified \cite{MT-FeGe,SM-FeGe-2} (see Supplementary Materials \cite{MT-seeSM} (SM) for details). It is also worth mentioning that, in materials databases such as the nonmagnetic Inorganic Crystal Structure Database (ICSD) \cite{MT-SM-icsd} and the magnetic database MAGNDATA \cite{MT-SM-magndata}, the positions of atoms per unit cell are given based on Wyckoff positions \cite{MT-SM-Bilbao} classified by the MSG.
More crystallographic symmetries might then appear for some material once the tolerance used to judge whether the structure is invariant under a candidate symmetry operation is relaxed. Hence, the identified MSG can be a supergroup of the original one. The symmetries in the complement of the original MSG within the supergroup can be regarded as one type of hidden symmetry, which has attracted intensive interest very recently \cite{MT-Dual-Nature,MT-SSG,MT-gauge}. The t-subgroups provided here can be applied in finding such hidden symmetries by directly testing the possible supergroups on the given structure.
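As a minimal illustration of such a test (with hypothetical data structures, not the actual interface of our datasets), one can check whether a candidate point operation, acting on fractional coordinates, permutes the atomic sites within a tolerance; relaxing the tolerance can then reveal approximate supergroup symmetries:
\begin{verbatim}
import numpy as np

# Minimal sketch (hypothetical interface): does a candidate point
# operation R, acting on fractional coordinates, permute the atomic
# sites of one species modulo lattice translations? For magnetic
# materials, moments transform in addition as axial vectors and flip
# under time reversal; this is omitted here for brevity.
def is_invariant(pos, R, tol=1e-3):
    new = (pos @ R.T) % 1.0
    d = np.abs(new[:, None, :] - pos[None, :, :])
    d = np.minimum(d, 1.0 - d)               # periodic distance
    return bool(np.all(d.max(axis=2).min(axis=1) < tol))

# Example: inversion maps the two sites at z = 0.25, 0.75 onto each other.
pos = np.array([[0.0, 0.0, 0.25], [0.0, 0.0, 0.75]])
print(is_invariant(pos, -np.eye(3)))         # True
\end{verbatim}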
After convergence is reached for some magnetic material in the first step, based on density-functional-theory first-principles calculations (also requiring that the resulting MSG reproduces that in MAGNDATA \cite{MT-SM-magndata}, called the parent MSG hereafter), the related HSP symmetry-data can be computed based on the little groups of the HSPs. All related data on the HSPs and the little groups are provided in dataset 2. From the calculated HSP symmetry-data for the parent MSG, the HSP symmetry-data with respect to each t-subgroup can be obtained using the compatibility relations between the HSPs of the t-subgroup and those of the parent MSG, provided in dataset 1. In this way, no additional realistic calculations considering symmetry-breaking need be performed. The topological classification for some t-subgroup can then be obtained from the HSP symmetry-data with respect to that t-subgroup \cite{MT-SM-SI,MT-SM-Tang-NP}, and is applicable to any concrete symmetry-breaking as long as it leads to the same t-subgroup. Note that we require that the HSP symmetry-data for the t-subgroup can be induced from those for the parent MSG, which is a reasonable approximation (see SM). In dataset 2, we also provide the atomic insulator basis set \cite{MT-SM-SI, MT-SM-Tang-NP, MT-SM-M-SI} of each MSG, classifying any material into one of three cases \cite{MT-SM-Tang-NP}, case I, II or III, of which case II/case III definitely corresponds to a topological material (see SM for details). Note that a t-subgroup is itself an MSG, and thus we need not construct a separate atomic insulator basis set for it as long as we fix the convention of the t-subgroup as that adopted in dataset 2.
In addition, to characterize the evolution of topological states, we identify all continuous paths for the t-subgroups. We assign an integer to each symmetry-breaking pattern (corresponding to a subgroup of the point group of the parent MSG, see SM for details), requiring that 0 represents the pattern in which all point operations of the parent MSG are preserved, while 1 represents the pattern in which only the identity operation ($E$) is preserved. A continuous path can then be written in the form
$0\rightarrow i\rightarrow j\rightarrow \ldots \rightarrow 1$ where, for each adjacent pair $i\rightarrow j$, the point operations of $i$ contain those of $j$ and no other symmetry-breaking pattern lies between $i$ and $j$. Note that each symmetry-breaking pattern uniquely corresponds to one t-subgroup, while several symmetry-breaking patterns might correspond to a given t-subgroup. We find that the maximal number of such continuous paths from 0 to 1 can be 8476. See an example in Fig. 1A for MSG 12.62 (in the Belov-Neronova-Smirnova notation \cite{MT-SM-BNS}), where all t-subgroups and the corresponding symmetry-breaking patterns are shown in the red circles of a tree-like plot. There are in total 3 continuous paths from 0 to 1:
\[0\rightarrow2\rightarrow1, 0\rightarrow3\rightarrow1, 0\rightarrow4\rightarrow1\] in terms of symmetry-breaking patterns, and
$12.62\rightarrow2.4\rightarrow1.1, 12.62\rightarrow5.15\rightarrow1.1, 12.62\rightarrow8.34\rightarrow1.1$ in terms of t-subgroups. As listed in dataset 1, the point operations for the symmetry-breaking patterns 0, 1, 2, 3 and 4 are
$\{E, I, \Theta C_{2z},\Theta I\}$, $\{E\}$, $\{E,I\}$,$\{E,\Theta I\}$ and $\{E,\Theta\sigma_z\}$ ($I$ is spatial inversion, $\Theta$ is time-reversal, $C_{2z}$ is 2-fold rotation around $z$-axis and $\sigma_z$ is a mirror operation $=IC_{2z}$), respectively. Once the topological states are identified for all t-subgroups, we can then obtain the evolution of topological states along each continuous path.
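The enumeration of these continuous paths amounts to listing all maximal chains in the tree of symmetry-breaking patterns; a minimal sketch for the MSG 12.62 example above (with the covering relations quoted in the text, as stored in dataset 1) reads:
\begin{verbatim}
# Sketch: enumerate the continuous paths 0 -> ... -> 1 as the maximal
# chains of the symmetry-breaking patterns; the covering relations for
# MSG 12.62 below are those quoted in the text.
cover = {0: [2, 3, 4], 2: [1], 3: [1], 4: [1], 1: []}

def paths(node, end=1):
    if node == end:
        return [[end]]
    return [[node] + rest
            for nxt in cover[node] for rest in paths(nxt, end)]

print(paths(0))   # [[0, 2, 1], [0, 3, 1], [0, 4, 1]]
\end{verbatim}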
The last step is devoted to the identification of all symmetry-enforced band crossings (BCs) in case III. Note that in earlier applications of SIs/TQC to the high-throughput discovery of topological materials \cite{MT-SM-N-1,MT-SM-N-2,MT-SM-N-3,MT-SM-AllTopo,MT-SM-N-4}, the characteristics of the enforced BCs were not all identified. Very recently, there has been much effort on the complete classification of all BCs based on $k\cdot p$ models \cite{MT-SM-Yao} or on combining $k\cdot p$ models and compatibility relations \cite{MT-SM-Tang-BN}, for the 230 space groups and the 1651 MSGs, respectively. Interestingly, the expansion order of the $k\cdot p$ model around a BC may have to be rather large (6 at most) \cite{MT-SM-Tang-BN} to capture the nodal structure required by the compatibility relations. Varying the expansion order then introduces a hierarchy of nodal structures around which tiny energy gaps could lead to large Berry curvatures, which has been verified experimentally in CoSi \cite{MT-SM-CoSi-quasi}. To this end, we provide dataset 3 (named ``Enforced\_band\_crossings''), where we list all possible enforced BCs for the 1651 MSGs, the nodal structures required by the compatibility relations and the $k\cdot p$ models, and how to detect the enforced BCs in high-symmetry lines (HSLs)/high-symmetry planes (HSPLs) from the HSP symmetry-data of the HSPs residing in those HSLs/HSPLs. Using dataset 3, the enforced BCs indicated by the HSP symmetry-data and their evolutions in the investigated magnetic materials are all detected in this work.
\begin{figure*}[!t]
\includegraphics[width=1\textwidth]{figure-1.pdf}\\
\caption{\textbf{Work flow of investigating the evolution of topological states.} (A) The first step is performing the first-principles calculations on each selected magnetic material, and then the HSP symmetry-data with respect to the parent MSG are computed based on dataset 2, when the converged magnetic structure fulfills the parent MSG. Then by dataset 1, all HSP symmetry-data for all t-subgroups can be obtained. Combining the HSP symmetry-data and atomic insulator basis set (shown in dataset 2) for each t-subgroup, we then quickly classify the material in the t-subgroup into case I, II or III \cite{MT-SM-Tang-NP}. For case III, we obtain all enforced BCs by scanning all HSPs/HSLs/HSPLs in the Brillouin zone by dataset 3.
(B) shows a case where case I can evolve to case II/III: the topmost level (indicated by the dashed ellipse) participating in the computation of the HSP symmetry-data might be protected by some hidden symmetry and then carries two identical two-dimensional irreps (represented by two gray dots) at some HSP. By symmetry-breaking, the two-dimensional irrep splits into two different one-dimensional irreps, represented by red and blue dots, respectively. Considering two filled bands, there is only one possibility for the HSP symmetry-data before symmetry-breaking (one gray dot). However, after symmetry-breaking, there are three possibilities for the occupied irreps. As shown in the two panels, the original phase in case I can evolve to case I and to case II/III, respectively. (C) demonstrates that a normal metal can be tuned into a Kramers-Weyl semimetal \cite{SM-KramersWeyl} by symmetry-breaking: in the normal metal, the energy bands are subject to a two-fold Kramers degeneracy by the combined symmetry of time-reversal and spatial inversion ($\Theta I$), with an odd-integer filling. Breaking the $\Theta I$ symmetry, the energy bands split and remain degenerate only at HSPs, where the Weyl points reside.}\label{figure-1} \end{figure*}
\begin{table*}
{\input{mat.txt}}
\caption{\textbf{The 236 magnetic topological materials predicted to be topological for at least 3 values of $U$ at the intrinsic filling.} ID denotes the ID or label in MAGNDATA \cite{MT-SM-magndata} and N denotes the number of t-subgroups with respect to which the corresponding material is topological for the different choices of values of $U$. For example, ID = 1.308 in the last column uniquely corresponds to the magnetic topological insulator MnBi$_2$Te$_4$ \cite{MT-SM-Nature-Review-BAB}.}\label{table-1}
\end{table*}
\begin{figure}[!t]
\includegraphics[width=0.5\textwidth]{figure-2.pdf}\\
\caption{\textbf{Statistics of identified magnetic topological materials in this work.} The filling is tuned away from the intrinsic filling by up to 4 electrons. For $|\Delta\nu|\le0, 1, 2, 3$ or $4$, four numbers are counted: two are for the pristine magnetic topological materials (for which only the parent MSGs are exploited) and two are related to all t-subgroups (including the parent MSGs).}\label{figure-2}
\end{figure}
\section*{High-throughput investigations on 1267 magnetic materials}
Next we discuss the high-throughput first-principles investigations of the magnetic materials listed in MAGNDATA \cite{MT-SM-magndata}, all synthesized experimentally and with measured magnetic structures. Of the 1721 magnetic materials in total which we collected at the time of initiating this work, we first filter out materials with incommensurate magnetic ordering or without a definite MSG; further requiring that the atoms fully occupy their sites, that the lattice parameters are compatible with the MSG and that the numbers of ions are compatible with the chemical formula, we finally obtain 1267 ``high-quality'' magnetic materials as the starting point for the subsequent first-principles calculations and topological classification following the work flow shown in Fig. 1A. The experimentally identified MSGs \cite{MT-SM-magndata} are set as the parent MSGs. The electron correlation is considered by choosing various values of Hubbard $U$, and thus we have 5883 jobs in total for the 1267 materials (one material with one value of $U$ is regarded as one job). Finally, 5062 jobs reach convergence in the first-principles calculations. However, the converged magnetic structure might break the original MSG symmetry. We are then left with 4012 jobs with successfully computed HSP symmetry-data, corresponding to 295 MSGs and 1013 magnetic materials. We list all these 295 MSGs in Table S18 of the SM, where the number of t-subgroups, the number of materials in each MSG and the number of jobs are also listed. For these jobs, we compute the representations of all energy levels at the HSPs of the parent MSG from the first-principles wavefunctions. In order to detect all topological bands around the Fermi level, we vary the electron filling as $\nu=\nu_0, \nu_0\pm1,\nu_0\pm2,\nu_0\pm3,\nu_0\pm4$ ($\nu_0$ is the intrinsic filling, i.e., the number of valence electrons per primitive unit cell). Tuning the filling might also lead to different topological phases, as has been realized in a topological phase-change transistor by electrostatic gating \cite{MT-transistor}. In total, for each filling, we collect 150882 sets of HSP symmetry-data considering all the 4012 jobs and all t-subgroups, all listed in the SM. These HSP symmetry-data are then exploited in all subsequent topological classifications, including the identification of all enforced BCs.
\section*{Magnetic topological materials statistics}
The statistics of the numbers of identified magnetic topological materials in this work are shown in Fig. 2; they increase monotonically with the filling window and finally almost saturate. Considering the intrinsic filling first, 392 magnetic materials are identified to be topological, belonging to case II or III, and 236 are predicted to be topological for at least 3 different values of Hubbard $U$; their IDs in MAGNDATA \cite{MT-SM-magndata} are listed in Table 1. The proportion of magnetic topological materials is consistent with that revealed in Ref. \cite{MT-SM-N-4}. The number of topological materials gradually increases to 945 for fillings satisfying $|\Delta\nu|\le4$ ($\Delta\nu=\nu-\nu_0$), indicating that more than 90\% of the investigated magnetic materials can be topological.
For all considered fillings, we verify that, when the topmost levels are fully occupied at all HSPs in the parent MSG, the evolution of topological states in all these materials obeys the intuitive picture: case I changes to case I, case II changes to case I/II, and case III changes to case I/II/III for any adjacent pair $i\rightarrow j$ in each continuous path of symmetry-breaking. To what extent the band topology can be preserved under symmetry-breaking could be meaningful in the optimization of topological materials, which might require breaking some symmetry. In Table 1, we thus show the number of t-subgroups in which the nontrivial topology is preserved for at least three values of $U$ for the listed magnetic materials. Interestingly, for other materials, it might occur that case I changes to case II/III and case II changes to case III: when the topmost level at some HSP is not fully occupied and contains more than one (co-)irrep (which might be caused by some hidden symmetry), one needs to choose one or several (co-)irreps to be occupied, while a lowered symmetry can allow different (co-)irreps to be chosen, as shown in Fig. 1B, leading to this counterintuitive result. In Fig. 1C, we also show the possibility that a normal-metal phase, where the filling is an odd number while the bands are subject to a Kramers degeneracy in the whole Brillouin zone, changes to a Kramers-Weyl semimetal phase \cite{SM-KramersWeyl} by symmetry-breaking. Considering all t-subgroups, there are 1003 magnetic materials which \textit{can} be topological at some filling with respect to some t-subgroup and for some value of $U$. We list all these 1003 materials in Tables S19-S94 of the SM, where the corresponding fillings for case II, case III and case II/III arising only under symmetry-breaking are listed for each of these materials. We also assemble the results for all the 5883 jobs in Tables S95-S287 of the SM, where the calculated magnetic moments, the detailed topological properties, and the plots of bands and density of states (DOS) can be found.
We manually select nine magnetic topological materials, whose electronic band plots are depicted in Fig. 3, and describe the effects of symmetry-breaking in these materials.
These materials own either a noticeable band gap or an enforced BC almost at the Fermi level. Considering the intrinsic filling, the materials in Fig. 3 are topological for at least four values of $U$, other than Sr$_2$TbIrO$_6$, for which only two jobs ($U = $0, 4 eV) reached convergence. The topological classifications for Sr$_2$TbIrO$_6$ with $U = 0$ and 4 eV are identical with respect to all t-subgroups (including the parent MSG): for t-subgroup MSG 2.4, SI $=$ $(0,0,0,2)\in\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_4$ \cite{MT-SM-M-SI}, while for the other t-subgroups the classification is case I. Note that different values of $U$ might correspond to different topological predictions (case II/III) in UAsS and Ba$_3$CoIr$_2$O$_9$. For the others, the different values of $U$ with a topological prediction give either case II or case III. By symmetry-breaking, the nontrivial band topology in these materials can be maintained, transformed or trivialized. For example, 21 of the 35 t-subgroups for UAsS ($U$ $=$ 0, 2, 4, 6 eV) give a case II/case III prediction, indicating the robustness of the nontrivial band topology against symmetry-breaking. For CaMnSi ($U$ $=$ 0, 1, 2, 3 eV), 5 of the 35 t-subgroups give case II while the rest give case I. For CsMnF$_4$ with $U = 0, 1, 2, 3$ eV, the topological classifications share the same results with respect to all t-subgroups. For this material, 8 of the 16 t-subgroups give a case III prediction and the rest give case I, and the enforced BCs include two pairs of opposite-charged isoenergetic Weyl points and one Weyl nodal line for the parent MSG. The Weyl points of each pair are related by inversion symmetry. By symmetry-breaking, the original Weyl points/nodal line can be gapped: for example, with respect to the t-subgroup MSG 6.18, the Weyl points are all gapped while the Weyl nodal line still exists; with respect to the t-subgroup 18.19, the original inversion-related Weyl points are allowed to have different energies.
In the following, we take HoB$_2$ and CeMn$_2$Si$_2$ as examples to show the detailed evolutions of topological states along the continuous paths.
\begin{figure*}[!t]
\includegraphics[width=1\textwidth]{figure-3.pdf}\\
\caption{\textbf{Electronic band plots of nine manually selected magnetic topological materials identified in this work.} In all the band plots in this work, the Fermi level is set to 0 eV. The chemical formula, the ID in MAGNDATA \cite{MT-SM-magndata}, the MSG and the Hubbard $U$ are all provided in the head of each band plot. At the intrinsic filling, CsMnF$_4$ in (B) and CeMn$_2$Si$_2$ are classified in case III, while the rest are in case II. Sr$_2$TbIrO$_6$, UAsS, CaMnSi, Ba$_3$CoIr$_2$O$_9$, EuMg$_2$Bi$_2$, Mn$_2$AlB$_2$ and HoB$_2$ own SI $=$ $(1)\in\mathbb{Z}_2,(0,1)\in\mathbb{Z}_2\times\mathbb{Z}_4,(1)\in\mathbb{Z}_2,$$(1,0)\in\mathbb{Z}_2\times\mathbb{Z}_2,(1)\in\mathbb{Z}_2,(1)\in\mathbb{Z}_2,(0,0,2)\in\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_4$, respectively. For CsMnF$_4$, the enforced BCs appear in $(-w,0,0),(-w,0,\frac{1}{2}),(0,w,0), (0,w,\frac{1}{2}),(u,0,v)$. For CeMn$_2$Si$_2$, the enforced BCs appear in $(0,w,\frac{1}{2}),(w,w,\frac{1}{2}),(u,v,\frac{1}{2})$.
The effects of symmetry-breaking on the topological properties are described in the main text. The topological properties with respect to all t-subgroups (including the parent MSG) can be found in the SM.}\label{figure-3}
\end{figure*}
\begin{figure*}[!t]
\includegraphics[width=1\textwidth]{figure-4.pdf}\\
\caption{\textbf{Evolution of topological states in HoB$_2$ and CeMn$_2$Si$_2$.} The upper panel of (A) shows the band structure, in which all bands are nondegenerate, and the plot of DOS. The continuous gaps corresponding to the intrinsic filling $\nu_0$ and the fillings $\nu_0-1,\nu_0+1,\nu_0+2,\nu_0+3$ are indicated by the regions in yellow, cyan, orange, green and magenta, respectively. The lower panel of (A) shows the tree-like plots of the topological state evolution corresponding to the gaps indicated by the colors shown in the insets. The symmetry-breaking patterns are shown: 0, 1, 2, 3 and 4 correspond to the t-subgroups 12.62, 1.1, 2.4, 5.15 and 8.34, respectively.
Note that the peaks in the DOS plot correspond to the two bands (the band indices are $\nu_0+2$ and $\nu_0+3$) indicated by the white double-arrow and such quasi-flat bands are topological (with SI $=$ $(1,1,3)$).
(B) shows the band structures along two HSLs, $(0,w,1/2)$ and $(w,w,1/2)$, where the orange and blue colors encode two different co-irreps of the corresponding HSLs. Various BCs composed of these two co-irreps can be found, and the BCs indicated by the red arrows are deduced from the HSP symmetry-data: along these two HSLs lie the HSPs Z $(0,0,\frac{1}{2})$-R $(0,\frac{1}{2},\frac{1}{2})$-Z $(0,1,\frac{1}{2})$, and Z $(0,0,\frac{1}{2})$-A $(-\frac{1}{2},-\frac{1}{2},\frac{1}{2})$-Z $(-1,-1,\frac{1}{2})$, respectively. Note that in each of these HSLs, there exists another BC related to the one indicated by the red arrow by an MSG operation.
(C) shows the tree-like plot for the evolution of the original Dirac nodal lines in CeMn$_2$Si$_2$. In each circle, the integer denotes the symmetry-breaking pattern, II stands for case II, and WNL/DNL stands for a Weyl nodal line/Dirac nodal line in case III.
}\label{figure-4}
\end{figure*}
\section*{Materials example: HoB$_2$}
We first show an example of a topological magnetic material classified in case II, the rare-earth diboride HoB$_2$, whose ID in MAGNDATA \cite{MT-SM-magndata} is 0.616. HoB$_2$ in the paramagnetic state crystallizes in a hexagonal lattice (space group: $P6/mmm$) with a very simple crystal structure containing only one chemical formula unit per primitive unit cell, while the ferromagnetic state breaks the three-fold rotation symmetry, resulting in MSG 12.62; the material is reported to display a gigantic magnetocaloric effect \cite{MT-HoB2,MT-HoB2-2}.
For MSG 12.62, all energy bands should be nondegenerate, so BCs are not allowed to exist stably at the special $k$ points, as shown in dataset 3, while it owns a nontrivial SI group \cite{MT-SM-M-SI}, $\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_4$. Indeed, for all t-subgroups, HoB$_2$ can only be case I or case II, as shown for different fillings in the lower panel of Fig. 4A ($\nu=\nu_0,\nu_0-1,\nu_0+1,\nu_0+2$ and $\nu_0+3$), which characterizes the evolution of topological states by tree-like plots. The fillings correspond to the gaps shown in the upper panel of Fig. 4A, indicated by different colors. It is interesting to note that there are considerable gaps for the filling $\nu_0+1$ (the region in orange in the upper panel of Fig. 4A), whose nontrivial topology is expected to bring about noticeable protected boundary states. Such gaps share the same evolution of topological states as those for the other three fillings, as shown in the lower panel of Fig. 4A, where the colors shown in the inset correspond to the gaps in the respective colors. Furthermore, the quasi-flat bands (whose band indices are $\nu_0+2$ and $\nu_0+3$) are found to be topological by comparing the SIs in the two tree-like plots of the lower panel of Fig. 4A. The coexistence of magnetic order, topological quasi-flat bands and multiple topological gaps around the Fermi level makes this material a fascinating platform to study possible exotic quantum excitations.
\section*{Materials example: CeMn$_2$Si$_2$}
We then show another example, CeMn$_2$Si$_2$, whose ID in MAGNDATA is 1.490 \cite{MT-SM-magndata} and whose MSG is 126.386. The Ce ion is found to own vanishing magnetism, while the magnitude of the magnetic moments of all Mn ions is 1.9 $\mu$B experimentally \cite{MT-CeMn2Si2}. We chose the results of the calculations in which the values of $U$ on the Ce and Mn ions are both $0$ eV, for which the calculated magnetic moments reasonably reproduce the experimental ones.
The results for the other values of $U$ share the topological properties discussed here. Considering the intrinsic filling, in the parent MSG, CeMn$_2$Si$_2$ is predicted to be in case III; furthermore, the HSP symmetry-data guarantee, by dataset 3, that enforced BCs appear in the HSLs $(0,w,\frac{1}{2})$ and $(w,w,\frac{1}{2})$, as shown in Fig. 4B, where these BCs are also verified from the representations at all $k$ points of a dense $k$ mesh in the HSLs. It is also found, by dataset 3, that the BCs in these HSLs actually lie on a nodal line in an HSPL ($(u,v,\frac{1}{2})$) with degeneracy 4, denoted a Dirac nodal line. Next consider the effect of symmetry-breaking. These Dirac nodal lines can be gapped in some symmetry-breaking patterns, resulting in a possibly-trivial insulator (in case I) or a topological phase in case II. In the other symmetry-breaking patterns, the resulting topological phases are all in case III, and furthermore, the guaranteed BC lies on either a Dirac nodal line or a Weyl nodal line (whose degeneracy is two). We show the evolution of topological states for this material in Fig. 4C. It is worth mentioning that the tree-like plot in Fig. 4C only contains the symmetry-breaking patterns resulting in case II/III; the symmetry-breaking patterns following those at the end of the tree-like plot (15 and 57) all result in case I, a possibly trivial phase.
\section*{Conclusion and perspective}
To conclude, a complete dataset of all t-subgroups for the 1651 MSGs and of the compatibility relations between the HSPs of each pair of parent MSG and t-subgroup was constructed. The dataset can also be applied in studying the symmetry-breaking of superconducting pairings belonging to higher-dimensional irreps of point groups \cite{MT-SM-Ono-PRR,MT-SM-Tang-TNSC}. The 1651 atomic insulator basis sets were also provided, which can be applied to any magnetic/nonmagnetic material whose MSG has been identified experimentally \cite{MT-SM-magndata} or theoretically \cite{MT-SA-TMO}. The HSPs, their little groups and the characters of the (co-)irreps were explicitly provided for the 1651 MSGs in both the Bradley-Cracknell \cite{MT-SM-BC} and Bilbao \cite{MT-SM-Bilbao} conventions, two frequently-adopted conventions. The topological classification results for more than one thousand magnetic materials obtained in this work can guide experimentalists in choosing materials of interest and the way of breaking symmetry, which might pave the way to the realization of highly-sensitive magnetism-controlled band topology and of topological magnets with high operational temperature. They can also be used as a training dataset to develop a simple-to-use heuristic chemical rule for diagnosing band topology, such as the topogivity \cite{MT-ML}, by machine learning. The enforced BCs were identified from the HSP symmetry-data, already producing a large number of BCs with various topological characters: we have collected 179 and 2006 independent BCs at HSPs and lying in HSLs, respectively, considering the intrinsic fillings. It should be noted that, to identify all BCs, a poor man's strategy has to be adopted: a dense $k$-point mesh should be chosen in the HSLs/HSPLs and the symmetry-data at all the $k$ points in the mesh should be evaluated, which is left to future work. Note that the topological classifications for the t-subgroups here are independent of the origin of the symmetry-breaking, so that the results are applicable widely in a general sense. Since we identified all continuous paths of t-subgroups under symmetry-breaking, the corresponding evolution of topological states along a continuous path guarantees no omission of topological states in any concrete process of symmetry-breaking. Inversely, the evolution of topological states might indicate the type of symmetry-breaking. Other than being useful in guiding the control of topological states, the results are expected to aid in assessing the robustness of nontrivial band topology against symmetry-breaking, which could be useful in the optimization of topological materials or in transport measurements which require the application of external stimuli.
The correlation of magnetism and nontrivial band topology with other orderings, such as ferroelectric order \cite{MT-ferro-physics}, and with novel crystal structures (e.g. the Kagom$\acute{\mathrm{e}}$ lattice) merits future study, by choosing suitable candidates from our predicted magnetic topological materials. Lastly, a more generic symmetry-breaking could include translation symmetry-breaking, which enlarges the primitive unit cell and leads to Brillouin zone folding (e.g. a charge-density wave order \cite{MT-FeGe, SM-FeGe-2}). Such breakings cannot be listed exhaustively, but one can still apply the t-subgroups obtained here after first identifying the subgroup(s) obtained by considering only the translation symmetry-breaking, the details of which are described in the SM. Another line of further work based on the exhaustively listed t-subgroups is to explore the splitting of each Wyckoff position of each MSG in its t-subgroups, which might guide the design of approximate MSG symmetry simply through the atomic positions, whenever the Wyckoff position splitting can be approximately ignored. The t-subgroups of the 1651 MSGs could also be utilized to tabulate novel groups beyond the MSGs via the homomorphism theorem. Besides, one can use the t-subgroups (or supergroups) of the MSGs for the Kagom$\acute{\mathrm{e}}$ lattice, the Lieb lattice, or other interesting lattices showing exotic physical properties to find material realizations \cite{MT-flatband}.
\section*{Acknowledgments}
We are very grateful for earlier collaborations on related topics with Ashvin Vishwanath, Hoi Chun Po, Haruki Watanabe and Seishiro Ono. F.T. appreciates insightful discussions with Wei Chen, Qun-Li Lei, Kai Li and Yang-Yang Lv. We also thank Ge Yao for very helpful suggestions on high-performance computing. \textbf{Funding: }F.T. was supported by National Natural Science Foundation of China (NSFC) under Grant No. 12104215 and the Young Elite Scientists Sponsorship Program by China Association for Science and Technology. F.T. and X.W. were supported by NSFC Grants No. 12188101, 11834006, 51721001, and 11790311, and the excellent program at Nanjing University. X.W. also acknowledges the support from the Tencent Foundation through the XPLORER PRIZE. \textbf{Author contributions:} F.T. conceived and designed the project, and performed all calculations. All authors contributed to the writing and editing of the manuscript. \textbf{Competing interests:} None declared. \textbf{Data and materials availability:} All data are available in the manuscript or the supplementary materials. The three datasets constructed in this work can be found at: \url{https://box.nju.edu.cn/published/three-datasets/}.
\section{Introduction}
\label{sec:intro}
The perturbative description of gauge theory quantities in terms of a coupling constant is of central
importance to our understanding of such theories, and thereby of our
ability to use them for phenomenology. In fact, a full description of these quantities should include non-perturbative contributions, usually in the form of power corrections, themselves multiplied by associated perturbative series in the coupling constant.
The direct calculation of such non-perturbative corrections is in many cases
very challenging, and often not even fully understood.
Access to non-perturbative sectors can be gained via the perturbative
series itself. Since this series is asymptotic, its sum must be properly defined.
The intrinsic ambiguities of such a definition take the form of
non-perturbative power corrections. This interesting aspect of
perturbative series was first explored in \cite{PhysRevD.10.3235,
Lautrup:1977hs} and in particular in \cite{tHooft:1977xjm}, where the
factorial growth of the perturbative series due to fermion
loop diagrams was discussed, and termed {\em renormalon divergence}.
Excellent reviews on renormalons are \cite{Beneke:1998ui,Shifman:2013uka}.
In this paper, we shall explore such aspects for the Adler function. To learn more about its non-perturbative sectors, we apply the mathematical techniques of resurgence to this function.
The Adler function $D(Q^2)$ is defined as the logarithmic derivative with respect to $Q^2$ of the
vacuum polarization $\Pi(Q^2)$ in a gauge theory:
\begin{equation}
\label{eq:adlerintro}
D(Q^2) = 4\pi^2Q^2\td{\Pi(Q^2)}{Q^2}\,,
\end{equation}
where $\Pi(Q^2)$ is related to the correlator of two vector currents
$j^\mu=\bar\psi \gamma^\mu \psi$,
\begin{equation}
\label{eq:12}
i \int d^4x \, e^{i q x} \,\langle 0 \vert T \,\big\{ j^\mu(x)
j^\nu(0)\big\} \vert 0 \rangle = (q^\mu q^\nu - q^2) \Pi(Q^2)\,, \qquad Q^2 = -q^2\,.
\end{equation}
The Adler function has long featured as an object to study asymptotic behaviour
of perturbative series and renormalons, see e.g. \cite{Beneke:1992ch, Broadhurst:1992si, Neubert:1994vb}. This is because, as a
self-energy, it depends on only one scale, and because its
imaginary part, for timelike $Q^2$, is directly related to the $R$-ratio, i.e.\ the ratio of the inclusive $e^+ e^-$ cross sections into hadrons and into muons.
There has been much earlier work on renormalon contributions to the Adler function \cite{Beneke:1992ch,Beneke:1993ee,Broadhurst:1992si,Neubert:1994vb,Vainshtein:1994ff,Ball:1995ni,Beneke:1995qq,Peris:1997dq}, as well as their connection to its operator product expansion \cite{Parisi:1978az,Mueller:1984vh}. The Adler function appears in more recent renormalon studies \cite{Cvetic:2018qxs,Ayala:2021mwc,Ayala:2022cxo,Boito:2018rwt,Caprini:2020lff}, including a study of whether field theories can have renormalons beyond the usual first Borel plane \cite{Cavalcanti:2020osb}. The Adler function has recently been studied using resurgence techniques \cite{Maiezza:2021mry,Maiezza:2021bed}, based on renormalization group considerations \cite{Maiezza:2019dht,Bersini:2019axn}.
In this paper we dive deeper into the method of resurgence, a set of mathematical techniques already introduced by J.~Écalle in the 1980s \cite{ecalle1985fonctions}. It has become a popular technique for the study of non-perturbative effects in quantum field theories and string theory over the past decade or so. Good reviews from a physics point of view are e.g.\ \cite{Aniceto:2018bis, Marino:2012zq, Dorigoni:2014hea}, as well as the review sections of \cite{Aniceto:2011nu}. For a nice mathematical introduction, see \cite{Sauzin:2014}. The application of resurgence to renormalon physics was studied e.g.\ in \cite{Shifman:2022xsa} and in a series of papers starting with \cite{Marino:2019eym} and summarized in \cite{Reis:2022tni}.
Resurgence provides tools to systematically decode non-perturbative information directly from the perturbative data. The techniques usually require a substantial number of perturbative terms (of order 10 at least, and for precision numerics of order 100), which explains why resurgence has not been widely applied to particle physics and phenomenology yet. Let us note that more broadly in quantum field theory, resurgence techniques have been introduced in e.g. \cite{Dunne:2012ae, Dunne:2013ada, Krichever:2020tgp, Borinsky:2022knn} and follow-up works by the same authors. The time seems ripe for application to more phenomenologically relevant quantities, since for many observables in particle physics a substantial number of perturbative terms can be computed.
The Adler function is a good starting point. In fact, we shall be able to extract new non-perturbative results for the Adler function at $\ord{1/N_f}$ and $\ord{1/N^2_f}$, where $N_f$ is the number of flavours in the fermion loop diagram. In the resurgence literature the description of both perturbative and non-perturbative contributions is conveniently summarized in a {\em transseries} (see e.g.\ \cite{edgar2010transseries} for an introduction to the concept) and here we will do the same.
To obtain our perturbative data, we compute two- and three-loop skeleton diagrams for the Adler function for gauge theory, using the gauge field propagator with fermion loop insertions summed in Borel space. For the two-loop case ($\ord{1/N_f}$) we find, using our resurgence techniques, the complete transseries of the Adler function due to renormalon contributions, including subleading effects, which reproduces and extends earlier results in \cite{Beneke:1992ch,Broadhurst:1992si,Neubert:1994vb,Maiezza:2021mry}. Subsequently we develop a powerful convolution method that facilitates the calculation of renormalon contributions due to adding new fermion-loop-summed propagators.
We test our findings in calculations of three-loop skeleton diagrams ($\ord{1/N^2_f}$), where we combine the decomposition into master integrals with the convolution method and very high order expansions in the Borel plane. We find asymptotic series in the non-perturbative sectors, and operators that establish relations between these sectors. Moreover, we find a new logarithmic type of non-perturbative power correction in the coupling constant plane. When we put all ingredients together, an intricate transseries structure emerges. Our results derive from thorough analytical and numerical analyses, which we describe extensively in the later sections.
\bigskip
\noindent Before we describe the organization of the paper in more detail, we note that it addresses two communities: the high energy physics community interested in renormalons, and the mathematical physics community interested in resurgence. Our hope is to interest the former community in the latter topic and vice versa. For this reason, we have tried to make the paper relatively self-contained on both topics.
Thus, the paper starts with introductory sections on renormalons and resurgence. \sec{renormalons} covers the basics of renormalons, in particular as they occur in QED and QCD. After introducing the technique of Borel summation that is central to all of our studies, we explain the difference between UV and IR renormalons and see how they originate in the so-called bubble diagrams. \sec{resurgence} then contains an introduction to the topic of resurgence. We do not aim to be exhaustive, but introduce the concepts that are relevant for us, such as alien calculus, transseries, alien chains/lattices and large order relations, and refer to the literature mentioned above for further details. We end the section by highlighting in which ways the resurgent structures appearing in this paper go beyond the simplest toy models and general lore one usually encounters.
We then turn to the Adler function. \sec{adlerLO} discusses how renormalons are usually studied for the Adler function: by introducing a single bubble chain in the Feynman diagrams. This section contains many known results, but we try to present these in a way that is most suitable for a resurgent analysis. We start with a brief introduction on the flavour expansion in QED/QCD, formalizing the bubble diagrams already discussed in \sec{renormalons} and showing a framework in which these bubble diagrams can be ordered. We give the exact expression of the Adler function at leading order in the flavour expansion. Subsequently we perform the resurgence analysis of the Adler function at this order, showing how resurgence and large order relations can be used to extract all non-perturbative sectors in the corresponding transseries. We finish this section with a discussion on some of the more subtle issues encountered at this order.
Our aim is then to explore higher orders in the flavour expansion and investigate which aspects of the leading order analysis persist, and which ones may even show a richer structure. Before doing computations at next-to-leading order, we discuss in \sec{convoInt} the convolution integral that is an important ingredient in the calculation at higher orders. Building on toy models, we explore the resurgent structure that arises from these convolution integrals in the computations for Adler function Feynman diagrams with several bubble chains. While many of the mathematical results in this section may be familiar to the resurgence community, our application and interpretation of these results sheds new light on them.
In \sec{adlerNLO} we then compute a set of planar diagrams present at next to leading order in the flavour expansion of the Adler function. The degree of difficulty here varies strongly from diagram to diagram; we are able to obtain results for some diagrams as arbitrary order expansions and for one as an expansion to a certain finite order. Despite only having partial data, we can investigate much of the resurgent structure that occurs at order $1/N_f^2$. We discuss the non-perturbative sectors for individual diagrams as well as the alien lattice structure, which is considerably richer than at order $1/N_f$.
We summarize our findings in \sec{conclusion} and discuss some open questions. In four appendices, we provide background on the calculation of the diagrams and give details on the numerical methods used in the main text.
\section{Renormalons}
\label{sec:renormalons}
In gauge theories, and more generally in quantum physics, perturbatively expanded observables $F = \sum_n c_n \, \alpha^{n+1}$ often have coefficients that grow as $c_n \sim n!$. This causes the perturbative series to be asymptotic, at first decreasing with increasing order, but then succumbing to the factorial growth. In particular, the series has zero radius of convergence irrespective of the size of $\alpha$. The factorial growth of the coefficients often indicates the presence of effects that are non-analytic at $\alpha = 0$, generally of the form $e^{-A/\alpha}$. One well-known source of such effects is the occurrence of instantons in the theory. The occurrence of an instanton is associated to the fact that the number of Feynman diagrams that one can draw at order $\alpha^n$ itself grows like $n!$ \cite{Lipatov:1976ny, tHooft:1977xjm, LeGuillou:1990nq}.
However, there is a second type of effect that causes perturbative coefficients to grow as $c_n \sim n!$: {\em renormalons}. Renormalon divergences \cite{tHooft:1977xjm} occur when individual diagrams in the Feynman expansion at order $\alpha^n$ are of size $n!$. Like instantons, these effects lead to singularities of the Borel transform (to be introduced shortly), but in this case their cause is the large or small loop momentum behaviour of these particular diagrams \cite{Beneke:1998ui}. Often, these diagrams contain chains of fermion bubbles, and as we shall review below, this is also the case for the Adler function.
To an asymptotic perturbative series, whether its divergence comes from instantons or from renormalons, a
finite value can nevertheless be assigned, even for non-zero $\alpha$. The simplest method to do so is by summing the series up to its smallest term. However, any assignment leaves ambiguities,
which will in fact be the main occupation of this paper. Many quantum field theories exhibit
renormalon behavior, including gauge theories in four dimensions, and their associated ambiguities can act as a portal to non-perturbative information -- see e.g.\ \cite{Beneke:1998ui,Shifman:2013uka} for reviews.
\subsection{Borel summation}
\label{sec:borelSummation}
An alternative way of assigning a finite value to an asymptotic series
involves the Borel transform. Effectively, it amounts to dividing the $n$th coefficient by $n!$:
\begin{equation}
\label{eq:2.1}
F=\sum_nc_n\alpha^{n+1}
\quad\to\quad
\mathcal{B}[F](t)=\sum_n \frac{c_n}{n!} t^n,
\end{equation}
where $t$ is the Borel transformation parameter conjugate to $\alpha$,
and where we have included an extra power of $\alpha$ on the left for
later convenience. For asymptotic series whose coefficients grow factorially, the additional factorial in the denominator leads to a finite radius of convergence. In this case
one may endeavour to sum the Borel transformed series, and after that
invert the transform to obtain a ``Borel-summed" expression for the
original series. To make the inversion precise one traditionally defines the Borel
sum as follows
\begin{equation}
\label{eq:borelResummation}
\mathcal{S}_0[F](\alpha)
= \int_0^\infty dt\, \mathcal{B}[F](t)\, e^{-\frac{t}{\alpha}}.
\end{equation}
Instead of integrating along the positive real axis, one can integrate in a different direction $\theta$ in the complex $t$-plane for which the integral converges.
This leads to a Borel sum $\mathcal{S}_\theta[F]$ where the integration in \eq{borelResummation} is now from $0$ to $e^{\mathrm{i}\theta}\infty$. We will need such more general Borel transforms in what follows.
It might appear as if the process of Borel transforming and then Borel summing is a tautology, since the integral reinstates the factorial growth, e.g.
\begin{equation}
\int_0^\infty dt\, t^n\, e^{-\frac{t}{\alpha}}
= \Gamma(n+1)\alpha^{n+1}.
\end{equation}
However, one can now first perform the infinite sum $\mathcal{B}[F]$,
analytically continue in the complex $t$ plane and then perform the integral in
\eq{borelResummation}, thus effectively interchanging integration and
summation. One can show that this leads to a finite function $\mathcal{S}_0[F]$
\cite{Borel:1899}.
The Borel transform $\mathcal{B}[F]$ can still have singularities in the
complex $t$ plane, which are in fact related to the factorial
growth. A simple example illustrates this point. In \eq{2.1} let us
choose $c_n=n!$. In that case we have
\begin{equation}
\label{eq:irexample}
c_n = n! \,, \quad \to \quad \mathcal{B}[F] = \frac{1}{1-t}\,,
\end{equation}
leading to a pole at $t=1$. As another example, consider
\begin{equation}
\label{eq:4}
c_n = (n-1)!\,,
\quad\to\quad
\mathcal{B}[F](t)=\sum_{n=1}^\infty \frac1n t^n = -\log(1-t)\,,
\end{equation}
which leads to a branch cut running from 1 to $\infty$.
In the case that these singularities lie on the integration contour
from $0$ to $\infty$ in \eq{borelResummation} this leads to an
ambiguity, as one can deform the contour such
that one integrates either slightly above the singularity under an angle $\theta=0^+$, or below the singularity under an angle $\theta=0^-$, see the red curve in \fig{lasso}.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{figures/lasso_argument.pdf}
\caption{Deformations of the Borel integration contour.}
\label{fig:lasso}
\end{figure}
For the example \eq{irexample}, deciding to integrate either above or below the singularity leads to an ambiguity
\begin{equation}
\label{eq:3}
\mathcal{S}_{0^+}[F]-\mathcal{S}_{0^-}[F] = 2\pi\mathrm{i}\, e^{-\frac1\alpha}.
\end{equation}
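As an aside, this ambiguity is easy to verify numerically: one evaluates \eq{borelResummation} along rays slightly above and below the positive real axis and compares the difference with $2\pi\mathrm{i}\,e^{-\frac{1}{\alpha}}$. The minimal Python sketch below does this for $\mathcal{B}[F](t)=1/(1-t)$; the coupling $\alpha=0.3$ and the ray angles $\theta=\pm0.3$ are arbitrary illustrative choices of ours, and since the only singularity sits at $t=1$ the difference is independent of these angles.
\begin{verbatim}
# Lateral Borel sums of B[F](t) = 1/(1-t) along rays t = r e^{i theta};
# their difference should reproduce 2*pi*i*exp(-1/alpha) from the text.
import numpy as np
from scipy.integrate import quad

alpha = 0.3   # arbitrary illustrative value

def lateral_sum(theta):
    def integrand(r):  # integrand along the rotated ray, incl. Jacobian
        t = r * np.exp(1j * theta)
        return np.exp(1j * theta) * np.exp(-t / alpha) / (1.0 - t)
    re, _ = quad(lambda r: integrand(r).real, 0.0, np.inf)
    im, _ = quad(lambda r: integrand(r).imag, 0.0, np.inf)
    return re + 1j * im

print(lateral_sum(+0.3) - lateral_sum(-0.3))  # ~ 0.2242j
print(2j * np.pi * np.exp(-1.0 / alpha))      # 2*pi*i*e^{-1/alpha}
\end{verbatim}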
Other singularities in the complex $t$ plane will influence the
integral in \eq{borelResummation}; the contour may be chosen in many ways, for example as the
blue curve in \fig{lasso}, adding sensitivity to singularities on the
negative Borel axis to the Borel-summed result. Notice the non-perturbative nature of ambiguities like the one in \eqref{eq:3}.
For QED and QCD the type of (renormalon)
singularity in the Borel plane depends on the loop-momentum regions, specifically the
ultraviolet (UV) and infrared (IR) ones. We review this in the next subsection.
\subsection{UV and IR renormalons}
\label{sec:uvirrenormalons}
A class of diagrams that by itself leads to factorial coefficient growth, and that is typical for renormalons, is that of bubble-chain diagrams, in which $n$ fermionic self-energies
are inserted in the photon or gluon propagator, see
\fig{vacpol}. To see the occurrence of the factorial growth, consider its constituent element, the
one-loop vacuum-polarization graph (``bubble'') in \fig{vacpol}(a). Its expression reads
\begin{equation}
\label{eq:vacpolone}
\Pi_{\mu\nu} = (k_\mu k_\nu- k^2 \eta_{\mu\nu}) \pi(k,\mu)
\end{equation}
with
\begin{equation}
\label{eq:vacpolint}
\pi(k,\mu) =\frac{-2\mathrm{i} N_f e^2 \mu^{2\epsilon}}{(2\pi)^n} \left( \frac{2-n}{1-n} \right) \int d^nq \frac{1}{(q+\tfrac{1}{2}k)^2(q-\tfrac{1}{2}k)^2}\,.
\end{equation}
This integral can be readily carried out using a Feynman parameter leading to
\begin{align}
\label{eq:vacpolint2}
\pi(k,\mu) = &\frac{2N_f e^2}{(4\pi)^2}\left(\frac{4\pi \mu^2}{-k^2} \right)^\epsilon \frac{(2-2\epsilon) \Gamma(1+\epsilon)\Gamma(1-\epsilon)^2}{\epsilon\, (3-2\epsilon) \Gamma(2-2\epsilon)}\nonumber \\
=& \frac{\alpha N_f}{3\pi}\left[\frac{1}{\epsilon} -\gamma_E + \log 4\pi - \log\left(\frac{-k^2}{\mu^2} \right) + \frac{5}{3} \right] + \mathcal{O}(\epsilon)\,.
\end{align}
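The $\epsilon$-expansion in the second line can be reproduced with a few lines of computer algebra. The following sketch (Python/SymPy; the symbol $L\equiv\log\left(4\pi\mu^2/(-k^2)\right)$ is our own bookkeeping device, not part of the original expression) expands the factor multiplying $2N_f e^2/(4\pi)^2$ in the first line:
\begin{verbatim}
# Expand the epsilon-dependent factor of the first line to O(eps^0);
# L is shorthand (our assumption) for log(4*pi*mu^2/(-k^2)).
import sympy as sp

eps, L = sp.symbols('epsilon L')
expr = sp.exp(eps * L) * (2 - 2*eps) * sp.gamma(1 + eps) \
       * sp.gamma(1 - eps)**2 / (eps * (3 - 2*eps) * sp.gamma(2 - 2*eps))
print(sp.series(expr, eps, 0, 1))
# expected (up to ordering of terms):
#   2/(3*eps) - 2*EulerGamma/3 + 2*L/3 + 10/9 + O(eps)
\end{verbatim}
Multiplying by $2N_f e^2/(4\pi)^2 = \alpha N_f/(2\pi)$ then indeed reproduces the second line of \eqref{eq:vacpolint2}.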
To remove the ultraviolet divergence in \eqref{eq:vacpolint2} one
includes an $\overline{\mathrm{MS}}$ counterterm, which leads to
\begin{equation}
\label{eq:vacpolint3}
\pi(k,\mu) = -\alpha \beta_{0f}
\log\left(\frac{-k^2 e^{-5/3}}{\mu^2}
\right) \,,
\end{equation}
where $\beta_{0f} = T \, N_f /3\pi$, with $T=1$ for QED and
$T=1/2$ for QCD.
Bubble-chain diagrams produce factorial growth at leading power in
$N_f$, i.e. the $n$-bubble contribution is proportional to $(\alpha N_f)^n$.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{figures/vacpol.pdf}
\caption{(a) One fermion bubble, (b) chain of fermion bubbles. }
\label{fig:vacpol}
\end{figure}
For the Adler function, we consider the diagrams in \fig{vacpol2}, where a fermion bubble chain with $n$ bubbles
is inserted.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{figures/Adler_chain_intro.pdf}
\caption{Bubble chain inserted into Adler function. }
\label{fig:vacpol2}
\end{figure}
The diagrams, summed over the number of bubbles, yield the expression
\begin{equation}
\label{eq:1}
D = \alpha\int_0^\infty \frac{d\hat{k}^2}{\hat{k}^2} \, F(\hat{k}^2)
\sum_n \left[ \alpha \beta_{0f}
\log\left(\frac{\hat{k}^2Q^2 e^{-5/3}}{\mu^2}
\right) \right]^n\,,
\end{equation}
where $\hat{k}^2 = -k^2/Q^2$, and $F$ represents the rest of the
diagram (see \cite{Neubert:1994vb}). We can examine the $n$-dependence of
the integral separately for small and large $\hat{k}^2$ (the
separation being defined by the argument
in the logarithm in \eqref{eq:1} being unity), using the small and
large $\hat{k}^2$ dependence of $F$. Up to subleading corrections the result \cite{Neubert:1994vb, Beneke:1998ui} reads
\begin{equation}
\label{eq:2}
D = \frac{C_F}{\pi} \sum_{n=0}^\infty \alpha_s^{n+1}
\Big[\frac{3}{4}\left(\frac{Q^2 e^{-5/3}}{\mu^2}
\right)^{-2} \left(\frac{-\beta_{0f}}{2} \right)^n\,n! +
\frac{1}{3} \frac{Q^2 e^{-5/3}}{\mu^2} \beta_{0f}^n \, n!\,
\left(n+\frac{11}{6}\right)\Big]\,,
\end{equation}
where the first part is due to the IR behavior and the second to the
UV behavior of the integrand in \eqref{eq:1}.
To see what this implies for the Borel transform of the
Adler function, we use \eqref{eq:2.1}.
For the two cases one finds ($u\equiv-\beta_{0f} t$)
\begin{equation}
\label{eq:11}
\mathcal{B}[D]_{\mathrm{IR}} (u) \sim \frac{1}{u-2}\,, \qquad \mathcal{B}[D]_{\mathrm{UV}} (u) \sim \frac{1}{(u+1)^2}\,,
\end{equation}
where we only exhibit the leading $u$-poles and suppress prefactors.
Thus IR renormalon poles for QED will lie on the negative
Borel parameter axis, and vice versa for the UV renormalon
poles. As we will discuss in section \ref{sec:flavour}, for QCD one
changes $\beta_{0f} \to \beta_{0} = -(11-2N_f/3)/(4\pi)$, which also
implies a change of sign for $u$, so that IR (UV) renormalon poles are
on the positive (negative) real $u$ axis.
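The pole structure in \eqref{eq:11} follows directly from the two coefficient towers in \eqref{eq:2}: dividing out $n!$ as in \eqref{eq:2.1} leaves geometric-type sums whose closed forms exhibit the poles. A short symbolic check (Python/SymPy; overall prefactors suppressed, as in \eqref{eq:11}):
\begin{verbatim}
# In the variable u = -beta_{0f} t, the IR tower c_n ~ n!(-beta/2)^n
# Borel-transforms to 1/(1-u/2), a simple pole at u = 2, while the UV
# tower c_n ~ n!(n+11/6) beta^n gives a double pole at u = -1; we
# verify the claimed closed forms by re-expanding them (prefactors
# suppressed throughout).
import sympy as sp

u = sp.symbols('u')
ir = 1/(1 - u/2)                                  # = -2/(u-2)
uv = -u/(1 + u)**2 + sp.Rational(11, 6)/(1 + u)   # leading part 1/(1+u)^2
for k in range(5):
    a_ir = ir.series(u, 0, 6).removeO().coeff(u, k)
    a_uv = uv.series(u, 0, 6).removeO().coeff(u, k)
    # expected: a_ir = (1/2)^k, and a_uv - (-1)^k (k + 11/6) = 0
    print(k, a_ir, sp.simplify(a_uv - (-1)**k * (k + sp.Rational(11, 6))))
\end{verbatim}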
As shown in \eqref{eq:3} and \eqref{eq:borelResummation}, the poles on
the positive $u$-axis imply ambiguities in the Borel-resummed
perturbative series. In fact, they imply non-perturbative corrections;
the QCD version of the ambiguity in equation \eqref{eq:3}, due to
the pole at $u=2$ in \eqref{eq:11}, is proportional to
\begin{equation}
\label{eq:10}
e^{\frac{2}{\beta_0\alpha}} \sim \left(\frac{\Lambda}{Q}\right)^4\,,
\end{equation}
with $\Lambda$ the QCD scale parameter, and $Q$ the
scale of the Adler function. Non-perturbative effects in QCD are
proportional to this typically very small ratio, and are therefore
often also referred to as power corrections. Our goal in this paper, then, is
to learn more about power corrections for the Adler function, using
the methods of resurgence.
\section{Resurgence}
\label{sec:resurgence}
\noindent
In the previous section we saw that factorial growth in the perturbative
series of a quantity leads to ambiguities that have the form of non-perturbative
corrections expressed in terms of $e^{-\frac{1}{\alpha}}$. This indicates that correction terms must be added to the
series expansion to remove such ambiguities, and as a bonus
non-perturbative information is included. {\em Resurgence}, an idea which originated in \cite{ecalle1985fonctions}, is a way to do
this systematically; the word refers to non-perturbative sectors
resurging through analysis of asymptotic behaviour of other sectors.
The perturbative expansion enhanced by such sectors is known as the
\emph{transseries} (see \cite{edgar2010transseries} for an
introduction), which will feature extensively in what follows.
The goal of this section is twofold. First, we introduce notation and definitions needed for the resurgence analysis of the Adler function in later sections.
Secondly, for those who have not yet encountered the concepts and
methods of resurgence, this section can be read as a concise
introduction to the topic. Of course, resurgence is a broad topic, and
we refer the interested reader to \cite{Aniceto:2018bis, Marino:2012zq, Dorigoni:2014hea, Sauzin:2014} or the review sections of \cite{Aniceto:2011nu} for a more extensive exposition. Below, we will mostly adhere to the notation and presentation of \cite{Aniceto:2018bis}.
\subsection{From Borel summation to alien calculus}
\label{sec:alienCalculus}
\noindent
In physics, singularities and divergences are usually regarded as
troublesome. However, in a resurgence context, all the
information about an asymptotic series and the associated
non-perturbative data is encoded in the singularity
structure of its Borel transform. Before discussing how to
systematically extract this information, we need to introduce some
concepts and terminology.
Let us again consider a formal power series and its Borel transform,
\begin{equation}
F(\alpha) = \sum_{n=0}^\infty f_n \alpha^{n+1},
\qquad
\mathcal{B}[F](t) = \sum_{n=0}^\infty \frac{f_n}{\Gamma(n+1)} t^n.
\label{eq:pertBorel}
\end{equation}
A function whose expansion gives a formal power series $F(\alpha)$ is said to be a {\em simple resurgent function} if the Borel transform $\mathcal{B}[F](t)$ has only simple poles or logarithmic branch cuts as singularities. That is, near each singularity, say at $t = \omega$, we have that
\begin{equation}\label{eq:defSimpleResurgentFunction}
\mathcal{B}[F](t)
= \frac{a}{2\pi \mathrm{i}\, (t-\omega)} + \Psi(t-\omega)\frac{\log(t-\omega)}{2\pi \mathrm{i}} + \Phi(t-\omega),
\end{equation}
for some $a\in\mathbb{C}$ and where the functions $\Psi,\Phi$ are
analytic around the origin. The Borel transform can also contain other
singularities, e.g.\ double or higher order poles. Although this will
be the case for the Adler function, most of our discussion in this
section is focused on the simplest case of a single pole and a $\log$-branch
cut that are commonly considered in the literature. In \sec{generalizations}, we then give the necessary details to
extend the resurgence analysis needed for the discussions of the Adler
function in \secs{adlerLO}{adlerNLO}.
To obtain a better understanding of these Borel
singularities we introduce {\em alien calculus}. The
fundamental object in alien calculus is the linear differential
operator $\Delta_\omega$, the {\em alien derivative}, which acts on simple
resurgent functions. Being a derivative, it
satisfies the Leibniz rule when acting on a product of simple
resurgent functions. Furthermore, for those functions it has a rather
simple expression: by rewriting $\Psi(t)$ in
\eq{defSimpleResurgentFunction} as the Borel transform of a resurgent
function $G(\alpha)$, i.e.\ $\mathcal{B}[G](t) = \Psi(t)$, the alien derivative
at a singular point $\omega$ is
\begin{equation}\label{eq:formalDefAlienDerivative}
\Delta_\omega F(\alpha) = a + G(\alpha).
\end{equation}
In fact, this operation is defined both `before' and `after' Borel summation: the alien derivative can also be viewed as a map relating the power series expansion of $F(\alpha)$ to that of $G(\alpha)$.
When $\omega$ is not a singular point of $\mathcal{B}[F](t)$, then $\Delta_\omega F(\alpha) = 0$.
In other words, the alien derivatives fully encode the singular properties of the Borel transforms of simple resurgent functions.
Next, let us consider a singular direction $\theta$ in the complex
plane of the Borel variable $t$ along which $\mathcal{B}[F]$ has
singularities. Such a direction is known as a {\em Stokes line}, since as we shall see Stokes' phenomenon \cite{stokes_2009a, stokes_2009b} occurs there. Because of
the singularities on the integration path, the Borel summation
$\mathcal{S}_\theta[F]$ is no longer well-defined. Of course, the
singularities are easily avoided by integrating slightly above or
below them, leading to summations that we denote by
$\mathcal{S}_{\theta^+}[F]$ and $\mathcal{S}_{\theta^-}[F]$. These different
summations lead to an ambiguity and one may ask how these two
distinct functions are related. They are in fact connected by the {\em
Stokes automorphism} $\underline{\mathfrak{S}}_\theta$, or its
related {\em discontinuity} ${\mathrm{Disc\,}}_\theta$.
These are defined as
\begin{equation}\label{eq:defDiscontinuity}
\mathcal{S}_{\theta^+}
= \mathcal{S}_{\theta^-}\circ \underline{\mathfrak{S}}_\theta
=\mathcal{S}_{\theta^-} \circ (1 - {\mathrm{Disc\,}}_{\theta^-}).
\end{equation}
One can show (see e.g.\ \cite{Sauzin:2014}) that these operators can be expressed in terms of the alien derivatives via the exponential map
\begin{equation}\label{eq:StokesAutomorphism}
\underline{\mathfrak{S}}_\theta
= \exp \Bigg[\sum_{\{\omega_\theta\}} e^{-\frac{\omega_\theta}{\alpha}}\Delta_{\omega_\theta}\Bigg],
\end{equation}
where the set $\{\omega_\theta\}$ denotes all the singular points along
the $\theta$-direction. The main point now is that if we know the
Stokes automorphisms (or equivalently, the alien derivatives), then we
also know how to relate the different summations. Consequently, a full reconstruction of the function $F(\alpha)$ anywhere in the complex $\alpha$ plane, including its non-perturbative sectors, is then at least in principle possible.
For future purposes we define the {\em pointed alien derivative}, related to the ordinary alien derivative by
\begin{equation}
\dot{\Delta}_\omega \equiv e^{-\frac{\omega}{\alpha}}\Delta_\omega.
\end{equation}
This turns out to be a convenient operator as it commutes with the usual derivative (see again \cite{Sauzin:2014} for details), i.e.
\begin{equation}
\label{eq:commutatorDeltaDotPartial}
[\dot{\Delta}_\omega,\partial_\alpha] = 0.
\end{equation}
This will momentarily be used to derive a bridge equation.
\subsection{Transseries and bridge equations}
\label{sec:bridgeEquation}
\noindent
From the relation between Stokes automorphisms and alien derivatives,
\eq{StokesAutomorphism}, we notice that the ambiguity arising in Borel
summation is non-perturbative in $\alpha$, being of order
$e^{-\frac\omega\alpha}$.
This implies that the non-perturbative solution we are trying to
construct must
contain such non-perturbative exponential contributions.
As we will see in a moment, resurgence relations can be captured in a
universal way through transseries \cite{edgar2010transseries}. Transseries are
generalizations of perturbative series by the inclusion of terms with
non-perturbative (non-analytic) factors like
$e^{-\frac{\omega}{\alpha}}$. Factors of this type are called {\em transmonomials}.
Let us start by assuming that our resurgent function arises as a
solution to some (potentially non-linear) problem depending on a
single boundary condition, i.e.\ we consider the single parameter
transseries Ansatz
\begin{equation}\label{eq:transseries}
F(\alpha,\sigma) = \sum_{n=0}^\infty \sigma^n e^{-n\frac{A}{\alpha}} \Psi^{(n)}(\alpha)\,,
\qquad\text{with}\qquad
\Psi^{(n)}(\alpha)
= \alpha^{\beta_n} \; \sum_{h=0}^\infty f_h^{(n)}\alpha^{h}
\end{equation}
where $\Psi^{(0)}$ is simply a perturbative series, as in \eq{pertBorel}, and
$\Psi^{(n)}$, for $n\geq1$, are the non-perturbative contributions. The transseries parameter $\sigma$ counts the number of $e^{-\frac{A}{\alpha}}$ factors \cite{Aniceto:2018bis} and
parameterizes different choices of the boundary condition. The
$\beta_n$ are called the {\em characteristic exponents}; we discuss their
role in resurgence equations at the end of \sec{generalizations}. In
the resurgence literature, $\Psi^{(n)}$ is often
called the {\em $n$-instanton sector}, even though in practice $n$ may
count solitons, renormalons, or some other non-perturbative physical
quantity. To avoid confusing readers with a high-energy background, we
shall call these quantities the {\em $n$-th non-perturbative sector} instead.
After introducing an intuitive pictorial representation of
non-perturbative transseries sectors and their interrelations in the
form of the {\em alien chain} in the next subsection, it will be
straightforward in \sec{generalizations} to generalize this one-parameter transseries to
multi-parameter transseries, by including further non-perturbative
monomials like $e^{+nA/\alpha}$, $\log^n(\alpha)$, etc.
We saw that the alien derivatives play an important role in the
construction of the complete non-perturbative solution to a
problem. It is however still unclear how to compute these derivatives
in a systematic way. This is done through the construction of the {\em
bridge equations}, so named because they form a bridge between the
ordinary calculus of derivatives and alien calculus.
Assume for the moment that $F(\alpha,\sigma)$ is the solution to some differential equation (in the variable $\alpha$). From \eq{commutatorDeltaDotPartial} we get that acting on this equation with $\dot{\Delta}_{\omega}$ yields a new, linear differential equation to which $\dot{\Delta}_{\omega} F(\alpha,\sigma)$ is a solution. At the same time, acting on the original equation with $\partial_\sigma$ shows that $\partial_\sigma F(\alpha,\sigma)$ is a solution to the {\em same} equation.
As an example, consider the non-linear differential equation
\begin{equation}
\partial_\alpha F(\alpha,\sigma)
= 6+ F(\alpha,\sigma)^3.
\end{equation}
Acting on this with $\dot{\Delta}_{\omega}$ yields
\begin{equation}\label{eq:bridgeEqExample}
\partial_\alpha \dot{\Delta}_{\omega} F(\alpha,\sigma) = 3 F(\alpha,\sigma)^2 \cdot \dot{\Delta}_{\omega} F(\alpha,\sigma),
\end{equation}
which is a new, linear differential equation for $\dot{\Delta}_{\omega} F(\alpha,\sigma)$. Similarly, acting with $\partial_\sigma$ leads to the same equation as \eq{bridgeEqExample}, with $\dot{\Delta}_{\omega} F(\alpha,\sigma)$ replaced by $\partial_\sigma F(\alpha,\sigma)$.
Supposing the new linear differential equation is of first order (as is natural for a problem with a single boundary condition) we conclude that the two new solutions must be proportional to each other, i.e.
\begin{equation}\label{eq:bridgeEq}
\dot{\Delta}_{\omega} F(\alpha,\sigma) = S_\omega(\sigma) \partial_\sigma F(\alpha,\sigma),
\end{equation}
where $S_\omega(\sigma)$ is a proportionality factor which still may depend
on $\sigma$. This relation is \'Ecalle's bridge equation \cite{ecalle1985fonctions}; it indeed
presents a bridge between the alien derivatives and the regular ones. By substituting $\dot{\Delta}_{\omega} = e^{-\frac{\omega}{\alpha}}\Delta_{\omega}$ and expanding the transseries, the LHS equals
\begin{equation}\label{eq:bridgeEqLHS}
\dot{\Delta}_{\omega} F(\alpha,\sigma)
= \sum_{n=0}^\infty \sigma^n e^{-\frac{nA+\omega}{\alpha}}
\Delta_{\omega}\Psi^{(n)}(\alpha),
\end{equation}
while the RHS yields
\begin{equation}\label{eq:bridgeEqRHS}
S_\omega(\sigma) \partial_\sigma F(\alpha,\sigma)
= S_\omega(\sigma)\sum_{n=0}^\infty n\, \sigma^{n-1} e^{-n\frac{A}{\alpha}}\Psi^{(n)}(\alpha).
\end{equation}
We need to match both sides term by term according to the powers of $\sigma$ and $e^{-\frac{A}{\alpha}}$.
To this end, \cite{Aniceto:2011nu} defined a notion of {\em degree} as
\begin{equation}
\deg\big(\sigma^n e^{m\frac{A}{\alpha}}\big) = n+m.
\end{equation}
Since the transseries \eq{transseries} has degree $\deg( F(\alpha,\sigma))=0$, it follows that the bridge equation can only contain nontrivial information at $\omega=\ell A$, $\ell\in \mathbb{Z}$. Thus, for transseries of this type we expect singularities in the Borel plane at $\omega = A$ but also at integer multiples of $A$ -- something which we will also see for the Adler function. Note that here $\ell=0$ is excluded because the Borel transform is regular at the origin. Furthermore, $\dot{\Delta}_{\ell A} F(\alpha,\sigma) $ only contains positive powers of $\sigma$, so we can write the proportionality constant as a formal power series expansion
\begin{equation}
S_{\ell A}(\sigma) = \sum_{k=0}^\infty S_\ell^{(k)}\sigma^k.
\end{equation}
Taking the degree of both \eqs{bridgeEqLHS}{bridgeEqRHS} implies that $S_\ell^{(k)}$ is only nonzero at $k=1-\ell$, and therefore, writing $S_\ell^{(1-\ell)} \equiv S_\ell$, we have that
\begin{equation}
S_{\ell A}(\sigma) = S_\ell \,\sigma^{1-\ell}, \quad \ell\leq1.
\end{equation}
The bridge equation \eq{bridgeEq} now reads
\begin{equation}
\sum_{n=0}^\infty \sigma^n e^{-(n+\ell)\frac{A}{\alpha}}
\Delta_{\ell A}\Psi^{(n)}(\alpha)
= \sum_{n=0}^\infty S_\ell\, n\, \sigma^{n-\ell} e^{-n\frac{A}{\alpha}}\Psi^{(n)}(\alpha),
\end{equation}
or equivalently
\begin{equation}\label{eq:resurgenceEqs}
\Delta_{\ell A} \Psi^{(n)}(\alpha) =
\begin{cases}
0 & \ell > 1,\\
(n+\ell)S_\ell \Psi^{(n+\ell)} & \ell\leq 1, \quad \ell\neq0,
\end{cases}
\end{equation}
where we used that $\Psi^{(n)}= 0$ for all $n< 0$.
With this expression, we can compute all alien derivatives as long as
the yet unknown constants $\{S_1,S_{-1},S_{-2},...\}$ are known. We
refer to these constants as {\em Stokes constants}. In general,
computing Stokes constants is a difficult task and depends on the
specific system that one tries to solve, and we shall not need to do so in this paper.
Although the alien derivative has an involved definition, we arrive at a remarkable result of alien calculus: its final action is surprisingly simple, and works purely algebraically on the building blocks of the transseries, \eq{transseries}.
For simple resurgent functions with a single parameter, we now have two equations for the alien derivative, i.e.\ \eqs{formalDefAlienDerivative}{resurgenceEqs}. Comparing the two yields
\begin{equation}\label{eq:borelSingularities}
\mathcal{B}[\Psi^{(n)}](t+\ell A)
= \frac{a}{2\pi \mathrm{i}\, t} + \mathsf{S}_{n\to n+\ell}\ \mathcal{B}[\Psi^{(n+\ell)}](t)\frac{\log(t)}{2\pi \mathrm{i}} + \text{holomorphic}.
\end{equation}
That is, near the singularity $t=\ell A$ of $\mathcal{B}[\Psi^{(n)}](t)$, we find the resurgence of the $\Psi^{(n+\ell)}$ sector.
In this expression, the so-called {\em Borel residues} $\mathsf{S}_{n\to n+\ell}$ are constants that can be expressed in terms of the Stokes constants via Eqs.\ \eqref{eq:defDiscontinuity}, \eqref{eq:StokesAutomorphism} and \eqref{eq:resurgenceEqs}. For example we find
\begin{equation}
\mathsf{S}_{n\to n+1}
= -(n+1)S_1,
\quad
\mathsf{S}_{n\to n+2} = -\frac12 (n+1)(n+2) S_1^2,
\quad\text{etc.}
\end{equation}
and similarly
\begin{equation}
\mathsf{S}_{n\to n-1}
= -(n-1)S_{-1},
\quad
\mathsf{S}_{n\to n-2}
= -(n-2)\left(S_{-2} + \frac12(n-1)S_{-1}^2\right),
\quad\text{etc.}
\end{equation}
See \cite{Aniceto:2011nu} for closed-form expressions for the Borel residues.
\subsection{One-dimensional alien chain}
\label{sec:1Dalienchain}
\noindent
Instead of delving further into the formalism of alien calculus, we will follow the more pedagogical picture of the {\em alien chain} developed in
\cite{Aniceto:2018bis} as it will help us build an intuitive language
in which resurgence equations can be better understood. For example,
in the case of a single boundary condition, Stokes' automorphism
\eq{StokesAutomorphism} can be fully computed using
\eq{resurgenceEqs} (see e.g. \cite{Aniceto:2011nu}), but once one
needs to generalize this to multi-parameter transseries, the equations
can become quite intricate. Instead, for practical situations, the
simple setup of the alien chain can be used and generalizations will
come naturally. (For multi-parameter transseries, we will introduce
alien {\em lattices} in \sec{generalizations}.)
From the point of view of the alien chain, the sectors $\Psi^{(n)}$ of the
transseries, \eq{transseries}, are viewed as nodes:
\begin{equation*}
\vcenter{\hbox{\includegraphics[width=.8\linewidth]{figures/alien_chain.pdf}}}
\end{equation*}
Later these will be connected by alien derivatives to form a chain. We can then reinterpret the resurgence equation, \eq{resurgenceEqs},
as a set of allowed resurgence ``motions'' along this chain. That is,
there is only one type of forward motion (i.e.\ with $\ell$ positive)
because of the constraint that $\Delta_{\ell A}\Psi^{(n)} = 0$ for
$\ell>1$. However, from the fact that nonzero $\ell\leq1$ can give
nonvanishing alien derivatives, we see that there are multiple
backwards motions. Recall that (for real and positive $A$) to compute the Stokes
automorphism \eq{StokesAutomorphism} at $\theta = 0$ one only needs
alien derivatives with $\ell>0$, whereas the Stokes automorphism at
$\theta=\pi$ only requires $\ell<0$. Therefore, we never need to
consider combinations of forward and backward motions together.
As an example, there is only one forward path to go from $\Psi^{(1)}$ to $\Psi^{(4)}$ by repeatedly acting with $\Delta_A$:
\begin{equation*}
\includegraphics[width=.5\linewidth]{figures/forward_motion.pdf}
\end{equation*}
However, there are multiple
backwards motions to go from $\Psi^{(4)}$ to $\Psi^{(1)}$ using different
combinations of $\Delta_{-A}$, $\Delta_{-2A}$ and $\Delta_{-3A}$:
\begin{equation*}
\includegraphics[width=.5\linewidth]{figures/backward_motion.pdf}
\end{equation*}
Before we can compute Stokes automorphisms using allowed motions on the
alien chain, we need to introduce some further terminology and set up
some computational rules. We denote a step $\mathcal{S}_{n\to m}$ as a single link connecting two nodes
$n$ and $m$ on the chain and a path $\mathcal{P}$ as a combination of
consecutive steps. The length $\ell(\mathcal{P})$ of a path is
then defined as the number of steps composing the path:
\begin{equation}
\ell(\mathcal{P}) = \#\{\mathcal{S}_{n\to m}\in\mathcal{P}\}\,.
\end{equation}
Looking at the proportionality factor on the RHS of
\eq{resurgenceEqs}, we see that it is natural to define the weight
$w$ of a step $\mathcal{S}_{n\to m}$ in terms of the Stokes constants $S_k$ as
\begin{equation}\label{eq:weightStep}
w(\mathcal{S}_{n\to m}) = mS_{m-n}\,.
\end{equation}
The weight of a path is then simply the product of the weights of the
steps that make up the path
\begin{equation}
w(\mathcal{P}) = \prod_{\mathcal{S}_i\in\mathcal{P}}w(\mathcal{S}_i)\,.
\end{equation}
Finally, it is convenient to define
a ``statistical factor'' \cite{Aniceto:2018bis} linking two nodes as
\begin{equation}\label{eq:statisticalFactor}
\text{SF}_{(n\to m)}
=\sum_{\mathcal{P}(n\to m)}\frac{w(\mathcal{P})}{\ell(\mathcal{P})!}\,,
\end{equation}
where the sum is over all allowed paths linking nodes $n$ and $m$.
Equipped with these definitions, let us now see how they turn up in the
calculation of Stokes automorphisms, through a specific
example. Reviewing \eqs{StokesAutomorphism}{resurgenceEqs}, we notice
that the actual form of Stokes automorphism depends on the object it
acts on. Focusing on the example of the transseries sector $\Psi^{(3)}$, it follows from
the allowed motions on the alien chain that this sector has
singularities both in the $\theta=0$ and $\theta=\pi$ directions. As
the only allowed forward motions consist of repeated actions of
$\Delta_A$, we see that the Stokes automorphism in the $\theta=0$
direction takes a simple form:
\begin{align}
\underline{\mathfrak{S}}_0\Psi^{(3)}
&= \exp\Big(e^{-\frac{A}{\alpha}}\Delta_A\Big)\Psi^{(3)}\nonumber\\
&= \Big[1+e^{-\frac{A}{\alpha}}\Delta_A+\frac{1}{2!}\Big(e^{-\frac{A}{\alpha}}\Delta_A\Big)^2
+\frac{1}{3!}\Big(e^{-\frac{A}{\alpha}}\Delta_A\Big)^3+\ldots\Big]\Psi^{(3)}\nonumber\\
&= \Psi^{(3)} + 4S_1e^{-\frac{A}{\alpha}}\Psi^{(4)} + \frac{20}{2!}S_1^2e^{-2\frac{A}{\alpha}}\Psi^{(5)}
+ \frac{120}{3!}S_1^3e^{-3\frac{A}{\alpha}}\Psi^{(6)}+\ldots
\end{align}
In the $\theta=\pi$ direction, the Stokes automorphism does not take
such a simple form, as there are multiple allowed backward motions on
the alien chain. Luckily, when acting on $\Psi^{(3)}$ the possible set of
backward paths is finite, and we obtain
\begin{align}
\underline{\mathfrak{S}}_\pi\Psi^{(3)}
&= \exp\bigg(\sum_{\ell=1}^3e^{\ell\frac{A}{\alpha}}\Delta_{-\ell A}\bigg)\Psi^{(3)}\nonumber\\
&= \bigg[1+\sum_{\ell=1}^3e^{\ell\frac{A}{\alpha}}\Delta_{-\ell A}
+\frac{1}{2!}\bigg(\sum_{\ell=1}^3e^{\ell\frac{A}{\alpha}}\Delta_{-\ell A}\bigg)^2
+\frac{1}{3!}\bigg(\sum_{\ell=1}^3e^{\ell\frac{A}{\alpha}}\Delta_{-\ell A}\bigg)^3+\ldots\bigg]\Psi^{(3)}\nonumber\\
&= \Psi^{(3)} + 2S_{-1}e^{\frac{A}{\alpha}}\Psi^{(2)} + \big(S_{-2}+S_{-1}^2\big)e^{2\frac{A}{\alpha}}\Psi^{(1)}.
\end{align}
Having computed these actions explicitly, let us now
translate these results to the terminology we introduced above. We see
that under the action of $\underline{\mathfrak{S}}_0$ on $\Psi^{(3)}$ we
obtain an infinite sum of higher sectors $\Psi^{(n\geq3)}$: the nodes on
the alien chain that can be reached by forward motions. The
coefficients of the terms containing these sectors can be expressed in
terms of statistical factors \eq{statisticalFactor}. For example going from $\Psi^{(3)}\to\Psi^{(4)}$, we
see that there is only a single path, of length $\ell=1$ and weight
$w=4S_1$. Furthermore, we have to
include a non-perturbative factor $e^{-\frac{A}{\alpha}}$. Likewise, the
path to go from $\Psi^{(3)}\to\Psi^{(5)}$ has length $\ell=2$ with weight
$w=20S_1^2$. In order to get the correct coefficient of the $\Psi^{(5)}$
term, we have to multiply by $\frac{1}{2!}$ and a non-perturbative
term, i.e. SF$_{(3\to5)}e^{-2\frac{A}{\alpha}}$ in total. Similarly, the
coefficient in front of $\Psi^{(6)}$ is SF$_{(3\to6)}e^{-3\frac{A}{\alpha}}$
etc. Adding up all different terms for all possible paths gives the
full action of $\underline{\mathfrak{S}}_0$ on $\Psi^{(3)}$.
For the full action of $\underline{\mathfrak{S}}_\pi$, we need to
consider the allowed backward motions. Again, there is only a single
path of length $\ell=1$ to go from $\Psi^{(3)}\to\Psi^{(2)}$, leading to a
statistical factor SF$=2S_{-1}$ and non-perturbative term
$e^{\frac{A}{\alpha}}$. To go from $\Psi^{(3)}\to\Psi^{(1)}$ however, we have two
allowed paths. One has $\ell=1$ with $w=S_{-2}$, and the other path
has $\ell=2$ with $w=2S_{-1}^2$, so SF$_{(3\to1)}=S_{-2}+S_{-1}^2$. In
both cases we need to multiply by the non-perturbative factor
$e^{2\frac{A}{\alpha}}$. Adding all terms for all possible paths again
gives the action of the Stokes automorphism.
To summarize what we have learned: $\underline{\mathfrak{S}}_0\Psi^{(n)}$
(resp. $\underline{\mathfrak{S}}_\pi\Psi^{(n)}$) is a sum over all forward
(backward) paths linking nodes to the right (left) of $\Psi^{(n)}$,
i.e.\ the terms in this sum can be written as
\begin{equation}\label{eq:StokesAutoInTermsOfAllowedMotions}
\underline{\mathfrak{S}}_0\Psi^{(n)}
= \Psi^{(n)}+\sum_{m>n}\text{SF}_{(n\to m)}e^{-(m-n)\frac{A}{\alpha}}\Psi^{(m)} ,
\end{equation}
and likewise for $\underline{\mathfrak{S}}_\pi\Psi^{(n)}$, where the only
difference is that one should sum over $m<n$.
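These combinatorial rules are straightforward to automate. The sketch below (Python/SymPy; a minimal implementation of our own, with the Stokes constants kept as symbols) enumerates the allowed paths between two nodes and evaluates the statistical factor \eq{statisticalFactor}, reproducing the coefficients in the expansions of $\underline{\mathfrak{S}}_0\Psi^{(3)}$ and $\underline{\mathfrak{S}}_\pi\Psi^{(3)}$ computed above.
\begin{verbatim}
# Statistical factors SF_(n->m) on the one-dimensional alien chain:
# forward motion uses only steps of size +1, backward motion steps of
# any negative size; a step ending on node j carries weight j*S_{step},
# and each path P contributes w(P)/len(P)!.
import sympy as sp
from math import factorial

S = {k: sp.Symbol(f'S_{k}') for k in [-3, -2, -1, 1]}  # Stokes constants

def stat_factor(n, m):
    sizes = [1] if m > n else list(range(m - n, 0))
    total = sp.Integer(0)
    def walk(node, weight, length):
        nonlocal total
        if node == m:
            total += weight * sp.Rational(1, factorial(length))
            return
        for s in sizes:
            nxt = node + s
            if (m > n and nxt <= m) or (m < n and nxt >= m):
                walk(nxt, weight * nxt * S[s], length + 1)
    walk(n, sp.Integer(1), 0)
    return sp.expand(total)

print(stat_factor(3, 4), '|', stat_factor(3, 5))  # 4*S_1 | 10*S_1**2
print(stat_factor(3, 2), '|', stat_factor(3, 1))  # 2*S_-1 | S_-1**2 + S_-2
\end{verbatim}
In the conventions used here, these statistical factors are precisely minus the Borel residues of \sec{bridgeEquation}, e.g.\ $\text{SF}_{(3\to4)}=-\mathsf{S}_{3\to4}=4S_1$.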
\subsection{Large order behaviour and asymptotics}
\label{sec:asymptotics}
\noindent
With the knowledge of the previous sections, we can now return to our
main goal: the understanding of asymptotic behaviour of
perturbative series in QFT and its relation to
non-perturbative sectors. In fact, the resurgent structure is even
more general and can be used to relate the asymptotic series of
all non-perturbative sectors to each other. To see this, we
apply Cauchy's theorem
\begin{align}\label{eq:Cauchy}
f(\alpha)
&= \oint_{\mathcal{C}}\frac{dy}{2\pi\mathrm{i}}\frac{f(y)}{y-\alpha}\nonumber\\
&= -\int_{0}^{\infty} \frac{dy}{2\pi\mathrm{i}}\frac{{\mathrm{Disc\,}}_0 f(y)}{y-\alpha}
-\int_{0}^{-\infty} \frac{dy}{2\pi\mathrm{i}}\frac{{\mathrm{Disc\,}}_\pi f(y)}{y-\alpha} +\oint_{(\infty)}\frac{dy}{2\pi\mathrm{i}}\frac{f(y)}{y-\alpha},
\end{align}
where we assumed discontinuities only in the $\theta=0$ and $\theta=\pi$
directions. See \fig{deformedCauchyContour} for the deformation of the
contour to go from the first to the second line in \eq{Cauchy}. Notice that we deformed the contours even further, such that the first two integrals in \eq{Cauchy} start from $0$.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{figures/deforming_contour_large_order.pdf}
\caption{In blue, (the inner contour) we show the contour $\mathcal{C}$ of the first line of the
Cauchy integral, \eq{Cauchy}. In red (the outer contour) the deformed contour
of the second line in that equation is shown. The zigzag lines on the right and
left denote the rays where ${\mathrm{Disc\,}}_0$ and ${\mathrm{Disc\,}}_\pi$ occur.}
\label{fig:deformedCauchyContour}
\end{figure}
For example, we can apply this to the perturbative sector
$\Psi^{(0)}(\alpha)$. The discontinuities of this sector are
easily computed using the rules from the previous subsection:
\begin{equation}
{\mathrm{Disc\,}}_0 \Psi^{(0)} = (1 - \underline{\mathfrak{S}}_0)\Psi^{(0)} = -\sum_{\ell=1}^\infty S_1^\ell e^{-\ell\frac{A}{\alpha}}\Psi^{(\ell)},
\qquad\text{and}\qquad
{\mathrm{Disc\,}}_\pi \Psi^{(0)} = 0.
\end{equation}
In many cases (including the ones in this paper), the integral around
infinity vanishes, yielding
\begin{equation}\label{eq:largeOrderWithIntegral}
\sum_{k=0}^\infty f_k^{(0)}\alpha^{k}
= \sum_{\ell=1}^\infty \frac{S_1^\ell}{2\pi\mathrm{i}} \int_{0}^\infty dy
\frac{e^{-\ell\frac{A}{y}}y^{\beta_\ell}}{y-\alpha}\sum_{h=0}^\infty f_h^{(\ell)}y^h.
\end{equation}
By expanding around $\alpha=0$, we can match equal powers in $\alpha$
on both sides of this equation and perform the $y$-integrations, after
which we arrive at a remarkable connection between the perturbative
and non-perturbative expansions
\begin{align}
f_k^{(0)}
&\sim\sum_{\ell=1}^\infty\frac{S_1^\ell}{2\pi\mathrm{i}}\sum_{h=0}^\infty f_h^{(\ell)}\frac{\Gamma(k-h-\beta_\ell)}{(\ell A)^{k-h-\beta_\ell}}.
\label{eq:largeOrderRelation}
\end{align}
Here, we used a $\sim$ symbol instead of an equals sign to indicate that this relation only describes the behavior as $k \to \infty$. In particular, we have exchanged the sum over $h$ and the integral in \eq{largeOrderWithIntegral}, which is only allowed formally -- leading e.g. to the fact that $\Gamma(k-h-\beta_\ell)$ may be ill-defined when $h$ is large compared to $k$.
In words: \eq{largeOrderRelation} states that asymptotic behaviour of the perturbative
coefficients $f_k^{(0)}$ is completely determined by the
non-perturbative expansion coefficients $f_h^{(\ell)}$. In
\secs{adlerLO}{adlerNLO} we shall explain in detail how one can unpack
this equation systematically, and decode from the perturbative
coefficients the non-perturbative ones.
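To preview how this unpacking works in practice, the sketch below (plain Python; a toy model with coefficients $f_k=\Gamma(k+\frac12)/A^k$ and $A=2$ chosen by hand, rather than actual Adler function data) extracts the leading singularity location $A$ from the ratios of consecutive coefficients and accelerates the convergence with Richardson extrapolation, a standard numerical workhorse in resurgence studies.
\begin{verbatim}
# Toy ratio test on f_k ~ Gamma(k+1/2)/A^k: the estimates k f_k/f_{k+1}
# approach A with 1/k corrections, which Richardson extrapolation
# removes order by order.
from math import gamma, factorial

A = 2.0
f = [gamma(k + 0.5) / A**k for k in range(1, 41)]
ratios = [(j + 1) * f[j] / f[j + 1] for j in range(len(f) - 1)]

def richardson(seq, N):
    # N-th Richardson transform for seq_k = s + c1/k + c2/k^2 + ...
    return [sum((-1)**(j + N) * (k + j)**N * seq[k + j]
                / (factorial(j) * factorial(N - j)) for j in range(N + 1))
            for k in range(1, len(seq) - N)]

print(ratios[-1])                 # ~ 1.9747  : raw estimate of A
print(richardson(ratios, 3)[-1])  # ~ 2.00000 : after the 3rd transform
\end{verbatim}
With real perturbative data one proceeds in the same way, subtracting the contribution of each non-perturbative sector once its coefficients are determined and repeating the analysis on the remainder.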
We can repeat this derivation to obtain resurgent large order
relations for other non-perturbative sectors as well. The key
ingredient is to rewrite Cauchy's theorem in terms of the Stokes
discontinuities, so that from the rules discussed in the previous
subsection, one can write down large order relations by looking at
allowed motions on the alien chain. Let us note that of course all of this is only possible if the non-perturbative sectors indeed have an asymptotic expansion (as opposed to a convergent or even finite one) -- we shall see later that this is not always the case for the Adler function.
To be explicit, we substitute
\eq{StokesAutoInTermsOfAllowedMotions} into Cauchy's theorem to get
\begin{equation}\label{eq:Cauchy3}
\Psi^{(n)}(\alpha)
= \sum_{\ell>n} \frac{\text{SF}_{(n\to \ell)}}{2\pi\mathrm{i}}\int_{0}^\infty dy
\frac{e^{-(\ell-n)\frac{A}{y}}}{y-\alpha}\Psi^{(\ell)}(y)
+\sum_{\ell< n} \frac{\text{SF}_{(n\to \ell)}}{2\pi\mathrm{i}}\int_{0}^{-\infty} dy
\frac{e^{-(\ell-n)\frac{A}{y}}}{y-\alpha}\Psi^{(\ell)}(y).
\end{equation}
Again expanding around $\alpha=0$, matching equal powers in $\alpha$ and
performing the $y$-integrals, one finds
\begin{equation}\label{eq:1DlargeOrderEquation}
f_k^{(n)}
\sim \sum_{\ell\neq n} \frac{\text{SF}_{(n\to \ell)}}{2\pi\mathrm{i}}\chi_{(n\to\ell)}(k),
\end{equation}
where it is convenient to define
the large order factor\footnote{Note that we define these factors
slightly differently from \cite{Aniceto:2018bis}, as we include an
explicit factor of $\Gamma(k)/A^k$.}
\begin{equation}\label{eq:1DlargeOrderFactor}
\chi_{(n\to\ell)}(k)
=\sum_{h=0}^\infty f_h^{(\ell)}
\frac{\Gamma(k+\beta_n-h-\beta_\ell)}{((\ell-n) A)^{k+\beta_n-h-\beta_\ell}}.
\end{equation}
Thus, \eqs{1DlargeOrderEquation}{1DlargeOrderFactor} show how, using the alien chain formalism, the
asymptotic behaviour of expansion coefficients in non-perturbative sectors encodes all
expansion coefficients in other non-perturbative sectors.
\subsection{Generalizations and extensions}
\label{sec:generalizations}
\noindent
There are many generalizations of the concepts and constructions we
have seen so far, but in this work we only need two of them. First, we
have worked thus far with examples with a single boundary condition, but we
will also need the concept of {\em multi-parameter transseries}.
Second, we need to consider transseries with terms that include
logarithmic factors. In this section, we follow the
exposition in \cite{Aniceto:2011nu,Aniceto:2018bis}. We finish
with a short discussion on the types of singularities in the
Borel plane that are different than those discussed so far.
As we will see, for the Adler function it is not enough to capture all
its
non-perturbative contributions with a single exponential transmonomial
$e^{-\frac{A}{\alpha}}$. Instead, we need to allow for more such
exponentials, $e^{-\frac{A_i}{\alpha}}$. In fact, two
exponents seem to suffice for the Adler function at the order we are
interested in, but we shall be somewhat more
general. Thus, by writing $\bm{A}=(A_1,...,A_k)$, a typical
Ansatz for a $k$-parameter transseries solution to a non-linear
problem is
\begin{equation}\label{eq:multiParameterTransseries}
F(\alpha,\bm{\sigma})
= \sum_{\bm{n}\in\mathbb{N}^k_0}\bm{\sigma}^{\bm{n}}e^{-\frac{\bm{n}\cdot \bm{A}}{\alpha}}\Psi^{(\bm{n})}(\alpha),
\qquad\text{with}\qquad
\Psi^{(\bm{n})}(\alpha) = \alpha^{\beta_{\bm{n}}}\sum_{h=0}^\infty f_h^{(\bm{n})}\alpha^h.
\end{equation}
Here we used the notation
$\bm{\sigma}^{\bm{n}}=\prod_{i=1}^k \sigma_i^{n_i}$. Understanding the
resurgence properties of such a transseries is again best understood
in terms of the alien derivatives $\Delta_\omega$. For the one-parameter
transseries, a key ingredient in the calculation of alien derivatives
was the bridge equation \eq{bridgeEq}. In the case of the
multi-parameter transseries, the bridge equation usually takes the
form
\begin{equation}\label{eq:multiParameterBridgeEq}
\Delta_{\bm{\ell}\cdot \bm{A}} \Psi^{(\bm{n})}(\alpha) =
\begin{cases}
\bm{S}_{\bm{\ell}}\cdot (\bm{n}+\bm{\ell}) \Psi^{(\bm{n}+\bm{\ell})}
& \quad\ell_i\leq \delta_{ij}, \quad \bm{\ell}\neq\bm{0},\\
0 & \quad\text{elsewhere},
\end{cases}
\end{equation}
where for each combination $\bm{\ell}\cdot \bm{A}$ we now need a
whole vector of Stokes parameters
$\bm{S}_{\bm{\ell}}=(S_{\bm{\ell}}^{(1)},...,S_{\bm{\ell}}^{(k)})$. This
equation can be derived by generalizing the steps we took in
\sec{bridgeEquation} to the case of a multi-parameter transseries.
The Borel singularities of the sectors $\Psi^{(\bm{n})}$ lie at
positions $t=\bm{\ell}\cdot\bm{A}$ in the Borel plane, with
$\bm{\ell}\in\mathbb{Z}^k$ with entries bounded from below by
\eq{multiParameterBridgeEq}. Thus \eq{borelSingularities} becomes
\begin{equation}\label{eq:BorelSingularitiesMultiParTrans}
\mathcal{B}[\Psi^{(\bm{n})}](t+\bm{\ell}\cdot\bm{A})
= \frac{a}{2\pi \mathrm{i}\, t} + \mathsf{S}_{\bm{n}\to \bm{n}+\bm{\ell}}\ \mathcal{B}[\Psi^{(\bm{n}+\bm{\ell})}](t)\frac{\log(t)}{2\pi \mathrm{i}} + \text{holomorphic},
\end{equation}
where the {\em Borel residues}
$\mathsf{S}_{\bm{n}\to \bm{n}+\bm{\ell}}$ can be computed in terms of
the Stokes parameters $S_{\bm{\ell}}^{(i)}$ using
\eq{multiParameterBridgeEq}.
In \sec{1Dalienchain}, we explained how, in the one-parameter case, the
bridge equation translates to a set of allowed motions along an
{\em alien chain} of non-perturbative sectors. Furthermore, we gave
rules for computing Stokes
discontinuities and large order formulae for the asymptotic behaviour
of transseries. The natural extension for multi-parameter transseries
is to think of the sectors $\Psi^{(\bm{n})}$ as living on a
$k$-dimensional {\em alien lattice}. The computational rules outlined
in the previous subsections are then to a large extent unaltered, the main
exception being that we have a richer structure of allowed resurgence
motions on the (multi-dimensional) alien lattice.
As an example, we consider the two-dimensional case $k=2$ (see
\fig{alienLatticeExample}), and consider all the motions consisting of
a single step starting from the node $\Psi^{(2,1)}$.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figures/alien_lattice.pdf}
\caption{Two-dimensional alien lattice. We show all
allowed motions of a single step starting from node
$\Psi^{(2,1)}$. Compared to the allowed motions for the
one-dimensional alien chain, we observe a much richer structure of
allowed resurgence motions.}
\label{fig:alienLatticeExample}
\end{figure}
Similar to what happens for the one-parameter transseries, the
constraints in the bridge equation \eq{multiParameterBridgeEq} lead to
only two types of forward motion in the $\bm\ell=(1,0)$ and
$\bm\ell=(0,1)$ directions: $\Delta_{A_1}$ and $\Delta_{A_2}$. In other
words, starting from node $\Psi^{(2,1)}$, one can only
reach\footnote{As will shortly be made clear, the concept of reaching
a node means that the corresponding sector then occurs in the large
order description of the coefficients of the original sector.} node
$\Psi^{(2,3)}$ by acting with $\Delta_{A_2}$ twice. For pure backwards
motion we have, as before, more options. In the example in
\fig{alienLatticeExample}, we see the allowed purely backwards motions
consisting of a single step, obtained by acting with $\Delta_{-A_1}$,
$\Delta_{-A_2}$, $\Delta_{-2A_1}$ and $\Delta_{-A_1-A_2}$. This means that from
$\Psi^{(2,1)}$ one can reach node $\Psi^{(0,1)}$ using either
$\Delta_{-2A_1}$ or twice $\Delta_{-A_1}$.
The example shows that we now also have a type of mixed forward and
backward motion, obtained by acting with $\Delta_{-A_1+A_2}$,
$\Delta_{-2A_1+A_2}$ and $\Delta_{A_1-A_2}$. However, similarly to pure
forward motion, the constraint of the bridge equation limits the
forward part of the motion to minimal step size. E.g.\
acting with $\Delta_{-A_1+2A_2}$ does not lead to an allowed motion.
Finally, we should emphasize that for paths of multiple steps it is
not allowed to mix single steps with different directions, similar to
the fact that one could not mix forward and backward motion in the
one-parameter case. The reason for this is that for the computation
of the Stokes automorphism, \eq{StokesAutomorphism}, in a singular
direction
\begin{equation}\label{eq:singularDirection}
\theta_{\bm\ell} = \arg(\bm{\ell}\cdot \bm{A})\,,
\end{equation}
one only requires alien derivatives in the $\bm\ell$-direction. To
clarify this with an example, note that to reach node $\Psi^{(0,3)}$ starting from
$\Psi^{(2,1)}$ one can act with $\Delta_{-A_1+A_2}$ twice, but the path
where one first acts with $\Delta_{-2A_1+A_2}$ and subsequently with
$\Delta_{A_2}$ is not allowed since that combination does not occur in
any Stokes automorphism.
For the computational rules, the concepts of step, path, their
length, etc.\ stay unchanged in the multi-parameter setting. However,
as we now have a vector of Stokes coefficients, the weight of a step,
\eq{weightStep}, becomes an inner product
\begin{equation}
w(\mathcal{S}(\bm{n}\to\bm{m})) = \bm{m}\cdot \bm{S}_{\bm{m}-\bm{n}}.
\end{equation}
The expression for the Stokes automorphism acting on node $\Psi^{(\bm{n})}$
in the singular direction
$\theta_{\bm\ell} = \arg(\bm{\ell}\cdot \bm{A})$, is now given by the
sum over all paths linking the nodes $\Psi^{(\bm{n}+m\bm{\ell})}$, with
$m>0$. Therefore, we can write in general
\begin{equation}
\underline{\mathfrak{S}}_{\theta_{\bm\ell}}\Psi^{(\bm{n})}
= \Psi^{(\bm{n})}+\sum_{m>0}\text{SF}_{(\bm{n}\to \bm{n}+m\bm{\ell})}e^{-m\frac{\bm{\ell}\cdot\bm{A}}{\alpha}}\Psi^{(\bm{n}+m\bm{\ell})}.
\end{equation}
Likewise, the large order relation \eq{1DlargeOrderEquation} becomes
\begin{equation}\label{eq:MultiParameterLargeOrderFactor}
f_k^{(\bm{n})}
\sim \sum_{\bm{\ell}\neq\bm{n}}
\frac{\text{SF}_{(\bm{n}\to\bm{ \ell})}}{2\pi\mathrm{i}}
\chi_{(\bm{n}\to\bm{\ell})}(k),
\end{equation}
with
\begin{equation}\label{eq:largeOrderFactorMultipParameter}
\chi_{(\bm{n}\to\bm{\ell})}(k)
=\sum_{h=0}^\infty f_h^{(\bm{\ell})}
\frac{\Gamma(k+\beta_{\bm{n}}-h-\beta_{\bm{\ell}})}{((\bm{\ell}-\bm{n})\cdot\bm{A})^{k+\beta_{\bm{n}}-h-\beta_{\bm{\ell}}}}
\end{equation}
the generalization of the large order factor,
\eq{1DlargeOrderFactor}. These generalizations to multi-parameter
transseries will all play a role when we study the Adler function.
As a final note on multi-parameter transseries, we mention that,
despite the constraints discussed above, one may find that the action
of an operator like $\Delta_{2A_i}$ is non-zero for some value of $i$. One possibility is that there is
an additional transseries parameter related to an exponential
transmonomial $e^{-\frac{2A_i}{\alpha}}$. As a result, a sector with
exponential $e^{-\frac{2A_i}{\alpha}}$ lies on top of a sector with
$\left(e^{-\frac{A_i}{\alpha}}\right)^2$. In \cite{Borinsky:2022knn},
which studies the all-order resurgence of factorially divergent series
associated to a renormalon in six-dimensional scalar $\phi^3$ theory,
such a transseries was actually found with three transseries
parameters and exponential transmonomials $e^{-\frac{A_i}{\alpha}}$,
$e^{-\frac{2A_i}{\alpha}}$ and $e^{-\frac{3A_i}{\alpha}}$. A second way in
which the action of $\Delta_{2A_i}$ can be non-zero is that the problem
at hand may not admit a bridge equation of the form
\eq{multiParameterBridgeEq}; in other words, the bridge equation then
simply lacks the constraint that forces $\Delta_{2A_i}$ to vanish. We
will come back to this in our discussion of the Adler function in
\secs{adlerLO}{adlerNLO}.
\label{page:resonance}
Next, we discuss transseries with logarithmic transmonomials $\log(\alpha)$. Note that in the literature, see e.g.~\cite{Garoufalidis:2010ya,Aniceto:2011nu,Aniceto:2018bis,Borinsky:2022knn}, both multi-parameter transseries and logarithmic factors often occur in the case of {\em resonant} transseries, i.e.\ transseries in which multiple $A_i$ sum to zero in such a way that logarithmic factors are required to solve the problem at hand. Although for the Adler function we have two exponents $A_1=-A_2$ and logarithmic factors do occur, in its transseries the two effects are not related and our transseries is not resonant.
One can
of course add such logarithmic transmonomials to the general multi-parameter
transseries \eq{multiParameterTransseries}. However, despite the fact
that we need such a multi-parameter transseries for the Adler function, we
will see in \sec{adlerNLO} that the part of the transseries with
logarithms effectively looks like a one-parameter
transseries.
Therefore, here we only discuss how to extend the one-parameter transseries of
\eq{transseries} by including logarithmic transmonomials. Together with
the details given so far for multi-parameter transseries, it is then
straightforward to generalize this to the case of multi-parameter
transseries with logarithms.
The one-parameter transseries Ansatz with logarithms becomes:
\begin{equation}\label{eq:ansatzWithLogs}
F(\alpha,\sigma)
= \sum_{n=0}^\infty \sigma^n e^{-n\frac{A}{\alpha}} \Psi^{(n)}(\alpha),
\quad\text{with}\quad
\Psi^{(n)}(\alpha)
= \sum_{p=0}^{p_n}\log^p(\alpha)\
\sum_{h=0}^\infty f_h^{(n)[p]}\alpha^{h+\beta_n^{[p]}},
\end{equation}
where we included an expansion in powers of $\log(\alpha)$ up to some finite order $p_n$.
Note that in doing this, we add a new transmonomial $\log(\alpha)$ to the
transseries, but {\em not} a new transseries parameter $\hat{\sigma}$
in addition to $\sigma$. The reason for this is that the addition of
logs generally does not change the location of singularities in the Borel plane,
so there are no new Stokes automorphisms that would act on such a
parameter. For similar reasons, we consider all of the $f_h^{(n)[p]}$
to belong to the same non-perturbative sector and we will always draw
them as a single box in alien chains and lattices.
The derivation of the bridge equation, \eq{resurgenceEqs}, is
unaltered (and thus the allowed motions along the alien chain are the
same), but when we apply Cauchy's residue theorem to obtain large
order relations, we get additional and more complicated integrals. To
be precise, when we expand \eq{Cauchy3} around $\alpha=0$, we
additionally need to perform integrals of the form
\begin{align}
\int_0^\infty dy\ y^s e^{-m\frac{A}{y}} \log^p(y)\,,
\end{align}
for some $s$, $m$ and $p$. These integrals can be
evaluated exactly, and it is straightforward to show that this yields
the following large order relation for the perturbative coefficients
\begin{align}
f_k^{(0)[0]}
&\sim \sum_{\ell=1}^\infty \frac{S_1^\ell}{2\pi\mathrm{i}}\sum_{h=0}^\infty
f_h^{(\ell)[0]}\frac{\Gamma(k-h-\beta_\ell^{[0]})}{(\ell A)^{k-h-\beta_\ell^{[0]}}}\nonumber\\
&\hspace{2cm}
+\sum_{\ell=1}^\infty \frac{S_1^\ell}{2\pi\mathrm{i}}\sum_{h=0}^\infty \sum_{p=1}^{p_\ell}
f_h^{(\ell)[p]}\frac{\big[\log(\ell A)-\partial_g\big]^p\Gamma(g)}{(\ell A)^{k-h-\beta_\ell^{[p]}}}\Bigg|_{g=k-h-\beta_\ell^{[p]}}\,,
\label{eq:largeOrderWithLogs}
\end{align}
where the first term is analogous to \eq{largeOrderRelation}
and the second follows from the logarithmic transmonomials. One
can derive similar equations for the large order behaviour of the
coefficients of the non-perturbative sectors.
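For instance, for $p=1$ the relevant integral can be evaluated by substituting $y=mA/x$ (assuming $mA>0$ and $s<-1$ so that the integral converges):
\begin{equation}
\int_0^\infty dy\ y^s e^{-m\frac{A}{y}} \log(y)
= (mA)^{s+1}\big[\log(mA)-\partial_g\big]\Gamma(g)\Big|_{g=-s-1}\,,
\end{equation}
which is precisely the origin of the $\big[\log(\ell A)-\partial_g\big]^p$ structure in the second term of \eq{largeOrderWithLogs}; higher values of $p$ simply produce higher powers of $\big[\log(mA)-\partial_g\big]$.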
Although most of our discussion in \sec{resurgence} was focused on the
case of a single pole and a $\log$-branch cut, we finish this section
with a short discussion on other types of singularities in the Borel
plane that might appear. Indeed we will need this in the upcoming
sections for the Adler function. In fact, the characteristic
exponents $\beta_{\bm{n}}$ we added in the transseries Ansatz already
allow for higher order poles in the Borel plane. Looking at
\eqs{1DlargeOrderFactor}{MultiParameterLargeOrderFactor}, we notice
that these characteristic exponents play a role in the large order
behaviour of the perturbative and non-perturbative coefficients. To
see how this translates to the Borel plane, consider a formal power
series $F(\alpha)$ and its Borel transform
\begin{equation}\label{eq:valueBetaCoefficientHigherOrderPoles}
F(\alpha) = \sum_{n=0}^\infty \Gamma(n-\beta)\,\alpha^{n+1}
\qquad\implies\qquad
\mathcal{B}[F](t) = \frac{\Gamma(-\beta)}{(1-t)^{-\beta}},
\end{equation}
where we assumed that $\beta<0$. In particular, $\beta=-1,-2,-3,...$
correspond to a single, double, triple pole etc. in the Borel plane.
Furthermore, the sectors with logarithms in the transseries Ansatz
\eq{ansatzWithLogs} yield new types of singularities. As an example, consider the $p=1$ terms of \eq{largeOrderWithLogs} with $\partial_g\Gamma(g) = \Gamma(g)\psi(g)$ (with $\psi(z)=\td{}{z}\log\Gamma(z)$ the digamma function), i.e.\ consider the formal series
\begin{equation}\label{eq:logTransSingularityExample1}
G(\alpha) = \sum_{k=0}^\infty \Gamma(k-\beta)\psi(k-\beta)\,\alpha^{k+1}
\end{equation}
with $\beta<0$ again. Since $\Gamma(k-\beta)\,\psi(k-\beta)=-\partial_\beta\Gamma(k-\beta)$,
its Borel transform follows from \eq{valueBetaCoefficientHigherOrderPoles}
by differentiating with respect to $\beta$:
\begin{equation}\label{eq:logTransSingularityExample}
\mathcal{B}[G](t)
= -\Gamma(-\beta)\frac{\log(1-t)-\psi(-\beta)}{(1-t)^{-\beta}},
\end{equation}
and we indeed observe that these terms lead to a type of singularity we have not considered so far. We
will discuss the role of these Borel singularities for the Adler
function when we encounter them in \secs{adlerLO}{adlerNLO}.
\section{Adler function with one bubble chain \texorpdfstring{($\ord{1/N_f}$)}{}}
\label{sec:adlerLO}
We briefly discussed, in \sec{uvirrenormalons},
how bubble chain diagrams cause perturbative series to
show factorial growth. Here, in \sec{flavour} we formalize this with a
discussion of the flavour expansion in QED and QCD. We then turn to
the Adler function in \sec{adler} and compute the
diagrams that contain one bubble chain and are leading in the flavour expansion. Subsequently we analyze the resulting
asymptotic series using the resurgence techniques developed in
\sec{resurgence}: in \sec{resurgenceLO} for the leading and subleading non-perturbative orders, and in \sec{resurgenceLOall} for all other non-vanishing orders. In \sec{discussionLO} we briefly discuss the results we found and explain how they set us up for the $\ord{1/N_f^2}$ investigations in the rest of the paper.
\subsection{Flavour expansion}
\label{sec:flavour}
Let us consider the flavour expansion of QED/QCD with $N_f$ massless fermions. We can write an observable in perturbation theory as
\begin{equation}
F(\alpha) = \sum_{n=0}^\infty c_n\,\alpha^{n},
\end{equation}
where, in general, the coefficients $c_n$ can be written as an expansion in $N_f$,
\begin{equation}\label{eq:coefficientsExpandedInNf}
c_n = c_n^{(0)} + ... + c_n^{(n-1)}N_f^{n-1}.
\end{equation}
In this flavour expansion we take the large $N_f$ limit while keeping the 't Hooft coupling $N_f\alpha$ fixed. In this limit, the dominant contribution to $c_n$ is given by $c_n^{(n-1)}$ and we therefore reorder the perturbative expansion as an expansion in $1/N_f$:
\begin{equation}\label{eq:flavourExpansion}
F = \frac{1}{N_f}\sum_{n=0}^\infty c_{n}^{(n-1)}(N_f \alpha)^n
\left(
1+ \ORD{\frac{1}{N_f}}
\right).
\end{equation}
To see what diagrams go into each order of $1/N_f$, recall the
one-loop vacuum-polarization graph (fermion bubble)
\begin{equation}
\Pi_{\mu\nu}(k)
=\vcenter{\hbox{\includegraphics[width=.16\textwidth]{figures/fermion_bubble}}}
= (k_\mu k_\nu-\eta_{\mu\nu}k^2)\ \pi(k^2),
\end{equation}
which we already calculated in \eqs{vacpolint}{vacpolint3}, where we found
\begin{equation}\label{eq:fermionBubble}
\pi(k^2) = -\alpha \beta_{0f} \bigg[\log\bigg(\frac{-k^2}{\mu^2}\bigg) + C\bigg].
\end{equation}
Note that we have canceled the UV divergence with a counterterm.
Here $\beta_{0f}=T N_f/(3\pi)$ with $T=1$ in QED and $T=1/2$ in
QCD, while $C$ is a scheme dependent constant (e.g.\ $C=-\frac{5}{3}$ in the $\overline{\text{MS}}$-scheme, as in \sec{uvirrenormalons}).
We notice that such a fermion bubble counts as $N_f\alpha =\ord{1}$ in the flavour expansion.
Including the external lines, the effective photon propagator with $n$ such fermion bubbles reads
\begin{equation}\label{eq:bubbleChain}
\vcenter{\hbox{\includegraphics[width=.17\textwidth]{figures/multiple_fermion_bubbles.pdf}}}
\equiv\underbrace{\vcenter{\hbox{\includegraphics[width=.32\textwidth]{figures/bubble_chain.pdf}}}}_{n\ \text{fermion bubbles}}
=-\frac{\mathrm{i}}{k^2}\Big(\eta_{\mu\nu} - \frac{k_\mu k_\nu}{k^2}\Big)\Big[-\pi(k^2)\Big]^n,
\end{equation}
so that a bubble chain counts as $(N_f\alpha)^n =\ord{1}$ in the flavour expansion as well.
In other words, the coefficients $c_n^{(n-1)}$ in \eq{flavourExpansion}, and also the coefficients at higher orders in the flavour expansion can be computed by replacing virtual photons/gluons with the bubble chain \eq{bubbleChain}. From \eq{fermionBubble} it then follows that this leads to the inclusion of logarithms in the Feynman integral.
As already explained in \sec{uvirrenormalons}, integrating over such logarithms in a Feynman integral leads to factorial growth.
Diagrams including the sum over bubble chains can be computed using the effective (Dyson summed) propagator of such a bubble chain:
\begin{align}
D_{\mu\nu}(k)
&= -\frac{\mathrm{i}}{k^2}\Big(\eta_{\mu\nu} - \frac{k_\mu k_\nu}{k^2}\Big) \frac{1}{1+\pi(k^2)}
-\mathrm{i}\xi \frac{k_\mu k_\nu}{k^4},
\end{align}
with $\xi$ the gauge fixing parameter.
However, instead of working with this Dyson-summed effective propagator, it will be more convenient to work with its Borel transform with respect to $\alpha$ (the coupling appearing in $\pi(k^2)$), which reads \cite{Beneke:1992ch}
\begin{align}
\mathcal{B}\big[\alpha D_{\mu\nu}(k)\big] (u)
=-\frac{\mathrm{i}}{k^2}\bigg(\eta_{\mu\nu}-\frac{k_\mu k_\nu}{k^2}\bigg)
\bigg(-\frac{\mu^2}{k^2}e^{-C}\bigg)^u -\mathrm{i}\xi\frac{k_\mu k_\nu}{k^4},
\label{eq:BorelChain}
\end{align}
where we conveniently rescaled the usual variable $t$ in the Borel plane to $u=-\beta_{0f}t$.
Furthermore, before Borel transforming we inserted an overall factor
of $\alpha$ that will be convenient later, when we perform the actual
diagram calculations.
When we have $n_c>1$ bubble chains in a diagram, we can use the fact that the Borel transform of a product of series is obtained by taking the convolution of the individual Borel transforms:
\begin{equation}\label{eq:flavourExpansionConvoInt}
\mathcal{B}\bigg[\prod_{j=1}^{n_c} \alpha D_{\mu_j\nu_j}(k_j)\bigg](u)
= \frac{1}{(-\beta_{0f})^{n_c-1}} \int_0^u \bigg[\prod_{j=1}^{n_c}du_j\bigg] \delta\Big(u-\sum_{j=1}^{n_c}u_j\Big) \prod_{j=1}^{n_c} \mathcal{B}\big[\alpha D_{\mu_j\nu_j}(k_j)\big] (u_j).
\end{equation}
This can be verified by taking the Laplace transform of both sides of
the equation with respect to $t$. In \sec{convoInt} we give more details on the effect of the convolution integral on the resurgence structure in the case of general $n_c$. In \sec{adlerNLO}, where we discuss the Adler function at order $1/N_f^2$, we will only need the case $n_c=2$.
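As a toy illustration of this convolution identity, here is a minimal sympy sketch (our own illustration, not code from any reference), written in the unrescaled Borel variable $t$ -- the factor $1/(-\beta_{0f})^{n_c-1}$ in \eq{flavourExpansionConvoInt} arises only from the rescaling to $u$ and is absent here -- for two simple series that both start at order $\alpha$:
\begin{verbatim}
import sympy as sp

alpha, t, tau = sp.symbols('alpha t tau', positive=True)
F = alpha/(1 - alpha)      # Borel transform: B[F](t) = exp(t)
G = alpha/(1 - 2*alpha)    # Borel transform: B[G](t) = exp(2*t)

# convolution of the two Borel transforms
conv = sp.integrate(sp.exp(tau)*sp.exp(2*(t - tau)), (tau, 0, t))

# Borel transform of the product F*G, term by term: alpha^(n+1) -> t^n/n!
prod = sp.series(F*G, alpha, 0, 10).removeO()
borel_prod = sum(prod.coeff(alpha, n + 1)*t**n/sp.factorial(n)
                 for n in range(9))

print(sp.simplify(conv))  # exp(2*t) - exp(t)
print(sp.expand(borel_prod - sp.series(conv, t, 0, 9).removeO()))  # 0
\end{verbatim}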
For the case of QED we notice that positive $t$ corresponds to negative $u$,
since $\beta_{0f}$ is positive. For QCD one completes $\beta_{0f}$ to the
full first coefficient of the $\beta$-function $\beta_0 = -(11-2 N_f/3)/(4\pi)$ \cite{Beneke:1998ui,Parisi:1978bj,Mueller:1984vh}, so that now positive $t$
corresponds to positive $u$ (thus, infrared renormalons appear on the
positive $u$ axis for QCD, and on the negative axis for QED).
In practice this implies that one can perform a QED calculation to obtain the non-abelian counterpart in QCD by replacing $\beta_{0f}$ with $\beta_{0}$ and by adding the appropriate SU$(3)$ color factors. In the literature, this procedure is also known as the process of naive non-abelianization \cite{Broadhurst:1994se,Ball:1995ni,Beneke:1994qe}.
We should mention here that in the large $N_f$ limit $\beta_0$ changes sign (this happens for $N_f>16$), after which QCD is no longer asymptotically free.
Hence, for QCD, instead of a large $N_f$ expansion one uses a large
$\beta_0$ expansion, and we rewrite \eq{coefficientsExpandedInNf} as an expansion in $\beta_0$
\begin{equation}
c_n = \tilde{c}_n^{(0)} + ... + \tilde{c}_n^{(n-1)}\beta_0^{n-1}.
\end{equation}
Thus, in the large $\beta_0$ expansion for QCD, the expansion of a generic observable \eq{flavourExpansion} becomes
\begin{equation}\label{eq:flavourExpansionQCD}
F= \frac{1}{\beta_0}\sum_{n=0}^\infty \tilde{c}_{n}^{(n-1)}\left(\beta_0 \alpha\right)^n
\left(1+\ORD{\frac{1}{\beta_0}}\right),
\end{equation}
where we now define the 't Hooft coupling as $\beta_0 \alpha$ for the QCD case. In what follows, we refer to the large $N_f$ expansion for both QED and QCD, even though we use the large $\beta_0$ expansion in QCD. We perform our calculations in QED and in the end use the procedure of naive non-abelianization to convert our results to obtain the QCD result.
\subsection{Adler function}
\label{sec:adler}
We consider the Fourier transform of the correlation function of two vector currents $j_\mu=\bar \psi\gamma_\mu \psi$ of massless quarks, which can be written as
\begin{equation}
(-\mathrm{i})\int d^4x\, e^{-\mathrm{i} qx} \mae{0}{T\{j_\mu(x)j_\nu(0)\}}{0} = (q_\mu q_\nu - \eta_{\mu\nu}q^2)\Pi(Q^2),
\end{equation}
with $Q^2=-q^2$. The Adler function is defined as
\begin{equation}
\label{eq:pitoadler}
D(Q^2) = 4\pi^2Q^2\td{\Pi(Q^2)}{Q^2},
\end{equation}
which is once again an expansion in the coupling $\alpha$. Understanding its asymptotic behaviour in perturbation theory and how
this is related to non-perturbative expansions will be the main focus
for the rest of this work. We use the flavour expansion to isolate
the relevant Feynman diagrams that lead to asymptotic series. In this
section we focus on the diagrams in \fig{AdlerLO} that contribute
to the Adler function at leading order in the flavour expansion. A
brief discussion on the contribution of these diagrams to
the Adler function was already given in \sec{uvirrenormalons}. In
\sec{adlerNLO}, we will then discuss (a subset of) the diagrams at
next-to-leading order in $1/N_f$ for the Adler function.
\begin{figure}
\centering
\begin{subfigure}{4cm}
\includegraphics[width=\textwidth]{figures/adlerLO1.pdf}
\caption{}
\label{fig:adlerLO1}
\end{subfigure}
\begin{subfigure}{4cm}
\includegraphics[width=\textwidth]{figures/adlerLO2.pdf}
\caption{}
\label{fig:adlerLO2}
\end{subfigure}
\caption{Diagrams at $\ord{1/N_f}$ in the flavour expansion. Diagram $(a)$ contributes with a factor of 2 to account for the similar diagram where the bubble chain connects below the vertices.}
\label{fig:AdlerLO}
\end{figure}
Instead of calculating diagrams directly, we shall calculate their
Borel transform, as the bubble chain then simplifies to an analytic,
regularized photon propagator (recall \eq{BorelChain}). In what
follows, we will work in Landau gauge,
i.e.\ $\xi=0$, and in $d=4$ dimensions since the Adler function is
UV finite after counterterms for the fermion bubbles are included.
As we will explain in the next subsection, the
perturbative coefficients of the Adler function itself can then be
easily extracted by applying an inverse Borel transform.
The calculation of the diagram in \fig{adlerLO1} is relatively
straightforward. Using results of \app{masters} and \app{momentumIntegrals} one can show that
the two-loop integral factorizes into one-loop ones. With the one-loop master integral \eq{6} we find
\begin{equation}\label{eq:LOdiagram1}
\mathcal{B}[\Pi_a(Q^2)](u)
= \frac{1}{2\pi^2}\bigg(\frac{Q^2}{\mu^2}e^{C}\bigg)^{-u}
\frac{1}{u(u+1)(u-1)^2(u-2)^2},
\end{equation}
where for notational simplicity we ignored an overall colour Casimir factor $C_F$, which we will reinstate in our final expressions. In order to calculate the diagram of \fig{adlerLO2}, one needs the two-loop scalar master integral given in \eq{resultZakarov}.
By similar methods as for the diagram in \fig{adlerLO1} we derive
\begin{align}\label{eq:LOdiagram2}
\mathcal{B}[\Pi_b(Q^2)](u)
&= -\frac{1}{6\pi^2}\bigg(\frac{Q^2}{\mu^2}e^{C}\bigg)^{-u}
\bigg[
\frac{6}{u(u+1)(u-1)^2(u-2)^2}\nonumber\\
&\hspace{2cm}+
\frac{\psi^{(1)}(\tfrac{4-u}{2})
-\psi^{(1)}(\tfrac{3-u}{2})
+\psi^{(1)}(\tfrac{u+1}{2})
-\psi^{(1)}(\tfrac{2+u}{2})}
{u(u-1)(u-2)}
\bigg],
\end{align}
with $\psi^{(1)}(z)=\td{^2}{z^2}\log\Gamma(z)$ the trigamma function.
Taking the two diagrams together and using \eq{pitoadler} to translate
the result for $\Pi(Q^2)$ to that of the Adler function $D(Q^2)$, we
obtain the Borel transform of the Adler function at leading order (LO) in the flavour expansion (which has also recently been derived in \cite{Mikhailov:2023lqe}):
\begin{align}
\mathcal{B}[D_{\mathrm{LO}}(Q^2)](u)
&=4\pi^2Q^2\td{}{Q^2}\Big[\mathcal{B}[\Pi_b(Q^2)](u) + 2 \mathcal{B}[\Pi_a(Q^2)](u)\Big]\nonumber\\
&= \frac{2}{3}\bigg(\frac{Q^2}{\mu^2}e^C\bigg)^{-u}
\frac{\psi^{(1)}\big(\tfrac{4-u}{2}\big)
-\psi^{(1)}\big(\tfrac{3-u}{2}\big)
+\psi^{(1)}\big(\tfrac{1+u}{2}\big) -\psi^{(1)}\big(\tfrac{2+u}{2}\big)}{(u-1)(u-2)}.\label{eq:LOAdler}
\end{align}
In the next subsection we use this result to perform a resurgence
analysis, as the expression readily allows for an expansion around $u=0$. Note that this result was already known in the equivalent form \cite{Broadhurst:1992si}
\begin{align}
\mathcal{B}[D_{\mathrm{LO}}(Q^2)](u)
&= \frac{32}{3}\bigg(\frac{Q^2}{\mu^2}e^C\bigg)^{-u}
\frac{1}{2-u}
\sum_{n=2}^\infty \frac{(-1)^n n}{(n^2-(1-u)^2)^2}\label{eq:Broadhurst}\\
&= \frac{32}{3}\bigg(\frac{Q^2}{\mu^2}e^C\bigg)^{-u}
\frac{1}{2-u} \sum_{n=2}^\infty \frac{(-1)^n}{4(1-u)} \bigg[\frac{1}{(n-1+u)^2}-\frac{1}{(n+1-u)^2}\bigg],\label{eq:AdlerNLOReadPoleStructure}
\end{align}
where the second form of this equation allows one to easily read off
the pole structure. We see that there exists an infinite set of both
UV ($u<0$) and IR ($u>0$) singularities (for QCD) at integer values of $u$, all
of which are double poles, except for the singularity at $u=2$ which
is a single pole and the singularity at $u=1$ which is
absent\footnote{Note that although there appears to be a pole at $u=1$
in \eq{AdlerNLOReadPoleStructure}, it vanishes because the
expansion around $u=1$ of the terms in square brackets starts at
order $\ord{u-1}$.} (see also \fig{BorelSingularities}). This also agrees with the calculation that led to \eq{11}, which presented the leading IR and UV poles in the Borel plane. We should
mention here that the singularity at $u=1$ really is absent:
it is present in \eq{LOdiagram1} and \eq{LOdiagram2} separately, but cancels when we take the two diagrams together. The
fact that the IR renormalon at $u=1$ is absent is characteristic for
the Adler function and is already expected on physical grounds, as there is no dimension-2 operator in the OPE for the Adler function with massless fermions \cite{Parisi:1978az,Mueller:1984vh}.
\begin{figure}
\centering
\includegraphics[width=.7\linewidth]{figures/singularity_plot}
\caption{Singularities of $\mathcal{B}[D_\mathrm{LO}](u)$. The UV renormalons lie at $u=-1,-2,-3,...$ and the IR renormalons at $u=2,3,4,...$. Characteristic for the Adler function is that the singularity at $u=1$ is missing (see the discussion below \eq{AdlerNLOReadPoleStructure}).}
\label{fig:BorelSingularities}
\end{figure}
\subsection{Resurgence analysis: first two non-perturbative sectors}
\label{sec:resurgenceLO}
Our goal is to construct the transseries
for the Adler function. For the associated resurgence analysis we need the perturbative coefficients of the leading order
Adler function $D_{\mathrm{LO}}(Q^2)$ itself, i.e.\ we need to perform an inverse
Borel transform on the results obtained in the previous subsection. In this section, and the sections hereafter, we will work mostly with the variable $u=-\beta_{0}t$ as the actual Borel parameter instead of $t$, so that singularities in the Borel plane are conveniently placed at integer positions.
Therefore, we expand the Borel transform \eq{LOAdler} around $u=0$ and
write the result as
\begin{equation}
\mathcal{B}[D_{\mathrm{LO}}(Q^2)](u)
= \sum_{n=0}^\infty \frac{d_n}{\Gamma(n+1)} u^n,
\end{equation}
after which the perturbative coefficients $d_n$ of $D_{\mathrm{LO}}$ can be read off:
\begin{align}
\hat{D}_{\mathrm{LO}}(\hat\alpha)
&= \sum_{n=0}^\infty d_n\hat\alpha^{n+1}\label{eq:AdlerLOPerturbativeSector1}\\
&= \hat\alpha
+ \left(\frac{23}{6}-4\zeta_3\right)\hat\alpha^2
+ \left(18-12\zeta_3\right)\hat\alpha^3
+ \left(\frac{201}{2}-42\zeta_3-60\zeta_5\right)\hat\alpha^4
+...\nonumber
\end{align}
Here $\hat\alpha=-\beta_{0}\alpha$ is the variable conjugate to the Borel variable $u$. As the inverse Borel transform gives an additional factor of $\hat{\alpha}$, we also defined $\hat{D}(Q^2)=-\beta_0D(Q^2)$, to compensate for the additional factor of $-\beta_0$. Furthermore, to avoid logarithms of
$\frac{Q^2}{\mu^2}e^C$ that will make the analysis needlessly
complicated, we choose $\mu^2=Q^2e^C$. For notational convenience, we will drop the hats on $\hat{D}$ and $\hat{\alpha}$ in what follows. When we give the full transseries expression at
the end of this section, we will reinstate the factors of
$\beta_{0}$.
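Before proceeding, it may be helpful to see how such coefficients can be generated in practice. The following minimal Python sketch (our own illustration using mpmath, not code from any reference; all names are ours) Taylor expands the trigamma functions in \eq{LOAdler} -- with $\mu^2=Q^2e^C$, so that the prefactor equals one -- and divides the resulting power series by $(u-1)(u-2)$:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 50   # working precision, generous for the ratio tests below
N = 60           # number of coefficients (the paper uses 200)

def psi1_coeffs(a, sgn):
    # Taylor coefficients around u = 0 of psi^(1)((a + sgn*u)/2)
    return [mp.psi(1 + j, mp.mpf(a)/2)*(mp.mpf(sgn)/2)**j/mp.factorial(j)
            for j in range(N)]

num = [w - x + y - z for w, x, y, z in zip(psi1_coeffs(4, -1),
       psi1_coeffs(3, -1), psi1_coeffs(1, 1), psi1_coeffs(2, 1))]

q = []                       # series division by (u-1)(u-2) = 2 - 3u + u^2
for n in range(N):
    val = num[n]
    if n >= 1: val += 3*q[n - 1]
    if n >= 2: val -= q[n - 2]
    q.append(val/2)

# d_n = n! * [u^n] B[D_LO](u), since B maps alpha^(n+1) to u^n/n!
d = [mp.factorial(n)*mp.mpf(2)/3*q[n] for n in range(N)]
print(d[0], d[1])   # 1.0 and -0.9748943... = 23/6 - 4 zeta(3)
\end{verbatim}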
With the exact Borel transform \eq{LOAdler} one can easily compute the
first, say $n=1000$, perturbative coefficients. (However, for the resurgence
analysis we perform below, we found that $200$ coefficients was
enough.) With these coefficients at our disposal, we can start
thinking about what we may expect the transseries to look like. As
explained near \eq{BorelSingularitiesMultiParTrans}, for
a $k$-parameter transseries the Borel singularities of the
transseries sectors lie at positions $u=\bm{\ell}\cdot\bm{A}$ with
$\bm{\ell}\in\mathbb N^k$ and $\bm{A}=(A_1,...,A_k)$ the non-perturbative
exponents. As the poles of $\mathcal{B}[D_\mathrm{LO}](u)$ lie at both positive and
negative integer values of $u$, a minimal Ansatz for the transseries
is a two-parameter transseries with $A_1 = 1$ and $A_2 = -1$ the
non-perturbative exponents. Therefore, we write the perturbative
sector as
\begin{equation}
\label{eq:7}
D_{\mathrm{LO}}^{(0,0)}(\alpha)
\equiv \sum_{n=0}^\infty d_n^{(0,0)}\alpha^{n+1},
\end{equation}
with $d_n^{(0,0)}\equiv d_n$ the perturbative coefficients of \eq{AdlerLOPerturbativeSector1}. This will now be the $(0,0)$-sector of a two-parameter transseries
\begin{equation}
\label{eq:5}
D_{\mathrm{LO}}(\alpha)
= \sum_{n=0}^\infty\sum_{m=0}^\infty \sigma_1^n\sigma_2^m e^{-n\frac{A_1}{\alpha}}e^{-m\frac{A_2}{\alpha}}D_{\mathrm{LO}}^{(n,m)}(\alpha).
\end{equation}
We should emphasize here that this is a {\em minimal} transseries Ansatz and, recalling our discussion in \sec{generalizations}, it might be the case that one needs additional transseries parameters on top of the parameters $\sigma_1$ and $\sigma_2$, as well as further (e.g.\ logarithmic) transmonomials.
We shall discuss the interpretation of the transseries parameters in \sec{discussionLO} and will indeed find logarithmic transmonomials when we extend our analysis to order $1/N_f^2$ in \sec{adlerNLO}.
In order to test the Ansatz \eq{5}, and construct the non-perturbative sectors
\begin{equation}
D_{\mathrm{LO}}^{(n,m)}(\alpha) = \alpha^{\beta_{nm}}\sum_{h=0}^\infty d_h^{(n,m)}\alpha^h,
\end{equation}
we will use resurgent large order relations. Note that we use the conventional notation $\beta_{nm}$ (and sometimes for readability $\beta_{n,m}$) for the starting orders in the non-perturbative sectors; of course these orders (that always have two indices) should not be confused with the $\beta$-function and its coefficients. Recall that we can
readily write large order relations using the allowed motions on the
alien lattice, which for our two-parameter Ansatz looks as shown in
\fig{alienLatticeExample}. Since our starting node is the
$(0,0)$-sector, we know that we can only have forward steps in the
$(1,0)$ and $(0,1)$ direction, so that only the $(\ell,0)$ and
$(0,\ell)$ sectors will play a role. In other words, for the large
order behaviour of the perturbative coefficients, we get
\begin{align}
\label{eq:8}
d_k^{(0,0)}
\sim \sum_{\ell=1}^\infty \frac{S_{1,0}^\ell}{2\pi\mathrm{i}}\sum_{h=0}^\infty d_h^{(\ell,0)}\frac{\Gamma(k-h-\beta_{\ell,0})}{(\ell A_1)^{k-h-\beta_{\ell,0}}}
+ \sum_{\ell=1}^\infty \frac{S_{0,1}^\ell}{2\pi\mathrm{i}}\sum_{h=0}^\infty d_h^{(0,\ell)}\frac{\Gamma(k-h-\beta_{0,\ell})}{(\ell A_2)^{k-h-\beta_{0,\ell}}},
\end{align}
which is a relation between the perturbative coefficients $d_k^{(0,0)}$ and the non-perturbative coefficients $d_h^{(\ell,0)}$ and $d_h^{(0,\ell)}$.
As we will not determine the Stokes constants $S_{1,0}$ and $S_{0,1}$ in this paper, and to avoid writing down ubiquitous factors of $2\pi\mathrm{i}$, we will absorb these factors in the non-perturbative coefficients. Therefore we write
\begin{equation}
\tilde{d}_h^{(\ell,0)} \equiv \frac{S_{1,0}^\ell}{2\pi\mathrm{i}} d_h^{(\ell,0)}
\qquad\text{and}\qquad
\tilde{d}_h^{(0,\ell)} \equiv \frac{S_{0,1}^\ell}{2\pi\mathrm{i}} d_h^{(0,\ell)}\,.
\end{equation}
We shall see in a moment that ordering the elements of \eq{8} by their
size gives us a way to extract the non-perturbative coefficients,
keeping in mind that this relation is valid in the large $k$ limit.
First, we observe that nodes in the alien lattice that are further away from the
$(0,0)$-node, i.e.\ with larger $\ell$, have a more exponentially
suppressed contribution to the large order growth of $d_k^{(0,0)}$.
As $A_1=1$ and $A_2=-1$ are equal in size, the leading order growth
comes from the $(1,0)$ and $(0,1)$ sectors. However, we saw above that
the pole in the Borel plane at $u=1$ is missing, meaning that the
$(1,0)$-contribution is actually absent; hence the leading
growth comes from the $(0,1)$ sector:
\begin{align}\label{eq:largeOrderSector1}
d_k^{(0,0)}
&\sim
\sum_{h=0}^\infty \tilde{d}_h^{(0,1)}\frac{\Gamma(k-h-\beta_{0,1})}{A_2^{k-h-\beta_{0,1}}}
+ \ord{2^{-k}} \\
&=
\frac{\Gamma(k-\beta_{0,1})}{A_2^{k-\beta_{0,1}}}
\bigg[
\tilde{d}_0^{(0,1)}
+ \frac{A_2\, \tilde{d}_1^{(0,1)}}{k}
+ \frac{A_2(\beta_{0,1}+1) \tilde{d}_1^{(0,1)}+A_2^2\, \tilde{d}_2^{(0,1)}}{k^2}
+...
\bigg]
+ \ord{2^{-k}}\,, \nonumber
\end{align}
where $\ord{2^{-k}}$ refers to contributions from the $(\ell,0)$ and $(0,\ell)$ non-perturbative
sectors with $\ell \geq 2$.
In the second line we factored out the leading order growth in
$k$ and explicitly wrote the first two $1/k$ corrections.
To show that this expression indeed captures the large order
behaviour of the perturbative coefficients, and in order to find the various
as yet unknown coefficients that appear in this large order relation, we
can perform so-called {\em ratio tests} (see e.g.\ \cite{Marino:2007te, Garoufalidis:2010ya, Aniceto:2011nu}) on the known
perturbative coefficients $d_k^{(0,0)}$. To this end, we start with $A_2$ and
$\beta_{0,1}$, and consider
\begin{equation}\label{eq:ratioA}
\mathbb A(k) \equiv k\,\frac{d_k^{(0,0)}}{d_{k+1}^{(0,0)}} \sim A_2 +\ORd{\frac{1}{k}}
\end{equation}
and
\begin{equation}\label{eq:ratioBeta}
\mathbb B(k) \equiv k\log\bigg[\frac{f_{k+1}}{f_k}\bigg]
\sim - \beta_{0,1}+\ORd{\frac{1}{k}}
\qquad\text{with}\qquad
f_k = \frac{d_k^{(0,0)} A_2^k}{\Gamma(k)},
\end{equation}
which should converge to $A_2$ and $-\beta_{0,1}$ in the large
$k$-limit, respectively. In \figs{ratioA}{ratioBeta} we have plotted
these two sequences together with their first {\em Richardson
transform} for the first 200 perturbative coefficients.
The Richardson transform method accelerates convergence of
sequences of the form
\begin{equation}
s_0=\lim_{k\to\infty}\mathbb S(k)
\qquad\text{with}\qquad
\mathbb S(k) = s_0+\frac{s_1}{k}+\frac{s_2}{k^2}+...
\end{equation}
where the coefficients $s_n$ are not known explicitly. We denote
the $N$th Richardson transform of $\mathbb S(k)$ by RT$[\mathbb S](k,N)$. We refer
the interested reader to \app{richardson} for more details.
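In code, continuing the sketch above (the function \texttt{richardson} implements one standard form of RT$[\mathbb S](k,N)$; all names are ours):
\begin{verbatim}
def richardson(s, k, N):
    # N-th Richardson transform of a sequence s(k) = s0 + s1/k + s2/k^2 + ...
    return sum((-1)**(j + N)*mp.mpf(k + j)**N*s(k + j)
               / (mp.factorial(j)*mp.factorial(N - j)) for j in range(N + 1))

Aseq = lambda k: k*d[k]/d[k + 1]                   # the sequence A(k)
fk   = lambda k: d[k]*(-1)**k/mp.factorial(k - 1)  # f_k for A_2 = -1
Bseq = lambda k: k*mp.log(fk(k + 1)/fk(k))         # the sequence B(k)

print(richardson(Aseq, 40, 8))  # -> -1.000000... = A_2
print(richardson(Bseq, 40, 8))  # ->  2.000000... = -beta_{0,1}
\end{verbatim}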
\begin{figure}[t]
\centering
\begin{subfigure}{7cm}
\includegraphics[width=\textwidth]{figures/ratioA.pdf}
\caption{}
\label{fig:ratioA}
\end{subfigure}
\hspace{.5cm}
\begin{subfigure}{7cm}
\includegraphics[width=\textwidth]{figures/ratioBeta.pdf}
\caption{}
\label{fig:ratioBeta}
\end{subfigure}
\caption{Fig.~(a) shows the sequence $\mathbb A(k)$, \eq{ratioA} (top curve, blue),
together with its first Richardson transform (bottom curve, orange) to accelerate
the convergence. Similarly, (b) shows the sequence $\mathbb B(k)$ (bottom curve, blue),
\eq{ratioBeta}, and its first Richardson transform (top curve, orange). We observe that
already the first Richardson transform significantly accelerates the
convergence. As explained in the text, higher order Richardson
transforms converge even faster and become indistinguishable from
the gray, horizontal lines denoting the expected values $A_2=-1$ and
$\beta_{0,1}=-2$.}
\label{fig:ratioLO}
\end{figure}
Above \eq{7} we expressed the expectation that the non-perturbative exponent $A_2$ is equal
to $-1$. Recalling our discussion around
\eq{valueBetaCoefficientHigherOrderPoles}, also the value
$\beta_{0,1}=-2$ for the characteristic exponent was expected since
the leading singularity in the Borel plane at $u=-1$ is a double pole. Using Richardson transforms, we confirm these expectations to good numerical precision. For example, RT$[\mathbb A](190,10)$
agrees to 23 decimal places with $A_2=-1$, and we obtain similar
precision for $\beta_{0,1}= -2$.
We can now systematically extract the non-perturbative coefficients
$d_h^{(0,1)}$ in \eq{largeOrderSector1}. First, we rewrite that equation in the form
\begin{equation}\label{eq:ratioCoef1}
\mathbb D_0(k)
\equiv\frac{d_k^{(0,0)} A_2^{k-\beta_{0,1}}}{\Gamma(k-\beta_{0,1})}
\sim
\tilde{d}_0^{(0,1)}+\ORD{\frac{1}{k}}.
\end{equation}
In \fig{LO_growth} we have plotted the sequence together with its 10th Richardson transform, and we conclude that
\begin{equation}
\tilde{d}_0^{(0,1)} = \frac{4}{9}
\end{equation}
to great precision, e.g.\ RT$[\mathbb D_0](190,10)$ agrees with $4/9$ to 26 decimal
places. We note here that we determined the value of
$\tilde{d}_0^{(0,1)}$, in which the unknown Stokes constant $S_{0,1}$
is absorbed. This is actually a generic phenomenon in the resurgence of
transseries with free parameters such as $\sigma_1$ and $\sigma_2$ in \eq{5}; one can see from that expression that rescaling these
parameters will rescale the expansion coefficients in the
non-perturbative sectors. Moreover, as $\sigma_i$ is rescaled, the overall Stokes constants which determine the jump in $\sigma_i$ are similarly rescaled. As a
result, only `scale invariant' combinations of the expansion
coefficients and the Stokes constants have a physical
meaning. Computing the Stokes constants (after fixing this scale invariance) is a different enterprise, which we will not
pursue in this paper.
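In code, continuing the sketch above, this extraction is a one-liner:
\begin{verbatim}
D0 = lambda k: d[k]*(-1)**(k + 2)/mp.factorial(k + 1)  # A_2^(k+2)/Gamma(k+2)
print(richardson(D0, 40, 8))   # -> 0.444444... = 4/9
\end{verbatim}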
\begin{figure}
\centering
\begin{subfigure}{7cm}
\includegraphics[width=\textwidth]{figures/LO_growth.pdf}
\caption{}
\label{fig:LO_growth}
\end{subfigure}
\hspace{.5cm}
\begin{subfigure}{7cm}
\includegraphics[width=\textwidth]{figures/NLO_growth.pdf}
\caption{}
\label{fig:NLO_growth}
\end{subfigure}
\caption{Figs.~$(a)$ and $(b)$ show the sequence $\mathbb D_0(k)$ (top curve left, blue),
\eq{ratioCoef1}, and $\mathbb D_1(k)$ (bottom curve right, blue), \eq{ratioCoef2}, respectively. To
accelerate the rate of convergence, the other curves show the 10th Richardson
transform in orange, which shows good convergence to the exact values
$\frac49$ and $\frac{10}{27}$, respectively, denoted by the gray horizontal
lines.}
\label{fig:sector01}
\end{figure}
Next, we can plot the sequence
\begin{equation}\label{eq:ratioCoef2}
\mathbb D_1(k)
\equiv k\left[\mathbb D_0(k) -\frac{4}{9}\right]
\sim A_2\, \tilde{d}_1^{(0,1)}+\ORd{\frac{1}{k}}
\end{equation}
and from \fig{NLO_growth} we deduce that this sequence converges to
\begin{equation}
A_2\, \tilde{d}_1^{(0,1)} = \frac{10}{27},
\end{equation}
which can again be verified up to 26 decimal places by doing 10
Richardson transforms. We can repeat this process and in fact derive a
closed form for all of the $1/k$ contributions in the large order
relation:
\begin{align}
d^{(0,0)}_k
&\sim \frac{\Gamma(k+2)}{(-1)^{k}}
\bigg[
\frac49
+ \frac{10}{27}\frac{1}{k}\sum_{n=0}^\infty \bigg(\frac{-1}{k}\bigg)^n
\bigg]
+ \ord{2^{-k}}\nonumber\\
&= \frac{\Gamma(k+2)}{(-1)^{k}}
\bigg[
\frac49
+ \frac{10}{27}\frac{1}{k+1}
\bigg]
+ \ord{2^{-k}}.
\end{align}
Using \eq{largeOrderSector1}, we can also rewrite this in terms of the non-perturbative coefficients $d_h^{(0,1)}$, which are seen to be
\begin{equation}\label{eq:LOAdler(0,1)coeffs}
\tilde{d}_0^{(0,1)}= \frac{4}{9},
\qquad
\tilde{d}_1^{(0,1)}= -\frac{10}{27},
\qquad
\tilde{d}_{h\geq2}^{(0,1)}= 0.
\end{equation}
Most of the coefficients vanish and, as a result, the
$(0,1)$ sector is not an asymptotic series but a finite one. We
will revisit this point more extensively later.
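Numerically, the quality of this closed form is easy to exhibit in the sketch above; the remainder is suppressed like $2^{-k}$, as it comes from the $\ell=2$ sectors:
\begin{verbatim}
k = 40
closed = mp.factorial(k + 1)*(-1)**k*(mp.mpf(4)/9 + mp.mpf(10)/27/(k + 1))
print(mp.fabs((d[k] - closed)/d[k]))   # ~ 1e-13, of order 2^(-k)
\end{verbatim}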
So far, from the first 200 perturbative coefficients that we computed numerically and analyzed, we have extracted the complete first non-perturbative sector $D_\mathrm{LO}^{(0,1)}$. For the leading order Adler function, we know the
exact Borel transform \eq{LOAdler} and can therefore
check our large order relations. That is, we have that
\begin{align}
\mathcal{B}[D_{\mathrm{LO}}^{(0,0)}](u)\Big|_{u=-1}
&= \frac{4/9}{(u+1)^2} + \frac{10/27}{(u+1)} + \text{regular terms},
\end{align}
which indeed agrees with \eq{LOAdler(0,1)coeffs}. (This is also the
reason that we wrote the above numerical estimates with an equal
sign.) Note that this leading UV behaviour was already indicated in \eq{11}, but here we determined the precise coefficient, as well as the subleading term. We emphasize that this is a rather special situation. In
many resurgence examples one does not have the luxury of knowing the exact
Borel transform. In fact, in \sec{adlerNLO} when we study the Adler
function at the next order in the flavour expansion, we cannot compute the Borel transform exactly
anymore. Clearly, however, we can study
the perturbative coefficients numerically (using ratio tests similar to
the ones above), and extract the non-perturbative sectors without
ambiguity.
Having obtained the complete first non-perturbative sector, we can now subtract the corresponding leading order growth from the large order expression for the perturbative sector coefficients. That is, we construct the sequence
\begin{equation}\label{eq:LOsubtractSector1}
\delta_k^{(1)}
\equiv d_k^{(0,0)}-\frac{\Gamma(k+2)}{(-1)^{k}}
\bigg[
\frac49
+ \frac{10}{27}\frac{1}{k+1}
\bigg]
\end{equation}
where the asymptotic growth should now be dominated by the non-perturbative sectors $(2,0)$ and $(0,2)$, i.e.
\begin{align}\label{eq:largeOrderLOsector2}
\delta_k^{(1)}
\sim
\sum_{h=0}^\infty \tilde{d}_h^{(2,0)}\frac{\Gamma(k-h-\beta_{2,0})}{(2 A_1)^{k-h-\beta_{2,0}}}
+
\sum_{h=0}^\infty \tilde{d}_h^{(0,2)}\frac{\Gamma(k-h-\beta_{0,2})}{(2 A_2)^{k-h-\beta_{0,2}}}
+ \ord{3^{-k}}.
\end{align}
Since the Borel transform has poles at both $u=-2$ and $u=+2$, we now
see two sectors appearing in this formula. Note also that
$A_1=-A_2=1$, i.e.\ the two exponents are equal in size but opposite in
sign. As a result, even though the $(\ell,0)$ and $(0,\ell)$ sectors are
otherwise quite different from each other, as we shall see later, the
parity of $k$ has an important effect on the right hand side of this expression. In
particular, in order to use Richardson transforms to speed up the
convergence of series, we have to separate even and odd $k$ and
perform Richardson transforms on them separately. In
\figs{sector2A}{sector2beta}, we show the sequences of \eq{ratioA} and
\eq{ratioBeta}, but now constructed from $\delta_k^{(1)}$. Notice that
the ratio test \eq{ratioBeta} can be performed if we assume that the
starting orders are the same, $\beta_{2,0}=\beta_{0,2}\equiv\beta_2$.
Even though we find in \eq{LOsector2parameters} below that strictly speaking $\beta_{2,0}\neq \beta_{0,2}$, this is not an issue since we can compute with $\beta_2 = \min(\beta_{2,0},\beta_{0,2})$ and allow for a finite number of vanishing leading coefficients in one of the two series. Here, we
conclude that $2A_2 = -2A_1=-2$ and $\beta_{2}=-2$.
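The corresponding numerical experiment (continuing the sketch above) subtracts the $(0,1)$ sector growth and accelerates the even-$k$ and odd-$k$ subsequences separately:
\begin{verbatim}
delta1 = [d[k] - mp.factorial(k + 1)*(-1)**k*(mp.mpf(4)/9
          + mp.mpf(10)/27/(k + 1)) for k in range(N)]
A1 = lambda k: k*delta1[k]/delta1[k + 1]
print(richardson(lambda k: A1(2*k), 12, 4))      # even k: -> -2 = 2*A_2
print(richardson(lambda k: A1(2*k + 1), 12, 4))  # odd  k: -> -2
\end{verbatim}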
\begin{figure}
\centering
\begin{subfigure}{7cm}
\includegraphics[width=\textwidth]{figures/sector2AforPaper.pdf}
\caption{}
\label{fig:sector2A}
\end{subfigure}
\hspace{.5cm}
\begin{subfigure}{7cm}
\includegraphics[width=\textwidth]{figures/sector2betaforPaper.pdf}
\caption{}
\label{fig:sector2beta}
\end{subfigure}
\caption{In Figs.~$(a)$ and $(b)$, we show the sequences $\mathbb A(k)$ of \eq{ratioA} (outer two curves, in blue) and $\mathbb B(k)$ of \eq{ratioBeta}, but now constructed from $\delta_k^{(1)}$ given in \eq{LOsubtractSector1}, together with their 2nd Richardson transforms (middle two curves, in orange). Note that as the parity of $k$ is important, we took the Richardson transforms separately on even and odd $k$.}
\label{fig:sector2}
\end{figure}
Following \eqs{ratioCoef1}{ratioCoef2}, but now also taking the parity of $k$ into account by using ratio tests for even and odd $k$ separately, we obtain
\begin{equation}\label{eq:LOsector2parameters}
\tilde{d}_0^{(2,0)} = 0,
\qquad
\tilde{d}_1^{(2,0)} = 2
\qquad \mbox{and} \qquad
\tilde{d}_0^{(0,2)} = -\frac29,
\qquad
\tilde{d}_1^{(0,2)} = \frac{7}{54},
\end{equation}
with all other coefficients equal to zero.
This can be compared to the expansions around $u=-2$ and $u=2$ respectively, and we find the expected agreement with the exact Borel transform:
\begin{align}
\mathcal{B}[D_{\mathrm{LO}}^{(0,0)}](u)\Big|_{u=-2}
&= -\frac{2/9}{(2+u)^2} - \frac{7/54}{(2+u)} + \text{regular terms}\nonumber\\
\mathcal{B}[D_{\mathrm{LO}}^{(0,0)}](u)\Big|_{u=2}
&= \frac{2}{(2-u)} + \text{regular terms}\,,
\end{align}
where this leading IR behaviour was indicated in \eq{11}, here given with the precise coefficient.
Notice that in the second line the leading singularity around $u=2$ is a single pole and
therefore one should expect $\beta_{2,0}=-1$ instead of $-2$, as mentioned above. Indeed,
we found that $\tilde{d}_0^{(2,0)}=0$ and one can conclude that the series effectively starts at order $-1$.
\subsection{Resurgence analysis: all non-perturbative sectors}
\label{sec:resurgenceLOall}
Having used resurgence to recover the expressions for the first and second
non-perturbative sectors of the Adler function transseries, we can
repeat the analysis of the previous subsection to also obtain other
sectors. Since the procedure is very similar, we will be more brief here about the techniques and will focus more on the results and on the general structure that emerges.
To find higher sectors, we recursively probe the next sector by
subtracting from the large order expression for the perturbative coefficients all sectors we have found
so far. In this way we find the following pattern:
\begin{equation}\label{eq:LOallOrderLargeOrderFormula}
\delta_k^{(\ell)}
\equiv \delta_k^{(\ell-1)} - \frac{\Gamma(k+2)}{(-\ell)^{k+2}}\bigg(s_k^{(\ell)}+\frac{t_k^{(\ell)}}{k+1}\bigg),
\end{equation}
where $\ell$ labels the sectors, $\delta_k^{(0)}\equiv d_k^{(0,0)}$, and the coefficients $\delta_k^{(\ell)}$ are the ones whose large order
behaviour can be used to probe sector $\ell+1$. Each sector provides
two nonzero coefficients, $s_k^{(\ell)}$ and $t_k^{(\ell)}$, whose
values for $1\leq\ell\leq8$ are given in \tab{adlerHigherSectors}. As
before, we obtained these numbers numerically, using
ratio tests. However, for these specific diagrams, we fortunately have
an analytic Borel transform result, and, as in the previous subsection, our
numbers can be directly checked with the expansion of the Borel
transform around the poles. Thus, our numbers are not only
approximately equal to the given fractions, but
turn out to be exact.
\begin{table}
\centering
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{|c|c|c|}
\hline
$\ell$ & $s_k^{(\ell)}$ & $t_k^{(\ell)}$ \\
\hline
1 & $\frac49$ & $\frac{10}{27}$\\
2 & $-\frac29$ & $-\frac{7}{27}+4(-1)^k$\\
3 & $\frac{2}{15}-\frac43(-1)^k$ & $\frac{9}{50}-6(-1)^k$ \\
4 & $-\frac{4}{45}+\frac49(-1)^k$ & $-\frac{88}{675}+\frac{40}{27}(-1)^k$\\
\hline
\end{tabular}
\hspace{.2cm}
\begin{tabular}{|c|c|c|}
\hline
$\ell$ & $s_k^{(\ell)}$ & $t_k^{(\ell)}$ \\
\hline
5 & $\frac{4}{63}-\frac{2}{9}(-1)^k$ & $\frac{130}{1323}-\frac{35}{54}(-1)^k$\\
6 & $-\frac{1}{21}+\frac{2}{15}(-1)^k$ & $-\frac{15}{196}+\frac{9}{25}(-1)^k$\\
7 & $\frac{1}{27}-\frac{4}{45}(-1)^k$ & $\frac{119}{1944}-\frac{154}{675}(-1)^k$\\
8 & $-\frac{4}{135}+\frac{4}{63}(-1)^k$ & $-\frac{304}{6075}+\frac{208}{1323}(-1)^k$\\
\hline
\end{tabular}
\caption{This table gives the values of $s_k^{(\ell)}$ and $t_k^{(\ell)}$ for $1\leq\ell\leq8$ contributing to the large order behaviour of the perturbative coefficients via \eq{LOallOrderLargeOrderFormula}. As explained in the text, these values can be extracted numerically from the perturbative coefficients, but as these values can also be extracted from the exact Borel transform, this table is exact.}
\label{tab:adlerHigherSectors}
\end{table}
A closer examination of these numbers reveals the following general pattern:
\begin{equation}
s_k^{(\ell)}
= \frac83(-1)^\ell
\begin{cases}
\frac{-1}{(\ell+1)(\ell+2)}
&\quad\text{for}\quad \ell<3\\
\frac{-1}{(\ell+1)(\ell+2)}
+ (-1)^k\frac{1}{(\ell-2)(\ell-1)}
&\quad\text{for}\quad \ell\geq3.
\end{cases}
\end{equation}
Similarly, we find a closed form for $t_k^{(\ell)}$:
\begin{equation}
t_k^{(\ell)}
= \frac83(-1)^\ell
\begin{cases}
-\frac{5}{36} &\quad\text{for}\quad \ell=1\\
-\frac{7}{72}+\frac32(-1)^k &\quad\text{for}\quad \ell=2\\
\frac{-\ell(2\ell+3)}{(\ell+1)^2(\ell+2)^2}
+ (-1)^k\frac{\ell(2\ell-3)}{(\ell-2)^2(\ell-1)^2}
&\quad\text{for}\quad \ell\geq3.
\end{cases}
\end{equation}
Observe that in all these results, there are terms without $k$-dependent signs as well as terms with $(-1)^k$ factors, respectively probing the coefficients of the $(\ell,0)$ sectors (with non-perturbative exponent $\ell A_1 = +\ell$) and the $(0,\ell)$ sectors (with non-perturbative exponent $\ell A_2 = -\ell$).
Also notice that we have a closed form with a fixed pattern for all $\ell\geq3$, while
$\ell=1$ and $\ell=2$ are different. This can be traced back to the
fact that the LO Adler function is the sum of the two diagrams of
\fig{AdlerLO}, where the first diagram only contributes
singularities at $u=-1$, $u=1$ and $u=2$ to the Adler function. Here
we recall \eq{LOdiagram1}, and note that the singularity at $u=0$ from that expression vanishes
when we take the derivative w.r.t.\ $Q^2$ to obtain the Adler function.
Combining all results, we arrive at the first main result of this
paper: the complete asymptotic expansion of the perturbative
coefficients of the Adler function at LO in the flavour expansion,
\begin{equation}
d_k^{(0,0)}
= \frac{\Gamma(k+2)}{(-1)^k}
\sum_{\ell=1}^\infty\frac{1}{\ell^{k+2}}\bigg(
s_k^{(\ell)}+\frac{t_k^{(\ell)}}{k+1}
\bigg).
\end{equation}
The sum over $\ell$ converges, and for each term within the sum we only have
finitely many coefficients rather than asymptotic expansions, so this is now
not a large order formula but an exact form for the
coefficients (hence the equal sign). At the next order in $1/N_f$,
in \sec{adlerNLO}, we will mostly lack exact expressions, but resurgence
will still provide us with
very precise large order relations that involve non-perturbative sectors.
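As a numerical cross-check of this exact formula, we can evaluate the (rapidly converging) sum over $\ell$ in the sketch developed above; the helper \texttt{s\_t} below is ours and transcribes the closed forms for $s_k^{(\ell)}$ and $t_k^{(\ell)}$:
\begin{verbatim}
def s_t(k, ell):
    # closed forms for s_k^(ell) and t_k^(ell) quoted above
    pre = mp.mpf(8)/3*(-1)**ell
    s = mp.mpf(-1)/((ell + 1)*(ell + 2))
    if ell == 1:
        t = -mp.mpf(5)/36                        # pre*t = 10/27
    elif ell == 2:
        t = -mp.mpf(7)/72 + mp.mpf(3)/2*(-1)**k
    else:
        s += mp.mpf((-1)**k)/((ell - 2)*(ell - 1))
        t = (-mp.mpf(ell*(2*ell + 3))/((ell + 1)**2*(ell + 2)**2)
             + (-1)**k*mp.mpf(ell*(2*ell - 3))/((ell - 2)**2*(ell - 1)**2))
    return pre*s, pre*t

k = 15
total = mp.mpf(0)
for ell in range(1, 61):             # the tail falls off like ell^(-k-2)
    s, t = s_t(k, ell)
    total += (s + t/(k + 1))/mp.mpf(ell)**(k + 2)
print(mp.factorial(k + 1)*(-1)**k*total/d[k])   # -> 1.0 to working precision
\end{verbatim}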
We have seen now that for the Adler function at $\ord{1/N_f}$ the non-perturbative
sectors consist of only one or two coefficients, i.e. the $1/k$ expansions
are not asymptotic. This means that these sectors do not lead to further resurgence. Thus we have now carried out this resurgence analysis
to its natural end: there is no resurgence of non-perturbative sectors
at any sector other than the $(0,0)$-sector. In other words, the picture of
the alien lattice in this case simplifies to that of \fig{alienLatticeLOAdler}.
This structure implies that the ${(\ell,0)}$-sector with $\ell>1$ is not reached by repeatedly applying $\Delta_A$, i.e.\ for $\ell>1$, $\Delta_{A}^\ell D_\mathrm{LO}^{(0,0)}=0$. Instead one needs non-vanishing alien derivatives for larger steps forward: $\Delta_{\ell A} D_\mathrm{LO}^{(0,0)}\sim D_\mathrm{LO}^{(\ell,0)}$. Similarly, we have for $\ell>1$ that $\Delta_{-A}^\ell D_\mathrm{LO}^{(0,0)}=0$, but $\Delta_{-\ell A} D_\mathrm{LO}^{(0,0)}\sim D_\mathrm{LO}^{(0,\ell)}$. We will comment more on the implications of this in the next subsection.
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{figures/alien_lattice_LO.pdf}
\caption{Alien lattice for the LO Adler function. The only asymptotic sector is the $(0,0)$-sector, which we denoted by a filled box to distinguish it from the $(\ell,0)$ and $(0,\ell)$ sectors which are not asymptotic. The dashed boxes are sectors that completely vanish.}
\label{fig:alienLatticeLOAdler}
\end{figure}
Finally, observe that at order $1/N_f$, the alien `lattice' in fact is hardly a lattice: all internal $(n,m)$ sectors with $n \neq 0$ and $m \neq 0$ vanish. In \sec{adlerNLO}, we will see that this is a result of the relative simplicity of the expressions at order $1/N_f$, and that at higher orders such internal sectors {\em will} appear.
As all non-perturbative sectors containing non-zero powers of both $\sigma_1$ and $\sigma_2$ vanish, we can write the complete transseries \eq{5} in the following form:
\begin{equation}
D_{\mathrm{LO}}(\alpha,\sigma_1,\sigma_2)
= D_{\mathrm{LO}}^{(0,0)}(\alpha)
+ D_\mathrm{LO}^{\text{IR}}(\alpha,\sigma_1)
+ D_\mathrm{LO}^{\text{UV}}(\alpha,\sigma_2),
\end{equation}
where we labeled the different pieces according to their QCD nature. That is, $D_\mathrm{LO}^{\text{IR}}$ contains all IR renormalons in QCD, meaning that these are the UV renormalons in QED. Likewise, $D_\mathrm{LO}^{\text{UV}}$ contains all UV (IR) renormalons in QCD (QED).
For the IR sectors, we get
\begin{align}
D_\mathrm{LO}^{\text{IR}}(\alpha,\sigma_1)
&= -\frac{4\pi\mathrm{i}\,C_F}{\beta_{0}}\left(\frac{\sigma_1}{S_{1,0}}\right)^2\,e^{\frac{2}{\beta_{0}\alpha}}\label{eq:DLOIR1}\\
&\hspace{.5cm}
- \frac{2\pi\mathrm{i}\,C_F}{\beta_{0}}\sum_{\ell=3}^\infty
\left(\frac{\sigma_1}{S_{1,0}}\right)^\ell\, e^{\frac{\ell}{\beta_{0}\alpha}}
\bigg(
\frac{\frac83(-1)^\ell}{(\ell-1)(\ell-2)}\frac{-1}{\beta_{0}\alpha}
+ \frac{\frac83(-1)^\ell(2\ell-3)}{(\ell-1)^2(\ell-2)^2}
\bigg)\,,\nonumber
\end{align}
and for the UV sectors
\begin{align}
D_\mathrm{LO}^{\text{UV}}(\alpha,\sigma_2)
&= -\frac{2\pi\mathrm{i}\,C_F}{\beta_{0}}\frac{\sigma_2}{S_{0,1}}\,e^{\frac{-1}{\beta_{0}\alpha}}
\bigg(\frac{4}{9}\frac{-1}{\beta_{0}\alpha} -\frac{10}{27}\bigg)\label{eq:DLOUV1}\\
&\hspace{.5cm}
-\frac{2\pi\mathrm{i}\,C_F}{\beta_{0}}\left(\frac{\sigma_2}{S_{0,1}}\right)^2\,e^{\frac{-2}{\beta_{0}\alpha}}
\bigg(-\frac{2}{9}\frac{-1}{\beta_{0}\alpha} +\frac{7}{54}\bigg)\nonumber\\
&\hspace{.5cm}
- \frac{2\pi\mathrm{i}\,C_F}{\beta_{0}}\sum_{\ell=3}^\infty
\left(\frac{\sigma_2}{S_{0,1}}\right)^\ell\, e^{\frac{-\ell}{\beta_{0}\alpha}}
\bigg(
\frac{-\frac83(-1)^\ell}{(\ell+1)(\ell+2)}\frac{-1}{\beta_{0}\alpha}
+ \frac{\frac83(-1)^\ell(2\ell+3)}{(\ell+1)^2(\ell+2)^2}
\bigg)\,.\nonumber
\end{align}
In these expressions, we switched back to the original real coupling constant $\alpha$ -- recall the discussion below \eq{AdlerLOPerturbativeSector1}, where we switched to the variable $\hat{\alpha} = - \beta_0\alpha$ -- and reinstated all factors of $\beta_0=-(11-2N_f/3)/(4\pi)$, again expressed in QCD variables. We observe the overall factor $1/\beta_0$, and that the coupling constant $\alpha$ always comes with a factor $\beta_0$. Recalling \eq{flavourExpansionQCD}, this was to be expected as the flavour expansion is an expansion in $1/\beta_0$. Furthermore, in \eqs{DLOIR1}{DLOUV1} we reinstated factors of $2\pi\mathrm{i}$ and the Stokes constants $S_{1,0}$ and $S_{0,1}$.
We will discuss the role of the transseries parameters $\sigma_1$ and $\sigma_2$ in the next subsection.
The sums over the sector number $\ell$ converge and can be carried out explicitly.
We define
\begin{equation}
x_1=\frac{\sigma_1}{S_{1,0}}\,e^{\frac{1}{\beta_{0}\alpha}}
\qquad\text{and}\qquad
x_2=\frac{\sigma_2}{S_{0,1}}\,e^{-\frac{1}{\beta_{0}\alpha}}\,,
\end{equation}
to highlight the fact that the transseries parameters, the exponential factors and the Stokes constants $S_{1,0}$ and $S_{0,1}$ always group together.
Performing the sums in \eq{DLOIR1} yields
\begin{equation}\label{eq:DLOIR2}
D_\mathrm{LO}^{\text{IR}}(\alpha,\sigma_1)
= \frac{16\pi\mathrm{i}\,C_F}{3\beta_{0}}\left[
x_1^2\left(\frac{1}{\beta_{0}\alpha}-\frac{7}{4}\right)
-x_1(1+x_1)\left(\frac{\log(1+x_1)}{\beta_{0}\alpha}
+\text{Li}_2(-x_1)\right)
\right]\,,
\end{equation}
where Li$_2$ is the dilogarithm. For \eq{DLOUV1} we get
\begin{align}
D_\mathrm{LO}^{\text{UV}}(\alpha,\sigma_2)
&= \frac{2\pi\mathrm{i}\,C_F}{\beta_{0}}x_2
\bigg(\frac{4}{9}\frac{1}{\beta_{0}\alpha} +\frac{10}{27}\bigg)
+\frac{2\pi\mathrm{i}\,C_F}{\beta_{0}}x_2^2
\bigg(\frac{4}{9}\frac{1}{\beta_{0}\alpha} -\frac{8}{27}\bigg)\label{eq:DLOUV2}\\
&\hspace{1cm}
+ \frac{16\pi\mathrm{i}\,C_F}{3\beta_{0}}
\frac{1}{x_2}\left(1+\frac{1}{x_2}\right)
\left(\frac{\log^{(>3)}(1+x_2)}{\beta_{0}\alpha}
+\text{Li}_2^{(>3)}(-x_2)
\right)
,\nonumber
\end{align}
where we defined
\begin{equation}
\log^{(>n)}(1+x)
\equiv\log(1+x)+\sum_{j=1}^n\frac{(-x)^j}{j}
\quad\text{and}\quad
\text{Li}_2^{(>n)}(-x)
\equiv\text{Li}_2(-x)-\sum_{j=1}^n\frac{(-x)^j}{j^2}
\end{equation}
as the logarithm and polylogarithm with the first $n$ terms of their Taylor expansion removed. Although \eq{DLOUV2} appears to generate factors such as
$1/x_2$, these in fact cancel as the first three coefficients of the Taylor
expansion of the $\log$ and Li$_2$ are subtracted.
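As a consistency check on these resummations, the following sympy sketch (ours) verifies that expanding \eq{DLOIR2} in $x_1$ reproduces the sector sum \eq{DLOIR1} order by order; we strip off the overall factor $\frac{16\pi\mathrm{i}\,C_F}{3\beta_0}$ and write $B$ for $\frac{1}{\beta_0\alpha}$:
\begin{verbatim}
import sympy as sp

x, B = sp.symbols('x B')
resummed = (x**2*(B - sp.Rational(7, 4))
            - x*(1 + x)*(sp.log(1 + x)*B + sp.polylog(2, -x)))
ser = sp.series(resummed, x, 0, 9).removeO()

# the x^2 term reproduces the first line of eq. (DLOIR1): -3/4 * 16/3 = -4
assert sp.simplify(ser.coeff(x, 2) + sp.Rational(3, 4)) == 0
for l in range(3, 9):
    # x^l coefficient predicted by the sector sum in eq. (DLOIR1)
    target = ((-1)**l*B/((l - 1)*(l - 2))
              - (-1)**l*sp.Rational(2*l - 3, (l - 1)**2*(l - 2)**2))
    assert sp.simplify(ser.coeff(x, l) - target) == 0
print("eq. (DLOIR2) reproduces the sector sum, eq. (DLOIR1)")
\end{verbatim}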
\subsection{Discussion}
\label{sec:discussionLO}
At this point, we have studied the full single bubble-chain part of the Adler function. Before moving on to the main goal of our investigation, the multi-bubble chain contributions, let us discuss some finer details of the structure that has appeared.
First of all, one may wonder how many free parameters the transseries structure of the Adler function really has. This is not a question that can be answered from the Borel plane structure alone; once one finds a singularity at a location $A$, it is natural to include a free parameter $\sigma$ in front of the $e^{-A/\alpha}$ transmonomial, but it is then not clear whether the $e^{-nA/\alpha}$ sectors should come with powers $\sigma^n$ or with new parameters. (See e.g.~\cite{Borinsky:2022knn} where, as we discussed below \eq{largeOrderFactorMultipParameter}, the latter is the case.)
Let us consider only the `forward' Stokes automorphism, coming from the IR singularities on the positive real $u$-axis. Since these singularities are evenly spaced, one may expect a single parameter to be sufficient to describe these sectors. However, the fact that e.g.\ $\Delta_A^2D_\mathrm{LO}^{(0,0)}=0$ but $\Delta_{2A}D_\mathrm{LO}^{(0,0)}\sim D_\mathrm{LO}^{(2,0)}$ shows that the transseries we find has a structure that differs from the simplest one-parameter transseries structure known from many toy models. There, the forward Stokes automorphism is simply $\sigma \to \sigma + S$, which implies in particular that $\Delta_{2A}D_\mathrm{LO}^{(0,0)} = 0$. This means that the bridge equation underlying our forward Stokes automorphism must go beyond that of such simple toy models. Unfortunately, since e.g.\ a differential equation for the Adler function, as needed for the procedure in \sec{bridgeEquation}, is not known, we cannot derive such a bridge equation, which would allow us to read off the number of parameters.
Thus, our transseries may be a many-parameter one, but on physical
grounds we still expect the number of `true' parameters to be two. The
reason is that there are only two Stokes automorphisms that play a
role -- and as we shall see, this continues to be the case when we
include further bubble chains -- an IR one along the positive real
$u$-axis and a UV one along the negative real $u$-axis. A single parameter
could in principle suffice to describe the jump across every single Stokes line, and there is no additional physical effect that would lead to singularities on other rays in the $u$-plane, so we conjecture that two parameters are in fact enough. Interestingly, this is similar to the recent work of \cite{Gu:2022fss} where in a different setting a transseries was found with many degrees of freedom (parameterized by variables $\tau_k$ there) but with only a single transseries parameter for each Stokes line.
When it comes to the {\em values} of the transseries parameters, these are difficult to determine and require further physics input. This is beyond the scope of this paper, where we are mainly interested in the underlying structure. In principle, one would need to numerically sum non-perturbative sectors of the Adler function transseries and compare these to experimental data to read off values for $\sigma_1$ and $\sigma_2$. See~\cite{Maiezza:2021mry} for an effort in this direction for the IR Stokes automorphism.
Recall that in \eq{DLOUV2}, a variable $x_2$ appears that comes from the UV singularities and grows exponentially large when $\alpha \to 0$. This is a common feature for transseries that have instanton actions of opposite sign, $\pm A$. An expansion in such a large variable $x_2$ may not seem very physically meaningful. One possible solution to this is of course that in $x_2 = \frac{\sigma_2}{S_{0,1}}\,e^{-\frac{1}{\beta_{0}\alpha}}$ the parameter $\sigma_2$ vanishes. Even if this were not the case, $x_2$ still has a definite value at any given energy scale, and so an expression like \eq{DLOUV2} still makes sense, even though its small $\alpha$ expansion may not.
As a final remark, let us address the fact that many of the non-perturbative sectors we have found only have a finite number of terms -- and therefore, these sectors in particular have no asymptotic growth of their own. This may seem to limit the use of resurgence techniques to only the perturbative sector, but we expect this to be an artifact of the simplicity of the single bubble chain approximation. In fact, in the next sections we shall see that many non-perturbative sectors do become true asymptotic expansions when further bubble chains are included. This is reminiscent of the `Cheshire cat resurgence' of \cite{Kozcaz:2016wvy} where asymptotic growth of sectors can disappear in particular parameter limits\footnote{We quote the authors of \cite{Kozcaz:2016wvy}: ``All of the characteristics of resurgence remains even when its role seems to vanish, much like the lingering grin of the Cheshire Cat''.}. This may also happen in $1/N_f$ expansions -- see for example the interesting approach of \cite{DiPietro:2021yxb} where it was found that in the Gross-Neveu model the full resurgent structure and asymptotic growth of sectors only become visible at finite $N_f$.
\section{Using Borel convolution integrals for resurgence}
\label{sec:convoInt}
We saw in the previous section that the flavour
expansion is a useful framework to isolate perturbative series
that show factorial growth due to individual diagrams, i.e.\
renormalons. At higher orders in $1/N_f$, an important
ingredient of the flavour expansion is the convolution integral
\eq{flavourExpansionConvoInt}, which we repeat here for convenience:
\begin{equation}\label{eq:convoIntrepeated}
\mathcal{B}\bigg[\prod_{j=1}^{n_c} \alpha D_{\mu_j\nu_j}(k_j)\bigg](u)
= \frac{1}{(-\beta_{0f})^{n_c-1}} \int_0^u \bigg[\prod_{j=1}^{n_c}du_j\bigg] \delta\Big(u-\sum_{j=1}^{n_c}u_j\Big) \prod_{j=1}^{n_c} \mathcal{B}\big[\alpha D_{\mu_j\nu_j}(k_j)\big] (u_j).
\end{equation}
In \sec{adlerNLO}, we shall apply this integral to go beyond the leading order in the flavour expansion, but before doing so, we discuss in this section the effect of the convolution integral on the resurgence structure.
In particular, we will see momentarily that one does not need to know the particular exact Borel transforms, but that the structure of the alien derivatives and their calculus can be used instead. Although the techniques are a straightforward application of the framework outlined in \sec{resurgence}, in physics applications this is, as far as we are aware, a novel way to study renormalon effects.
The results we present in this section have twofold use. First, they
set up the calculation of particular $\ord{1/N_f^{2}}$ diagrams that we
shall encounter in the next section. The results, however, are more general
and also apply to more complicated convolution integrals which one
would encounter at higher orders in $1/N_f$. We explain, by means of examples that are relevant for the next section, how the convolution integral
`builds' resurgent functions. The results will also show
glimpses of the full resurgent structure that would emerge upon
including all orders in $1/N_f$.
\subsection{Resurgence of the convolution integral}
Given two asymptotic formal power series $F^{(0)}(\alpha)$ and $G^{(0)}(\alpha)$, we can define a new asymptotic power series $\Psi^{(0)}(\alpha)$ by taking the product:
\begin{equation}
\Psi^{(0)}(\alpha)
\equiv F^{(0)}(\alpha) \, G^{(0)}(\alpha).
\end{equation}
As explained in more detail in \sec{resurgence}, and shown extensively
in \sec{adlerLO}, the resurgence properties of $\Psi^{(0)}$ can
be recovered from the singularities of the Borel transform. Here, the
convolution integral appears as the Borel transform
of $\Psi^{(0)}$ is computed as
\begin{equation}\label{eq:convo}
\mathcal{B}[\Psi^{(0)}](u)
= \int_0^u du_1\, \mathcal{B}[F^{(0)}](u_1)\mathcal{B}[G^{(0)}](u-u_1)\,,
\end{equation}
and the resurgence structure can be found using the relation
\eq{borelSingularities} between the different non-perturbative sectors $\mathcal{B}[\Psi^{(n)}](u)$. However, except for simple examples,
performing the convolution integral might be a difficult task.
We can study instead the resurgence properties of $\Psi^{(0)}$ by using the
alien derivatives. Acting with an arbitrary alien
derivative $\Delta_\omega$ yields
\begin{align}\label{eq:alienDerivativeFG}
\Delta_\omega\Psi^{(0)}
= \Delta_\omega\left(F^{(0)}(\alpha)\right)G^{(0)}(\alpha)
+ F^{(0)}(\alpha)\Delta_\omega\left( G^{(0)}(\alpha)\right),
\end{align}
since the alien derivative satisfies the Leibniz rule. Thus
the calculation of $\Delta_\omega\Psi^{(0)}$ has shifted to the
calculation of $\Delta_\omega F^{(0)}$ and $\Delta_\omega G^{(0)}$. Consequently, if
the resurgence structure of $F^{(0)}$ and $G^{(0)}$ is
known, one can compute the resurgence structure of
$\Psi^{(0)}$. As alluded to above, we will see in \sec{adlerNLO} that, with one
exception, the convolution integral \eq{convo} cannot be
computed exactly. It will be easier to obtain the
resurgence properties of $F^{(0)}$ and $G^{(0)}$ instead of those of
$\Psi^{(0)}$. Therefore, to prepare our discussion in
\sec{adlerNLO}, and to show how \eq{alienDerivativeFG} can be employed
in practice, we discuss in this section a few relevant examples.
In these examples the convolution integral can be
computed exactly and therefore these computations act as a check on the method
of alien derivatives.
\subsection{Convolution of pure factorial growth}
Let us begin with the simplest possible model. Consider an asymptotic formal power series where the perturbative coefficients show pure factorial growth:
\begin{equation}
F^{(0)}(\alpha) = \sum_{n=0}^\infty\Gamma(n+1)\alpha^{n+1}
\qquad\implies\qquad
\mathcal{B}[F^{(0)}](u)=\frac{1}{1-u}.
\end{equation}
As the complete asymptotics of $F^{(0)}$ is determined by the pole of
$\mathcal{B}[F^{(0)}](u)$ in the Borel plane, we know that in the full
transseries there is just one possible non-perturbative sector,
$F^{(1)}$, which consists of only a single term
$S_1f_0^{(1)}(\alpha) = 2\pi\mathrm{i}$. As we will encounter many different Stokes
constants for different resurgent functions, we adopt the notation
in which we put the function name as a superscript on the Stokes
constants, e.g.\ $S_1^F$.
Although the resurgence of $F^{(0)}$ is
relatively simple, i.e.
\begin{equation}\label{eq:delta1F}
\Delta_1 F^{(0)} = 2\pi\mathrm{i}\,,
\end{equation}
with all other alien derivatives vanishing, we show in this section that it can be used as a building block for more complicated resurgent functions. By
taking powers of $F^{(0)}$, we will show schematically how such a
resurgence structure builds up.
Consider the formal power series
\begin{align}
\Psi^{(0)}(\alpha)
= \Big(F^{(0)}(\alpha)\Big)^2,
\end{align}
whose Borel transform is computed as
\begin{equation}\label{eq:convoIntFsq}
\mathcal{B}\Big[\Psi^{(0)}\Big](u)
= \int_0^udu_1\ \frac{1}{1-u_1}\frac{1}{1-u+u_1}
= -2\frac{\log(1-u)}{2-u}.
\end{equation}
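As a sanity check (ours, not part of the original derivation), the convolution integral can be compared numerically against the closed form at a sample point in the Borel plane:
\begin{verbatim}
# Numerical check of the convolution integral at u = 1/2.
import mpmath as mp

u = mp.mpf('0.5')
lhs = mp.quad(lambda u1: 1 / ((1 - u1) * (1 - u + u1)), [0, u])
rhs = -2 * mp.log(1 - u) / (2 - u)
print(lhs, rhs)   # both ~ 0.924196...
\end{verbatim}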
Having the exact Borel transform, we can read off the resurgence
structure using \eq{borelSingularities}. Since we know that
\begin{equation}
\mathcal{B}\Big[\Psi^{(0)}\Big](u)
= -S_1^\Psi\mathcal{B}\Big[\Psi^{(1)}\Big](u-1)\frac{\log(1-u)}{2\pi\mathrm{i}}\,,
\end{equation}
we can read off the result
\begin{equation}
S_1^\Psi\mathcal{B}\Big[\Psi^{(1)}\Big](u-1)
=\frac{4\pi\mathrm{i}}{1-(u-1)}.
\end{equation}
Transforming back to the $\alpha$-plane, this corresponds to the formal power series
\begin{equation}\label{eq:convoExample1Psi1}
S_1^\Psi\Psi^{(1)}(\alpha)=4\pi\mathrm{i}\sum_{n=0}^\infty\Gamma(n+1)\alpha^{n+1}\,.
\end{equation}
We see that in this `squared model', the
coefficients of the leading non-perturbative sector $\Psi^{(1)}$ are
no longer a single term but now show pure factorial growth themselves.
Though we were able to perform the convolution integral exactly
and therefore immediately read off the resurgence structure, it is
instructive to obtain the same result using alien derivatives. On the
one hand, the bridge equation, \eq{resurgenceEqs}, tells us that
$\Delta_1\Psi^{(0)}=S_1^\Psi\Psi^{(1)}$. On the other hand, we have by
direct calculation
\begin{equation}
\Delta_1\Psi^{(0)}
= 2F^{(0)}\Delta_1F^{(0)}
= 2\sum_{n=0}^\infty 2\pi\mathrm{i}\,\Gamma(n+1)\alpha^{n+1},
\end{equation}
where we used \eq{delta1F}. We observe the same result as obtained from the exact Borel transform in \eq{convoExample1Psi1}. Likewise, for the second non-perturbative sector $\Psi^{(2)}$ we get
\begin{equation}\label{eq:delta1sq}
2\Big(S_1^\Psi\Big)^2\Psi^{(2)}
= \Delta_1^2\Psi^{(0)}
= 2\Delta_1F^{(0)}\Delta_1F^{(0)}
= 2 (2\pi\mathrm{i})^2\,,
\end{equation}
which is a sector with a single coefficient. Notice that this can also be related to the expansion of \eq{convoIntFsq} around $u=2$, which reads
\begin{equation}
\mathcal{B}[\Psi^{(0)}](u)\Big|_{u=2} = \pm 2\frac{\pi\mathrm{i}}{2-u}+...\,,
\end{equation}
where the ellipsis denotes regular terms. Up to a sign ambiguity this agrees with \eq{delta1sq}. This ambiguity originates in the ambiguous expansion of the logarithm $\log(1-u)$ in \eq{convoIntFsq} around $u=2$. We come back to this point extensively in \sec{adlerNLOsecondsectors}, where we shall see how to resolve such ambiguities.
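The origin of the sign ambiguity can be made concrete with a two-line numerical experiment (ours): approaching $u=2$ from opposite sides of the branch cut of $\log(1-u)$ picks up opposite imaginary parts,
\begin{verbatim}
# The two lateral continuations of log(1-u) past the cut at u = 2
# differ by their imaginary part, the source of the +/- ambiguity.
import mpmath as mp

eps = mp.mpf('1e-20')
print(mp.log(1 - (2 + 1j * eps)))   # ~ -pi*i  (just above the real axis)
print(mp.log(1 - (2 - 1j * eps)))   # ~ +pi*i  (just below the real axis)
\end{verbatim}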
This simple example easily generalizes to higher powers of $F^{(0)}(\alpha)$.
For example, consider
\begin{equation}
\Phi^{(0)}(\alpha)
\equiv \Big(F^{(0)}(\alpha)\Big)^3.
\end{equation}
The convolution integral to get the Borel transform
$\mathcal{B}[\big(F^{(0)}(\alpha)\big)^3]$ can still be done exactly, but we stay in the
$\alpha$-plane and follow the second method we just used for
$\big(F^{(0)}(\alpha)\big)^2$. Acting once with $\Delta_1$ yields
\begin{equation}
S_1^\Phi \Phi^{(1)}
= \Delta_1\Phi^{(0)}
= 3\Big(F^{(0)}\Big)^2\Delta_1F^{(0)}
= 6\pi\mathrm{i} \sum_{n=0}^\infty \alpha^{n+2}\sum_{h=0}^n\Gamma(n-h+1)\Gamma(h+1).
\end{equation}
Acting twice with $\Delta_1$ yields
\begin{equation}
2\left(S_1^\Phi\right)^2\Phi^{(2)}
= \Delta_1^2\Phi^{(0)}
= 6F^{(0)}\Big(\Delta_1F^{(0)}\Big)^2
= 6(2\pi\mathrm{i})^2 \sum_{n=0}^\infty \Gamma(n+1)\alpha^{n+1}.
\end{equation}
Finally we have
\begin{equation}
6\left(S_1^\Phi\right)^3\Phi^{(3)}
= \Delta_1^3\Phi^{(0)}
= 6\Big(\Delta_1F^{(0)}\Big)^3
= 6(2\pi\mathrm{i})^3.
\end{equation}
In \fig{AlienLatticeConvoIntExample1}, we show the alien chains that build up as higher powers of $F^{(0)}$ are taken; more and more of the sectors are now asymptotic.
\begin{figure}
\centering
\includegraphics[width=.5\linewidth]{figures/convo_int_example_1.pdf}
\caption{Alien chains for the convolution of an asymptotic power series whose perturbative coefficients show pure factorial growth. The sectors with a filled box are true asymptotic sectors, whereas those with a white box are not and consist of a finite number of terms.}
\label{fig:AlienLatticeConvoIntExample1}
\end{figure}
\subsection{Convolution of power series with a double pole}
\label{sec:convoDoublePole}
In the next section we shall consider diagrams at order $1/N_f^2$ in the flavour expansion. There, we will encounter two generalizations of the above convolution integral of power series with pure factorial growth, which we introduce and discuss already in this section. The first generalization is the
case that the large order relation of the formal power series that we
convolute with itself is determined by a double pole, treated in this
subsection. In the next subsection we address the convolution of a power
series with (infinitely many) evenly separated poles in the Borel
plane.
Consider a power series $F^{(0)}(\alpha)$ where the large order growth of the coefficients is determined by a double pole in the Borel plane at $u=1$, e.g.\
\begin{equation}
F^{(0)}(\alpha)
= \sum_{n=0}^\infty\Gamma(n+2)\left(a+\frac{b}{n+1}\right)\alpha^{n+1}
\quad\implies\quad
\mathcal{B}[F^{(0)}](u)=\frac{a}{(1-u)^2} + \frac{b}{1-u}.
\end{equation}
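As a quick check of this correspondence (ours), dividing the coefficients by $\Gamma(n+1)$ and summing numerically reproduces the double and single pole structure:
\begin{verbatim}
# Check: sum_n Gamma(n+2)(a + b/(n+1)) u^n / Gamma(n+1)
#        = a/(1-u)^2 + b/(1-u), for sample values a, b, u.
import mpmath as mp

a, b, u = mp.mpf(2), mp.mpf(3), mp.mpf('0.3')
lhs = mp.nsum(lambda n: mp.gamma(n + 2) / mp.gamma(n + 1)
              * (a + b / (n + 1)) * u**n, [0, mp.inf])
rhs = a / (1 - u)**2 + b / (1 - u)
print(lhs, rhs)   # both ~ 8.36734...
\end{verbatim}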
Notice that we keep the
option open for subleading growth coming from a $(1-u)^{-1}$ term. We use $a$ and $b$ as a compact notation for
$f^{(1)}_0$ and $f^{(1)}_1$ respectively. We are again interested in
the resurgence properties of
\begin{equation}
\Psi^{(0)}(\alpha)\equiv \left(F^{(0)}(\alpha)\right)^2.
\end{equation}
The Borel transform of this power series is given by the convolution integral
\begin{align}
\mathcal{B}[\big(F^{(0)}\big)^2](u)
&= \int_0^udu_1
\left(
\frac{a}{(1-u_1)^2}+\frac{b}{1-u_1}
\right)
\left(
\frac{a}{(1-u+u_1)^2}+\frac{b}{1-u+u_1}
\right)\nonumber\\
&=
2\frac{a^2+ab}{1-u}
-S_1^\Psi\mathcal{B}\Big[\Psi^{(1)}\Big](u-1)\frac{\log(1-u)}{2\pi\mathrm{i}}\nonumber\\
&\hspace{1cm}
-2\left(\frac{2a^2}{(2-u)^2} + \frac{a^2+2ab}{2-u}\right) \,,\label{eq:convoIntf2LOandNLO}
\end{align}
where in the second equality we emphasized the resurgence of the
non-perturbative sector $\Psi^{(1)}$, which has the explicit form
\begin{align}
S_1^\Psi\mathcal{B}\Big[\Psi^{(1)}\Big](u-1)
=4\pi\mathrm{i}\left(\frac{2a^2}{(2-u)^3}+\frac{2ab}{(2-u)^2}+\frac{b^2}{2-u}\right).\label{eq:cubicpole}
\end{align}
Notice that, studying the singularities of these expressions around $u=2$, it might look like the second non-perturbative sector $\Psi^{(2)}$ gets contributions from both the $\Psi^{(1)}$ sector and the last line in \eq{convoIntf2LOandNLO}. However, as
\begin{equation}
\log(1-u)\Big|_{u=2}
= \pm \pi\mathrm{i} -(2-u)-\frac12(2-u)^2 + ...\,,
\end{equation}
the last line cancels against the (real part of the) expansion of the logarithm and $\mathcal{B}[\Psi^{(1)}]$ around $u=2$.
The inverse Borel transforms of \eq{cubicpole}, together with the residue of the
simple pole at $u=1$ in \eq{convoIntf2LOandNLO}, yield the
coefficients of the first non-perturbative sector
\begin{equation}
S_1^\Psi\Psi^{(1)}(\alpha)
= 4\pi\mathrm{i}(a^2+ab)+4\pi\mathrm{i}\sum_{n=0}^\infty \Gamma(n+3)\left(
a^2 + \frac{2ab}{n+2} + \frac{b^2}{(n+2)(n+1)}
\right)\alpha^{n+1}\,,\label{eq:convoIntf2LOandNLO1instanton}
\end{equation}
where, in the sum, we factored out the leading order growth $\Gamma(n+3)$ coming from the fact that \eq{cubicpole} has a cubic pole in the Borel plane (recall \eq{valueBetaCoefficientHigherOrderPoles}).
Instead of performing the convolution integral as in
\eq{convoIntf2LOandNLO}, we can again use alien derivatives. Since
\begin{equation}
\Delta_1F^{(0)}=2\pi\mathrm{i}\left(\frac{a}{\alpha}+b\right)\,,
\end{equation}
we find
\begin{align}
S_1^\Psi\Psi^{(1)}
&= 2F^{(0)}\Delta_1F^{(0)}
= 4\pi\mathrm{i}\left(\frac{a}{\alpha}+b\right)
\sum_{n=0}^\infty\Gamma(n+2)\left(a+\frac{b}{n+1}\right)\alpha^{n+1}\nonumber\\
&= 4\pi\mathrm{i}(a^2+ab)+4\pi\mathrm{i}\sum_{n=0}^\infty \Gamma(n+3)\left(
a^2 + \frac{2ab}{n+2} + \frac{b^2}{(n+2)(n+1)}
\right)\alpha^{n+1}\label{eq:readOff1instantonLOandNLO},
\end{align}
which is indeed the same as in \eq{convoIntf2LOandNLO1instanton}.
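The equality of the two routes can also be verified symbolically; the following sympy sketch (ours) expands $2F^{(0)}\Delta_1F^{(0)}$, with the overall $2\pi\mathrm{i}$ stripped off, and compares the coefficients with \eq{readOff1instantonLOandNLO}:
\begin{verbatim}
# Expand 2 F^(0) Delta_1 F^(0) / (2 pi i) = 2 (a/alpha + b) F^(0)(alpha)
# and compare with eq. (readOff1instantonLOandNLO) divided by 2 pi i.
import sympy as sp

a, b, al = sp.symbols('a b alpha')
N = 6
F = sum(sp.gamma(n + 2) * (a + b / sp.Integer(n + 1)) * al**(n + 1)
        for n in range(N + 2))
lhs = sp.expand(2 * (a / al + b) * F)

# constant term: 2 (a^2 + a b), i.e. 4 pi i (a^2 + a b) in the text
assert sp.simplify(lhs.coeff(al, 0) - 2 * (a**2 + a * b)) == 0
# alpha^(n+1) term: 2 Gamma(n+3)(a^2 + 2ab/(n+2) + b^2/((n+2)(n+1)))
for n in range(N):
    expect = 2 * sp.gamma(n + 3) * (a**2 + 2 * a * b / sp.Integer(n + 2)
                                    + b**2 / sp.Integer((n + 2) * (n + 1)))
    assert sp.simplify(lhs.coeff(al, n + 1) - expect) == 0
print("all coefficients match")
\end{verbatim}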
For this specific example there is another way to think about \eq{readOff1instantonLOandNLO}.
Instead of identifying the coefficients $f_n^{(0)}$, we rewrite \eq{readOff1instantonLOandNLO} as
\begin{align}\label{eq:doublePoleConvoTrick}
S_1^\Psi\Psi^{(1)}
&= 4\pi\mathrm{i}\left(\frac{a}{\alpha}+b\right)
\sum_{n=0}^\infty f^{(0)}_n\alpha^{n+1}\nonumber\\
&= 4\pi\mathrm{i}\,a\,f^{(0)}_0
+ 4\pi\mathrm{i}\,a\sum_{n=0}^\infty f^{(0)}_{n+1}\alpha^{n+1}
+ 4\pi\mathrm{i}\,b\sum_{n=0}^\infty f^{(0)}_n\alpha^{n+1}\,.
\end{align}
Next, consider a generic series and its Borel transform
\begin{equation}
G(\alpha) = \sum_{n=0}^\infty g_n\alpha^{n+1}
\qquad\implies\qquad
\mathcal{B}[G](u)=\sum_{n=0}^\infty\frac{g_n}{\Gamma(n+1)}u^n.
\end{equation}
The derivative of the Borel transform yields
\begin{equation}
\mathcal{B}[G]'(u)
=\sum_{n=0}^\infty\frac{g_{n+1}}{\Gamma(n+1)}u^{n}
\qquad\implies\qquad
H(\alpha) = \sum_{n=0}^\infty g_{n+1}\alpha^{n+1},
\end{equation}
where in the last step we applied an inverse Borel transform. In other
words, the newly constructed series $H(\alpha)$, with
coefficients those of $G(\alpha)$ but shifted: $g_n\to g_{n+1}$,
corresponds to taking the derivative in the Borel plane. Applied to
\eq{doublePoleConvoTrick}, this implies
\begin{equation}\label{eq:doublePoleConvoTrick2}
S_1^\Psi\mathcal{B}[\Psi^{(1)}](u)
=4\pi\mathrm{i}\,a\,\mathcal{B}[F^{(0)}]'(u)+4\pi\mathrm{i}\,b\,\mathcal{B}[F^{(0)}](u).
\end{equation}
We shall employ this observation in \sec{adlerNLO}.
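A two-line symbolic check (ours) of this shift property, for the pure factorial model where $g_n=\Gamma(n+1)$ and $\mathcal{B}[G](u)=1/(1-u)$:
\begin{verbatim}
# Shifting g_n -> g_{n+1} in alpha-space equals d/du in the Borel plane:
# for g_n = Gamma(n+1), d/du 1/(1-u) = 1/(1-u)^2 must have Taylor
# coefficients g_{n+1}/Gamma(n+1) = n+1.
import sympy as sp

u = sp.symbols('u')
shifted = sp.series(sp.diff(1 / (1 - u), u), u, 0, 6).removeO()
direct = sum(sp.gamma(k + 2) / sp.gamma(k + 1) * u**k for k in range(6))
assert sp.expand(shifted - direct) == 0
print("shift in coefficients = derivative in the Borel plane")
\end{verbatim}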
\subsection{Convolution with equidistant singularities}
\label{sec:convoEquidistant}
A final case we need is where we take the
convolution of perturbative series $F^{(0)}$ and $G^{(0)}$, where the
large order behaviour of their coefficients, $f_n^{(0)}$ and
$g_n^{(0)}$, comes from singularities at $u=1,2,3,...$ in the Borel
plane. Furthermore, we assume that we know both the resurgence
properties of $F^{(0)}$ and $G^{(0)}$, i.e. we know the action of the
alien derivative $\Delta_1$:
\begin{equation}
\Delta_1F^{(0)}=S_1^FF^{(1)}\,,
\qquad
\Delta_1^2F^{(0)}=2(S_1^F)^2F^{(2)}\,,
\qquad
\Delta_1^3F^{(0)}=6(S_1^F)^3F^{(3)}\,,
\end{equation}
etc., and similar for $G^{(0)}$, but now with the Stokes constant $S_1^F$ replaced by $S_1^G$. This is the standard structure of alien derivatives if one assumes that $F^{(0)}$ and $G^{(0)}$ are one-parameter transseries for which one can derive a bridge equation as we did in \sec{bridgeEquation}.
However, from the results obtained for the Adler function at order $1/N_f$ in the flavour expansion (recall our discussion in \sec{discussionLO}), we know that the Adler function does not necessarily have such a bridge equation. Therefore, we also allow for non-vanishing actions of $\Delta_2$, $\Delta_3$, $\Delta_4$ etc.
In fact, when we apply the machinery of this section to the Adler function at order $1/N_f^2$, we will see that such `multiple steps forward' alien derivatives indeed occur.
Using the Leibniz rule
repeatedly, we may now also obtain the resurgence structure of the product
\begin{equation}
\Psi^{(0)}
\equiv F^{(0)}G^{(0)}\,.
\end{equation}
In particular, to obtain the first non-perturbative sector $\Psi^{(1)}$, we can act with $\Delta_1$. This yields
\begin{align}
S_1^\Psi\Psi^{(1)}
&= \left(\Delta_1F^{(0)}\right)G^{(0)}
+ F^{(0)}\left(\Delta_1 G^{(0)}\right)
\,.
\end{align}
In order to obtain the sector $\Psi^{(2)}$, we can act with $\Delta_1^2$, for which $\Psi^{(2)}$ gets contributions from
\begin{align}
\frac12\Delta_1^2 \left(F^{(0)}G^{(0)}\right)
&= \frac12\left(\Delta_1^2F^{(0)}\right)G^{(0)}
+ \left(\Delta_1F^{(0)}\right)\left(\Delta_1G^{(0)}\right)
+ \frac12F^{(0)}\left(\Delta_1^2 G^{(0)}\right)\,,
\end{align}
and potentially also with a nonzero $\Delta_2$, for which the contributions come from
\begin{align}
\Delta_2\left(F^{(0)}G^{(0)}\right)
= \left(\Delta_2F^{(0)}\right)G^{(0)}+ F^{(0)}\left(\Delta_2 G^{(0)}\right)\,.
\end{align}
Likewise, an arbitrary sector $\Psi^{(n)}$ could get contributions from products of alien derivatives of the form $\Delta_1^n$, $\Delta_1^{n-2}\Delta_2$, $\Delta_1^{n-3}\Delta_3$,~...,~$\Delta_n$.
\subsection{Prefactor singularities}
\label{sec:prefactor}
A final subtlety that we need to address is one that we will encounter at several points in our computations. The Borel transforms in the convolution integral, \eq{convoIntrepeated}, may contain singular factors -- in practice: poles -- that only depend on the overall Borel plane variable $u$, not on the integration variables $u_i$. Such singular factors can therefore be taken outside the integral; see \eq{convolution} and the expressions below it for examples that we will encounter.
In the case of a transseries with non-perturbative exponentials $A_1 = -A_2$, as we have here, these overall singularities can be somewhat difficult to interpret. Clearly, singular prefactors play a role in the singularity structure of the final Borel transform, and therefore they will determine some of the structure of the full transseries solution that we are after. However, since the singularities are not obtained by acting with a specific alien derivative on one of the factors in the integrand, it is not always immediately clear to which transseries sectors the expansions around them belong.
For example, an expansion in the Borel plane around $u=A_1$ could describe a $(1,0)$ sector, but also other $(1+n,n)$ sectors. When such an expansion comes from actions of alien derivatives on factors in the convolution integrand, one can simply read off which alien derivatives play a role and therefore which sectors appear. When an expansion comes from a prefactor singularity, this is not the case. As a result, in these cases one needs other arguments (e.g.\ comparing different expansions that involve the same higher nonperturbative sectors) to determine the precise transseries structure.
Some of our results on the transseries structure will therefore be conjectural, depending on such additional arguments, though in many cases we can also fully pin down the structure. We will even encounter situations where the extra singularities are a virtue rather than a nuisance -- occasionally, they help us to read off transseries coefficients that would otherwise have remained hidden in a regular expansion, but that now become part of the singular structure.
\bigskip
\noindent
To summarize what we learned in this section, it is clear that the resurgence structure of the convolution integral, \eq{convoIntrepeated}, can be obtained in two ways: either directly via the singularity structure of the Borel transform, or by applying alien derivatives to the constituents of the convolution integral. The examples discussed in this section were relatively simple and we were able to compute the convolution integral exactly. For more general, harder problems, however, it is difficult (if not impossible) to compute the convolution integral exactly. In fact, as we see in the next section, this is indeed the case for the Adler function at order $1/N_f^2$ in the flavour expansion, so that there we have to turn to the second method using alien derivatives.
Let us emphasize that this method is quite general, and certainly not only applicable to the resurgence properties of the Adler function. It may further open the door to study renormalons in physics in general.
\section{Adler function with two bubble chains (\texorpdfstring{$\ord{1/N_f^2}$}{ })}
\label{sec:adlerNLO}
In \sec{adlerLO} we discussed how the minimal two-parameter
transseries Ansatz for the Adler function could be obtained using the
resurgence relations described in \sec{resurgence}. From a resurgence
point of view, the structure of this transseries at order $1/N_f$ was surprisingly
simple as the only asymptotic sector is the perturbative
$(0,0)$-sector. In particular, we saw that most of the
non-perturbative sectors are in fact vanishing (recall
\fig{alienLatticeLOAdler}), and those sectors that were not vanishing
only consist of one or two terms. As a result, we were even able to
sum the whole non-perturbative part of the transseries, leading to a
closed analytic form, \eqs{DLOIR2}{DLOUV2}.
From our discussion in \sec{convoInt}, we do not expect that this
relatively simple transseries structure is still present at higher orders in
the flavour expansion, i.e. after adding more bubble chains. (See e.g. \cite{Beneke:1995qq,Vainshtein:1994ff,Peris:1997dq,Dondi:2020qfj} for earlier work including two or more bubble chains.) Recalling \fig{AlienLatticeConvoIntExample1},
we observed that the convolution integral leads to asymptotic
non-perturbative sectors, and found that the alien chain or lattice gradually builds up
by taking more and more convolution integrals. We therefore expect
that already at the next order in the flavour expansion, i.e.\ at order
$1/N_f^2$, the transseries for the Adler function will contain
asymptotic non-perturbative sectors.
In order to test these
expectations, and to get a first view of the resurgence structure
at higher orders in the flavour expansion, we discuss in this section
the Adler function at order $1/N_f^2$ (NLO). Unfortunately, a
complete calculation of the Adler function at this order is not
possible yet, because for some diagrams the master integrals for the Feynman
integrals appearing at this order are not yet known. Therefore, we
focus in this section on the set of planar diagrams shown in
\fig{AdlerNLO}, which we are able to compute. (These diagrams are leading in the SU$(3)$ color structure.) A brief review of the complete set of diagrams at
order $1/N_f^2$ can be found in \app{allDiagrams}.
Note that the subset of diagrams of
\fig{AdlerNLO} is not gauge invariant, and our resurgence analysis in this
section is only on a diagram by diagram basis. Unless very specific cancellations occur between diagrams, however, one may expect most of the resurgence features that appear in individual diagrams to also appear in the sum of the full set of diagrams -- as indeed occurred at order $1/N_f$ -- and it is those resurgence features that we are after. In particular, we discuss the resurgence of the complete first two non-perturbative sectors, i.e.\ the $(1,0)$, $(0,1)$, $(2,0)$ and $(0,2)$ sectors. We show how these results can be extracted numerically from the perturbative coefficients only. To supplement these results, we use the techniques of the previous section, \sec{convoInt}, to get the key result of this section: the exact Borel transforms of these sectors. Using the same techniques, we even find that there are further $(n,m)$ sectors with both $n$ and $m$ nonzero, contary to what was the case at leading order in $1/N_f$. At the end of the section, we briefly discuss the remaining transseries sectors and summarize the structures we have found in \tab{NLOsummary}.
\begin{figure}
\centering
\begin{subfigure}{3.5cm}
\includegraphics[width=\textwidth]{figures/adlerNLO1.pdf}
\caption{}
\label{fig:adlerNLO1}
\end{subfigure}
\begin{subfigure}{3.5cm}
\includegraphics[width=\textwidth]{figures/adlerNLO2.pdf}
\caption{}
\label{fig:adlerNLO2}
\end{subfigure}
\begin{subfigure}{3.5cm}
\includegraphics[width=\textwidth]{figures/adlerNLO3.pdf}
\caption{}
\label{fig:adlerNLO3}
\end{subfigure}
\begin{subfigure}{3.5cm}
\includegraphics[width=\textwidth]{figures/adlerNLO4.pdf}
\caption{}
\label{fig:adlerNLO4}
\end{subfigure}
\caption{Subset of diagrams at $\ord{1/N_f^2}$ that are computed in this section.}
\label{fig:AdlerNLO}
\end{figure}
\subsection{Four diagrams at \texorpdfstring{$\ord{1/N_f^2}$}{ }}
Using \eq{BorelChain}, the Borel transform of the diagrams shown in
\fig{AdlerNLO} can be computed using an analytically regularized
propagator for each of the bubble chains. In terms of the convolution
integral, the Borel transform for each diagram is given by
\begin{align}
\mathcal{B}[\Pi(Q^2)](u)
=\frac{-1}{\beta_{0f}} \bigg(\frac{Q^2}{\mu^2}e^{C}\bigg)^{-u}
\int_0^u du_1\,du_2\,
\delta\big(u-u_1-u_2\big) \Pi(u_1,u_2)\,,
\label{eq:convolution}
\end{align}
where $\Pi(u_1,u_2)$ is the $Q^2$-independent part of these diagrams
in terms of the two Borel parameters $u_1$ and $u_2$ of the bubble
chains. Here and in the expressions that follow, we again ignore an overall colour factor $C_F^2$. For diagrams $(a)$-$(c)$ in \fig{AdlerNLO}, we managed to compute this $Q^2$-independent part exactly to all orders in $u_1$ and $u_2$ (see \app{masters} and \app{momentumIntegrals} for more details), and the result
reads:
\begin{align}
\Pi_a(u_1,u_2)
&= -\frac{3}{2\pi^2}
\frac{1}{u(1\!-\!u)(2\!-\!u)}
\frac{1}{(u_1\!-\!2)(u_1\!-\!1)(u_1+1)}
\frac{1}{(u_2\!-\!2)(u_2\!-\!1)(u_2+1)}\,,\\
\Pi_b(u_1,u_2)
&= -\frac{3}{2\pi^2}
\frac{\Gamma(u)}{(1-u)\Gamma(3-u)}
\frac{\Gamma(1-u_1)}{(u_1-2)\Gamma(2+u_1)}
\frac{\Gamma(1-u_2)}{(u_2-2)\Gamma(2+u_2)}\,,\label{eq:resultDiagramb}\\
\Pi_c(u_1,u_2)
&= \frac{3}{2\pi^2}
\frac{1}{u(1-u)(2-u)}
\frac{\Gamma(u)}{\Gamma(3-u)}
\frac{u_1 \Gamma(1-u_1)}{\Gamma(2+u_1)}
\frac{\Gamma(1-u_2)}{(u_2-2)\Gamma(2+u_2)}\,,
\end{align}
where, after taking the $\delta$-function in the convolution integral
(\ref{eq:convolution}) into account, we have $u=u_1+u_2$. For diagram $(d)$ we
do not have such a closed form, but instead computed the expansion
in $u_1$ and $u_2$, for which the first few terms read
\begin{align}
\Pi_d(u_1,u_2)
&= \frac{201}{4}
-36\zeta_3
+\Big(\frac{315}{4}-54\zeta_3\Big)u_1
+\Big(\frac{3123}{16}-63\zeta_3-90\zeta_5\Big)u_1^2\nonumber\\
&+\Big(\frac{873}{8}-54\zeta_3\Big)u_2
+\Big(\frac{747}{4}-93\zeta_3-60\zeta_5\Big)u_2^2
+\frac{1}{u}\Big(18+\frac{9u_1}{4}+36u_1^2\Big)\nonumber\\
&+\Big(\frac{4539}{16}-144\zeta_3-90\zeta_5\Big)u_1u_2
+...
\end{align}
In \app{masters} we give additional details for the computation of the perturbative coefficients for this diagram.
In order to get the (Borel transform of the) Adler function, we can
perform the convolution integral (\ref{eq:convolution}) and take the
derivative with respect to $Q^2$ (recall \eq{pitoadler}). The convolution
integral (\ref{eq:convolution}) for diagram $(a)$ can be computed
exactly, and we obtain
\begin{equation}
\mathcal{B}[D(Q^2)](u)
= \frac{1}{\beta_{0f}}\bigg(\frac{Q^2}{\mu^2}e^{C}\bigg)^{-u}\mathcal{B}[D](u)\,,
\end{equation}
with the closed form expression
\begin{align}
\mathcal{B}[D_a](u)
&=\frac{6}{(1-u)(2-u)}
\bigg[
\frac{\log(1+u)}{3(1-u)u(2+u)}
+\frac{\log(1-u)}{(3-u)(2-u)u}
\nonumber\\
&\hspace{7cm}
+\frac{2\log(1-\frac{u}{2})}{3(4-u)(3-u)(1-u)}\bigg].\label{eq:BorelTransNLOb}
\end{align}
However, for the other three diagrams we do not have the benefit of
a closed form. Instead, one can expand $\Pi(u_1,u_2)$ for
these diagrams in $u_1$ and $u_2$ and perform the convolution integral
order by order. In this way we find
\begin{align}
\mathcal{B}[D_a](u)
&= -\frac{3u}{4}
-\frac{3u^2}{2}
-\frac{81u^3}{32}
-\frac{215u^4}{64}
-\frac{2707u^5}{640}
+\ORd{u^6}\,,\label{eq:diagramaPerturbativeCoeffs}\\
\mathcal{B}[D_b](u)
&= -\frac{3u}{4}
-\frac{3u^2}{2}
-\frac{85u^3}{32}
-\bigg(\frac{239}{64}-\frac{\zeta_3}{4}\bigg)u^4
-\bigg(\frac{3211}{640}-\frac{\zeta_3}{2}\bigg)u^5
+\ORd{u^6}\,,\\
\mathcal{B}[D_c](u)
&= -\frac{3u}{8}
-\frac{13u^2}{16}
-\frac{95u^3}{64}
-\bigg(\frac{277}{128}-\frac{\zeta_3}{8}\bigg)u^4
-\bigg(\frac{759}{256}-\frac{11\zeta_3}{40}\bigg)u^5
+\ORd{u^6}\,,\\
\mathcal{B}[D_d](u)
&= 3u
+\bigg(\frac{137}{16}-6\zeta_3\bigg)u^2
+\bigg(\frac{565}{32}-9\zeta_3\bigg)u^3
+\bigg(\frac{11219}{384}-\frac{38\zeta_3}{3}-\frac{65\zeta_5}{6}\bigg)u^4\nonumber\\
&\hspace{3cm}
+\bigg(\frac{27787}{640}-\frac{703\zeta_3}{48}+\zeta_3^2-\frac{65\zeta_5}{4}\bigg)u^5
+\ORd{u^6}\,,
\end{align}
where, even though we have the closed form in
\eq{BorelTransNLOb}, we added the first few coefficients for
$\mathcal{B}[D_a](u)$. For diagrams $(a)$-$(c)$, we computed the coefficients
up to order $u^{150}$. Diagram $(d)$ is computationally more involved
and we managed to compute up to order $u^{18}$.\footnote{Our calculation exhausts the current datamine \cite{Blumlein:2009cf}, which we used to compute these coefficients.}
Before we enter the detailed resurgence analysis, we can already
have a closer look at the singularities of each of those expressions
in the Borel plane. The singularities of diagram $(a)$ can
be read off immediately from its closed form, \eq{BorelTransNLOb}:
branch cuts starting at $u=-1$, $u=1$ and $u=2$ and poles at $u=-2$,
$u=1$, $u=2$, $u=3$, and $u=4$. For the other diagrams, for which we
only have a finite number of perturbative coefficients, we cannot
read off the singularities in this way. Instead, we can use Padé
approximants to this end (see \app{borelPade} for a brief description). The poles
of these Padé approximants give a good indication of where the `true'
singularities in the Borel plane are located. In order to see the
type of singularities that we can expect in the Borel plane of the
different diagrams, we plot the poles of the diagonal Padé
approximants in~\fig{PadeNLO}.
For diagrams $(b)$ and $(c)$, we see essentially the same pattern
arising as for diagram $(a)$: the emergence of poles in the UV
direction and a branch cut starting at $u=1$ and at $u=2$. Note that
of course a Padé approximant, being a rational function, cannot
produce an actual branch cut -- instead, the branch cut is mimicked by
the Padé approximants as an accumulation of poles. This well-known
effect occurs in many other settings that involve Padé approximants; see e.g.~\cite{STAHL1997139} for an early example. We
also expect the emergence of branch cuts beyond $u=2$ for diagrams
$(b)$ and $(c)$, but more on this in a moment. As we do not have many
terms for diagram $(d)$ it is hard to tell if the Padé poles near
$u=-1$ and $u=1$ are the start of a branch cut, or are Borel plane poles
instead.
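To illustrate the mechanism, here is a minimal Python sketch (our own illustration, not the code used to produce \fig{PadeNLO}) that builds a diagonal Padé approximant from the Taylor coefficients of the exactly known Borel transform \eq{convoIntFsq} and locates its poles:
\begin{verbatim}
# Pade poles mimicking the branch cut of B(u) = -2 log(1-u)/(2-u).
import mpmath as mp

mp.mp.dps = 40
N = 40
# Taylor coefficients via the Cauchy product of
# -2 log(1-u) = sum_k (2/k) u^k and 1/(2-u) = sum_m u^m / 2^(m+1):
c = [mp.mpf(0)] + [sum(mp.mpf(2) / k / 2**(n - k + 1)
                       for k in range(1, n + 1)) for n in range(1, N)]
p, q = mp.pade(c, N // 2 - 1, N // 2)       # near-diagonal approximant
poles = mp.polyroots(q[::-1], maxsteps=200, extraprec=60)
print(sorted(poles, key=abs)[:6])           # accumulate on the cut u >= 1
\end{verbatim}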
We also observe some `spurious' poles that have an imaginary component. A closer look reveals that these poles always come in complex conjugate pairs. This is a common phenomenon for Padé approximants, and it often signals that such a pair mimics a single pole without an imaginary component; see also \app{borelPade} for more details.
It is interesting to note that in the Borel transforms of the
individual diagrams, the singularity at $u=1$ does not disappear. This
was also the case for the LO ($\ord{1/N_f}$) Adler function: when the diagrams
were taken separately they did have a singularity at $u=1$. However,
these singularities at $u=1$ cancelled when we added the diagrams.
On physical grounds, we expect that this will still happen when we take all
diagrams together at every given order in the flavour expansion -- recall the discussion on the OPE at the end of \sec{adler}.
Nevertheless we do {\em not} expect singularities at other values of $u$ to
cancel. This also did not happen in the case of the LO Adler function
where the singularities on a diagram-by-diagram basis indeed gave a realistic
picture of the singularities for the sum of the diagrams.
\begin{figure}
\centering
\begin{subfigure}{7.5cm}
\includegraphics[width=\textwidth]{figures/PadePlot2.pdf}
\caption{\vspace{1cm}}
\label{fig:Pade1}
\end{subfigure}
\begin{subfigure}{7.5cm}
\includegraphics[width=\textwidth]{figures/PadePlot3.pdf}
\caption{\vspace{1cm}}
\label{fig:Pade2}
\end{subfigure}
\begin{subfigure}{7.5cm}
\includegraphics[width=\textwidth]{figures/PadePlot1.pdf}
\caption{}
\label{fig:Pade3}
\end{subfigure}
\begin{subfigure}{7.5cm}
\includegraphics[width=\textwidth]{figures/PadePlot4.pdf}
\caption{}
\label{fig:Pade4}
\end{subfigure}
\caption{Plots (a)-(c) show the singularities of the diagonal order 75
Pad\'e approximants of the diagrams (a)-(c) respectively. Branch
cuts are mimicked by an accumulation of poles. As we only have 18
coefficients for diagram (d), we show the poles of its diagonal order 9 Pad\'e approximant in plot (d).}
\label{fig:PadeNLO}
\end{figure}
\subsection{Resurgence analysis: non-perturbative sectors \texorpdfstring{$(1,0)$}{ } and \texorpdfstring{$(0,1)$}{ }}
Our discussion of resurgence in \sec{resurgence} was mainly focused on
the case where the singularities of the Borel transform are simple
poles or logarithmic branch cuts. In many examples studied so far in the literature these are indeed
the only singularities that occur, but in the case of the Adler
function several other types of singularity are present. We
already encountered double poles in \sec{adlerLO}, and if we look at
the exact Borel transform for diagram $(a)$, we observe another
type of singularity:
\begin{equation}\label{eq:logoverpole}
\mathcal{B}[D_a](u)\Big|_{u=1} = -3\frac{\log(1-u)}{u-1}+...
\end{equation}
where the ellipsis denotes other
singularities (poles and logarithmic branch cuts) as well as regular
terms. At the end of \sec{generalizations} we saw that a singularity
of the form in \eq{logoverpole} follows from perturbative
coefficients that grow as $\Gamma(k+1)\psi(k+1)$, where $\psi(z)$ is the digamma function. Dividing by $\Gamma(k+1)$, this means that the leading growth of the coefficients of the Borel transform $\mathcal{B}[D_a](u)$ is given by $-3\log(k)$, where we read off the $-3$ from \eq{logoverpole} and used $\psi(k) = \log(k) + \ord{\frac{1}{k}}$.
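This growth can be confirmed directly from the closed form; a short sympy computation (ours; the series expansion may take a moment) compares the Taylor coefficients of \eq{BorelTransNLOb} with $-3\log(k)$:
\begin{verbatim}
# Taylor coefficients of the closed form for B[D_a](u) vs. -3 log(k).
import sympy as sp

u = sp.symbols('u')
Ba = 6 / ((1 - u) * (2 - u)) * (
      sp.log(1 + u) / (3 * (1 - u) * u * (2 + u))
    + sp.log(1 - u) / ((3 - u) * (2 - u) * u)
    + 2 * sp.log(1 - u / 2) / (3 * (4 - u) * (3 - u) * (1 - u)))
ser = sp.series(Ba, u, 0, 26).removeO()
for k in (5, 10, 15, 20, 25):
    print(k, float(ser.coeff(u, k)), float(-3 * sp.log(k)))
    # same sign, slope and order of magnitude; O(1) subleading terms remain
\end{verbatim}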
In \fig{perturbativeCoefficientsNLO2}, we show the perturbative
coefficients of $\mathcal{B}[D_a]$ together with the
function $-3\log(k)$, and indeed this function matches the leading
growth of these coefficients quite well. We observe similar
logarithmic growth for the coefficients $\mathcal{B}[D_b](u)$ and
$\mathcal{B}[D_c](u)$. Notice that the plotted points for the
coefficients of $\mathcal{B}[D_b](u)$ nearly overlap with those for
$\mathcal{B}[D_a](u)$, implying that the leading
growths of $D_b$ and $D_a$ are equal. For the
coefficients of $\mathcal{B}[D_d](u)$ it is more difficult to tell whether they show logarithmic growth: we only have 18 coefficients, and the even and odd coefficients seem to follow notably different curves, leaving only 9 coefficients to determine the trend of each curve. This makes determining the large order behaviour unrewarding. Nevertheless, it is a pleasant surprise to see from \fig{Pade4} how well the Pad\'e approximant already estimates the location of the singularities near $u=\pm 1$.
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{figures/BorelCoeffsPlot.pdf}
\caption{Plot of the first 150 perturbative coefficients of $\mathcal{B}[D_a]$
(blue), $\mathcal{B}[D_b]$ (orange) and $\mathcal{B}[D_c]$ (green). The solid gray line shows
the function $-3\log(k)$, highlighting the fact that the
coefficients for (a), (b) and (c) show logarithmic
growth. Since the points denoting the
coefficients for (a) and (b) practically overlap, we include a subfigure zooming in on part of those sequences. In red, above the horizontal axis, we show the 18
coefficients for $\mathcal{B}[D_d]$ that we have calculated. As the even and odd
coefficients seem to follow different curves, it is unclear whether
these coefficients also show logarithmic growth.}
\label{fig:perturbativeCoefficientsNLO2}
\end{figure}
Comparing the logarithmic growth $\sim\Gamma(k+1)\psi(k+1)$ with
\eq{largeOrderWithLogs}, which describes the large order relation for a
transseries Ansatz of the type \eq{ansatzWithLogs} with logarithms, we see that
at the level of the transseries this implies that logarithmic terms
are included in the expansion.
As discussed in \sec{adlerLO} for the Adler function at LO in the flavour expansion, the form of the complete transseries
is further determined by the fact that the
Borel transforms of the diagrams have singularities at both negative
and positive integer values of $u$. This leads to a transseries with at
least two parameters. Taking
these considerations into account, we make the following minimal
transseries Ansatz for diagrams $(a)$, $(b)$ and~$(c)$:
\begin{equation}
D_{\mathrm{NLO}}(\alpha)
=\sum_{n=0}^\infty\sum_{m=0}^\infty \sigma_1^n\sigma_2^m e^{-n\frac{A_1}{\alpha}}e^{-m\frac{A_2}{\alpha}}
\Big[
D_{\mathrm{NLO}}^{(n,m)[0]}(\alpha)
+\log(\alpha)D_{\mathrm{NLO}}^{(n,m)[1]}(\alpha)
\Big]\,,
\end{equation}
where as in the LO case\footnote{One often encounters the phenomenon of resonance -- see also the discussion on page \pageref{page:resonance} -- in
transseries with logarithms and when multiple $A_i$ add up
to $0$. Although this is the case for the diagrams considered in
this section, their transseries do not show resonance. See \cite{Borinsky:2022knn} for another example of this behaviour.} $A_1=-A_2=1$. Note that for
diagram $(a)$, we do not have an infinite number of singularities in
the Borel plane, hence in that case the sums over $n$ and $m$ truncate.
Furthermore, note that since sectors with logarithms lead to singularities of the form \eq{logoverpole}, we expect the sectors $D_\mathrm{NLO}^{(0,m)[1]}$ to vanish: diagram $(a)$ does not contain such a singularity at $u=-1$, and the Pad\'e plots for diagrams $(b)$ and $(c)$ do not show branch cuts starting at negative integers. We will have more to say about diagram $(d)$ at the end of
this subsection.
Following what we did in \sec{adlerLO} for the LO Adler function, we
can translate the transseries Ansatz into large order relations for
the perturbative coefficients and study these numerically. This will
then lead to the extraction of non-perturbative sectors. The singularity at $u=1$ does not disappear on a diagram-by-diagram basis,
which implies that the leading large order behaviour of the perturbative coefficients
follows from forward steps in the alien lattice in both the $(1,0)$ and
$(0,1)$ directions. Taking into account that some
coefficients grow logarithmically, this yields the following natural
form for the large order relation for the perturbative coefficients:
\begin{align}
d_k^{(0,0)}
&\sim \frac{S_{1,0}}{2\pi\mathrm{i}}\sum_{h=0}^\infty \frac{\Gamma(k-h-\beta)}{A_1^{k-h-\beta}}
\Big(d_h^{(1,0)[0]}+d_h^{(1,0)[1]}\big(\log(A_1)-\psi(k-h-\beta)\big)\Big)
\nonumber\\
&\hspace{3cm}
+\frac{S_{0,1}}{2\pi\mathrm{i}}\sum_{h=0}^\infty
\frac{\Gamma(k-h-\beta)}{A_2^{k-h-\beta}}
d_h^{(0,1)[0]}
+ \ord{2^{-k}}\,,
\end{align}
where, since we can allow a finite number
of the leading coefficients to vanish, we assumed
$\beta_{1,0}^{[0]}=\beta_{0,1}^{[0]}=\beta_{1,0}^{[1]}=\beta_{0,1}^{[1]}\equiv\beta$;
this entails no loss of generality (see also the discussion below \eq{largeOrderLOsector2}).
As we did in \sec{adlerLO}, we absorb the Stokes
constant and factors of $2\pi\mathrm{i}$ into the non-perturbative
coefficients:
\begin{equation}
\tilde{d}_h^{(\ell,0)[p]} \equiv \frac{S_{1,0}^\ell}{2\pi\mathrm{i}} d_h^{(\ell,0)[p]},
\qquad\qquad
\tilde{d}_h^{(0,\ell)[p]} \equiv \frac{S_{0,1}^\ell}{2\pi\mathrm{i}} d_h^{(0,\ell)[p]}.
\end{equation}
For explicit results, let us first focus on diagram $(a)$ as the closed form of the Borel
transform, \eq{BorelTransNLOb}, allows us to read off the resurgence
structure, using \eqs{borelSingularities}{logTransSingularityExample}.
We extract the dominant growth
\begin{equation}
\tilde{d}_0^{(1,0)[1]}\frac{\Gamma(k-\beta)\psi(k-\beta)}{A_1^{k-\beta}}
\sim \tilde{d}_0^{(1,0)[1]}\frac{\Gamma(k-\beta)\log(k-\beta)}{A_1^{k-\beta}}
+\ORd{\frac1k}\,,
\end{equation}
from the expansion around the singularity in \eq{logoverpole}, which yields the exact values
\begin{equation}\label{eq:dominantGrowthDiagrama}
\beta=-1\,,
\qquad
\tilde{d}_0^{(1,0)[1]} = -3\,,
\qquad\text{and}\qquad
\tilde{d}_{h>0}^{(1,0)[1]} = 0\,.
\end{equation}
Subtracting the leading growth from the large order formula yields
\begin{align}\label{eq:largeOrderWithoutPolyGamma}
\delta_k^{(0)}
&\equiv \frac{A_1^{k-\beta}}{\Gamma(k-\beta)}
\Big(\tilde{d}_k^{(0,0)}-\tilde{d}_0^{(1,0)[1]}\frac{\Gamma(k-\beta)\psi(k-\beta)}{A_1^{k-\beta}}\Big)\nonumber\\
&\sim \sum_{h=0}^\infty\frac{\Gamma(k-\beta-h)}{\Gamma(k-\beta)}A_1^h
\Big[\tilde{d}_h^{(1,0)[0]}+(-1)^{h+\beta-k}\tilde{d}_h^{(0,1)[0]}\Big]+...\,,
\end{align}
a type of growth that we already encountered in \sec{adlerLO}.
Performing similar ratio tests as we did there, and taking the parity
of $k$ into account, we find the full asymptotic expansions:
\begin{align}
\tilde{d}_h^{(1,0)[0]}
&=
\begin{cases}
\frac13-3\gamma_E+\frac{13}{9}\log(2)
&\qquad h=0\\
\Gamma(h)\Big(\left(3h+\frac{3}{2}\right)+(-1)^h\frac12+\frac{1}{2^{h}}\Big)
&\qquad h>0
\end{cases}\label{eq:sector(1,0)Diagrama}
\end{align}
for the $(1,0)$-sector, and
\begin{align}
\tilde{d}_h^{(0,1)[0]}
&=
\begin{cases}
0
&\qquad h=0\\
\Gamma(h)\Big(
\frac{1}{2}-(-1)^h\frac1{36}
-\frac{1}{2^{h+1}}\big(\frac23h+\frac49\big)
-\frac{1}{3^{h}}\frac14
\Big)
&\qquad h>0
\end{cases}\label{eq:sector(0,1)Diagrama}
\end{align}
for the $(0,1)$-sector. We have checked \eqs{sector(1,0)Diagrama}{sector(0,1)Diagrama} numerically up to 20
decimal places for the first 15 coefficients by using Richardson
transforms.
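For completeness, here is a minimal sketch (ours) of the Richardson transform used in such checks, applied to a toy sequence with a pure $1/k$ correction:
\begin{verbatim}
# N-th Richardson transform of a sequence s[1], s[2], ... that behaves
# as s_inf + c1/k + c2/k^2 + ...; it removes the first N corrections.
from math import factorial

def richardson(s, N):
    out = []
    for k in range(1, len(s) - N):
        out.append(sum((-1)**(j + N) * (k + j)**N * s[k + j]
                       / (factorial(j) * factorial(N - j))
                       for j in range(N + 1)))
    return out

s = [None] + [1 + 1 / k for k in range(1, 12)]   # s_k = 1 + 1/k
print(richardson(s, 1))   # all entries equal 1 up to rounding
\end{verbatim}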
Note the gamma-functions $\Gamma(h)$ in these expressions, implying
that the first non-perturbative sectors are now indeed asymptotic
series, just like the perturbative sector.
As an additional check, we can read off the resurgence structure from the exact Borel transform \eq{BorelTransNLOb} using \eq{BorelSingularitiesMultiParTrans}. For example, near the singularity positioned at $u=1$ we get
\begin{align}\label{eq:borelDiagramaAround1}
\mathcal{B}[D_a^{(0,0)}](u)\Big|_{u=1}
&= -\frac{S_{1,0}}{2\pi\mathrm{i}}\bigg(
-3\frac{\log(1-u)+\psi(1)}{1-u} + \frac{\frac13+3\psi(1)+\frac{13}{9}\log(2)}{1-u}\nonumber\\
&\hspace{3cm}+ \mathcal{B}[D^{(1,0)[0]}](u-1) \log(1-u)\bigg)+...\,,
\end{align}
where the ellipsis denotes regular terms, and with
\begin{equation}\label{eq:borelDiagramaAround1LogTerm}
\frac{S_{1,0}}{2\pi\mathrm{i}}\mathcal{B}[D^{(1,0)[0]}](u-1)
= \frac{3}{(2-u)^2}+\frac{\frac32}{2-u} - \frac{1}{2u} + \frac{1}{3-u}\,.
\end{equation}
As already discussed above, the first term in \eq{borelDiagramaAround1} can be compared with \eqs{logTransSingularityExample1}{logTransSingularityExample} and yields the coefficients of \eq{dominantGrowthDiagrama}. This is the reason why we included the $\psi(1)$ term there, and then subtracted it again in the next term. Using $\psi(1)=-\gamma_E$ in the second term in \eq{borelDiagramaAround1} yields the coefficient in the first line in \eq{sector(1,0)Diagrama}. Finally, an inverse Borel transform of \eq{borelDiagramaAround1LogTerm} gives the second line in \eq{sector(1,0)Diagrama}.
In a similar way, one can read off the resurgence of the $D^{(0,1)[0]}$ sector, i.e. \eq{sector(0,1)Diagrama}, by expanding $\mathcal{B}[D_a^{(0,0)}](u)$ around $u=-1$.
We discussed a diagram for which it was possible to find a
closed form Borel transform, but the true power of
resurgence analysis emerges when we study the diagrams for which we do
{\em not} have such an exact expression. To be precise, for diagrams $(b)$ and $(c)$ we can study
the large order behaviour of the perturbative coefficients,
compute coefficients in the transseries numerically and
then usually infer their exact values. Furthermore, we can use
the discussion of \sec{convoInt} on convolution integrals and their
resurgence as a cross check. The interested reader can apply the methods of \sec{convoInt} to obtain the same results for diagram $(a)$.
Turning to diagram $(b)$, we find, by studying the large order behaviour of the perturbative coefficients, that $\beta_{1,0}=\beta_{0,1} = -1$ and that the non-zero coefficients are
\begin{align}
\tilde{d}_0^{(1,0)[1]}
&= -3\label{eq:101sectorDiagramb}\\
\tilde{d}_h^{(1,0)[0]}
&=
\begin{cases}
\frac13-3\gamma_E+\frac{13}{9}\log(2)
&\qquad h=0\\
\Gamma(h)\Big(
6-\frac{3}{2^{h}}
\Big)
&\qquad h>0
\end{cases}\label{eq:100sectorDiagramb}\\
\tilde{d}_0^{(0,1)[0]}
&= \frac15\Big(\log(2)-\log(3)\Big)\,.\label{eq:010sectorDiagramb}
\end{align}
We have checked these numbers numerically up to 12 decimal places for the
first 15 coefficients for $\tilde{d}_h^{(1,0)[0]}$, after which we inferred the exact expression -- an expression that we shall argue to be correct in a different way in a moment.
From the coefficients, we deduce the following information. The leading order growth of this diagram is given by a $\log$-sector with a single coefficient $\tilde{d}_0^{(1,0)[1]}$. Furthermore, the non-perturbative $(1,0)$ sector is now an asymptotic sector, while the $(0,1)$ sector contains just a single non-zero coefficient corresponding to a simple pole in the Borel plane (we already observed this in \fig{Pade2}).
Using the results of \sec{convoInt}, we can check these numerical results
using the method of alien derivatives acting on the convolution
integral. We therefore rewrite the convolution integral for $D_b^{(0,0)}$ (recall \eqs{convolution}{resultDiagramb}) as
\begin{align}\label{eq:convoMethodDiagramb}
\mathcal{B}[D_b^{(0,0)}](u)
=\frac{-6\Gamma(1+u)}{(1-u)\Gamma(3-u)}\mathcal{B}[\Psi^{(0)}](u)\,,
\end{align}
where we defined
\begin{equation}\label{eq:convoMethodDiagramb2}
\mathcal{B}[\Psi^{(0)}](u)
=\int_0^u du_1\, \mathcal{B}[F](u_1)\mathcal{B}[F](u-u_1),
\quad\text{with}\quad
\mathcal{B}[F](u)
= \frac{\Gamma(1-u)}{(2-u)\Gamma(2+u)}\,.
\end{equation}
The singularities of the Borel transform $\mathcal{B}[F](u)$ are simple poles
at positive integers, except at $u=2$ which is a double
pole. Therefore, the resurgence structure of $F$ can be easily read off
from the expansions around these poles and we find
\begin{equation}
\Delta_1F = \frac12(2\pi\mathrm{i}).
\end{equation}
Using the procedure as outlined in \sec{convoInt}, we obtain
\begin{equation}
S_1^\Psi\Psi^{(1)}
=\Delta_1\Psi^{(0)}
= 2F\Delta_1F
= (2\pi\mathrm{i}) F(\alpha).
\end{equation}
In other words, in a neighborhood of $u=1$, the Borel transform of $\Psi^{(0)}$ looks like
\begin{align}
\mathcal{B}[\Psi^{(0)}](u)\Big|_{u=1}
&= -S_1^\Psi\mathcal{B}[\Psi^{(1)}](u-1)\frac{\log(1-u)}{2\pi\mathrm{i}}+...\nonumber\\
&= -\mathcal{B}[F](u-1)\log(1-u)+...\nonumber\\
&= -\frac{\Gamma(2-u)}{(3-u)\Gamma(1+u)}\log(1-u)+...\,,
\end{align}
where the ellipsis denotes regular terms around $u=1$.
Adding back the prefactor of $\mathcal{B}[D_b^{(0,0)}](u)$, we find that close to $u=1$ \eq{convoMethodDiagramb} becomes
\begin{align}
\mathcal{B}[D_b^{(0,0)}](u)\Big|_{u=1}
&=\frac{-6\Gamma(1+u)}{(1-u)\Gamma(3-u)}\left[\mathcal{B}[\Psi^{(0)}](u)\Big|_{u=1}+...\right]\nonumber\\
&= \frac{6\log(1-u)}{(1-u)(2-u)(3-u)}+...\nonumber\\
&\equiv - \tilde{d}_0^{(1,0)[1]}\frac{\log(1-u)}{1-u}
-S_{1,0}\mathcal{B}[D_b^{(1,0)[0]}](u-1)\frac{\log(1-u)}{2\pi\mathrm{i}}+...\,,\label{eq:expansionAroundLog(1-u)b}
\end{align}
with $\tilde{d}_0^{(1,0)[1]}=-3$ already given in \eq{101sectorDiagramb} and where
\begin{equation}
\frac{S_{1,0}}{2\pi\mathrm{i}}\mathcal{B}[D_b^{(1,0)[0]}](u) = \frac{6}{1-u} - \frac{3}{2-u}\,.
\end{equation}
Indeed, by performing an inverse Borel transform, we can now directly read off the coefficients
$\tilde{d}_h^{(1,0)[0]}$ for $h>0$ already given in \eq{100sectorDiagramb}.
Note however that the prefactor in the first line of \eq{expansionAroundLog(1-u)b} itself has a pole at $u=1$ -- see also our discussion in \sec{prefactor}. As a result, a constant term from the regular part between square brackets (indicated by the dots) also contributes to the singular terms in the last line. Therefore, the coefficient $\tilde{d}_0^{(1,0)[0]}$, which should correspond to a simple pole at $u=1$ in the Borel plane, is not yet determined by studying the convolution integral. Furthermore, $\mathcal{B}[F](u)$ in \eq{convoMethodDiagramb2} does not have singularities at negative integers, meaning that $\Delta_{-1}F = 0$. Therefore it does not seem possible to compute $\mathcal{B}[D_b^{(0,0)}](u)$ locally near $u=-1$, and thus we cannot extract $\tilde{d}_0^{(0,1)[0]}$ using the method of convolution. We come back to this point in the next subsection, where we will see that by including further non-perturbative sectors in the analysis, some of these undetermined coefficients can still be found.
This concludes our discussion of diagram $(b)$. In a similar manner, for diagram $(c)$, a large order analysis of the perturbative coefficients yields $\beta_{1,0}=\beta_{0,1} = -1$ as well as the following non-zero expansion coefficients for the non-perturbative sectors:
\begin{align}
\tilde{d}_0^{(1,0)[1]}
&= -\frac{3}{2}\label{eq:101diagramc}\\
\tilde{d}_h^{(1,0)[0]}
&=
\begin{cases}
-\frac12-\frac{3}{2}\gamma_E-\frac{1}{6}\log(2)
&\qquad h=0\\
\Gamma(h)\Big(\left(3h+\frac{3}{2}\right)-(-1)^h\frac12+\frac{1}{2^{h}}\frac12\Big)
&\qquad h>0
\end{cases}\label{eq:100diagramc}\\
\tilde{d}_0^{(0,1)[0]}
&= \frac16+\frac12\Big(\log(2)-\log(3)\Big)\label{eq:010diagramc},
\end{align}
which we have checked numerically up to at least 13 decimal places for the
first 15 coefficients for $\tilde{d}_h^{(1,0)[0]}$. We observe a similar pattern as for
diagram $(b)$: a $\log$-sector with a single coefficient $\tilde{d}_0^{(1,0)[1]}$, an asymptotic $(1,0)[0]$ sector, and a $(0,1)$ sector which contains only a single non-perturbative coefficient.
Once again, we can also determine these coefficients by studying the
convolution integral more closely. To this end, we write the convolution integral of diagram $(c)$ as
\begin{align}\label{eq:convoMethodDiagramc1}
\mathcal{B}[D_c^{(0,0)}](u)
=\frac{-6\Gamma(u)}{(1-u)(2-u)\Gamma(3-u)}\mathcal{B}[\Phi^{(0)}](u)\,,
\end{align}
where we defined
\begin{equation}\label{eq:convoMethodDiagramc2}
\mathcal{B}[\Phi^{(0)}](u)
=\int_0^u du_1\, \mathcal{B}[F](u_1)\mathcal{B}[G](u-u_1)\,,
\qquad\text{with}\qquad
\mathcal{B}[G](u)
= \frac{u\Gamma(1-u)}{\Gamma(2+u)}\,,
\end{equation}
and with $\mathcal{B}[F]$ the same as in \eq{convoMethodDiagramb2}. Using that both $\Delta_1F=\Delta_1G=\frac12(2\pi\mathrm{i})$ we obtain
\begin{equation}
S_1^{\Phi}\Phi^{(1)}
=\Delta_1\Phi^{(0)}
= F\Delta_1G+G\Delta_1F
= \frac12(2\pi\mathrm{i}) (F(\alpha)+G(\alpha)).
\end{equation}
A brief calculation now yields
\begin{align}
\mathcal{B}[D_c^{(0,0)}](u)\Big|_{u=1}
&=\frac{-6\Gamma(u)}{(1-u)(2-u)\Gamma(3-u)}\mathcal{B}[\Phi^{(0)}](u)\Big|_{u=1}+...\nonumber\\
&= -\frac{\log(1-u)}{1-u}
\frac{6-3u(4-u)}{u(2-u)^2(3-u)}
+...\nonumber\\
&\equiv - \tilde{d}_0^{(1,0)[1]}\frac{\log(1-u)}{1-u}
-S_{1,0}\mathcal{B}[D_c^{(1,0)[0]}](u-1)\frac{\log(1-u)}{2\pi\mathrm{i}}+...\,,\label{eq:expansionAroundLog(1-u)c}
\end{align}
with $\tilde{d}_0^{(1,0)[1]}=-\frac{3}{2}$ as given above in \eq{101diagramc}, and where
\begin{equation}
\frac{S_{1,0}}{2\pi\mathrm{i}}\mathcal{B}[D_c^{(1,0)[0]}](u) = \frac{\frac12}{1+u}+\frac{3}{(1-u)^2}+\frac{\frac32}{1-u} + \frac{\frac{1}{2}}{2-u}\,.
\end{equation}
It is straightforward to show that the inverse Borel transform of this indeed yields the
coefficients $\tilde{d}_h^{(1,0)[0]}$ given in \eq{100diagramc}
for $h>0$. In the next subsection we see how the convolution integral method also allows us to obtain information about the
coefficients $\tilde{d}_0^{(1,0)[0]}$ and $\tilde{d}_0^{(0,1)[0]}$.
To summarize the results so far (see also \tab{NLOsummary}): although the convolution integral for diagram $(a)$ was the only one
we could compute exactly, our resurgence large order analysis, together with the power of the convolution analysis,
allowed us to make a transseries
Ansatz and extract the whole first non-perturbative $(1,0)$ and $(0,1)$ sectors for the diagrams $(a)$, $(b)$ and $(c)$.
For all three diagrams, the leading order growth of the perturbative coefficients is governed by the non-perturbative coefficient $\tilde{d}_0^{(1,0)[1]}$. In particular, we found for diagram $(c)$ that $\tilde{d}_0^{(1,0)[1]}=-\frac{3}{2}$, while for both diagrams $(a)$ and $(b)$ the growth is twice as strong, i.e. $\tilde{d}_0^{(1,0)[1]}=-3$. We already observed this qualitatively in \fig{perturbativeCoefficientsNLO2}, where we saw that the growth of perturbative coefficients for diagram $(c)$ is indeed slower than for diagrams $(a)$ and $(b)$, which were more or less overlapping. However, on closer inspection, one sees that the points do not overlap exactly -- an artifact of the subleading growth dictated by the coefficients $\tilde{d}_h^{(1,0)[0]}$ and $\tilde{d}_h^{(0,1)[0]}$ that we have now also computed. For all three diagrams, the $(1,0)$ sector is asymptotic, while only for diagram $(a)$ the $(0,1)$ sector is asymptotic. Furthermore, we notice that for diagram $(a)$ the coefficient $\tilde{d}_0^{(1,0)[0]}$ is the same as that of diagram $(b)$.
Finally, we want to offer some observations on diagram
$(d)$. A numerical resurgence analysis on this diagram is challenging
as we only have 18 coefficients, and we already observed that the even
and odd coefficients behave differently. In
\fig{perturbativeCoefficientsNLO2}, we see that the growth of the
upper curve is much stronger than that of the other three diagrams
$(a)$, $(b)$ and $(c)$. Although it looks like the curve bends slightly,
there are not enough coefficients to tell if this is the
beginning of logarithmic growth. It might as well be the case that
the large order growth of the coefficients is different. Furthermore, the coefficients that we displayed have opposite sign to the coefficients of the other diagrams, though
the lower curve clearly bends downwards towards negative coefficients.
\subsection{Resurgence analysis: non-perturbative sectors \texorpdfstring{$(2,0)$}{ } and \texorpdfstring{$(0,2)$}{ }}
\label{sec:adlerNLOsecondsectors}
In the previous subsection we determined the non-perturbative $(1,0)$
and $(0,1)$ sectors for diagrams $(a)$-$(c)$, using both numerical
results and the method of alien derivatives acting on the convolution
integral. In order to probe the second
non-perturbative sectors $(2,0)$ and $(0,2)$, we must
subtract the first sectors from the large order expressions. This is
not as straightforward as in the LO case (see
\eq{LOsubtractSector1}), since now the coefficients
$\tilde{d}_k^{(1,0)[0]}$ and $\tilde{d}_k^{(0,1)[0]}$ grow
factorially themselves, implying that we need to subtract an entire (divergent)
asymptotic series. To make this possible, we will apply Borel summation
on the first non-perturbative sector before subtracting it.
To prepare for this we rewrite \eq{largeOrderWithoutPolyGamma} as
\begin{align}
\delta_k^{(0)}
&\sim \frac{\Gamma(k-\beta)}{A^{k-\beta}}
\sum_{\ell=1}^\infty
\frac{1}{\ell^{k-\beta}}
\sum_{h=0}^\infty \frac{\Gamma(k-\beta-h)}{\Gamma(k-\beta)}
(\ell A)^{h}\tilde{d}_h^{(\ell,0)[0]}
+(0,\ell)\text{-sectors}\nonumber\\
&= \frac{\Gamma(k-\beta)}{A^{k-\beta}}
\sum_{\ell=1}^\infty
\ell^{\beta}\,
e^{-\frac{\log(\ell)}{1/k}}
\sum_{h=0}^\infty p_h^{(\ell,0)}\bigg(\frac{1}{k}\bigg)^h
+(0,\ell)\text{-sectors}\label{eq:largeOrderWithoutPolyGammaRewritten}
\end{align}
where $p_h^{(\ell,0)}$ and $p_h^{(0,\ell)}$ are the coefficients in
the $1/k$ expansion of the first line, obtained from expanding
the ratio of gamma functions. In particular, $p_h^{(\ell,0)}$ is a
linear combination of the $\tilde{d}_g^{(\ell,0)[0]}$ with $g \leq h$,
and similarly for $p_h^{(0,\ell)}$. The important observation is that
the above expression is itself a transseries in $1/k$ with
non-perturbative exponents proportional to $\log(\ell)$. It is therefore natural to
perform Borel summation on the series
\begin{equation}\label{eq:largeOrderTransseriesSectors}
P^{(1,0)}(x) \equiv \sum_{h=0}^\infty p_h^{(1,0)}x^h
\qquad\text{and}\qquad
P^{(0,1)}(x) \equiv \sum_{h=0}^\infty p_h^{(0,1)}x^h\,,
\end{equation}
with $x=\frac1k$. Then, in the large order expressions we
replace these series with their Borel-summed versions\footnote{Recall that the constant term is
not included in our definition of $\mathcal{B}[P^{(1,0)}](t)$, which is why
we have to add it separately.}
\begin{equation}
\mathcal{S}[P^{(1,0)}](x) = p_0^{(1,0)} + \int_0^\infty dt\,\mathcal{B}[P^{(1,0)}](t)e^{-t/x}\,,
\end{equation}
and similarly for $\mathcal{S}[P^{(0,1)}](x)$. Once we have summed the leading asymptotic series in this way, we will be able to read off further subleading coefficients from the large $k$ behaviour of $\delta_k^{(1)}$.
Unfortunately, this expression cannot be taken at face value, as the
asymptotic expansions of $P^{(0,1)}$ and $P^{(1,0)}$ are not Borel
summable. More precisely, there are singularities on the integration
contours, as we can see in \fig{BorelPade}, where we show the
singularities of the Padé approximants of the Borel transforms of
\eq{largeOrderTransseriesSectors} for diagram $(a)$. We make a few
observations. First of all, we see singularities not only at the positions
$\log(\ell)$, but also at shifts of these by $2\pi \mathrm{i} m$ for $m\in\mathbb Z$. This
is related to the fact that logarithms are not uniquely
defined: for instance one could replace
\begin{equation}
\log(\ell)\to \log(\ell)+2\pi\mathrm{i} m\,.
\end{equation}
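Indeed, since the large order variable $k$ only takes integer values, such a shift leaves the corresponding transmonomial unchanged,
\begin{equation}
e^{-\frac{\log(\ell)+2\pi\mathrm{i} m}{1/k}} = \ell^{-k}\, e^{-2\pi\mathrm{i} m k} = \ell^{-k}\,,
\end{equation}
so all of these shifted singularities encode one and the same contribution to the large order behaviour.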
The second important observation is that we only observe singularities
at $\log(2)+2\pi\mathrm{i} m$ and $\log(3)+2\pi\mathrm{i} m$ for $\mathcal{B}[P^{(1,0)}]$
and $\log(1)+(2m+1)\pi\mathrm{i}$ and $\log(3)+\pi\mathrm{i} m$ for
$\mathcal{B}[P^{(0,1)}]$. These singularities can be traced back to the
expansion of $\mathcal{B}[D_a^{(0,0)}](u)$ around $\log(1-u)$ and $\log(1+u)$:
\begin{equation}
\label{eq:9}
\left(\frac{6}{(3-u)(2-u)^2u} - 1 \right) \cdot \frac{\log(1-u)}{(1-u)}\,,
\qquad\text{and}\qquad
\frac{2\log(1+u)}{(1-u)^2(2-u)u(2+u)}\,,
\end{equation}
where in the first expression we subtracted the $\frac{\log(1-u)}{1-u}$ term from \eq{BorelTransNLOb}
as its contribution, i.e. the large order growth initiated by the
coefficient $\tilde{d}_0^{(1,0)[1]}$, is already subtracted in
\eq{largeOrderWithoutPolyGammaRewritten}. That is, the expansions of \eqref{eq:9}
have singularities at $u=1,2,3$ and $u=-2,1,2$ respectively
\footnote{The pole around $u=0$ cancels against the first term in the
expansion of the logarithms.}. Taking the logarithm of these
values and using the fact that the logarithm is multi-valued, one indeed finds the
observed positions of the singularities in \fig{BorelPade}. For
this paper, the above observations are sufficient, but
they are indicative of further interesting phenomena which we plan to come back to in a
forthcoming publication \cite{CM-AvS-MV-to-appear}.
\begin{figure}
\centering
\begin{subfigure}{7.5cm}
\includegraphics[width=\textwidth]{figures/BorelPadePlot10.pdf}
\caption{}
\label{fig:BorelPade10}
\end{subfigure}
\begin{subfigure}{7.5cm}
\includegraphics[width=\textwidth]{figures/BorelPadePlot01.pdf}
\caption{}
\label{fig:BorelPade01}
\end{subfigure}
\caption{For diagram $(a)$, Figs. (a) and (b) show the Pad\'e poles (small black dots) of the 75th diagonal Padé approximant of $\mathcal{B}[P^{(1,0)}]$ and $\mathcal{B}[P^{(0,1)}]$ respectively. The larger red transparent points are positioned at $\log(\ell)+\pi\mathrm{i} m$, with $\ell=1,2,3$ and $m\in\mathbb Z$. At large imaginary values, some `spurious' poles appear due to numerical inaccuracies.}
\label{fig:BorelPade}
\end{figure}
For both diagrams $(b)$ and $(c)$, we have simple $(0,1)$-sectors
with only a single coefficient, so we do not need to perform a
Borel-Pad\'e summation on these sectors to be able to subtract them. For the $(1,0)$ sector, we
obtain a similar plot as in \fig{BorelPade10}, where the positions of the
singularities are traced back to locations of the poles in the
expansions around $\log(1-u)$,
given in \eqs{expansionAroundLog(1-u)b}{expansionAroundLog(1-u)c}. To be
precise, for both diagram $(b)$ and $(c)$, we observe singularities at
$\log(2)+2\pi\mathrm{i} m$ and $\log(3)+2\pi\mathrm{i} m$.
Since the Borel transforms in particular have singularities on the
real positive axis, we need to deal with the ambiguity of the Laplace
contour. In order to do so, we performed a numerical integration along a ray
at a small positive angle, just above the real axis (cf.\ \fig{BorelPade01}). See \app{borelPade}
for details about the numerical integration. This choice for the
`upper contour' implies that we also get an imaginary part. However, as we will see in a moment, this imaginary part does not play a role in determining the perturbative coefficients through large order formulas, because of a similar and corresponding ambiguity in the choice of logarithm branch cuts in the Borel plane.
Of course, we could also have chosen the `lower contour'; the imaginary contribution would then have the opposite sign, but would still be `canceled', leading to the
same large order results.
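Schematically, the summation procedure can be implemented in a few lines. The sketch below (Python with mpmath; the function name, precision and truncation orders are our own choices, and the higher-precision safeguards of \app{borelPade} are omitted) constructs a diagonal Padé approximant of the Borel transform and integrates along a ray tilted by a small angle $\theta>0$:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 40

def borel_pade_sum(p, x, theta=0.1):
    # Borel-Pade sum of P(x) = sum_h p[h] x^h along a tilted ray.
    # Borel transform (constant term p[0] excluded):
    #   B(t) = sum_h p[h+1] t^h / h!
    b = [p[h+1] / mp.factorial(h) for h in range(len(p) - 1)]
    L = (len(b) - 1) // 2
    num, den = mp.pade(b, L, len(b) - 1 - L)   # diagonal Pade approximant
    B = lambda t: mp.polyval(num[::-1], t) / mp.polyval(den[::-1], t)
    ray = mp.exp(1j * theta)                   # the `upper contour'
    f = lambda s: B(ray * s) * mp.exp(-ray * s / x) * ray
    return p[0] + mp.quad(f, [0, mp.inf])
\end{verbatim}
Choosing $\theta<0$ instead implements the `lower contour' and flips the sign of the imaginary part.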
With the Borel summed $(1,0)$ and $(0,1)$ sectors at our disposal, we can now turn our attention to the resurgence of the second non-perturbative $(2,0)$ and $(0,2)$ sectors. As in the previous subsection, we start with a discussion of diagram $(a)$. We can now subtract the numerically performed integral
from the perturbative coefficients. Recalling that $\beta=-1$ and $A_1=-A_2=1$, we
obtain the new sequence
\begin{align}
\delta_k^{(1)}
&= d_k^{(0,0)}
- \tilde{d}_0^{(1,0)[1]}\frac{\Gamma(k-\beta)\psi(k-\beta)}{A_1^{k-\beta}}\nonumber\\
&\hspace{4cm}
- \frac{\Gamma(k-\beta)}{A_1^{k-\beta}}\mathcal{S}[P^{(1,0)}]\left(\frac1k\right)
- \frac{\Gamma(k-\beta)}{A_2^{k-\beta}}\mathcal{S}[P^{(0,1)}]\left(\frac1k\right)\,,
\label{eq:deltak1}
\end{align}
which probes the contribution of the
$(2,0)$ and $(0,2)$ sectors to the large order behaviour of the perturbative coefficients. In \fig{coeffsPlotExactDiagram} we
show $d_k^{(0,0)}$ together with $\delta_k^{(1)}$ for $20\leq k\leq 100$ for
diagram $(a)$. We clearly observe that the (real) perturbative coefficients
$d_k^{(0,0)}$ diverge faster than both Re$[\delta_k^{(1)}]$ and
Im$[\delta_k^{(1)}]$. This is
already a good sign that these coefficients probe the subleading non-perturbative
$(2,0)$ and $(0,2)$ sectors.
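In the same schematic vein, the subtracted sequence \eq{deltak1} then reads as follows (reusing borel\_pade\_sum from above; here d00, P10 and P01 stand for precomputed lists of the coefficients $d_k^{(0,0)}$, $p_h^{(1,0)}$ and $p_h^{(0,1)}$, and d101 for $\tilde{d}_0^{(1,0)[1]}$ -- all of these names are ours):
\begin{verbatim}
def delta1(k, d00, d101, P10, P01, beta=-1, A1=1, A2=-1):
    # numerical version of eq. (deltak1) for diagram (a)
    g = mp.gamma(k - beta)
    return (d00[k]
            - d101 * g * mp.digamma(k - beta) / A1**(k - beta)
            - g / A1**(k - beta) * borel_pade_sum(P10, mp.mpf(1) / k)
            - g / A2**(k - beta) * borel_pade_sum(P01, mp.mpf(1) / k))
\end{verbatim}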
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{figures/coeffsPlotExactDiagram.pdf}
\caption{Plot of the coefficients $d_k^{(0,0)}$ (top, blue) and
Re$[\delta_k^{(1)}]$ (bottom, orange), Im$[\delta_k^{(1)}]$ (middle, green)
for $20\leq k\leq 100$ for diagram $(a)$. We clearly observe
that the coefficients $\delta_k^{(1)}$ are `less asymptotic'
compared to the perturbative coefficients $d_k^{(0,0)}$, a sign
that this new sequence probes the subleading non-perturbative $(2,0)$ and $(0,2)$ sectors.}
\label{fig:coeffsPlotExactDiagram}
\end{figure}
Let us now discuss the fact that $\delta_k^{(1)}$ for diagram $(a)$ appears to have an imaginary part. This may seem at odds with the fact that we are describing the large order behaviour of a {\em real} expansion of the Adler function, but in fact it is not. Our numerical resurgence analysis yields
\begin{equation}\label{eq:largeOrderImPartDiagrama}
\mathrm{i} \, \text{Im}\left[\delta_k^{(1)}\right]
\sim \pm \pi\mathrm{i}
\left[
- \frac{\Gamma(k+2)}{2^{k+2}} \cdot 3
- \frac{\Gamma(k+1)}{2^{k+1}} \cdot \frac{3}{2}
+ \frac{\Gamma(k+1)}{(-2)^{k+1}} \cdot \frac{1}{36}
-\frac{\Gamma(k+1)}{3^{k+1}}
\right]\,,
\end{equation}
where the overall sign ambiguity comes from a choice of contour in the Borel-Padé evaluation of \eq{deltak1}, either above or below the singularities on the positive real axis in the Borel plane. Note that this imaginary contribution can itself be Borel transformed into
\begin{equation}
\pm \mathrm{i} \pi \left[-3\frac{1}{(2-u)^2} - \frac{3}{2} \frac{1}{2-u} -\frac{1}{36}\frac{1}{2+u} - \frac{1}{3-u}\right]\,.
\label{eq:imgcontr}
\end{equation}
Now, these same imaginary and ambiguous contributions will also appear in the Borel transform of diagram (a) itself, \eq{BorelTransNLOb}. For example, an ambiguity
is present in the expansion of the logarithm $\log(1-u)$ in that expression around $u=2$. This expansion takes the form
\begin{equation}
\frac{6\log(1-u)}{(3-u)(2-u)^2(1-u)u}\bigg|_{u=2}
= - \frac{\pm3\pi\mathrm{i}}{(2-u)^2}-\frac{\pm\frac32\pi\mathrm{i}+3}{2-u}+...
\end{equation}
where we see the exact same imaginary ambiguity as in the first and second term of \eq{imgcontr}. As a result, the coefficients of the $(2,0)$ sector that can be read off from either \eq{largeOrderImPartDiagrama} or \eq{imgcontr} will {\em not} be ambiguous and moreover will be purely real.
A similar reasoning can be applied to the third ambiguous term in \eq{imgcontr} which will reappear in the expansion around $\log(1+u)$ of \eq{BorelTransNLOb}:
\begin{equation}
\frac{2\log(1+u)}{(1-u)^2(2-u)u(2+u)}\bigg|_{u=-2}
= -\frac{1}{36} \cdot \frac{\pm\pi\mathrm{i} }{2+u} + ...\,.
\end{equation}
For the fourth ambiguous term in \eq{imgcontr}, which will determine a $(3,0)$ sector coefficient, we expand around $u=3$:
\begin{equation}
\frac{6\log(1-u)}{(3-u)(2-u)^2(1-u)u}\bigg|_{u=3}
= -\frac{\pm\pi\mathrm{i}-\log(2)}{3-u}+...\,.
\end{equation}
Taking the same steps as we did for the $(1,0)$ and $(0,1)$ sectors,
we can now determine the coefficients in the $(2,0)$ and $(0,2)$ sectors by doing a large order analysis on the $\delta_k^{(1)}$. Above, we have already analysed the imaginary part of $\delta_k^{(1)}$. The real part does not contribute to the $(0,2)$ sector, but it does affect the $(2,0)$ sector. From it, we extract $\beta_{2,0}=-1$ and the $(2,0)$ coefficients
\begin{align}
\tilde{d}_0^{(2,0)[1]}
&= -2\\
\tilde{d}_h^{(2,0)[0]}
&=
\begin{cases}
3-2\gamma_E+\frac{1}{4}\log(3)
&\qquad h=0\\
\Gamma(h)\Big(\left(\frac23h +\frac{11}{9}\right)(-1)^{h}+ 1
-\frac29\frac{1}{2^{h}}\Big)
&\qquad h>0\,.
\end{cases}\label{eq:diagrama200sectorreal}
\end{align}
As we see, the $(2,0)$ sector again contains a $\log(\alpha)$ contribution in the transseries, given by the coefficient $\tilde{d}_0^{(2,0)[1]}$. We were able to extract the first 15 coefficients in \eq{diagrama200sectorreal} (as well as those of the simpler imaginary part, \eq{largeOrderImPartDiagrama}) numerically from \eq{deltak1} up to 9 decimal places, after which we inferred the exact expressions. Once again, we know that this is exact to all orders, because we can compare with the exact Borel transform. (Recall how we did this for the $(1,0)$ and $(0,1)$ sectors in \eqs{borelDiagramaAround1}{borelDiagramaAround1LogTerm}.)
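In practice, a leading constant such as $\tilde{d}_0^{(2,0)[1]}$ is read off by stripping the known growth and extrapolating the remainder. A minimal sketch using mpmath's Richardson extrapolation (the window of $k$ values is an arbitrary choice on our part; because of the $\psi(k-\beta)$ enhancement the convergence is slow, and more refined fits are needed for high precision):
\begin{verbatim}
def leading_log_constant(delta, beta=-1, A=2, kmin=60, kmax=100):
    # delta[k] ~ d * Gamma(k-beta) * psi(k-beta) / A^(k-beta) at leading order
    seq = [mp.re(delta[k]) * A**(k - beta)
           / (mp.gamma(k - beta) * mp.digamma(k - beta))
           for k in range(kmin, kmax)]
    val, _ = mp.richardson(seq)
    return val   # slowly approaches -2 = d0^{(2,0)[1]} for diagram (a)
\end{verbatim}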
Let us now turn to diagram $(b)$. Similar to diagram $(a)$, before we can probe the second non-perturbative $(2,0)$ and $(0,2)$ sectors, we want to subtract the $(1,0)$ and $(0,1)$ sectors. As both the $(0,1)$ sector and the log part of the $(1,0)$ sector (i.e. the $(1,0)[1]$ sector) contain just one coefficient, we can directly subtract these parts. The $(1,0)[0]$ coefficients however grow asymptotically, and therefore we have to perform a Borel-Padé summation on this part. This leads to the new sequence
\begin{align}\label{eq:deltak1bandc}
\delta_k^{(1)}
&= d_k^{(0,0)}
- \tilde{d}_0^{(1,0)[1]}\frac{\Gamma(k-\beta)\psi(k-\beta)}{A_1^{k-\beta}}
- \frac{\Gamma(k-\beta)}{A_2^{k-\beta}}\tilde{d}_0^{(0,1)[0]}
- \frac{\Gamma(k-\beta)}{A_1^{k-\beta}}\mathcal{S}[P^{(1,0)}]\left(\frac1k\right)\,,
\end{align}
with $\beta=-1$ and $A_1=-A_2=1$ and the coefficients $\tilde{d}_0^{(1,0)[1]}$ and $\tilde{d}_0^{(0,1)[0]}$ given in \eqs{101sectorDiagramb}{010sectorDiagramb} respectively. A numerical analysis on the imaginary part of $\delta_k^{(1)}$ yields a contribution to the $(2,0)$ sector:
\begin{equation}\label{eq:imPartDiagramb20}
\mathrm{i}\,\text{Im}\left[\delta_k^{(1)}\right]
\sim \pm6\pi\mathrm{i}\frac{\Gamma(k+1)}{2^{k+1}} + \ord{3^{-k}}\,.
\end{equation}
From the real part of $\delta_k^{(1)}$ we extract for both sectors $\beta_{2,0}=\beta_{0,2}=-1$. Furthermore, we extract a single non-perturbative coefficient for the $(0,2)$ sector,
\begin{equation}
\tilde{d}_0^{(0,2)[0]} = -\frac16+\frac29\log(2)\,,
\end{equation}
and the first few coefficients of the $(2,0)$ sectors. We list the first four:
\begin{equation}
\tilde{d}_0^{(2,0)[0]} = -2\,,
\qquad
\tilde{d}_1^{(2,0)[0]}= -\frac{14}{3}\,,
\qquad
\tilde{d}_2^{(2,0)[0]} = -\frac{5}{2}\,,
\qquad
\tilde{d}_3^{(2,0)[0]} = -\frac{7}{6}-8\zeta_3\,.
\end{equation}
We have obtained these coefficients numerically up to at least 9 decimal places, as well as some further ones not displayed here, after which we inferred the analytic expressions. Furthermore, notice that this diagram does not have a $\log$ sector, i.e. all the coefficients $\tilde{d}_h^{(2,0)[1]}$ vanish.
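The final step -- passing from high-precision decimals to exact expressions such as $-\frac{7}{6}-8\zeta_3$ -- can be automated with an integer relation search. A minimal mpmath sketch (val is a stand-in for the fitted numerical value; in our analysis the candidate constants were suggested by the structure of the Borel transform):
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 15
val = -mp.mpf(7)/6 - 8*mp.zeta(3)       # stand-in for a fitted value
print(mp.identify(val, ['zeta(3)']))    # recovers a rational combination
                                        # of 1 and zeta(3)
\end{verbatim}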
We can improve on these results using again the method of alien derivatives acting on the convolution integral to find all $\tilde{d}_h^{(2,0)[0]}$ exactly. We repeat the convolution integral, \eqs{convoMethodDiagramb}{convoMethodDiagramb2}, for this diagram:
\begin{align}
\mathcal{B}[D_b^{(0,0)}](u)
=\frac{-6\Gamma(1+u)}{(1-u)\Gamma(3-u)}\mathcal{B}[\Psi^{(0)}](u),
\end{align}
with:
\begin{equation}
\mathcal{B}[\Psi^{(0)}](u)
=\int_0^u du_1\, \mathcal{B}[F](u_1)\mathcal{B}[F](u-u_1),
\quad\text{with}\quad
\mathcal{B}[F](u)
= \frac{\Gamma(1-u)}{(2-u)\Gamma(2+u)}\,.
\end{equation}
In order to extract the $(2,0)$ sector, we need the expansion of $\mathcal{B}[\Psi^{(0)}]$ around $u=2$, i.e. we need the second non-perturbative sector $\Psi^{(2)}$.
As discussed in \sec{convoInt}, this implies we need
\begin{equation}\label{eq:delta1FandDelta2F}
\Delta_1F = \frac12(2\pi\mathrm{i}),
\qquad\text{and}\qquad
\Delta_{2}F = -\frac16(2\pi\mathrm{i})\left(\frac{1}\alpha +\frac{17-12\gamma_E}{6}\right)\,,
\end{equation}
which is read off from the expansions around the poles of $\mathcal{B}[F]$ positioned at $u=1$ and $u=2$.
We notice that, similar to the LO Adler function, $\Delta_1^2F=0$. However, $\Delta_2F$ is now non-vanishing.
This yields
\begin{align}
\left(S_1^\Psi\right)^2\Psi^{(2)}
&= 2\cdot \frac12 (\Delta_1F)^2 + 2 \cdot F\Delta_2F\nonumber\\
&=\frac14(2\pi\mathrm{i})^2
-\frac13(2\pi\mathrm{i})\left(\frac{1}\alpha +\frac{17-12\gamma_E}{6}\right)
\sum_{n=0}^\infty F_n\alpha^{n+1}\nonumber\\
&= 2\pi\mathrm{i}\left(\frac12 \pi\mathrm{i} - \frac{1}{3}F_0\right)
- \frac{1}{3}(2\pi\mathrm{i})\sum_{n=0}^\infty \left(
F_{n+1}
+\frac{17-12\gamma_E}{6} F_{n}
\right)\alpha^{n+1}\,.
\end{align}
Recalling the discussion at the end of \sec{convoDoublePole}, in particular \eqs{doublePoleConvoTrick}{doublePoleConvoTrick2}, we can write the Borel transform of $\Psi^{(2)}$ in terms of $\mathcal{B}[F](u)$ and its derivative $\mathcal{B}[F]'(u)$ w.r.t. $u$. Therefore, the Borel transform $\mathcal{B}[\Psi^{(0)}]$ around $u=2$ reads
\begin{align}
\mathcal{B}[\Psi^{(0)}](u)\Big|_{u=2}
&= \frac{\frac12\pi\mathrm{i}-\frac13F_0}{u-2}-\left(S_1^\Psi\right)^2\mathcal{B}[\Psi^{(2)}](u-2)\frac{\log(1-\frac{u}{2})}{2\pi\mathrm{i}}+...\\
&= \frac{\frac12\pi\mathrm{i}-\frac13F_0}{u-2}
+ \frac{1}{3}\left(\mathcal{B}[F]'(u-2)+\frac{17-12\gamma_E}{6}\mathcal{B}[F](u-2)\right)
\log\left(1-\frac{u}{2}\right)
+...\,.\nonumber
\end{align}
Adding back the prefactor to get $\mathcal{B}[D_b](u)$ and using $F_0=\frac12$ yields, after a brief calculation\footnote{Note that in \eq{imPartDiagramb20} the factor $6\pi \mathrm{i}$ appears with an ambiguous sign. As usual, in large order formulas this imaginary ambiguity should not contribute to the perturbative coefficients; it will be `canceled' by the $6\pi\mathrm{i}$ terms in the present equation. One might worry that here, no ambiguous sign appears in front of the $6 \pi \mathrm{i}$, but this is a result of the fact that we have implicitly chosen the `upper contour' by applying a particular form of the Stokes automorphism and the resulting alien derivatives.},
\begin{align}
\mathcal{B}[D_b^{(0,0)}](u)\Big|_{u=2}
&=\frac{-6\Gamma(1+u)}{(1-u)\Gamma(3-u)}\mathcal{B}[\Psi^{(0)}](u)\Big|_{u=2}+...\nonumber\\
&= \frac{6\pi\mathrm{i}-2}{2-u}
-S_{1,0}^2\mathcal{B}[D_b^{(2,0)}](u-2)\frac{\log(1-\frac{u}{2})}{2\pi\mathrm{i}}
+...\,,\label{eq:sector2NLO2}
\end{align}
with
\begin{equation}\label{eq:sector2NLO2BorelPart}
\frac{S_{1,0}^2}{2\pi\mathrm{i}}\mathcal{B}[D_b^{(2,0)}](u-2)
= \frac{2u}{(1-u)(4-u)}\left(\frac{1}{4-u}+\frac{17-12\gamma_E}{6}
-\psi(3-u)-\psi(u)
\right)\,.
\end{equation}
The exact coefficients $\tilde{d}_h^{(2,0)[0]}$ are obtained from the residue of the simple pole around $u=2$ in \eq{sector2NLO2}, and after applying an inverse Borel transform on \eq{sector2NLO2BorelPart}.
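Concretely, with the same inverse Borel convention as before ($\tilde{d}_h$ equals $\Gamma(h)$ times the coefficient of $(u-2)^{h-1}$), these coefficients can be generated symbolically. A short sympy sketch for the $h>0$ coefficients (the $h=0$ coefficient instead comes from the simple pole in \eq{sector2NLO2}):
\begin{verbatim}
import sympy as sp

u, v = sp.symbols('u v')
gE = sp.EulerGamma
B = (2*u / ((1 - u)*(4 - u))
     * (1/(4 - u) + (17 - 12*gE)/6 - sp.digamma(3 - u) - sp.digamma(u)))
ser = sp.series(B.subs(u, 2 + v), v, 0, 5).removeO()
for h in range(1, 5):
    print(h, sp.simplify(sp.gamma(h) * ser.coeff(v, h - 1)))
# h=1 gives -14/3, h=3 gives -7/6 - 8*zeta(3), in line with the list above
\end{verbatim}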
At this point, we want to come back to a leftover point from the previous subsection: the resurgence of the $\tilde{d}_0^{(1,0)[0]}$ and $\tilde{d}_0^{(0,1)[0]}$ coefficients -- which we were not yet able to determine there -- from the convolution integral. (We shall see momentarily how the same reasoning also leads to the resurgence of the $\tilde{d}_0^{(0,2)[0]}$ coefficient.)
In principle, as $\mathcal{B}[F](u)$ only has singularities at positive integer values of $u$, its convolution integral $\mathcal{B}[\Psi^{(0)}](u)$ can also only have singularities at positive integer values of $u$. Therefore, with the techniques of studying the convolution integral with alien derivatives, it seems we can only make statements about the local expansions of $\mathcal{B}[\Psi^{(0)}](u)$ around these values. However, the prefactor
\begin{equation}
\frac{-6\Gamma(1+u)}{(1-u)\Gamma(3-u)}
\end{equation}
in front of this convolution integral has singularities at $u=1$ and at the negative integer values of $u$. Therefore, these singularities pick up regular terms of $\mathcal{B}[\Psi^{(0)}](u)$ at these points.
In particular, the prefactor also leads to the pattern of singularities at negative integer values of $u$ in $\mathcal{B}[D_b^{(2,0)}](u-2)$. Furthermore, there is also a singularity at $u=1$, and therefore we notice that this sector contributes to the coefficients $\tilde{d}_0^{(1,0)[0]}$, $\tilde{d}_0^{(0,1)[0]}$ and $\tilde{d}_0^{(0,2)[0]}$. For example, expanding the Borel transform of the $(2,0)$ sector around $u=1$ yields
\begin{equation}
S_{1,0}^2\mathcal{B}[D_b^{(2,0)}](u-2)\frac{\log(1-\frac{u}{2})}{2\pi\mathrm{i}}\Big|_{u=1} = \frac{13\log(2)}{9(u-1)}+...
\end{equation}
with the ellipsis denoting regular terms around $u=1$. We notice that this indeed corresponds to part of the coefficient $\tilde{d}_0^{(1,0)[0]}$ already obtained numerically in \eq{100sectorDiagramb}. The remaining part of this coefficient follows from the fact that we multiplied a pole at $u=1$ of the prefactor with the expansion of $\mathcal{B}[\Psi^{(0)}](u)$ around $u=1$.
Similarly, we can expand the same expression around $u=-1$ and $u=-2$ to find (parts of) the coefficients $\tilde{d}_0^{(0,1)[0]}$ and $\tilde{d}_0^{(0,2)[0]}$ as given above. In conclusion, we see in particular that knowing the subleading $(2,0)$ and $(0,2)$ sectors actually teaches us a bit more about the leading $(1,0)$ and $(0,1)$ sectors, which we were not able to deduce from the perturbative $(0,0)$ sector alone.
Finally, we discuss the resurgence of the $(2,0)$ and $(0,2)$ sectors of diagram $(c)$. For this, we can use the same sequence as in \eq{deltak1bandc}, but now using the coefficients of the $(1,0)$ and $(0,1)$ sectors of diagram $(c)$ given in Eqs. \eqref{eq:101diagramc}-\eqref{eq:010diagramc}. A numerical analysis yields
\begin{equation}\label{eq:imdiagramc}
\mathrm{i}\,\text{Im}\left[\delta_k^{(1)}\right]
\sim \pm3\pi\mathrm{i}\frac{\Gamma(k+2)}{2^{k+2}}\left(1-\frac{1}{k+1}\right) + \ord{3^{-k}}
\end{equation}
for the imaginary part.
Similar to diagram $(b)$, there is only a single non-vanishing non-perturbative coefficient in the $(0,2)$ sector:
\begin{equation}
\tilde{d}_0^{(0,2)[0]} = -\frac{17}{72}+\frac13\log(2)\,.
\end{equation}
However, the $(2,0)$ sector of diagram $(c)$ has a $\log$ contribution in the transseries, given by the non-perturbative coefficient
\begin{equation}
\tilde{d}_0^{(2,0)[1]} = -2\,.
\end{equation}
Again we list the first four coefficients of the $(2,0)[0]$ sector and we show in a moment how higher coefficients are obtained using the convolution method:
\begin{equation}\label{eq:200diagramc}
\tilde{d}_0^{(2,0)[0]} = 4-2\gamma_E+\frac14\log(3)\,,
\quad
\tilde{d}_1^{(2,0)[0]}= \frac13\,,
\quad
\tilde{d}_2^{(2,0)[0]} = \frac{11}{12}\,,
\quad
\tilde{d}_3^{(2,0)[0]} = -\frac{43}{12}+4\zeta_3\,.
\end{equation}
The numerical analysis on $\delta_k^{(1)}$ agrees with Eqs. \eqref{eq:imdiagramc}-\eqref{eq:200diagramc} up to at least 11 decimal places.
These same results are reproduced using the convolution integral method. We repeat this integral, \eqs{convoMethodDiagramc1}{convoMethodDiagramc2},
\begin{align}
\mathcal{B}[D_c^{(0,0)}](u)
=\frac{-6\Gamma(u)}{(1-u)(2-u)\Gamma(3-u)}\mathcal{B}[\Phi^{(0)}](u),
\end{align}
with
\begin{equation}
\mathcal{B}[\Phi^{(0)}](u)
=\int_0^u du_1\, \mathcal{B}[F](u_1)\mathcal{B}[G](u-u_1),
\qquad\text{with}\qquad
\mathcal{B}[G](u)
= \frac{u\Gamma(1-u)}{\Gamma(2+u)}\,,
\end{equation}
and $\mathcal{B}[F]$ the same as for diagram $(b)$ given in \eq{convoMethodDiagramb2}.
Again, for the resurgence of the $(2,0)$ and $(0,2)$ sectors of diagram $(c)$, we need access to the second non-perturbative sector of $\Phi$. To compute this, we use $\Delta_1F$ and $\Delta_2F$ already given in \eq{delta1FandDelta2F}, and also
\begin{equation}
\Delta_1 G=\frac12(2\pi\mathrm{i}),
\qquad\text{and}\qquad
\Delta_2G=-\frac13(2\pi\mathrm{i})\,.
\end{equation}
As $\Delta_1^2F=\Delta_1^2G=0$, we get
\begin{align}
\left(S_1^\Phi\right)^2\Phi^{(2)}
&=2\cdot\frac12\Delta_1F\Delta_1G+F\Delta_2G+G\Delta_2F\\
&=\frac14(2\pi\mathrm{i})^2
-\frac13(2\pi\mathrm{i}) \sum_{n=0}^\infty F_n\alpha^{n+1}
-\frac16(2\pi\mathrm{i})\left(\frac{1}\alpha +\frac{17-12\gamma_E}{6}\right)\sum_{n=0}^\infty G_n\alpha^{n+1}\nonumber\\
&= (2\pi\mathrm{i})\left(\frac12\pi\mathrm{i} - \frac{1}{6}G_0\right)
-\frac16(2\pi\mathrm{i}) \sum_{n=0}^\infty \left(
2F_n
+ G_{n+1}
+\frac{17-12\gamma_E}{6} G_{n}
\right)\alpha^{n+1}.\nonumber
\end{align}
We can write the Borel transform $\mathcal{B}[\Phi^{(2)}](u)$ in terms of $\mathcal{B}[F](u)$, $\mathcal{B}[G](u)$, and $\mathcal{B}[G]'(u)$. This yields
\begin{align}
\mathcal{B}[\Phi^{(0)}](u)\Big|_{u=2}
&= \frac{\frac12\pi\mathrm{i}-\frac16G_0}{u-2}-\left(S_1^\Phi\right)^2\mathcal{B}[\Phi^{(2)}](u-2)\frac{\log(1-\frac{u}{2})}{2\pi\mathrm{i}}+...\\
&= \frac{\frac12\pi\mathrm{i}-\frac16G_0}{u-2}
+ \frac16\bigg(2\mathcal{B}[G](u-2)+\mathcal{B}[G]'(u-2)\nonumber\\
&\hspace{4cm}
+\frac{17-12\gamma_E}{6}\mathcal{B}[G](u-2)\bigg)
\log\left(1-\frac{u}{2}\right)
+...\,.\nonumber
\end{align}
Adding back the prefactor to get $\mathcal{B}[D_c](u)$ and using $G_0=0$ yields
\begin{align}
\mathcal{B}[D_c^{(0,0)}](u)\Big|_{u=2}
&=\frac{-6\Gamma(u)}{(1-u)(2-u)\Gamma(3-u)}\left[\mathcal{B}[\Phi^{(0)}](u)\Big|_{u=2}+...\right]\nonumber\\
&= -\frac{3\pi\mathrm{i}}{(2-u)^2}-\frac{3\pi\mathrm{i}}{2-u}
-S_{1,0}^2\mathcal{B}[D_c^{(2,0)}](u-2)\frac{\log(1-\frac{u}{2})}{2\pi\mathrm{i}}+...\,,
\label{eq:expansionAroundLog(2-u)c}
\end{align}
with
\begin{equation}
\frac{S_{1,0}^2}{2\pi\mathrm{i}}\mathcal{B}[D_c^{(2,0)}](u-2)
= -\frac{1}{1-u}\left(\frac{1}{4-u}
-\frac{2}{2-u}+\frac{17-12\gamma_E}{6}
-\psi(3-u)-\psi(u)
\right)\,.
\end{equation}
An inverse Borel transform indeed yields the coefficients $\tilde{d}_h^{(2,0)[0]}$ given above. Note that the prefactor in the first line of \eq{expansionAroundLog(2-u)c} has a pole at $u=2$, and therefore a constant term from the regular part between square brackets also contributes to the singular terms in the last line. This corresponds to the coefficient $\tilde{d}_0^{(2,0)[0]}$. With the same reasoning as for diagram $(b)$, we can get parts of the coefficients $\tilde{d}_0^{(1,0)[0]}$, $\tilde{d}_0^{(0,1)[0]}$ and $\tilde{d}_0^{(0,2)[0]}$ by expanding the sector $\mathcal{B}[D_c^{(2,0)}]$ around the singular points $u=1$, $u=-1$ and $u=-2$ respectively.
This finishes the analysis and discussion of the resurgence of the $(2,0)$ and $(0,2)$ sectors. In our analysis, we saw that for diagrams $(b)$ and $(c)$ the $(2,0)$ sector was the more interesting one from a resurgence point of view, with resurgence relations that involve multiple non-perturbative sectors. In the next two subsections, we study some further non-perturbative sectors and draw the alien lattices for these diagrams.
\subsection{Further non-perturbative sectors and alien derivative structure}
In the previous two subsections, we have focused on the resurgence of the $(1,0)$, $(0,1)$, $(2,0)$ and $(0,2)$ sectors from the perturbative $(0,0)$ sector. As already observed there, the coefficients of these non-perturbative sectors show asymptotic growth, so these sectors themselves have resurgence relations to other non-perturbative sectors. We will see momentarily that these resurgence relations probe more of the full transseries: at order $1/N_f^2$, non-perturbative sectors $(n,m)$ with both $n\neq0$ and $m\neq0$ appear, something we have not observed before.
Let us once again start our discussion with diagram $(a)$. In \eqs{borelDiagramaAround1}{borelDiagramaAround1LogTerm}, we saw how in the Borel plane of $\mathcal{B}[D_a^{(0,0)}](u)$, the $(1,0)$ sector resurges in the form
\begin{equation}
\frac{S_{1,0}}{2\pi\mathrm{i}}\mathcal{B}[D^{(1,0)[0]}](u-1)
= \frac{3}{(2-u)^2}+\frac{3}{2(2-u)} - \frac{1}{2u} + \frac{1}{3-u}\,.
\end{equation}
Likewise, one finds the Borel transform of the $(0,1)$ sector:
\begin{equation}\label{eq:BorelDiagrama01Sector}
\frac{S_{0,1}}{2\pi\mathrm{i}}\mathcal{B}[D^{(0,1)[0]}](u+1)
= -\frac{1}{4(2-u)}-\frac{2}{3(1-u)^2}+\frac{2}{9(1-u)} - \frac{1}{2u} + \frac{1}{36(2+u)}\,.
\end{equation}
In particular, we notice that these expressions have a singularity at $u=0$. One can wonder why the perturbative $(0,0)$ sector does not contain this singularity in its own Borel transform. The reason is that, to get the $(0,0)$ sector, one has to include the logarithms:
\begin{equation}
\frac{S_{1,0}}{2\pi\mathrm{i}}\mathcal{B}[D^{(1,0)[0]}](u-1)\log(1-u)\Big|_{u=0}
= \frac12+\ord{u}
\end{equation}
and
\begin{equation}
\frac{S_{0,1}}{2\pi\mathrm{i}}\mathcal{B}[D^{(0,1)[0]}](u+1)\log(1+u)\Big|_{u=0}
= -\frac12+\ord{u}\,.
\end{equation}
That is, as seen from the point of view of the $(0,0)$ sector, the two constant terms that would contribute to a singularity at $u=0$ cancel. However, from the perspective of the $(1,0)$ and $(0,1)$ sectors, the individual singularities contribute to the large order growth of these sectors. Note that what these sectors see is {\em not} the $(0,0)$ sector again, as that sector has a vanishing constant term (see \eq{diagramaPerturbativeCoeffs}) and moreover, as in examples with bridge equations, one would not expect a sector like $(1,0)$ to be able to detect the $(0,0)$ sector. Thus, we conclude that what we find here is the contribution of a $(1,1)$ sector, i.e. we get
\begin{equation}
D^{(1,1)[0]} = \frac12\,,
\end{equation}
which is a sector containing only a single non-perturbative coefficient.
This relation is best viewed in the alien lattice, for which we now learn that it must contain the following arrows:
\begin{equation*}
\vcenter{\hbox{\includegraphics[width=.45\textwidth]{figures/alien_lattice_NLO_a-1-0_and-0-1.pdf}}}
\end{equation*}
Here we also included the motions towards other non-perturbative sectors. Let us make two more remarks about this diagram.
First of all, observe that from the $(0,1)$ sector we have also drawn alien derivative arrows towards a $(2,1)$ and a $(3,1)$ sector. The existence of a $(2,1)$ sector can be argued in a similar way as above and we will show this explicitly in a moment, by looking at the singularities in the relevant Borel transforms and observing that these do not correspond to sectors that are already known. The same argument for a $(3,1)$ sector is on less solid ground, as the coefficient we find for that sector is also contained in the relevant perturbative coefficient in the $(2,0)$ sector, and so we cannot be certain that it is indeed the former sector that the $(1,0)$ sector detects. Thus, for now the existence of a $(3,1)$ sector is a conjecture, which we hope to come back to in future work.
Secondly, notice the alien derivative $\Delta_1$ pointing from the $(1,0)$ sector to the $(2,0)$ sector; this alien derivative was discussed and explained around \eq{imgcontr}. In this diagram, we drew the $(2,0)$ sector with a white, solid box, which would imply that it is not an asymptotic series, contrary to the fact that we already know from the previous subsection that this {\em is} an asymptotic sector. The reason for this representation here is that the $(1,0)$ sector only sees a small, non-asymptotic part of the $(2,0)$ sector.
\bigskip
\noindent
Similarly, we observe that the Borel transform of the $(0,1)$ sector has a double pole at $u=1$, whereas this does not resurge from the $(0,0)$ sector directly, i.e.\ in our analysis we only found a single pole given by the coefficient $\tilde{d}_0^{(1,0)[0]}=\frac13-3\gamma_E+\frac{13}{9}\log(2)$. A closer inspection of the $(2,0)$ sector reveals the same double pole at $u=1$, i.e. we have
\begin{equation}
\frac{S_{1,0}^2}{2\pi\mathrm{i}}\mathcal{B}[D^{(2,0)[0]}](u-1)
= -\frac{2}{3(1-u)^2}+\frac{11}{9(1-u)} +\frac{1}{3-u}- \frac{2}{9(4-u)}\,,
\end{equation}
with the same coefficient $\frac23$ as in \eq{BorelDiagrama01Sector}.
Again, from the point of view of the $(0,0)$ sector, one has to include the logarithms:
\begin{equation}
\frac{S_{1,0}^2}{2\pi\mathrm{i}}\mathcal{B}[D^{(2,0)[0]}](u-1)\log\left(1-\frac{u}{2}\right)\Big|_{u=1}
= \frac{2\log(2)}{3(1-u)^2} +\frac{\frac23+\frac{11}{9}\log(2)}{1-u}+...
\end{equation}
and
\begin{equation}
\frac{S_{0,1}}{2\pi\mathrm{i}}\mathcal{B}[D^{(0,1)[0]}](u+1)\log(1+u)\Big|_{u=1}
= -\frac{2\log(2)}{3(1-u)^2} +\frac{-\frac13+\frac{2}{9}\log(2)}{1-u}+...\,.
\end{equation}
This means that the $(0,0)$ sector does not see the double pole, but the $(2,0)$ and $(0,1)$ sectors independently do. Furthermore, we observe that the $(2,0)$ and the $(0,1)$ sectors see the same double pole, and we thus conclude that this is part of the $(2,1)$ sector.
We notice however that the single pole, as seen from the point of view of the $(2,0)$ and $(0,1)$ sectors, has a different residue in the two cases, i.e.\ $11/9$ and $2/9$ respectively. This is a result of the fact that both sectors have resurgence relations towards both the $(1,0)$ sector and the $(2,1)$ sector. As the $(2,0)$ sector sees the $(2,1)$ and $(1,0)$ sectors with different weights compared to the $(0,1)$ sector (which depend on different Stokes constants), we conclude that it is hard to distinguish which part of the single pole at $u=1$ is part of the $(2,1)$ sector and which part corresponds to the $(1,0)$ sector. Being able to distinguish between the two would allow one to extract an interesting relation between the Stokes constants of the Adler function. This is another point that we hope to come back to in future work.
Again, these relations are best viewed in terms of the alien lattice, which as we now have learned also contains the following ingredients:
\begin{equation*}
\vcenter{\hbox{\includegraphics[width=.6 \textwidth]{figures/alien_lattice_NLO_a-2-0_and-0-1.pdf}}}
\end{equation*}
The full alien lattice for diagram $(a)$ can now be obtained by combining the previous two diagrams with the similar diagram for the $(0,0)$ sector:
\begin{equation*}
\vcenter{\hbox{\includegraphics[width=.6\textwidth]{figures/alien_lattice_NLO_a-0-0.pdf}}}
\end{equation*}
where we also included motions from the perturbative $(0,0)$ sector towards a $(3,0)$ and $(4,0)$ sector (see also the discussion in the next subsection).
For diagrams $(b)$ and $(c)$, we have seen that only the $(1,0)$ and $(2,0)$ sectors are asymptotic. To be precise, we derived for diagram $(b)$ the following Borel transform of the $(1,0)$ sector:
\begin{equation}
\frac{S_{1,0}}{2\pi\mathrm{i}}\mathcal{B}[D_b^{(1,0)[0]}](u-1)
= \frac{6}{2-u}-\frac{3}{3-u}\,.
\end{equation}
The singularities at $u=2$ and $u=3$ can be seen as part of the $(2,0)$ and $(3,0)$ sectors, i.e.\ in terms of alien lattices:
\begin{equation}
\vcenter{\hbox{\includegraphics[width=.3\textwidth]{figures/alien_lattice_NLO_b-1-0.pdf}}}
\end{equation}
Diagram $(c)$ on the other hand is slightly different. The Borel transform of its $(1,0)$ sector reads
\begin{equation}
\frac{S_{1,0}}{2\pi\mathrm{i}}\mathcal{B}[D_c^{(1,0)[0]}](u-1)
= \frac{1}{2u}+\frac{3}{(2-u)^2}+\frac{3}{2(2-u)}+\frac{1}{2(3-u)}\,.
\end{equation}
Again we notice a singularity at $u=0$ which is not seen by the perturbative $(0,0)$ sector. We therefore conclude that this is the effect of a non-zero $(1,1)$ sector. As before, this new sector is not an asymptotic sector, but it only contains one non-perturbative coefficient. This yields the following part of the alien lattice:
\begin{equation*}
\vcenter{\hbox{\includegraphics[width=.3\textwidth]{figures/alien_lattice_NLO_a-1-0.pdf}}}
\end{equation*}
We already observed earlier that the $(2,0)$ sector of both diagram $(b)$ and $(c)$ has an infinite number of singularities in $u$ at both positive and negative integers.
Coming back to our discussion in \sec{prefactor}, we want to mention here that from the convolution integral alone, \eq{convoMethodDiagramc2}, we would not have expected singularities at $u=0$ or at negative values of $u$. However, the prefactor in front of the convolution integral, \eq{convoMethodDiagramc1}, adds singularities at non-positive $u$. It is therefore hard to distinguish which part of the pole at $u=1$ for diagram $(c)$ is related to the $(0,1)$ sector and which part to the $(1,2)$ sector.
First of all, the singularities at positive integers of $u$ follow mainly from the convolution integral, but they also arise from the prefactors in front of the convolution integral since these add a pole at $u=1$ for diagram $(b)$ and a pole at $u=1,2$ for diagram $(c)$. We therefore expect that this contributes to both a $(1,0)$ and a $(2,1)$ sector as seen from the $(2,0)$ sector. Likewise, the prefactor adds an infinite number of singularities at negative $u$, leading to both $(0,n)$ as well as $(2,2+n)$ sectors, for $n\geq1$, or in terms of motions on the alien lattice:
\begin{equation*}
\vcenter{\hbox{\includegraphics[width=.6\textwidth]{figures/alien_lattice_NLO_b-2-0.pdf}}}
\end{equation*}
All in all, we see that at order $1/N_f^2$ in the number of flavours, the Adler function displays many more interesting resurgence features than were present at order $1/N_f$.
\subsection{Overview: transseries sectors and relations}
We have discussed the resurgence of the first non-perturbative sectors ($(1,0)$ and $(0,1)$) as well as the second non-perturbative sectors ($(2,0)$ and $(0,2)$) of diagrams $(a)$-$(c)$, plus their own resurgence towards other sectors. We now briefly sketch the resurgence structure of higher sectors for these diagrams.
Looking at the closed form expression for diagram $(a)$ in \eq{BorelTransNLOb}, we notice from the singularity structure that the only remaining non-vanishing sectors are the $(3,0)$ and $(4,0)$ sectors. Furthermore, as these singularities are not branch cuts but just simple poles, we conclude that they describe non-asymptotic sectors containing just a single non-vanishing non-perturbative coefficient.
From our discussion in \sec{convoEquidistant}, it might seem that we need to act with many alien derivatives to get access to the higher sectors of diagrams $(b)$ and $(c)$. However, the constituents $F$ and $G$ of the convolution integrals of diagrams $(b)$ and $(c)$ (recall \eqs{convoMethodDiagramb}{convoMethodDiagramc1} respectively) satisfy $\Delta_\omega^nF=\Delta_\omega^nG=0$ for $n>1$. This implies that we only get non-vanishing contributions by acting with at most two alien derivatives -- one for each factor in the convolution product. Still, that leaves us with quite a few options to consider, but as we will see momentarily, we can categorise all remaining options into a few classes. We discuss each of these classes case by case. Furthermore, we will see that not all cases contribute to the resurgence of higher sectors.
Let us first discuss the higher non-perturbative asymptotic sectors of diagram $(b)$. The non-vanishing alien derivatives acting on $F$ of that diagram are given by
\begin{equation}
\Delta_1F = \frac12(2\pi\mathrm{i})\,,
\qquad
\Delta_{2}F = -\frac16(2\pi\mathrm{i})\left(\frac{1}\alpha +\frac{17-12\gamma_E}{6}\right)\,,
\end{equation}
and
\begin{equation}
\Delta_nF = \frac{(-1)^n}{(2-n)(n+1)!(n-1)!}(2\pi\mathrm{i})\,,
\qquad n\geq3\,.
\end{equation}
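These closed forms can be checked directly: matching the $n\geq3$ formula above with the residue of $\mathcal{B}[F](u)$ at $u=n$, one finds $\Delta_nF/(2\pi\mathrm{i})$ in the sign conventions used here. A short sympy verification (our own check, not part of the derivation):
\begin{verbatim}
import sympy as sp

u = sp.symbols('u')
BF = sp.gamma(1 - u) / ((2 - u) * sp.gamma(2 + u))
for n in range(3, 9):
    res = sp.residue(BF, u, n)
    closed = sp.Integer(-1)**n / ((2 - n) * sp.factorial(n + 1)
                                  * sp.factorial(n - 1))
    assert sp.simplify(res - closed) == 0
\end{verbatim}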
First of all, using the convolution method we get an asymptotic contribution to the $(n,0)$, $n\geq3$, sector via:
\begin{equation}
F\Delta_n F\,.
\end{equation}
One can show that this gives a contribution to $\mathcal{B}[D_b^{(0,0)}](u)$ around $u=n$ proportional to
\begin{align}
&\frac{-6\Gamma(1+u)}{(1-u)\Gamma(3-u)}\mathcal{B}[F](u-n)\log\Big(1-\frac{u}{n}\Big)\nonumber\\
&\hspace{2cm}
=\frac{-6\Gamma(1+u)}{(1-u)\Gamma(3-u)}\frac{\Gamma(n+1-u)}{(n+2-u)\Gamma(2+u-n)}\log\Big(1-\frac{u}{n}\Big)\,.
\end{align}
The ratio of gamma functions is regular for $n\geq3$, and furthermore it cancels the pole at $u=1$. We therefore see that this asymptotic part of the $(n,0)$ sector is completely determined by the singularity at $u=n+2$, i.e. by the sector $(n+2,0)$.
Secondly, one gets contributions from acting with two alien derivatives:
\begin{equation}\label{eq:diagrambTwoAlienDerivatives}
\Delta_k\,F\Delta_{(n-k)}F\,.
\end{equation}
However, many of these cases do not contribute to the resurgence of diagram $(b)$, due to the prefactor in front of the convolution integral. That is, the terms in \eq{diagrambTwoAlienDerivatives} that lead to a simple pole drop out, as the
prefactor contains a factor $1/\Gamma(3-u)$ which has zeros at $u=n$ for $n\geq3$. As a result, the only non-vanishing contribution follows from the case $k=2$, as this leads to a double pole at $u=n$. In particular, this results in a contribution to $\mathcal{B}[D_b^{(0,0)}](u)$ at $u=n$ proportional to:
\begin{align}
\frac{-6\Gamma(1+u)}{(1-u)\Gamma(3-u)}\frac{1}{(n-u)^2}\,.
\end{align}
Combining the above observations leads to the contributions to the alien lattices shown in the second column of \tab{NLOsummary}.
\begin{table}[]
\renewcommand{\arraystretch}{2}
\centering
\vspace{-.7cm}
\makebox[\textwidth][c]{
\begin{tabular}{||c|c|c|c||}
\hline\hline
& \includegraphics[width=.18\textwidth]{figures/adlerNLO1.pdf} & \includegraphics[width=.18\textwidth]{figures/adlerNLO2.pdf} & \includegraphics[width=.18\textwidth]{figures/adlerNLO3.pdf}
\\
\hline\hline
\raisebox{1.5cm}{\rotatebox[]{90}{$(0,0)$}}
& \raisebox{.09\totalheight}{\includegraphics[width=.45\textwidth]{figures/alien_lattice_NLO_a-0-0.pdf}}
& \multicolumn{2}{|c||}{\includegraphics[width=.4\textwidth]{figures/alien_lattice_NLO_b-0-0.pdf}} \\
\hline
\raisebox{.8cm}{\rotatebox[]{90}{$(1,0)$}}
& \includegraphics[width=.3\textwidth]{figures/alien_lattice_NLO_a-1-0.pdf}
& \includegraphics[width=.3\textwidth]{figures/alien_lattice_NLO_b-1-0.pdf}
& \includegraphics[width=.3\textwidth]{figures/alien_lattice_NLO_a-1-0.pdf} \\
\hline
\raisebox{1cm}{\rotatebox[]{90}{$(0,1)$}}
& \includegraphics[width=.38\textwidth]{figures/alien_lattice_NLO_a-0-1.pdf}
& \multicolumn{2}{|c||}{\raisebox{1.7\totalheight}{\includegraphics[width=.07\textwidth]{figures/alien_lattice_NLO_b-0-1.pdf}}} \\
\hline
\raisebox{2.4\totalheight}{\rotatebox[]{90}{$(2,0)$}}
& \raisebox{.6\totalheight}{\includegraphics[width=.4\textwidth]{figures/alien_lattice_NLO_a-2-0.pdf}}
& \multicolumn{2}{|c||}{\includegraphics[width=.5\textwidth]{figures/alien_lattice_NLO_b-2-0.pdf}} \\
\hline
\raisebox{.15\totalheight}{\rotatebox[]{90}{$(0,2)$}}
& \includegraphics[width=.07\textwidth]{figures/alien_lattice_NLO_b-0-2.pdf}
& \multicolumn{2}{|c||}{\includegraphics[width=.07\textwidth]{figures/alien_lattice_NLO_b-0-2.pdf}} \\
\hline
\raisebox{.9\totalheight}{\rotatebox[]{90}{$(3,0)$}}
& \raisebox{1.1\totalheight}{\includegraphics[width=.07\textwidth]{figures/alien_lattice_NLO_a-3-0.pdf}}
& \raisebox{.5\totalheight}{\includegraphics[width=.2\textwidth]{figures/alien_lattice_NLO_b-3-0.pdf}}
& \includegraphics[width=.3\textwidth]{figures/alien_lattice_NLO_c-3-0.pdf} \\
\hline
\raisebox{.15\totalheight}{\rotatebox[]{90}{$(0,3)$}}
& -
& \multicolumn{2}{|c||}{\includegraphics[width=.07\textwidth]{figures/alien_lattice_NLO_b-0-3.pdf}} \\
\hline
\raisebox{.22\totalheight}{\rotatebox[]{90}{$(n,0)$}}
& \includegraphics[width=.07\textwidth]{figures/alien_lattice_NLO_a-4-0.pdf}
& \multicolumn{2}{|c||}{\includegraphics[width=.2\textwidth]{figures/alien_lattice_NLO_b-n-0.pdf} }\\
\hline
\raisebox{.15\totalheight}{\rotatebox[]{90}{$(0,n)$}}
& -
& \multicolumn{2}{|c||}{\includegraphics[width=.07\textwidth]{figures/alien_lattice_NLO_b-0-n.pdf}} \\
\hline\hline
\end{tabular}
}
\caption{Final summary of the relations between non-perturbative sectors at order $1/N_f^2$.}
\label{tab:NLOsummary}
\end{table}
The structure of higher sectors of diagram $(c)$ is similar to that of diagram $(b)$. The non-vanishing alien derivatives acting on $G$ are
\begin{equation}
\Delta_nG = \frac{n(-1)^n(2\pi\mathrm{i})}{(n+1)!(n-1)!}\,,
\qquad
n\geq1\,.
\end{equation}
Acting with a single alien derivative on the product $FG$, we get asymptotic contributions to the $(n,0)$ sectors ($n\geq3$) from
\begin{equation}
F\Delta_nG+G\Delta_nF\,.
\end{equation}
First, we consider $n=3$, where $G\Delta_3F$ yields a contribution to $\mathcal{B}[D_c^{(0,0)}](u)$ proportional to
\begin{equation}
\frac{\log(1-\frac{u}{3})}{8(2-u)}\,.
\end{equation}
Likewise, $F\Delta_3G$ yields a contribution proportional to
\begin{equation}
\left(\frac{1}{8(2-u)}+\frac{1}{4(5-u)}\right)\log(1-\frac{u}{3})\,.
\end{equation}
Notice in particular that the expansion of this around $u=2$ leads to part of the coefficient $\tilde{d}_0^{(2,0)[0]}$.
For $n>3$, the only non-vanishing contribution of the convolution integral to the resurgence of diagram $(c)$ coming from a single alien derivative is given by $F\Delta_nG$. This yields a contribution proportional to
\begin{equation}
\frac{\log(1-\frac{u}{n})}{(2+n-u)}\,.
\end{equation}
Similar to diagram $(b)$, when acting with two alien derivatives, one can show that only a double pole coming from the convolution integral contributes to the resurgence of diagram $(c)$. This means that the contribution has to come from $\Delta_2F\Delta_{n-2}G$, and we get a term proportional to
\begin{equation}
\frac{-6\Gamma(u)}{(1-u)(2-u)\Gamma(3-u)}\frac{1}{(n-u)^2}\,.
\end{equation}
We have added the resulting motions on the alien lattice in \tab{NLOsummary}.
\bigskip
\noindent
This brings us to the end of our resurgence analysis of diagrams $(a)$, $(b)$ and $(c)$. The results are summarized in \tab{NLOsummary}. Compared to the LO Adler function, at order $1/N_f^2$ the $(n,0)$ sectors (and also the $(0,1)$ sector for diagram $(a)$) are asymptotic sectors with their own resurgence relations towards further non-perturbative sectors. In particular we find sectors like $(n,m)$ with $n\neq0$ and $m\neq0$. We notice however that the asymptotic non-perturbative sectors only have resurgence relations to sectors that contain only one or two coefficients, quite similar to the asymptotic perturbative series of the LO Adler function.
We expect that at higher order in the flavour expansion, the asymptotic non-perturbative sectors will have resurgence towards full asymptotic sectors, thus yielding an alien lattice with more and more motions between sectors.
\section{Conclusion and outlook}
\label{sec:conclusion}
In this paper we have analyzed the transseries of the Adler function, which includes non-perturbative effects, to order $1/N_f^2$. In our description of the analysis we aimed to connect two communities: we wished to introduce particle physicists to the techniques of resurgence, but also wanted to show the resurgence community that in particle physics and phenomenology problems, many of the techniques and results known from toy models still appear, in addition to new phenomena.
Let us summarize our main findings.
First, our resurgence analysis of renormalon singularities in the Borel transform of the Adler function led us to construct a minimal, non-trivial {\em two-parameter} transseries, which -- in contrast to the $1/N_f$ Adler function -- has asymptotic coefficient growth in many of its non-perturbative sectors.
Second, we found that resurgent properties of the Adler function are best expressed using the calculus and lattice structure based on the alien derivative (as advocated in \cite{Aniceto:2018bis}). In particular, we introduced in \sec{convoInt} a way to study the alien structure of transseries including renormalons using a convolution technique. Applying this technique in \sec{adlerNLO} gave us a way to extract resurgence properties without having immediate access to the exact Borel transform of a given asymptotic expansion. Furthermore, from this technique we were able to extract locally the form of the Borel transformed result.
Third, while the exact Borel transform of the $\mathcal{O}(1/N_f)$ Adler function and its singularities has been known for a long time \cite{Broadhurst:1992si,Beneke:1992ch} (reviewed and rederived in \sec{adlerLO}), we constructed a complete resummed transseries expression \eqs{DLOIR2}{DLOUV2} for the Adler function at order $1/N_f$ in the original $\alpha$ variable.
At this order in $1/N_f$ there are only poles in the Borel plane: UV double poles at negative integer values of the (rescaled) Borel plane parameter $u$, and IR double poles at positive integer values, with the exceptions of an absent pole at $u=+1$ and the occurrence of a single pole at $u=+2$.
Our fourth finding is a more intricate singularity structure in the Borel plane at order $1/N_f^{2}$, involving logarithmic branch cuts.
In particular, using the Borel-Padé technique and a large order analysis of the coefficients that we computed, we found many of these branch cuts at IR singularities and a few at UV singularities. Of course, since we have only computed a limited number of diagrams, we cannot state with full certainty which singularities have branch cuts (and therefore which sectors have asymptotic expansions) in the full Adler function at all orders in $1/N_f$. However, we conjecture that all singularities in fact become of this sort -- except of course a potential IR singularity at $u=+1$ which on physical grounds (due to the absence of a corresponding operator in the OPE \cite{Parisi:1978az,Mueller:1984vh}) we expect to remain absent.
A fifth and final finding for the order $1/N_f^2$ transseries is that overlapping poles and branch point singularities in the Borel plane lead to $\log(\alpha)$ transmonomials. These factors were already anticipated in \cite{Beneke:1995qq}; we indeed find that they occur. These transmonomials indicate a new type of power correction that we also expect to persist at higher orders in $1/N_f$.
The picture that emerges from these findings is that the structure of our transseries, displayed in detail for order $1/N_f^2$ in \tab{NLOsummary}, is interestingly different from that occurring in many models studied in the literature. In particular, we have seen that for the Adler function, `multiple forward alien derivatives' such as $\Delta_2$ give nonzero results, whereas in most simple models that have a bridge equation, only $\Delta_1$ provides a nonvanishing `forward motion'. Furthermore, even though opposite non-perturbative exponentials $\pm A$ as well as logarithmic transmonomials appear in the Adler function transseries, the transseries turns out {\em not} to be resonant.
\bigskip
\noindent With our analysis and aim to start bridging the gap between the particle physics and resurgence communities, many open questions and opportunities for further research suggest themselves. Let us mention a few interesting directions.
To further verify the conjecture that for the full Adler function all non-perturbative sectors are asymptotic, one could take a closer look at specific diagrams at higher orders in $1/N_f$ and study their resurgence properties using the techniques developed in this paper.
A second interesting area for further research concerns the structure of the Adler transseries. We have seen that for our purposes, two parameters for this transseries suffice, but it is conceivable that one would in fact require more parameters. A related observation is that our Adler transseries has a nonvanishing `two steps forward' alien derivative $\Delta_2$, a `three steps forward' $\Delta_3$, etc. This may already hint at a situation where sectors like the $e^{-2A/\alpha}$ sector come with their own transseries parameters. Whether such parameters should be viewed as integration constants that need to be fixed, or as true non-perturbative expansion parameters, is an interesting open question (cf.\ \cite{Gu:2022fss}, where a similar interplay was observed in a different model). One way to answer such questions would be using a bridge equation, but unfortunately we do not have one at our disposal. Nevertheless, the case of two parameters being sufficient seems supported by our finding that the Adler function has only two Stokes lines and therefore two Stokes automorphisms.
In conclusion, the Adler function remains an interesting object to study using resurgence techniques. It is sufficiently complicated to show features not displayed by many simpler models, but is also amenable to further perturbative studies that might uncover some of the subtleties mentioned. We expect many more lessons and surprises to lie ahead.
\acknowledgments
\noindent
We thank Jos Vermaseren for assistance with \texttt{FORM} and \texttt{SUMMER} and Alexander van Spaendonck for useful discussions.
\newpage
\section{Introduction}\label{sec1}
One of the most well-known invariants of knots in 3-space is Fox's 3-colorability.
In general, for a prime $n \ge 3$, it is well-known that a knot is Fox $n$-colorable if and only if the knot group admits a surjective homomorphism to the dihedral group $D_n$ of degree $n$.
For instance, it is stated in \cite[Chap. VI, Exercises, 6, pp.92--93]{CrowellFox1963}.
However, this is not the case for links with two or more components.
Some examples are given in \cite{IchiharaMatsudo2022} for $D_3$-coloring, which is the coloring by the symmetric group of degree three.
In this paper, we introduce a two-tone coloring on a link diagram, and give a condition for links that the link groups admit surjective homomorphisms to the dihedral groups.
In particular, we show that the link group of any link with at least 3 components admits a surjective homomorphism to the dihedral group of arbitrary degree.
\begin{remark}
It is well-known that, for $n \ge 3$, a link is Fox $n$-colorable, i.e., a diagram of the link admits a non-trivial Fox $n$-coloring (a coloring with at least two colors), if and only if $\det (L) =0$ or $(n, \det(L)) \ne 1$, where $\det(L)$ denotes the determinant of the link.
See \cite[Proposition 2.1]{LopezMatias2012} for example.
A condition for knot groups to admit a surjective homomorphism to the dihedral groups, in terms of the homology of the double branched covering, is also known.
See \cite[14.8]{BurdeZieschang} for example.
\end{remark}
To state our results, we prepare some notations.
Let $D_n$ be the dihedral group of degree $n$.
It is well-known that $D_n$ has the following presentation with $e$ the identity element:
\[
D_n
=\left< a, b \mid a^2 = b^n = (ab)^2 = e \right> .
\]
Note that any element in $D_n$ is represented as $a^x b^y$ ($x=0,1$, $0 \le y \le n-1$).
Thus, by setting $a_i := a b^i$ ($0 \le i \le n-1$) and $b_j := b^j$ ($1 \le j \le n-1$),
we see that $D_n = \{ e, a_0, a_1, \dots , a_{n-1}, b_1, \dots, b_{n-1} \}$ as a set.
From a geometric viewpoint, the $a_i$'s represent reflections and the $b_j$'s rotations among the symmetries of a regular polygon ($n$-gon).
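This multiplication rule is easy to model explicitly. Writing $a^xb^y$ as a pair $(x,y)$ and using $ba=ab^{-1}$, a small Python sketch (our own illustration, not part of any proof) implements $D_n$ and verifies the defining relations:
\begin{verbatim}
# model a^x b^y in D_n as the pair (x, y), with x in {0,1} and y mod n
def mul(g, h, n):
    (x1, y1), (x2, y2) = g, h
    # b^{y1} a^{x2} = a^{x2} b^{(-1)^{x2} y1}, since b a = a b^{-1}
    return ((x1 + x2) % 2, ((-1)**x2 * y1 + y2) % n)

n = 5
a, b, e = (1, 0), (0, 1), (0, 0)
assert mul(a, a, n) == e                       # a^2 = e
ab = mul(a, b, n)
assert mul(ab, ab, n) == e                     # (ab)^2 = e
g = e
for _ in range(n):
    g = mul(g, b, n)
assert g == e                                  # b^n = e
\end{verbatim}
In this encoding, the reflections $a_i$ are the pairs $(1,i)$ and the rotations $b_j$ the pairs $(0,j)$.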
In the following, let $L$ be an oriented link in the 3-sphere $S^3$ with a link diagram $D$.
We call a map $\Gamma:\{ \mbox{arcs on $D$} \} \rightarrow D_n$ a {\it $D_n$-coloring} on $D$ if it satisfies $\Gamma(x) \Gamma(z) = \Gamma(y) \Gamma(x)$
(respectively, $\Gamma(z) \Gamma(x) = \Gamma(x) \Gamma(y)$) in $D_n$ at each positive (resp. negative) crossing on $D$, where $x$ denotes the over arc, and $y$ and $z$ the under arcs at the crossing, supposing that $y$ is the under arc before passing through the crossing and $z$ is the other. (See Figure~\ref{Fig/condition}.)
\begin{figure}[htbt]
\centering
{\unitlength=1mm
\begin{picture}(100,23)
\put(25,0){\includegraphics[width=.45\textwidth]{condition.pdf}}
\put(46,21.5){{\large $x$}}
\put(24,21.5){{\large $z$}}
\put(46,3){{\large $y$}}
\put(58,21.5){{\large $x$}}
\put(81,21.5){{\large $z$}}
\put(59,3){{\large $y$}}
\end{picture}}
\caption{The coloring conditions at crossings}\label{Fig/condition}
\end{figure}
\begin{remark}\label{rmk2}
The $D_n$-colorings and the Fox $n$-colorings are related in terms of representations of link groups to $D_n$ as follows.
For a link diagram $D$ with $c$ crossings of a link $L$,
set $g_1, \dots, g_c$ the Wirtinger generators of the link group $G_L$, i.e., $G_L = \pi_1 (S^3 - L)$.
Then a $D_n$-coloring on $D$ corresponds to a map $\{ g_1 , \dots , g_c \} \to D_n$ which extends to a homomorphism of $G_L$ to $D_n$.
When a $D_n$-coloring sends $g_k$'s to $a_i$'s (reflections, $0 \le i \le n-1$) in $D_n$, it induces a map $\{ \text{arcs of } D \} \to \{ 0, 1, \dots, n-1 \}$, which gives a Fox $n$-coloring.
\end{remark}
Let $\Gamma$ be a $D_n$-coloring on a link diagram $D$ of an oriented link $L$.
We say that $\Gamma$ is \textit{two-tone} if $\mathrm{Im} (\Gamma) \cap \{ a_i \} \ne \emptyset $ and $\mathrm{Im} (\Gamma) \cap \{ b_j \} \ne \emptyset $, that is, the coloring uses colors from both $\{ a_i \} $ and $\{ b_j \}$.
We say that a link is \textit{two-tone $D_n$-colorable} if, with some orientation, it has a diagram $D$ admitting a two-tone $D_n$-coloring.
Now the following are our main results.
Here $D_\infty$ denotes the group presented by $\left< a, b \mid a^2 = (ab)^2 = e \right>$, and
two-tone $D_\infty$-colorability for links is defined in the same way as above.
\begin{theorem}\label{Thm1}
For a 2-component link $L = \ell_1 \cup \ell_2$, the following are equivalent.
\begin{itemize}
\item[(i)] $lk( \ell_1, \ell_2)$ is even.
\item[(ii)] $L$ is two-tone $D_n$-colorable for some odd $n \ge 3$.
\item[(iii)] $L$ is two-tone $D_\infty$-colorable.
\item[(iv)] The link group $G_L$ admits a surjective homomorphism to $D_n$ for every $n \ge 3$.
\item[(v)] The link group $G_L$ admits a surjective homomorphism to $D_\infty$.
\end{itemize}
\end{theorem}
\begin{theorem}\label{Thm2}
Let $L = \ell_1 \cup \ell_2$ be a 2-component link with $lk( \ell_1,\ell_2)$ odd.
Then the following hold.
\begin{itemize}
\item[(i)] The link $L$ admits no two-tone $D_n$-colorings for any odd $n \ge 3$.
\item[(ii)]
If the link group $G_L$ admits a surjective homomorphism to $D_n$ for $n \ge 3$, then the homomorphism is induced from a Fox $n$-coloring on $\ell_1$, $\ell_2$ or $L$, i.e., the homomorphism sends a meridional element in $G_L$ to the trivial element or a reflection in $D_n$.
\end{itemize}
\end{theorem}
\begin{theorem}\label{Thm3}
Let $L$ be a link with at least 3 components.
Then the link group $G_L$ admits a surjective homomorphism to $D_n$ and $L$ is two-tone $D_n$-colorable for every $n \ge 3$.
\end{theorem}
We remark that (ii) in Theorem~\ref{Thm1} is also equivalent to the condition that $L$ is two-tone $D_n$-colorable for some even $n \ge 3$, except in the case that $L$ admits a $D_n$-coloring with only the color $b_{n/2}$.
We also remark that (iii) in Theorem~\ref{Thm1} does not imply that $L$ is two-tone $D_n$-colorable for every odd $n \ge 3$.
See Remark~\ref{rem3}.
For simplicity, we omit the details.
As a corollary, we have the following.
\begin{corollary}\label{CorG}
If a link $L$ is two-tone $D_m$-colorable for some odd $m$, then $G_L$ admits a surjective homomorphism to $D_n$ for every $n \ge 3$.
If $G_L$ admits a surjective homomorphism to $D_n$ for some $n$, then $L$ is two-tone $D_n$-colorable or contains a Fox $n$-colorable sub-link.
\end{corollary}
Note that even if a link admits a nontrivial Fox $n$-coloring, it may not induce a surjective homomorphism from $G_L$ to $D_n$.
See the example given in Section~\ref{sec4}.
On the other hand, even if a link $L$ is known to be two-tone $D_n$-colorable for some $n$, finding a two-tone $D_n$-coloring on a given diagram of $L$, or a surjective homomorphism of $G_L$ to $D_n$, is in general a tedious task.
The next proposition and its proof give a simple way to find a two-tone $D_n$-coloring on a given link diagram for any odd $n \ge 3$.
\begin{proposition}\label{prop1}
Suppose that there exists a trivial component $\ell_0$ of a link $L$ and, for every component $\ell \subset L - \ell_0$, $lk (\ell_0 , \ell)$ is even.
Then any diagram of $L$ admits a two-tone $D_n$-coloring for every odd $n \geq 3$ which assigns the arcs on $\ell_0$ to $a_i$'s and the other arcs to $b_j$'s.
\end{proposition}
\section{Properties of $D_n$-coloring}
In this section, we study some properties of $D_n$-colorings, and give lemmas which will be used in the next sections.
In the following, we set $A_n := \{ a_i \}$ and $B_n:=\{ b_j \}$ for $D_n$.
\begin{lemma}\label{lem21}
Let $\Gamma$ be a $D_n$-coloring on a diagram $D$ of an oriented link $L$ in $S^3$.
At a crossing on $D$, let $x$ denote the over arc and $y$ and $z$ the under arcs, where $y$ is the under arc before passing through the crossing and $z$ is the other.
Then the following hold.
\begin{enumerate}
\item
If $\Gamma(x) \in B_n$ and $\Gamma(y) \in B_n$, then $\Gamma(z) = \Gamma (y)$.
\item
If $\Gamma(x) = a_i$ and $\Gamma(y) = a_{i'}$, then $\Gamma(z) = a_k$ and $ k \equiv 2 i - i' \pmod n$.
\item
If $\Gamma(x) = a_i$ and $\Gamma(y) = b_j$, then $\Gamma(z) = b_k$ and $ k \equiv n-j \pmod n$.
\item
If $\Gamma(x) = b_j$ and $\Gamma(y) = a_i$, then $\Gamma(z) = a_k$ and $ k \equiv i + 2j $ (resp. $ k \equiv i - 2j$) $\pmod n$ if the crossing is a positive (resp. negative) crossing.
\end{enumerate}
\end{lemma}
\begin{proof}
We give a proof for case (3) when the crossing is a positive crossing.
The other cases are proved in the same way.
Suppose that $\Gamma(x) = a_i$ and $\Gamma(y) = b_j$.
By the definition of a $D_n$-coloring, we have the following.
\begin{align*}
\Gamma(z) &= (a_i)^{-1} b_j a_i = b^{n-i} a^{-1} b^j a b^i \\
&= a b^{i+j-n} a b^i = b^{n-i-j+i} = b^{n-j} = b_{n-j}.
\end{align*}
Thus $ \Gamma(z) = b_k$ and $ k \equiv n-j \pmod n$ holds.
\end{proof}
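The four cases of the lemma can also be verified numerically. Assuming the \texttt{mul} and \texttt{inv} helpers sketched in the introduction, a quick check of cases (3) and (4) reads:
\begin{verbatim}
# Brute-force check of the lemma above, using the mul/inv helpers.
# At a positive crossing, Gamma(z) = Gamma(x)^{-1} Gamma(y) Gamma(x).
def under_out(x, y, n, positive=True):
    return (mul(mul(inv(x, n), y, n), x, n) if positive
            else mul(mul(x, y, n), inv(x, n), n))

n = 7
for i in range(n):
    for j in range(1, n):
        # case (3): x = a_i, y = b_j  ->  z = b_{n-j}
        assert under_out((1, i), (0, j), n) == (0, n - j)
        # case (4), positive: x = b_j, y = a_i  ->  z = a_{i+2j}
        assert under_out((0, j), (1, i), n) == (1, (i + 2 * j) % n)
\end{verbatim}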
\begin{lemma}\label{lem22}
Let $\Gamma$ be a $D_n$-coloring on a diagram $D$ of an oriented link $L$ in $S^3$.
Let $x,y,z,w$ be the arcs depicted in Figure~\ref{Fig22}.
If $\Gamma(x) = a_i$ and $\Gamma(y) = b_j$, then $\Gamma(z) = a_k$ with $ k \equiv i-2j \pmod n$ and $\Gamma(w) = b_l$ with $ l \equiv n-j \pmod n$.
\end{lemma}
\begin{figure}[htbt]
{\unitlength=1mm
\begin{picture}(50,17)
\put(16,0){
\includegraphics[width=.1\textwidth]{Fig2.pdf} }
\put(15,0){$x$}
\put(30,0){$y$}
\put(15,17){$z$}
\put(30,17){$w$}
\put(17,7){$u$}
\put(28,7){$v$}
\end{picture}}
\caption{A full twist, i.e., the crossings have the same sign}\label{Fig22}
\end{figure}
\begin{proof}
We only give a proof for the case that both crossings in the figure are positive crossings.
A proof for the other case is similar.
In that case, by Lemma~\ref{lem21}(3), $\Gamma(u) = b_l$ with $ l \equiv n-j \pmod n$ and $\Gamma(v) = \Gamma(x) = a_i$.
Then, by Lemma~\ref{lem21}(4), $\Gamma(z) = a_k$ and $ k \equiv i + 2(n-j) \equiv i -2j \pmod n$ and $\Gamma(w) = \Gamma(u) = b_l$ with $ l \equiv n-j \pmod n$.
\end{proof}
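Continuing with the same helpers, the full-twist rule can be confirmed by composing the two crossing rules, e.g.\ for $n = 7$:
\begin{verbatim}
# Check of the full-twist rule above (two positive crossings).
n, i, j = 7, 3, 2
x, y = (1, i), (0, j)             # x = a_i, y = b_j
u = under_out(x, y, n)            # first crossing: y passes under x
z = under_out(u, x, n)            # second crossing: x passes under u
assert u == (0, n - j)            # u (and hence w) = b_{n-j}
assert z == (1, (i - 2 * j) % n)  # z = a_{i-2j}
\end{verbatim}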
\section{Two-tone colorings and surjective homomorphisms to $D_\infty$}
In this section, we study some conditions for two-tone $D_n$-colorings and surjective homomorphisms to $D_\infty$.
Let $lk(L,L')$ denote the (total) linking number of oriented links $L, L'$, i.e., $lk(L,L') = \sum_{\ell \subset L, \ell' \subset L'} lk(\ell, \ell')$.
In the following, the linking number is calculated for the link with arbitrarily chosen orientations.
Note that the parity of such a linking number is independent of the choice of orientations.
\begin{lemma}\label{lem1}
If $L$ is two-tone $D_n$-colorable for some odd $n \ge 3$, then there exists a sub-link $L_b$ of $L$ such that, for every component $\ell \subset L_b$, $lk (\ell , L - L_b)$ is even.
\end{lemma}
\begin{proof}
Take a two-tone $D_n$-coloring $\Gamma$ on a diagram of $L$ for some odd $n \ge 3$.
Let $L_b$ be a maximal sub-link of $L$ such that each arc in the corresponding sub-diagram is colored by $b_j$'s under $\Gamma$.
By the definition of the $D_n$-coloring, $L_b$ is well-defined for $\Gamma$, independent of the choice of a diagram.
Now we consider the diagram of $L$ depicted in Figure~\ref{Fig31}, where $D_b$ is a sub-diagram corresponding to $L_b$, $D_a$ is the remaining sub-diagram, and each box between $D_a$ and $D_b$ contains a full-twist.
For the sub-diagram $D_\ell$ corresponding to a component $\ell \subset L_b$, we consider the arcs $\beta$ and $\beta'$ as in Figure~\ref{Fig31}.
\begin{figure}[htbt]
{\unitlength=1mm
\begin{picture}(100,50)
\put(20,0){
\includegraphics[width=.5\textwidth]{Fig3.pdf}}
\put(27,23){\large $D_a$}
\put(70,38){\large $D_b$}
\put(68,20){\large $D_\ell$}
\put(52,1){$\beta$}
\put(52,31){$\beta'$}
\end{picture}}
\caption{The diagram of $L$, where each box between $D_a$ and $D_b$ contains a full twist}\label{Fig31}
\end{figure}
Since $\beta$ and $\beta'$ are connected in $D_b$, we see $\Gamma(\beta) = \Gamma(\beta')$ by Lemma~\ref{lem21}(1).
On the other hand, if $\Gamma(\beta) = \Gamma(\beta')$, then the number $N$ of boxes (full-twists) that $\ell$ runs through has to be even by Lemma~\ref{lem22}, together with the fact that $n$ is odd.
This number $N$ is equal to $lk (\ell , L - L_b)$, and so the lemma holds.
\end{proof}
The next is a key proposition to prove the theorems.
\begin{proposition}\label{prop}
Suppose that a link $L$ contains a component $\ell_0$ and a sub-link $L'$ such that $lk(\ell_0, L')$ is even and $\det L' \ne 0$.
Then $G_L$ admits a surjective homomorphism to $D_\infty$ and $L$ is two-tone $D_\infty$-colorable.
\end{proposition}
\begin{proof}
We assume $L = \ell_0 \cup L'$; if $L$ has other components, we can obtain a desired representation of $G_L$ from one of $G_{\ell_0 \cup L'}$ by sending the meridians of the remaining components to the identity.
\par Let $p: X \to S^3 - L'$ be the cyclic double covering determined by the total linking number with $L'$, and $\bar{p}: M \to S^3$ the corresponding cyclic double branched covering.
Let $\tilde{K} = K_1 \cup K_2$ denote the inverse image $p^{-1}(\ell_0) \subset X$; because $lk(\ell_0, L')$ is even, $\tilde{K}$ is a $2$-component link in $X$ (or in $M$).
We shall construct a surjective group homomorphism $\pi_1(M - \tilde{K}) \to \mathbb{Z}$ and extend the composition $\pi_1(X - \tilde{K}) \to \pi_1(M - \tilde{K}) \to \mathbb{Z}$ to obtain a $D_\infty$-coloring.
\par Taking a regular neighborhood $N$ of $\tilde{K}$, we consider the Mayer-Vietoris exact sequence for $M = N \cup (M - \tilde{K})$:
$$H_2(M) \to H_1(N - \tilde{K}) \to H_1(N) \oplus H_1(M - \tilde{K}) \to H_1(M) \to H_0(N - \tilde{K})$$
is exact.
The rightmost map is zero as usual, and the leftmost one is also zero: since $\det L' \neq 0$, we have $|H_1(M)| = |\det(L')| < \infty$, so $H_2(M) \cong H^1(M; \mathbb{Z}) = 0$ by Poincar\'{e} duality.
Thus, we obtain a short exact sequence
$$0 \to H_1(N - \tilde{K}) \to H_1(N) \oplus H_1(M - \tilde{K}) \to H_1(M) \to 0.$$
Take a meridional disc $D_1 \subset N$ of $K_1$ and let $D_2$ denote $\varphi(D_1)$, where $\varphi: M \to M$ is the nontrivial covering transformation. We denote $D_1 \cup D_2$ by $\tilde{D}$.
Because the kernel of the surjective homomorphism $H_1(N - \tilde{K}) \to H_1(N)$ is the image of the injective map $H_1(\partial\tilde{D}) \to H_1(N - \tilde{K})$, the short exact sequence above shows that
\begin{equation}\label{es1}
0 \to H_1(\partial\tilde{D}) \to H_1(M - \tilde{K}) \to H_1(M) \to 0
\end{equation}
is also exact.
We should remark that (\ref{es1}) is compatible with $\varphi_*$.
\par Let $x \in H_1(\partial D_1)$ be a generator and set $y = \varphi_*(x) \in H_1(\partial D_2)$. We use the same symbols $x, y$ for their images in $H_1(\partial\tilde{D})$ or $H_1(M - \tilde{K})$. By taking the quotient of (\ref{es1}) by the $\varphi_*$-invariant part of $H_1(\partial\tilde{D})$, we obtain an exact sequence
$$0 \to H_1(\partial\tilde{D})/(x+y) \to H_1(M - \tilde{K})/(x+y) \to H_1(M) \to 0.$$
Since $H_1(\partial\tilde{D})/(x+y) \cong \mathbb{Z}$ and $|H_1(M)| < \infty$, the rank of $H_1(M - \tilde{K})/(x+y)$ equals $1$ and there exists a surjective homomorphism $f: H_1(M - \tilde{K})/(x+y) \to \mathbb{Z}$, which satisfies $f \circ \varphi_* = -f$. Let $\bar{f}: \pi_1(X - \tilde{K}) \to \mathbb{Z}$ denote the composition
$$\pi_1(X - \tilde{K}) \to \pi_1(M - \tilde{K}) \to H_1(M - \tilde{K}) \to H_1(M - \tilde{K})/(x+y) \to \mathbb{Z}.$$
\par Let $m \in G_L$ be a meridian of a component of $L'$. Identifying $\langle b \rangle \subset D_\infty$ with $\mathbb{Z}$, we define $\tilde{f}: G_L \to D_\infty$ by
$$\tilde{f}(g) = \left\{ \begin{array}{ll} \bar{f}(g) & (g \in \pi_1(X - \tilde{K})),\\ a \bar{f}(m^{-1} g) & (g \not\in \pi_1(X - \tilde{K})). \end{array} \right.$$
Since $a^2 = e$, $\tilde{f}$ is well defined as a map.
Furthermore, we have $\bar{f}(mgm^{-1}) = f \circ \varphi_* (g) = f(g)^{-1} = \bar{f}(g)^{-1} \in D_\infty$ for $g \in \pi_1(X - \tilde{K})$. By this equation, we can easily check that $\tilde{f}$ is a group homomorphism.
Because $\bar{f}$ is surjective and $\tilde{f}(m) = a$, the homomorphism $\tilde{f}: G_L \to D_\infty$ is surjective.
\end{proof}
The next is an immediate corollary of the proposition above, since any knot has odd determinant.
\begin{corollary}\label{cor3}
Let $L = \ell_1 \cup \ell_2$ be a $2$-component link.
If $lk( \ell_1, \ell_2)$ is even, then $G_L$ admits a surjective homomorphism to $D_\infty$ and $L$ is two-tone $D_\infty$-colorable. \qed
\end{corollary}
\section{Proof of theorems}\label{sec4}
To prove the theorems, we first prepare the next lemma.
\begin{lemma}\label{lem3}
Let $L = \ell_1 \cup \ell_2$ be a $2$-component link.
If $\det (L) =0$, then $lk( \ell_1, \ell_2)$ is even.
\end{lemma}
\begin{proof}
Let $D$ be a diagram of $L$.
Since $\det(L) = 0$, there exists a Fox $4$-coloring $\Gamma$ on $D$ which induces a surjective group homomorphism $G_L \to D_4$.
By the definition of Fox colorings, if $\Gamma(x)$ equals $a_0$ or $a_2$ (resp. $a_1$ or $a_3$) for an arc $x$ belonging to $\ell_i \; (i = 1,2)$, then the same holds for every arc of $\ell_i$.
Then, we may assume
$$\Gamma(\{\text{arcs of $\ell_1$}\}) \subset \{a_1, a_3\} \quad \text{and} \quad \Gamma(\{\text{arcs of $\ell_2$}\}) \subset \{a_0, a_2\}.$$
For a crossing point of $D$, let $x$ be the over arc and $y, z$ the under arcs.
Again by the definition of Fox colorings, we find that $\Gamma(y) = \Gamma(z)$ holds if and only if $x$ and $y$ belong to the same component.
In particular, the crossing switches the color of the under arc if $x$ belongs to $\ell_1$ and $y$ to $\ell_2$.
This implies that $D$ has an even number of such crossings, and hence the linking number $lk(\ell_1, \ell_2)$ is even.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm1}]
Let $L = \ell_1 \cup \ell_2$ be a 2-component link.
We show that conditions (i)--(v) in the statement of the theorem are equivalent.
We see that (i)$\Rightarrow$(iii) follows from Corollary~\ref{cor3} and (ii)$\Rightarrow$(i) follows from Lemma~\ref{lem1}.
\medskip
\noindent
\underline{(iii)$\Rightarrow$(ii):}
Suppose that $L$ is two-tone $D_\infty$-colorable, that is, a diagram of $L$ admits a two-tone $D_\infty$-coloring.
Since there is a surjection from $D_\infty$ to $D_n$ for every $n \ge 3$ defined by $a \in D_\infty \mapsto a \in D_n$ and $b \in D_\infty \mapsto b \in D_n$, this implies that the diagram of $L$ admits a $D_n$-coloring for every $n$.
By taking odd $n$ sufficiently large, the $D_n$-coloring uses at least two colors among the $a_i$'s.
Furthermore, retaking $n$ if necessary so that $(n, \det(L)) = (n, \det(\ell_1)) = (n, \det(\ell_2)) = 1$, the coloring cannot come from a Fox $n$-coloring on $L$, $\ell_1$ or $\ell_2$. Thus the coloring has to be two-tone, and so $L$ is two-tone $D_n$-colorable for some odd $n \ge 3$.
\medskip
We also see that (i)$\Rightarrow$(v) follows from Corollary~\ref{cor3}.
\medskip
\noindent
\underline{(v)$\Rightarrow$(iv):}
By a surjection from $D_\infty$ to $D_n$ for every $n \ge 3$ defined as above, if the link group $G_L$ admits a surjective homomorphism to $D_\infty$, then the link group $G_L$ admits a surjective homomorphism to $D_n$ for every $n \ge 3$.
\medskip
\noindent
\underline{(iv)$\Rightarrow$(i) or (ii):}
Suppose that the link group $G_L$ admits a surjective homomorphism to $D_n$ for every $n \ge 3$.
Such a surjective homomorphism induces a $D_n$-coloring on a diagram of $L$ for $n \ge 3$ by considering the Wirtinger generators for the diagram.
If $\det (L) =0$, then $lk( \ell_1, \ell_2)$ is even by Lemma~\ref{lem3}, and so (i) holds.
If $\det (L) \ne 0$, then for some odd $n$ coprime to $\det(L)$, $\det(\ell_1)$ and $\det(\ell_2)$, the $D_n$-coloring does not come from a Fox $n$-coloring, and so it has to be two-tone.
This implies (ii).
\end{proof}
\begin{remark}\label{rem3}
We remark that (iii) in Theorem~\ref{Thm1} does not imply that $L$ is two-tone $D_n$-colorable for every odd $n \ge 3$.
Actually even if there is a two-tone $D_\infty$-coloring on a diagram of a link $L$, the coloring may not give a two-tone $D_n$-coloring, but a Fox $n$-coloring on a sub-diagram of $L$.
Moreover, there exists a two-tone $D_\infty$-colorable link $L$ that admits no two-tone $D_n$-colorings for any $n \ge 3$.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{Thm2}]
Let $L = \ell_1 \cup \ell_2$ be a 2-component link with $lk( \ell_1,\ell_2)$ odd.
\noindent
(i) The link $L$ admits no two-tone $D_n$-colorings for any odd $n \ge 3$ by Theorem~\ref{Thm1} (by the contraposition of (ii)$\Rightarrow$(i)).
\noindent
(ii)
By (i), if the link group $G_L$ admits a surjective homomorphism to $D_n$ for $n \ge 3$, then it is not induced from two-tone $D_n$-colorings.
That is, the homomorphism must send the Wirtinger generators either to the trivial element and reflections in $D_n$, or to the trivial element and rotations in $D_n$.
However, the latter cannot be surjective, and so it is impossible.
Therefore the homomorphism sends the Wirtinger generators to the trivial element and reflections in $D_n$.
Such a homomorphism is induced from a Fox $n$-coloring on $\ell_1$, $\ell_2$ or $L$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm3}]
Let $L$ be a link with at least 3 components.
We show that $G_L$ admits a surjective homomorphism to $D_n$ and $L$ is two-tone $D_n$-colorable for every $n \ge 3$.
Consider sub-links of 2 components in $L$.
If some of them, say $L' = \ell'_1 \cup \ell'_2$, satisfies that $lk (\ell'_1, \ell'_2)$ is even, then, by Theorem~\ref{Thm1}, $G_{L'}$ admits a surjective homomorphism to $D_n$ and $L'$ is two-tone $D_n$-colorable for every $n \ge 3$; then so does $G_L$, via a surjection $G_L \to G_{L'}$, and $L$ is two-tone $D_n$-colorable as well.
Suppose that for all the 2 component sub-links of $L$, the linking numbers of the two components are odd.
Then, by Lemma~\ref{lem3}, no such links have the determinant 0.
Since $L$ has at least 3 components, we can consider a sub-link of $L$ with 3 components, say $L' = \ell_1 \cup \ell_2 \cup \ell_3$.
For this link, $lk (\ell_1, \ell_2 \cup \ell_3)$ is even, since $lk(\ell_1, \ell_2)$ and $lk(\ell_1, \ell_3)$ are both odd, and $\det ( \ell_2 \cup \ell_3) \ne 0$ holds as noted above.
Then, by Proposition~\ref{prop}, $G_{L'}$ admits a surjective homomorphism to $D_\infty$ and so a surjective homomorphism to $D_n$, and $L'$ is two-tone $D_n$-colorable for every $n \ge 3$.
This implies that $G_L$ admits a surjective homomorphism to $D_n$ and $L$ is two-tone $D_n$-colorable for every $n \ge 3$.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{CorG}]
Suppose that $L$ is two-tone $D_m$-colorable for some odd $m \ge 3$.
If $L$ has 2 components, then $G_L$ admits a surjective homomorphism to $D_n$ for every $n \ge 3$ by Theorem~\ref{Thm1} ((ii)$\Rightarrow$(iv)).
If $L$ has at least 3 components, then $G_L$ admits a surjective homomorphism to $D_n$ for every $n \ge 3$ by Theorem~\ref{Thm3}.
Suppose that $G_L$ admits a surjective homomorphism to $D_n$ for $n \ge 3$.
Then there is a $D_n$-coloring on a diagram of $L$.
See Remark~\ref{rmk2}.
If the coloring is two-tone, then $L$ is two-tone $D_n$-colorable.
Otherwise, since the homomorphism is surjective, the coloring comes from a nontrivial Fox $n$-coloring on a diagram of a sub-link of $L$ as in the proof of Theorem~\ref{Thm2}.
\end{proof}
Note that even if a link admits a nontrivial Fox $n$-coloring, it may not induce a surjective homomorphism from $G_L$ to $D_n$.
See the example illustrated in Figure~\ref{Fig4}.
In this case, the image of the Wirtinger generators under the homomorphism induced by the Fox $4$-coloring is the set $\{ a_0, a_2 \} \subset D_4$, but the elements $a_0$ and $a_2$ do not generate $D_4$.
Thus the induced homomorphism is not surjective.
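This failure of surjectivity is also easy to confirm computationally; assuming the \texttt{mul} helper sketched in the introduction, closing $\{a_0, a_2\}$ under multiplication in $D_4$ yields a proper subgroup:
\begin{verbatim}
n = 4
gens = [(1, 0), (1, 2)]  # a_0 and a_2 in D_4
subgroup = set(gens)
while True:  # close under multiplication (reflections are involutions)
    new = {mul(g, h, n) for g in subgroup for h in subgroup} | subgroup
    if new == subgroup:
        break
    subgroup = new
print(len(subgroup))  # 4 < 8 = |D_4|: a_0 and a_2 do not generate D_4
\end{verbatim}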
\begin{figure}[htbt]
{\unitlength=1mm
\begin{picture}(90,40)
\put(25,0){
\includegraphics[width=.3\textwidth]{linkL4a1.pdf} }
\put(30,26){$0$}
\put(58,26){$0$}
\put(50,2){$2$}
\put(50,34){$2$}
\end{picture}}
\caption{Fox 4-colorable link}\label{Fig4}
\end{figure}
\section{Finding two-tone colorings}
\begin{proof}[Proof of Proposition~\ref{prop1}]
Suppose that there exists a trivial component $\ell_0$ of a link $L$ and, for every component $\ell \subset L - \ell_0$, $lk (\ell_0 , \ell)$ is even.
If a diagram of a link $L$ admits a two-tone $D_n$-coloring for every odd $n \geq 3$ which assigns the arcs on $\ell_0$ to $a_i$'s and the other arcs to $b_j$'s, then so does any diagram of $L$.
Thus, to prove the proposition, it suffices to show that a particular diagram of $L$ admits such a $D_n$-coloring.
Now we take a diagram $D$ of $L$ depicted in Figure~\ref{Fig511}.
In the figure, $D_0$ is a sub-diagram corresponding to $\ell_0$, which is a trivial knot diagram, and each box between $D_0$ and the remaining sub-diagram $D_b$ contains a full-twist.
\begin{figure}[htbt]
{\unitlength=1mm
\begin{picture}(100,55)
\put(20,0){
\includegraphics[width=.4\textwidth]{Fig5.pdf}}
\put(32,0){$\alpha$}
\put(16,25){$D_0$}
\put(62,30){$D_b$}
\end{picture}}
\caption{The diagram $D$; each box in the center contains a full twist}\label{Fig511}
\end{figure}
Consider the arc $\alpha$ in the figure, take an arc $\beta_i$ from each component of $L - \ell_0$, and assign $a_0$ to $\alpha$ and $b_1$ to $\beta_i$'s.
Let us show that this assignment induces a two-tone $D_n$-coloring.
For the arc $\beta_i$, let $\ell$ be the component of $L-\ell_0$ containing $\beta_i$.
Since $lk (\ell_0 , \ell)$ is even for every component $\ell \subset L - \ell_0$, by Lemma~\ref{lem22}, the assignment of $b_1$ to $\beta_i$ induces a $D_n$-coloring on $\ell$.
In the same way, we can find a $D_n$-coloring on $L-\ell_0$.
Note that, on the sub-diagram corresponding to each component of $L-\ell_0$,
an arc in the lower right of a central box is colored by $b_1$ or $b_{n-1}$.
Besides, when the arc in the lower right is colored by $b_1$, the arc in the upper right is colored by $b_{n-1}$, and vice versa.
Thus, by Lemma~\ref{lem22}, for each component of $L - \ell_0$, the number of central boxes with the arc in the lower right colored by $b_1$ is equal to the number of those with that arc colored by $b_{n-1}$.
Let $m$ be half of the linking number $lk(\ell_0, L - \ell_0)$.
(Note that $lk(\ell_0, L - \ell_0)$ must be even, since $lk (\ell_0 , \ell)$ is even for each component $\ell$ of $L - \ell_0$.)
Then the number of the boxes in center with the arc in the lower right colored by $b_1$ is $m$ and the number of those with the arc colored in $b_{n-1}$ is also $m$.
Again by Lemma~\ref{lem22}, the assignment for $\alpha$ to be $a_0$ induces the assignment for the arc in the upper left of the top box in center to be $a_{0 - 2 ( m \cdot 1 + m \cdot (-1) )} = a_0$.
This implies that the assignment induces a $D_n$-coloring on the whole diagram.
By construction, the $D_n$-coloring is obviously two-tone.
Thus any diagram of $L$ admits a two-tone $D_n$-coloring for every odd $n \geq 3$ which assigns the arcs on $\ell_0$ to $a_i$'s and the other arcs to $b_j$'s.
\end{proof}
\section{Introduction}
Iron \glspl{NP} are widely used for catalytic purposes, e.g., for the hydrogen evolution reaction (HER)~\cite{Cilpa-Karhu2019,Ahsan2020}, the oxygen reduction reaction (ORR)~\cite{Ahsan2020} or light olefin synthesis~\cite{TorresGalvis2012,Gu2020}.
To enable an economy not relying on crude oil for energy or base chemicals, the development of cost-effective and scalable catalysts for these reactions is crucial. Non-precious catalysts such as iron \glspl{NP} are of special interest because of their high reactivity and low cost compared to Pt- and Pd-based catalysts.
To study the atomic-level processes of these reactions in detail, while considering a wide variety of \glspl{NP} and active sites, an accurate but also computationally cheap model is necessary. However, among the existing models, \gls{DFT} is too expensive for such comprehensive studies and the different classical interatomic potentials lack accuracy and general applicability.
Early on, many \gls{EAM} potentials were developed for general application to \gls{bcc} $\alpha$-Fe~\cite{Mendelev2003,FinnisSinclair,Dragoni2018EAM,Partay2018}. These were followed by a \gls{BOP}~\cite{Erhart2007}, even able to describe magnetic interactions~\cite{Mrovec2011} and, to some extent, \gls{fcc} $\gamma$-Fe.
Later, potentials for special purposes were created, for instance to study radiation defects~\cite{Alexander2020} or the conditions in the Earth's core~\cite{Mendelev2022}.
Recently, different flavors of \gls{ML} potentials~\cite{deringer_2019} have been trained for iron, including neural network potentials~\cite{Cian2021,Meng2021} and \glspl{GAP}~\cite{Dragoni2018GAP,Zhang2022}.
Some of the potentials mentioned above claim general applicability, but are not truly general, as they are not applicable to all crystal phases, surfaces, \glspl{NP} and disordered structures (including the liquid). Rather, they are typically designed to describe a wide range of properties of $\alpha$-Fe, with no guarantee of transferability outside of this range. In this work, we present a new \gls{GAP} \gls{MLP} trained on a much wider range of structures for true general applicability and transferability across a wide range of problems in atomistic modeling of iron.
The results are compared to the \gls{GAP} potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP}, as the current state of the art, and the \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003}, as one of the most used classical alternatives with a lower computational cost.
We showcase the ability of our \gls{GAP} to accurately describe the \gls{PES} of crystalline and nanostructured iron, including a reasonable description of different surfaces and phase transformations at extreme thermodynamic conditions. The highlight application of this paper is the search for stable iron \glspl{NP} of different sizes, a task for which our \gls{GAP} achieves accuracy remarkably close to that of \gls{DFT}. We hope that this work will speed up the discovery of efficient iron-based nanocatalysts.
\section{Database generation}
There are three critical steps in training a GAP: 1) training database generation, 2) selection of model architecture and hyperparameters, including the choice of atomic descriptors and data regularization, and 3) the computation of the fitting coefficients. These have been covered in detail in the literature, and we refer the reader to Refs.~\cite{bartok_2015,deringer_2021} for an in-depth discussion. Here, we will only give a brief account of the technical ingredients of our \gls{GAP} and focus mostly on accuracy benchmarks and applications.
Ensuring the accuracy and transferability of a \gls{GAP}, or any other \gls{MLP}, for that matter, relies on the availability of a database of atomic structures covering the relevant regions of configuration space. For a general-purpose \gls{MLP}, this means that comprehensive sampling of the \gls{PES} needs to be done. Our iron database contains dimers and trimers, crystalline structures (\gls{bcc}, \gls{fcc}, \gls{hcp}, \gls{sc} and diamond) over a wide range of cell parameters and with ``rattled'' atomic positions (i.e., atoms slightly displaced about their equilibrium positions), transitional structures between \gls{bcc}-\gls{fcc} and \gls{bcc}-\gls{hcp}, surface slabs cleaved from the relaxed bulk structures, \glspl{NP} and liquid configurations.
For each structure, the magnetic configuration with the lowest energy was chosen for inclusion in the database. In this way, our \gls{GAP} is fitted to the \gls{DFT} ground state with regard to the magnetic degrees of freedom, which are otherwise not explicitly taken into account in our \gls{PES} description.
Detailed types and numbers of structures in our training database are given in \tab{tab_s:db_composition} of the \gls{SM}.
The energy, forces and virials for the atomic structures in our training database were computed at the \gls{DFT} level of theory using VASP~\cite{Kresse1993,Kresse1996a,Kresse1996b}. We used the PBE functional~\cite{perdew_1996} with standard PAW pseudopotentials~\cite{bloechl_1994,Kresse1999} for Fe (with 8 valence electrons, $4s^2 3d^6$). The kinetic energy cutoff for plane waves was set to $400$~eV and the energy threshold for convergence was $10^{-7}$~eV. All the \gls{DFT} calculations were carried out with spin polarization, which can describe collinear magnetism. While non-collinear magnetic effects can in principle be described in VASP, the gain in accuracy in the context of \gls{MLP} simulation is only modest compared to the increased CPU cost and difficulty to systematically converge thousands of individual calculations in a high-throughput setting.
On this database, we trained our \gls{GAP} with $2$-body, $3$-body and many-body \gls{SOAP}~\cite{Bartok2013,Caro2019} atomic descriptors using a cutoff of $5$~\AA{}, $3$~\AA{} and $5$~\AA{}, respectively. A ``core'' potential, a tabulated pairwise interaction at very short interatomic distances, was added to model the strongly repulsive regime down to $0.1$~\AA{}.
The number of sparse configurations and the regularization parameter were both chosen per configuration type, and are listed in \tab{tab_s:db_composition} of the \gls{SM}. The training was carried out with the QUIP/GAP codes~\cite{csanyi_2007,ref_quip}. The full command passed to the \texttt{gap\_fit} binary is given in the \gls{SM}. We again refer the reader to the literature for further details on GAP training~\cite{bartok_2015,deringer_2021}.
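For completeness, evaluating the trained potential from Python can be sketched as follows; the file name \texttt{iron\_gap.xml} is a placeholder, and we assume the \texttt{quippy} bindings distributed with QUIP are installed.
\begin{verbatim}
# Minimal usage sketch (hypothetical file name "iron_gap.xml").
from ase.build import bulk
from quippy.potential import Potential

gap = Potential(param_filename="iron_gap.xml")  # load the trained GAP
atoms = bulk("Fe", "bcc", a=2.83, cubic=True)   # 2-atom bcc cell
atoms.calc = gap
print(atoms.get_potential_energy())             # eV
print(atoms.get_forces())                       # eV/Angstrom
\end{verbatim}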
\section{GAP validation}
In this section we validate our \gls{GAP} against a wide range of simulation problems and compare it to existing potentials. We first motivate the need and usefulness of a general-purpose \gls{MLP} for iron. We then benchmark the \gls{GAP} for the description of bulk iron, phase transitions, elastic properties and surface calculations.
\subsection{General-purpose vs bcc-specific iron potential}
While ferromagnetic \gls{bcc} is the ground-state structure of bulk iron at room temperature and pressure, iron transitions to other stable structures as the thermodynamic conditions change. In addition, surfaces cleaved from the bulk look different depending on the bulk crystal structure. Nanostructured iron, in particular \glspl{NP}, will not necessarily have a \gls{bcc} or, for that matter, ordered structure, even at room temperature and pressure. Finally, liquid iron is simply disordered and thus its structure differs significantly from \gls{bcc} or any other crystal structure. Interatomic potentials trained from \gls{bcc} data can be very useful to accurately describe the properties of $\alpha$-Fe, but their accuracy deteriorates rapidly as they extrapolate in regions of configuration space away from the training data. Here we show how our \gls{GAP} overcomes these issues and provides a consistent prediction of the \gls{PES} of iron for widely different problems, enabling an accurate description of \glspl{NP} of varying sizes. We will also show that, in the absence of an explicit inclusion of the magnetic degrees of freedom, this transferability is achieved at the cost of sacrificing accuracy in the description of some of the properties, e.g., of the surface energetics of the different crystal phases.
\begin{figure*}[t]
\begin{tabular}{c c}
RMSE on our training database (general) & RMSE on Dragoni \textit{et al.}'s database (bcc-specific) \\
\includegraphics[width=0.48\linewidth,keepaspectratio]{fig/train_all.xyz_iron.xml--paper_unified--zoom.png} &
\includegraphics[width=0.48\linewidth,keepaspectratio]{fig/DB_1-8--niggli_fixed.xyz_iron.xml--paper_unified--zoom--no_dia_sc.png}
\end{tabular}
\caption{Energy and force \gls{RMSE} of (a) the GAP potential developed in this work, (b) the \gls{GAP} potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP} and (c) the \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003} compared to \gls{DFT}. On the left are the \glspl{RMSE} computed on our training database and on the right are the \glspl{RMSE} on the training database of Dragoni \textit{et al.}~\cite{Dragoni2018GAP}. The energy values have been referenced to the bulk energy of \gls{bcc} iron for each potential. Note that on the left, only the panel for our \gls{GAP} includes the diamond and \gls{sc} structures.}
\label{fig:EnergyForce}
\end{figure*}
Figure~\ref{fig:EnergyForce} shows the energy and force errors of (a) the \gls{GAP} potential developed in this work, (b) the \gls{GAP} potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP} and (c) the \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003} against the corresponding \gls{DFT} values in two \textit{training} databases: the training database of our \gls{GAP} on the left and that of the Dragoni \gls{GAP} on the right. The energy values have been referenced to the bulk energy of \gls{bcc} iron for each potential, to make the results comparable between the potentials. Each panel shows the \gls{RMSE} for the respective data. Importantly, \fig{fig:EnergyForce} is not intended as an accuracy test of our \gls{GAP}, for which reporting the training-set \gls{RMSE} is meaningless. We rather use it to showcase the difficulty encountered by bcc-specific iron potentials to reproduce the \gls{PES} of other structures, on the one hand, and to quantify the ability of the \gls{GAP} framework to learn the \gls{PES} within a significantly more comprehensive region of configuration space than that corresponding to \gls{bcc}, on the other. In this regard, our \gls{GAP} is able to learn our general-purpose training database to an accuracy of $20$~meV/atom which, while satisfactory for many purposes, is significantly higher than the $2$~meV/atom with which the Dragoni \gls{GAP} can learn its own \gls{bcc}-specific training database. At the same time, our \gls{GAP} only significantly deviates in the predictions of high-energy structures in the Dragoni database, with an overall \gls{RMSE} of $60$~meV/atom, mostly arising from outliers in the high-energy regions of the \gls{bcc} \gls{PES}, whereas the Dragoni \gls{GAP} struggles to capture the energetics of many low-energy structures in our database (as well as the high-energy ones), with an overall \gls{RMSE} of $128$~meV/atom. The \gls{EAM}'s performance is more predictable, with reasonably good \glspl{RMSE} for \gls{bcc} iron.
We note that the test in \fig{fig:EnergyForce} (left) was done only for physically meaningful structures. The diamond and simple cubic structures used in the training of our \gls{GAP} are not included in the plots for the reference potentials, as these structures are high in energy and not physically meaningful and would not make for a fair, nor instructive, comparison. They are included in the plot for our \gls{GAP}, though, increasing the \gls{RMSE} from $12$ to $20$~meV/atom there.
Dimer and trimer structures are excluded from the plots as well, as they reach very high energies and would obscure the more important data ranges. All other configurations used in the training are shown here, including the different bulk crystal structures, surfaces, melt, vacancies and \glspl{NP}.
Unsurprisingly, the energies in our training database are very well reproduced by the \gls{GAP} potential developed in this work. The GAP potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP} reproduces the energies almost to the same \gls{RMSE} as the \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003}. For both potentials, nucleation clusters, \glspl{NP} and structures derived from \gls{hcp} pose the greatest problems, with \glspl{RMSE} of $312$, $171$ and $151$~meV/atom for the Dragoni \gls{GAP}, and $460$, $343$ and $119$~meV/atom for the Mendelev \gls{EAM}.
The energies of structures derived from \gls{bcc}, on the other hand, are predicted best, with \glspl{RMSE} of $8$ and $11$~meV/atom, respectively.
Again, unsurprisingly, the forces in our training database are reproduced very well with the \gls{GAP} potential developed in this work. Both the \gls{GAP} potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP} and the \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003} predict forces that are systematically too large by factors of approx. $1.1$ and $1.6$, respectively.
The energy and force errors shown in \fig{fig:EnergyForce} are significantly lower with our \gls{GAP} than with the reference potentials, which had to be expected as the data contains many types of structures that the reference potentials were never intended for.
It is still not a given that our \gls{GAP} would reproduce the energies and forces so well in the different regions of configuration space, as we have observed that the addition of training data in one region usually leads to a very slight degradation in the other regions, a phenomenon that we ascribe to the absence of explicit magnetic degrees of freedom in our \gls{GAP}.
Also note the slopes of the force error data for the two reference potentials, especially the Mendelev \gls{EAM}. This shows a typical behavior of empirical potentials to have too strong a driving force towards stable configurations, by design, in order to avoid unstable trajectories in \gls{MD} simulations.
\subsection{Description of bulk iron}
Figure~\ref{fig:Stability} shows the relative stability of the bulk crystal phases vs. the atomic volume for all three potentials. Curves over a larger range of atomic volumes and additional diamond and \gls{sc} structures are shown in the \gls{SM}, \fig{fig_s:Stability}.
The shaded dashed curves represent \gls{DFT} reference values. The atomic volumes of the \gls{DFT} minimum structures for \gls{bcc} and \gls{fcc} are marked in each panel.
The \gls{GAP} developed in this work and the \gls{GAP} by Dragoni \textit{et al.}{} capture the \gls{bcc} curve very well, including the minimum. The \gls{EAM} by Mendelev \textit{et al.}{} has the minimum at a slightly too large volume and diverges from the \gls{DFT} reference data at lower and higher atomic volumes.
Only our \gls{GAP} reproduces the \gls{fcc} energies well throughout the whole range considered. The \gls{GAP} by Dragoni \textit{et al.}{} only gives the correct energies from approx. $11.5$ to $16$~\AA$^3$/atom. Towards lower atomic volumes the slope is much too steep, erroneously predicting \gls{fcc} iron to be less stable there than \gls{bcc} iron. The \gls{EAM} by Mendelev \textit{et al.}{} does not capture \gls{fcc} well at all, with an exception around $12$~\AA$^3$/atom where it gives approximately the correct energies (but wrong trends).
However, there the \gls{hcp} energy is much too low, with both phases predicted as having similar energy. From $8$ to $10$~\AA$^3$/atom, where \gls{hcp} should be the stable crystal phase, either \gls{bcc} appears as more stable or all three crystal phases are practically identical in energy. With the Dragoni \gls{GAP}, \gls{hcp} is never the most stable structure in the atomic volume range shown here. At even larger volumes (see \fig{fig_s:Stability} in the \gls{SM}), \gls{hcp} becomes more stable, but spuriously so, with a predicted energy almost as low as for the bulk \gls{bcc} minimum. The \gls{GAP} developed in this work predicts the correct energies also for the \gls{hcp} structure over the whole volume range studied.
Therefore, both the Dragoni \gls{GAP} and the Mendelev \gls{EAM} show strong deviations from the \gls{DFT} stabilities and predict the \gls{bcc} phase as the stable one over too wide a volume range, with the high-pressure \gls{hcp} phase missing. This can be easily attributed to the fact that the reference potentials were developed for the \gls{bcc} phase, neglecting the other crystal structures.
Of the problems at very high volumes (see \fig{fig_s:Stability} of the \gls{SM}), only one seems important: the unphysical behavior of \gls{hcp} beyond 16~\AA{}$^3$/atom could lead to configurations blowing up during \gls{MD}. (This has been fixed in the fracture \gls{GAP}~\cite{Zhang2022}, based on the Dragoni \gls{GAP}, see Fig.~\ref{fig_s:Stability_zhang22}.)
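A scan like the one in \fig{fig:Stability} can be reproduced along the following lines; this is a sketch assuming the \texttt{gap} calculator loaded as above, with illustrative lattice constants.
\begin{verbatim}
# Energy-volume scan for the three phases (illustrative parameters).
import numpy as np
from ase.build import bulk

phases = {"bcc": bulk("Fe", "bcc", a=2.83),
          "fcc": bulk("Fe", "fcc", a=3.45),
          "hcp": bulk("Fe", "hcp", a=2.46, c=3.93)}
for name, cell0 in phases.items():
    for scale in np.linspace(0.9, 1.15, 26):  # isotropic strain
        atoms = cell0.copy()
        atoms.set_cell(cell0.cell[:] * scale, scale_atoms=True)
        atoms.calc = gap
        v = atoms.get_volume() / len(atoms)            # AA^3/atom
        e = atoms.get_potential_energy() / len(atoms)  # eV/atom
        print(name, round(v, 3), round(e, 4))
\end{verbatim}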
\begin{figure}[t]
\includegraphics[width=\linewidth,keepaspectratio]{fig/stability_plot--paper_UNIFIED.png} \\
\caption{Energy of different Fe crystal phases over a wide range of atomic volumes: (a) the \gls{GAP} of this work, (b) the \gls{GAP} potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP}, (c) the \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003}. Vertical black lines mark the equilibrium atomic volumes of the bcc and fcc phases. DFT data for each crystal phase are underlayed as thick dashed lines in each panel.}
\label{fig:Stability}
\end{figure}
Figure~\ref{fig:CellParams} shows the energies for strained bulk cells at different cell parameters for all three crystal structures and the three different potentials, each as the difference to the corresponding \gls{DFT} value.
For \gls{bcc}, the energy landscape is reproduced equally well by the \gls{GAP} potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP} and the \gls{GAP} developed in this work. The \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003} shows larger disagreement with the \gls{DFT} reference, especially at low atomic volumes.
Both for \gls{fcc} and \gls{hcp}, the energy error is significantly lower for the \gls{GAP} developed in this work compared to the other two potentials. Noticeably, the Mendelev \gls{EAM} performs slightly better than the Dragoni \gls{GAP}. Both potentials overestimate the energy at low and underestimate it at high atomic volumes.
The lowest-energy structure is marked in black in each panel of \fig{fig:CellParams} for the respective potential (\gls{FM} \gls{bcc}, \gls{AFM} \gls{fcc}, \gls{NM} \gls{hcp}). The lowest DFT energy structure is marked in green in all panels for reference. Local minima with higher energy are marked in grey and purple for the interatomic potentials and \gls{DFT}, respectively.
The minima for the \gls{GAP} developed in this work coincide with the \gls{DFT} minima for \gls{bcc} and \gls{hcp}.
For the Dragoni \gls{GAP} the \gls{bcc} minimum differs only marginally from the \gls{DFT} reference, for the Mendelev \gls{EAM} slightly more.
The \gls{hcp} cell parameters predicted by the Dragoni \gls{GAP} and the Mendelev \gls{EAM} deviate substantially from the \gls{DFT} reference.
For the \gls{fcc} cell, multiple minima exist for \gls{DFT}~\cite{Muller2007,Herper1999}. The lowest in energy is the \gls{AFM} magnetic configuration with a tetragonal cell ($c$ longer than $a$), but two \gls{FM} minima with cubic cells exist as well, usually called \gls{FMLS} and \gls{FMHS}.
Of the three potentials, only the \gls{GAP} developed in this work reproduces more than one minimum structure correctly: the \gls{AFM} and the \gls{FMLS}.
The Dragoni \gls{GAP} does have a cubic and a tetragonal minimum structure as well, but with $a=3.416$~\AA{} and $c=4.042$~\AA{}, the tetragonal minimum is outside of the plotting range of \fig{fig:CellParams}.
The Mendelev \gls{EAM} does not have a tetragonal cell minimum at all, but just a cubic cell minimum corresponding to the \gls{FMHS}, with the cell parameter slightly too large at $a=3.658$~\AA{}, compared to $a=3.634$~\AA{} from \gls{DFT}.
All cell parameters are given in \tab{tab_s:cell_params} in the \gls{SM}.
The reproduction of the cell parameters of the crystalline phases shown in \fig{fig:CellParams} works well for the \gls{bcc} phase with all three potentials. Still, for the Mendelev \gls{EAM} the deviation from the \gls{DFT} cell parameters and the errors in the surrounding energy landscape are larger than with the Dragoni \gls{GAP} and the \gls{GAP} developed in this work.
However, for the \gls{fcc} and \gls{hcp} phases, both reference potentials yield large errors in the energy landscape and cell parameters far from the \gls{DFT} ones, while our \gls{GAP} gives very low errors and the correct cell parameters.
While multiple local minima exist for the cell parameters of the \gls{fcc} cell, one for the tetragonal \gls{AFM} configuration and two for the cubic \gls{FMLS} and \gls{FMHS} states, no potential has a minimum for all three of these.
Thus, for our \gls{GAP}, the only shortcoming here is that it does not have a minimum for the \gls{FMHS} structure. The reason for this is the way that \gls{GAP} fits the underlying data smoothly, potentially removing shallow minima in some instances. Note that, even though the \gls{FMHS} minimum is missing, the error is still low.
We remark here that the inability of the Dragoni \gls{GAP} and Mendelev EAM potentials to accurately describe crystal phases other than \gls{bcc} is to be expected, and not an artifact, since they were designed to correctly describe \gls{bcc} iron only. An accurate description of the low-pressure \gls{bcc} structures can still be obtained with these potentials, especially with the Dragoni \gls{GAP}, which should be able to indeed outperform our GAP for simulation of single-phase \gls{bcc} iron.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth,keepaspectratio]{fig/cell_params-3x3_panel-coolwarm_centered--FINAL.png}
\caption{Energy error (difference between interatomic potential and DFT energies) over a space of structural parameters. Rows contain \gls{bcc}, \gls{fcc} and \gls{hcp} crystal structures and columns contain the three different potentials: the \gls{GAP} potential developed in this work, the \gls{GAP} potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP} and the \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003}. The first structural parameter is always the cell parameter $a$ and the second is the cell parameter $c$ for \gls{bcc} and \gls{fcc}, but the aspect ratio $c/a$ for \gls{hcp}. Marked in black are the lowest-energy structures in each panel for the corresponding potential, marked in green the lowest \gls{DFT} energy structures. Interatomic potential and DFT data agree when green and black rectangles overlap on the graph. Additional local minima are marked in grey and purple for the interatomic potentials and \gls{DFT}, respectively.}
\label{fig:CellParams}
\end{figure}
\subsection{Phase transitions}
So far we have discussed the accuracy of our \gls{GAP} to describe (meta)stable structures. However, a general-purpose potential to be used in dynamic structure generation, e.g., involving \gls{MD} simulation, also needs to accurately describe the \gls{PES} along important transition paths between crystal phases.
Since the initial and final states, as well as the minimum energy path for a transition, depend on the specific force field used, we choose the following approach to be able to compare the potentials to \gls{DFT} and among themselves.
For the transformations from \gls{bcc} to \gls{fcc} and from \gls{bcc} to \gls{hcp}, transition structures were created by \textit{linear interpolation} between the cell parameters and atomic positions of the endpoint structures at $19$ points along the path. The minimum \gls{DFT} energy structures were used as endpoints for all potentials, i.e., for the two reference potentials these endpoints are not their minimum energy structures, whereas for our \gls{GAP} they are, since its cell parameters are identical to the \gls{DFT} ones. Thus, at the \gls{fcc} and \gls{hcp} endpoints, the Dragoni and Mendelev energies differ from the \gls{DFT} reference values.
All curves are referenced to the \gls{bcc} bulk energy.
Along the \gls{bcc} to \gls{fcc} path shown here, our \gls{GAP} and the Mendelev \gls{EAM} reproduce the energy barrier reasonably well, our \gls{GAP} a little too low and the Mendelev \gls{EAM} a little too high. The Dragoni \gls{GAP} energies along the path are considerably higher than with the other potentials.
Along the \gls{bcc} to \gls{hcp} path, the Dragoni \gls{GAP} and the Mendelev \gls{EAM} trace the \gls{DFT} curve up to $x \sim 0.3$ (where $x$ is the reaction coordinate), but then quickly diverge and reach an endpoint far from the \gls{DFT} one (more so for the Dragoni \gls{GAP} than the Mendelev \gls{EAM}).
Our \gls{GAP}'s curve has a slightly different shape, with a steeper incline at low $x$ and a higher maximum, but still fits the \gls{DFT} curve much better than the other two potentials.
We note again that these curves were not obtained through the minimum energy path, e.g., by performing a \gls{NEB} calculation~\cite{jonsson_1998}, but rather by linear interpolation between the endpoint structures. Hence, the maxima in the transition curves cannot be interpreted as ``barriers'' (i.e., the energy calculated at the saddle point along the minimum energy path), as lower-barrier paths might exist.
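A minimal sketch of this interpolation (assuming two \gls{ASE} \texttt{Atoms} endpoints with the same number of atoms and a consistent cell representation, e.g., along a Bain path) is:
\begin{verbatim}
# Linear interpolation of cell and scaled positions (not a NEB path).
import numpy as np

def interpolate(start, end, n_images=19):
    images = []
    for x in np.linspace(0.0, 1.0, n_images + 2):  # include endpoints
        img = start.copy()
        img.set_cell((1 - x) * start.cell[:] + x * end.cell[:],
                     scale_atoms=False)
        spos = ((1 - x) * start.get_scaled_positions()
                + x * end.get_scaled_positions())
        img.set_scaled_positions(spos)
        images.append(img)
    return images
\end{verbatim}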
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{fig/transitions--paper.png}
\caption{Transition between (a) bcc and fcc and (b) bcc and hcp crystal structures calculated with \gls{DFT}, the \gls{GAP} potential developed in this work, the \gls{GAP} potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP} and the \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003}. Energies are referenced to the bcc bulk energy for each potential.}
\label{fig:Transitions}
\end{figure}
\subsection{Elastic properties}
The elastic constants of all three \gls{DFT} minimum-energy structures have been computed with all potentials by straining the structures in all relevant directions (depending on the symmetry) in $5$ steps with strain increments in the range of $10^{-5}$ to $10^{-2}$ to check for consistency, as implemented in \gls{ASE}~\cite{Xiao2017}. The results are shown in \fig{fig:Elasticity}.
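As an illustration of the procedure, the following sketch extracts a single constant, $C_{11}$ of the cubic \gls{bcc} cell, from finite differences of the stress; it assumes the \texttt{gap} calculator from above, whereas the full calculation uses the \gls{ASE} implementation.
\begin{verbatim}
# Finite-difference sketch for C11 of a cubic cell (clamped ions).
import numpy as np
from ase.build import bulk
from ase.units import GPa

atoms = bulk("Fe", "bcc", a=2.83, cubic=True)
strains = np.linspace(-1e-3, 1e-3, 5)
sigma_xx = []
for eps in strains:
    s = atoms.copy()
    cell = s.cell[:].copy()
    cell[:, 0] *= 1 + eps                 # uniaxial strain along x
    s.set_cell(cell, scale_atoms=True)
    s.calc = gap
    sigma_xx.append(s.get_stress(voigt=True)[0])  # xx, in eV/AA^3
C11 = np.polyfit(strains, sigma_xx, 1)[0] / GPa   # slope, in GPa
print(C11)
\end{verbatim}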
The elastic constants of the \gls{bcc} structure are generally reproduced well by both the \gls{GAP} by Dragoni \textit{et al.}{} and our \gls{GAP}. With the Mendelev \gls{EAM}, $C_{11}$ is noticeably too low by approx. $10\%$ and for $C_{44}$ the difference to the \gls{DFT} reference value is about twice as large as for the two \gls{GAP} potentials. The error for $C_{12}$ is only half as large as with the Dragoni \gls{GAP} potential, though.
For the \gls{hcp} elastic constants, our \gls{GAP} also yields low errors with respect to the \gls{DFT} data. Both the reference potentials significantly underestimate the elastic constants. For the \gls{fcc} elastic constants, the results are much more mixed: while most of the Mendelev \gls{EAM} values are significantly too low, our \gls{GAP} and the Dragoni \gls{GAP} give values that are a mix of too low or high ones with some that are spot on.
For the elastic constants shown in \fig{fig:Elasticity}, again only the \gls{bcc} phase is represented well by all three potentials, with the Mendelev \gls{EAM} showing the largest errors compared to the \gls{DFT} reference values.
However, the elastic constants of the \gls{hcp} phase are only reproduced well by our \gls{GAP} and the \gls{fcc} phase is not reproduced well by any of the potentials. While the reference potentials were not developed for these crystal structures, our \gls{GAP} does have the necessary structures in the training database and still fails in the prediction for the \gls{fcc} phase.
Although we tried extensively to train a GAP that could correctly reproduce the elastic constants of all three crystal phases by fine tuning the regularization parameters and sparse set configurations of the strained+rattled structures, we did not manage to obtain a fit that predicted all of them accurately at the same time. We attribute this to the fact that each crystal structure belongs to a different magnetic branch with possibly significantly different energetics, including the energy derivatives (i.e., forces and the stress tensor, used to compute the elastic constants).
Different magnetic configurations exist for the \gls{fcc} phase, depending on the strained state. Without the explicit treatment of the magnetic moments, the underlying energy landscape has discontinuities where the lowest-energy magnetic state changes. Our GAP can resolve these branches, but only implicitly, whenever the structures are sufficiently different in terms of atomic arrangements.
While it should be possible to train a dedicated potential to reproduce the elastic constants of any one crystal phase and magnetic configuration, predicting all of them accurately with a general potential seems impossible within our current methodological framework, especially when many other types of configurations are also considered. We speculate that only an iron \gls{MLP} which explicitly accounts for the magnetic structure of iron will be able to accurately capture all of these features simultaneously. Augmenting the \gls{GAP} framework to incorporate magnetism is far beyond the scope of this work, but we expect advances in this area within the next few years.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth,keepaspectratio]{fig/elasticity_calc_ASE_FINAL.png}
\caption{Elastic constants for all crystal structures and potentials, compared with \gls{DFT} results. Note that the Mendelev \gls{EAM} predicts a cubic symmetry for the \gls{fcc} structure, that is predicted tetragonal by \gls{DFT}.}
\label{fig:Elasticity}
\end{figure}
\subsection{Surfaces}
Another stringent test for an interatomic potential is the prediction of surface energies cleaved and reconstructed along various crystallographic planes, since these structures look significantly different from the bulk.
To calculate the energies of surfaces with various Miller indices cleaved from the \gls{bcc}, \gls{fcc} and \gls{hcp} bulk structures, slabs with a number of layers between $4$ and $16$ were set up using \gls{ASE}~\cite{Xiao2017}. These were then relaxed using \gls{DFT} calculations with a fixed box size. We ensured that the amount of added vacuum perpendicular to the slabs was sufficient to allow for relaxation in this direction and rule out any interaction between the periodic copies of the slabs.
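The bookkeeping behind a single surface energy can be sketched as follows (relaxation omitted); we assume the \texttt{gap} calculator from above and the orthogonal cell produced by \gls{ASE} for a \gls{bcc}(100) slab.
\begin{verbatim}
# Surface energy of a 12-layer bcc(100) slab (relaxation omitted).
from ase.build import bulk, bcc100

bulk_fe = bulk("Fe", "bcc", a=2.83)
bulk_fe.calc = gap
e_bulk = bulk_fe.get_potential_energy() / len(bulk_fe)  # eV/atom

slab = bcc100("Fe", size=(1, 1, 12), a=2.83, vacuum=10.0)
slab.calc = gap
area = slab.cell[0, 0] * slab.cell[1, 1]     # in-plane cell area, AA^2
gamma = (slab.get_potential_energy()
         - len(slab) * e_bulk) / (2 * area)  # eV/AA^2, two surfaces
print(gamma)
\end{verbatim}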
We observed these slabs to have nontrivial magnetic structure, e.g., showing a strong dependence on the number of atomic monolayers. Thus, converging these \gls{DFT} relaxations was not possible in every case and, even when the calculations converged, local energy minima were found that were not necessarily also the global minima.
The result of the \gls{DFT} relaxations is primarily dependent on the setup of the initial magnetic moments and the final magnetic configuration resulting from those. Large differences up to $\sim 100$~meV/atom for the same slab with different magnetic configurations were found. The \gls{fcc} surface slabs proved more problematic than those of the other crystal structures in this regard.
Typically, the \gls{fcc} magnetic configurations consisted of layers with opposite local magnetic moments (not necessarily all with the same magnitude), one or more atomic layers thick.
To extract meaningful surface energies from the energies of the slabs, convergence of the surface energy with respect to the slab thickness would be expected. This was found for many surface indices, but in some cases no convergence could be observed, indicating that only local minima were found for at least some of the slabs.
To improve the performance of our \gls{GAP} for surfaces, surface slabs relaxed with \gls{DFT} were further relaxed with earlier versions of our \gls{GAP} and single-point \gls{DFT} calculations of the resulting structures fed back into the training set in an iterative manner (known as ``iterative training''~\cite{deringer_2017}).
The surface structures in our database were split into two categories: slabs too thin to contain a bulk-like region in their center, and thicker slabs that do contain such a region. For the thin slabs, the regularization parameter during training was chosen to be $25$ times higher than the default (the higher the regularization parameter, the less stringently the \gls{GAP} is required to follow the data). This was done to keep the structures in the database while focusing on the more realistic surfaces from the thicker slabs.
The surface energies for all three crystal structures and various surface indices are shown in \fig{fig:SurfaceEnergies}, for our \gls{GAP} and the \gls{GAP} by Dragoni \textit{et al.}, compared against the \gls{DFT} values. Data points for a given surface index are connected by lines between the different potentials and the indices are encoded in color for easier tracking. An equivalent plot comparing our \gls{GAP} to the \gls{EAM} by Mendelev \textit{et al.}{} is shown in \fig{fig_s:surface_energies_Mendelev} of the \gls{SM}.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth,keepaspectratio]{fig/surface_energies--paper_FINAL_GAP_Dragoni_2018.png}
\caption{Surface energies of a wide range of surface indices for the \gls{GAP} by Dragoni \textit{et al.}{} and our \gls{GAP}, compared with the \gls{DFT} values.}
\label{fig:SurfaceEnergies}
\end{figure}
For the \gls{bcc} surface energies, both Dragoni's and our \gls{GAP} give results that are comparable and match the \gls{DFT} references reasonably well. While the overall value range for the Dragoni GAP better matches the one found using \gls{DFT} (especially for high-energy surfaces, which our GAP overestimates), our \gls{GAP} performs better in terms of reproducing the correct order of the low-energy surface indices, i.e., there are fewer crossings of the connecting lines.
For the \gls{fcc} surfaces, both \gls{GAP}s predict the \gls{DFT} reference values poorly. While the energies come out too high with our \gls{GAP}, the one by Dragoni shows energies that are too low. Recall the issues already mentioned with convergence of \gls{fcc} slab calculations with \gls{DFT} as a function of the number of atomic layers; this is likely a problem that necessitates the explicit inclusion of magnetic structure and cannot be fully solved within the standard \gls{GAP} framework.
The \gls{hcp} surface energies are predicted much too low with the Dragoni \gls{GAP}, for some indices even negative, a clear sign of extrapolation outside of the training set. Our \gls{GAP}, on the other hand, predicts all surface energies in a range similar to the one predicted by \gls{DFT}, although with sizable errors.
The \gls{EAM} by Mendelev \textit{et al.}{} gives bad predictions for the surface energies of all three crystal structures (\fig{fig_s:surface_energies_Mendelev} of the \gls{SM}).
The calculation of the surface energies shown in \fig{fig:SurfaceEnergies} was challenging for some of the surface indices, as finding the magnetic ground state for these systems is much harder than for small bulk unit cells. In a few instances, it was not even possible to converge the \gls{DFT} calculation at all. In other cases the magnetic layering changed erratically between slabs with different numbers of layers, so that no convergence of the surface energy with increasing slab thickness could be found. There, even trying to use the magnetic configuration of one slab to inform the initial configuration for a similar slab failed. This and the interpolation between different magnetic states in the fitting of our \gls{GAP} led to mixed results. Consequently, the surface energies are most accurate for \gls{bcc}, where only the \gls{FM} state exists and fewer convergence problems occurred in the creation of the training database.
Still, our \gls{GAP} performs noticeably better than the other two potentials, giving a reasonable range of values for all surface energies and the correct order of the lowest-energy surfaces for \gls{bcc}.
The Dragoni \gls{GAP} performs only slightly worse than our \gls{GAP} for \gls{bcc} (where it was trained), but gives even negative surface energies for \gls{hcp}, which could lead to instabilities in dynamics simulations.
Lastly, the Mendelev \gls{EAM} yields high errors for the surface energies of all crystal structures, as it was fitted only to properties of the crystalline bulk and liquid.
\section{Application to selected problems}
In this section we benchmark our Fe \gls{GAP} with representative use cases. We go beyond simple numerical scores, like \gls{RMSE} and \gls{MAE}, and focus on how the potential performs when trying to reproduce experimental trends in 1) thermal expansion, 2) the solid-liquid phase transition and 3) the temperature-pressure phase diagram.
\subsection{Thermal expansion}
The thermal expansion was studied using \gls{MD} calculations with \gls{ASE}~\cite{Xiao2017,Kermode2020-wu}. Systems with $1024$ atoms were set up in the minimum \gls{DFT} \gls{bcc} structure and kept initially at 200~K (the lowest temperature in the series) for $1$~ps to equilibrate and for another $2$~ps for averaging with a time step of $1$~fs. This procedure was repeated, step by step, increasing the temperature in intervals of $200$~K up to $1600$~K.
Detailed simulation parameters are specified in the \gls{SM}. The resulting thermal-expansion curves are shown in \fig{fig:thermal_exp}.
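Schematically, this stepped-temperature protocol can be written as follows. This is a minimal sketch using the \gls{ASE} \gls{MD} machinery; \texttt{fe\_bulk\_1024} stands in for the $1024$-atom cell with a calculator attached, and the thermostat/barostat settings are illustrative placeholders rather than the values listed in the \gls{SM}.
\begin{widetext}\begin{lstlisting}
# Minimal sketch of the stepped-temperature NPT protocol (placeholder
# settings; see the SM for the actual thermostat/barostat parameters).
from ase import units
from ase.md.npt import NPT

atoms = fe_bulk_1024   # hypothetical 1024-atom bcc cell with calculator
for T in range(200, 1601, 200):
    dyn = NPT(atoms, timestep=1.0 * units.fs, temperature_K=T,
              externalstress=0.0,            # zero applied pressure
              ttime=25 * units.fs,           # thermostat timescale (placeholder)
              pfactor=(75 * units.fs)**2 * 100 * units.GPa)  # barostat (placeholder)
    volumes = []
    dyn.run(1000)                            # 1 ps equilibration
    dyn.attach(lambda: volumes.append(atoms.get_volume() / len(atoms)),
               interval=10)
    dyn.run(2000)                            # 2 ps averaging
    print(T, sum(volumes) / len(volumes))
\end{lstlisting}\end{widetext}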
The data for the Dragoni \gls{GAP} line up almost perfectly with their \gls{DFT} data~\cite{Dragoni2015}, which itself lies at lower volumes than the experimental data by, e.g., Basinski \textit{et al.}~\cite{Basinski1955} and Ridley \textit{et al.}~\cite{Ridley1968}.
The Mendelev \gls{EAM} comes closest to the experimental data, but only really agrees around $200$~K and between $1190$ and $1660$~K (where Basinski \textit{et al.}{} found \gls{fcc} Fe, as opposed to the \gls{bcc} Fe predicted by the \gls{EAM}).
Our \gls{GAP} predicts the lowest atomic volumes at all temperatures, lower than the \gls{DFT} data reported by Dragoni \textit{et al.}, consistent with our \gls{DFT} data. We note the two discontinuities in experimental data by Basinski \textit{et al.}{} that take place first at the $\alpha$ to $\gamma$ and then at the $\gamma$ to $\delta$ phase boundaries. The only one of these happening where simulation data is available ($\alpha$ to $\gamma$) is not captured by any of the potentials.
The coefficients of thermal expansion were fitted in the range of $400$ to $600$~K, where data points were available. The fitting ranges are shown in \fig{fig:thermal_exp} as solid lines and the coefficients are noted next to the curves.
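The fits themselves are straightforward; a minimal sketch (with illustrative placeholder data, not our actual results) of extracting a volumetric expansion coefficient from $(T, V)$ points is:
\begin{widetext}\begin{lstlisting}
# Minimal sketch: linear fit of V(T) in a temperature window, then
# alpha_V = (1/V) dV/dT evaluated at the window midpoint.
import numpy as np

T = np.array([400.0, 500.0, 600.0])     # K, fitting window
V = np.array([11.40, 11.44, 11.48])     # A^3/atom, placeholder volumes
slope, intercept = np.polyfit(T, V, 1)
alpha_V = slope / (intercept + slope * T.mean())
print(f"alpha_V = {alpha_V:.2e} 1/K")
\end{lstlisting}\end{widetext}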
From the three potentials studied, the Dragoni \gls{GAP} comes closest to the experimental values and our \gls{GAP} differs the most. All three potentials underestimate the experimental values.
The thermal expansion curve for our \gls{GAP} starts at lower atomic volume than the reference potentials, in accordance with the cell parameters shown in \fig{fig:CellParams}.
Also note the much larger sampled range of temperatures and volumes around the $1600$~K data point for our \gls{GAP}. We attribute this to the fact that the potential is exploring the energy landscape above the melting point (which is underestimated by our GAP, compared to experiment), but is missing a nucleation center for the liquid phase. We deal with the solid-liquid phase transition in Sec.~\ref{sec:solid-liquid}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{fig/thermal_expansion_compare--paper.png}
\caption{Atomic volume over a wide range of temperatures to show the thermal expansion. Experimental results~\cite{Basinski1955,Ridley1968} and \gls{DFT} calculations~\cite{Dragoni2015} shown as empty circles. For the calculations done in the scope of this work, volume and temperature along the trajectories are shown below the average values (full circles). For all curves, the coefficients of thermal expansion were fitted over the temperature ranges marked with solid lines. The coefficients are noted next to the curves.}
\label{fig:thermal_exp}
\end{figure}
\subsection{Solid-liquid phase transition}\label{sec:solid-liquid}
The melting temperature of \gls{bcc} iron was calculated using the two-phase method~\cite{Morris1994,Morris2002} with all three potentials. Systems with $\approx 11000$~atoms were set up as \gls{bcc} crystals and heated to $4000$~K for $15$~ps to melt half of the system, while the positions of the other half were kept fixed.
These half crystalline and half molten systems were then run at different target temperatures for $100$~ps of \gls{MD} with a time step of $1$~fs to find the temperature where both phases coexisted.
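The protocol can be sketched as follows. This is a minimal \gls{ASE}-based sketch with a generic Langevin thermostat standing in for the actual LAMMPS/TurboGAP setups described below; \texttt{fe\_bcc\_11000} is a placeholder for the large \gls{bcc} cell with a calculator attached.
\begin{widetext}\begin{lstlisting}
# Minimal sketch of the two-phase (coexistence) setup: freeze half of the
# box, melt the other half, release, then look for coexistence at the
# target temperature (Langevin is a stand-in thermostat here).
from ase import units
from ase.constraints import FixAtoms
from ase.md.langevin import Langevin

atoms = fe_bcc_11000                     # hypothetical ~11000-atom bcc cell
z_half = atoms.cell[2, 2] / 2
atoms.set_constraint(FixAtoms(mask=atoms.positions[:, 2] < z_half))
Langevin(atoms, 1.0 * units.fs, temperature_K=4000,
         friction=0.002).run(15000)      # melt the free half for 15 ps

atoms.set_constraint()                   # release the crystalline half
Langevin(atoms, 1.0 * units.fs, temperature_K=1450,
         friction=0.002).run(100000)     # 100 ps at the target temperature
\end{lstlisting}\end{widetext}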
\gls{MD} simulations were carried out using LAMMPS~\cite{Plimpton1995} for the Mendelev \gls{EAM}~\cite{Mendelev2003} and Dragoni \gls{GAP}~\cite{Dragoni2018GAP} reference potentials and TurboGAP~\cite{Caro2019} for the \gls{GAP} developed in this work.
In the LAMMPS calculations, temperature and pressure were controlled with the Nos\'e-Hoover~\cite{Nose1998,Hoover1985} thermostat and barostat with damping constants of $1$~ps and $2$~ps, respectively. In TurboGAP, a Berendsen~\cite{Berendsen1998} thermostat and barostat with the same damping constants and a \texttt{gamma\_p}~\footnote{In TurboGAP, the bulk modulus for the barostat is expressed in units of the inverse compressibility of liquid water. E.g., \texttt{gamma\_p}~$= 55$ means that the material is assumed to be 55 times as incompressible as liquid water for the purpose of barostating. This allows the user to provide an intuitive value for this parameter whenever the compressibility factor of the system is not known \textit{a priori} (as is usually the case).} of $55$ were used.
Crystalline and molten states in the systems were identified with the Steinhardt parameter $Q_8$~\cite{SteinhardtPaulJNelson1983}. Figure~\ref{fig:Tmelt} shows the results for our \gls{GAP}, each line indicating a separate \gls{MD} run. (For the results using the Mendelev \gls{EAM} potential, see \fig{fig_s:Tmelt_references} in the \gls{SM}.) Decreasing values of $Q_8$ indicate a melting system, while increasing values indicate a crystallizing one.
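For reference, the global Steinhardt parameter can be computed along the following lines. This is a brute-force sketch without periodic boundary conditions; production analyses use proper neighbor lists and minimum-image conventions.
\begin{widetext}\begin{lstlisting}
# Minimal sketch of the global Steinhardt parameter Q_l (here l = 8)
# from the bond vectors between atoms closer than a cutoff; no PBC.
import numpy as np
from scipy.special import sph_harm

def steinhardt_Q(positions, l=8, cutoff=3.5):
    qlm = np.zeros(2 * l + 1, dtype=complex)
    n_bonds = 0
    for i, ri in enumerate(positions):
        for j, rj in enumerate(positions):
            d = rj - ri
            r = np.linalg.norm(d)
            if i == j or r > cutoff:
                continue
            polar = np.arccos(d[2] / r)       # polar bond angle
            azim = np.arctan2(d[1], d[0])     # azimuthal bond angle
            for m in range(-l, l + 1):
                qlm[m + l] += sph_harm(m, l, azim, polar)
            n_bonds += 1
    qlm /= max(n_bonds, 1)
    return np.sqrt(4 * np.pi / (2 * l + 1) * np.sum(np.abs(qlm)**2))
\end{lstlisting}\end{widetext}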
These coexistence simulations give approximate melting temperatures of $1760$~K and $1438$~K for the Mendelev \gls{EAM} and our \gls{GAP}, respectively.
For our \gls{GAP}, this is well below the experimental value of $1811$~K. (The experimental value is for the $\delta$ phase, which is also \gls{bcc}, as in our simulations.) We attribute this deviation to the properties of the \gls{DFT} functional used for the training, which has been shown in previous works to predict melting points that are too low.
Part\'ay computed the phase diagram for the Mendelev \gls{EAM} using the nested sampling method~\cite{Partay2018}, believed to be the most comprehensive and accurate method for this purpose. She found a melting temperature higher than the experimental value, at approx. $1810-1940$~K.
The value of approx. $1760$~K we find for the Mendelev \gls{EAM} is right in the range of $1750-1775$~K reported by Mendelev \textit{et al.}{} in their original paper, somewhat in disagreement with the nested-sampling result.
This disagreement can be attributed to a finite-size effect in the nested sampling calculations, overestimating the temperature of the melting transition compared to coexistence simulations.
For the Dragoni \gls{GAP} we found that the trajectories expanded to very high atomic volumes (about three times the volumes found with the other potentials for two-phase systems at the same temperature) upon releasing the crystalline atoms, immediately melting the crystalline half of the box. Thus, we were not able to stabilize the two-phase state at any temperature.
We attribute this to a spurious local minimum in the \gls{PES} for a low-pressure melt, lower than the pressurized crystal.
We note that our \gls{GAP} was not trained for thermal properties specifically, while the Dragoni \gls{GAP} was trained on data for thermomechanical properties and the Mendelev \gls{EAM} was fitted to the pair correlation function at $1820$~K. It is therefore not surprising that the Mendelev \gls{EAM} predicts the melting temperature better than the other two potentials.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth,keepaspectratio]{fig/steinhardt-1375..1525-paper.png}
\caption{Steinhardt parameter $Q_8$~\cite{SteinhardtPaulJNelson1983} over time for \gls{MD} simulations at various temperatures in the range of $1400-1550$~K, around the melting temperature $T_m$. The inset shows the transition from the crystallizing to the melting state as a function of temperature.}
\label{fig:Tmelt}
\end{figure}
\subsection{Phase diagram}
To compute the phase diagram we carried out free-energy calculations within the 2PT framework~\cite{lin_2003} as implemented in the DoSPT code~\cite{caro_2016,caro_2017b,ref_dospt}. 2PT computes the free energy of an ensemble of atoms from the integral of the density of states (calculated from \gls{MD}), which is partitioned between solid-like and gas-like degrees of freedom. This method is particularly suited to estimate the thermodynamic properties of liquids. In this work, we use it both for the liquid and the solid to be able to directly compare the free energy of the two and draw the melting curve: at any given set of thermodynamic conditions, the phase with the lowest free energy is the stable phase.
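The starting point of 2PT is the vibrational density of states; a minimal sketch of obtaining it from an \gls{MD} velocity trajectory is given below (the solid-like/gas-like partitioning and free-energy integration performed by DoSPT are not reproduced here).
\begin{widetext}\begin{lstlisting}
# Minimal sketch: velocity autocorrelation function (VACF) and its
# spectrum, i.e., the density of states used as input to 2PT.
import numpy as np

def density_of_states(velocities, dt):
    """velocities: (n_steps, n_atoms, 3) array; dt: MD time step."""
    n = velocities.shape[0]
    v = velocities.reshape(n, -1)
    vacf = np.array([np.mean(np.sum(v[:n - tau] * v[tau:], axis=1))
                     for tau in range(n // 2)])
    dos = np.abs(np.fft.rfft(vacf))            # spectrum of the VACF
    freqs = np.fft.rfftfreq(len(vacf), d=dt)
    return freqs, dos
\end{lstlisting}\end{widetext}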
We calculated the iron phase diagram with our \gls{GAP} up to high pressures of $10^6$~bar and temperatures of $3000$~K, shown in \fig{fig:phase_diagram}. At each pressure, three \gls{MD} trajectories were initialized from $250$~K as \gls{bcc}, \gls{fcc} and \gls{hcp} and one from $3000$~K as liquid. The temperature was then increased/decreased in steps of $250$~K using the Bussi thermostat~\cite{Bussi2007} while controlling the pressure with the Berendsen barostat~\cite{Berendsen1998}. The calculations were done in TurboGAP~\cite{Caro2019} with equilibration constants of $100$~fs and $1000$~fs, respectively, and a \texttt{gamma\_p} of $100$. At each point, the trajectories were equilibrated for $80$~ps and subsequently sampled for $80$~ps.
Liquid structures were detected using the Steinhardt $Q_8$ parameter~\cite{SteinhardtPaulJNelson1983}, with low values indicating the liquid. We detected the crystalline structures by both comparing \gls{SOAP} descriptors and \gls{XRD} spectra calculated using the Debyer software package~\cite{ref_debye}. Example spectra at $p = 1$~bar and for the reference structures are shown in \fig{fig_s:ref_spectra} of the \gls{SM}.
For the \gls{hcp} structures at $10^6$~bar the $c/a$-ratio is given in \fig{fig:phase_diagram}, determined by comparing to the \gls{SOAP} descriptors of \gls{hcp} with $c/a$-ratios in the range of $1.10$ to $1.40$. The plot in \fig{fig:phase_diagram} shows the structure of the trajectory with the lowest energy at each point.
The phase diagram for our \gls{GAP} shown in \fig{fig:phase_diagram} reproduces very well the trends in the melting curve found experimentally~\cite{Morard2018}, including the raised melting temperature at $10^5$~bar and the missing liquid phase at $10^6$~bar up to $3000$~K. There is some disagreement between the melting temperature estimated using this method, which is located in the range $1500$--$1750$~K, and that estimated using the two-phase method in Sec.~\ref{sec:solid-liquid}, which is situated slightly below $1450$~K.
The \gls{fcc} phase is missing completely from our phase diagram, although it is found experimentally within a narrow band of temperatures, above $\sim 1180$~K and below $\sim 1670$~K (depending on the pressure).
At the highest pressures studied in this work, all the trajectories led to spontaneous nucleation of \gls{hcp} up to a temperature of at least $3000$~K (we did not check higher temperatures than this), also in agreement with the experimental data. In our phase diagram we also show the $c/a$-ratio, which is much lower than at ambient pressure. With increased temperature the spacing between the close-packed planes grows, which seems sensible. This leads us to believe that our \gls{GAP} could be suitable to study iron at the conditions of the Earth's core (exceeding $136$~GPa and approx. $4000$~K~\cite{Hirose2013}).
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth,keepaspectratio]{fig/phase_diagram-0.98_density_scale--paper_q8_FINAL.png}
\caption{Phase diagram for our \gls{GAP} up to high temperatures and pressures. Crystalline structures were identified using \gls{XRD} spectra and by comparing \gls{SOAP} descriptors. For the \gls{hcp} phase, the $c/a$-ratio is given. Encoded in the color is the Steinhardt parameter $Q_8$~\cite{SteinhardtPaulJNelson1983}, to further highlight the solid-liquid transition. Red lines indicate the phase boundaries in the experimental phase diagram~\cite{Partay2018,Morard2018}: $\alpha$, $\gamma$, $\delta$ (in increasing $T$, at low $p$) and $\epsilon$ (at high $p$).}
\label{fig:phase_diagram}
\end{figure}
\section{Nanoparticles}\label{sec:nanoparticles}
\glspl{NP} were created in four distinct ways: 1) by condensation from a random starting atomic distribution, 2) using the \gls{GA} implementation by Weal \textit{et al.}~\cite{Weal2021}, 3) with the Wulff method~\cite{Wulff1901} and 4) by annealing \glspl{NP} found with the other methods at elevated temperature.
For the first method, the atoms were randomly placed in a periodic box with double the atomic volume of the \gls{bcc} bulk. The atomic positions were then relaxed using the current version of the \gls{GAP} developed in this work using the algorithm detailed in the Atomic Simulation Recipes~\cite{Gjerding2021}. The suitable selection of the box volume ensured that the atoms coalesced into a single \gls{NP}.
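A minimal sketch of this condensation procedure is shown below; \texttt{gap\_calc} is a placeholder for the \gls{GAP} calculator, and in practice strongly overlapping random positions would be rejected and re-drawn.
\begin{widetext}\begin{lstlisting}
# Minimal sketch of the condensation method: random atoms in a box with
# double the bcc atomic volume, followed by a local relaxation.
import numpy as np
from ase import Atoms
from ase.optimize import BFGS

n_atoms = 50
v_bcc = 11.3                              # A^3/atom, placeholder bulk volume
box = (2 * v_bcc * n_atoms) ** (1 / 3)    # double the bcc atomic volume
atoms = Atoms(f"Fe{n_atoms}",
              positions=np.random.uniform(0, box, (n_atoms, 3)),
              cell=[box, box, box], pbc=True)
atoms.calc = gap_calc                     # hypothetical GAP calculator
BFGS(atoms).run(fmax=0.01)                # relax into a single NP
\end{lstlisting}\end{widetext}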
The \gls{GA} was run with a population size of $100$ particles for $2500$ generations with $20$ offspring per generation for each \gls{NP} size. The starting populations were created using the condensation method. For detailed \gls{GA} settings, see Sec.~\ref{subsec:GA_settings} of the \gls{SM}. From all \glspl{NP} evaluated during a \gls{GA} run, the $1$st, $50$th, $100$th and $500$th lowest in energy (as per the \gls{GAP} developed in this work) were then calculated with \gls{DFT} to be used in the training of the next iteration of the \gls{GAP} potential. In total, four iterations of \gls{GA} were done to improve the performance of the potential for (increasingly) low-energy \glspl{NP}.
Crystalline \glspl{NP} were generated using the Wulff method~\cite{Wulff1901} as implemented in \gls{ASE}~\cite{Xiao2017}, using the surface energies we calculated using \gls{DFT}.
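A minimal sketch of such a Wulff construction with \gls{ASE} is given below; the surface energies listed are placeholders standing in for our \gls{DFT} values.
\begin{widetext}\begin{lstlisting}
# Minimal sketch of a Wulff construction for a bcc Fe nanoparticle.
from ase.cluster import wulff_construction

surfaces = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]
energies = [2.5, 2.4, 2.7]           # J/m^2, placeholder surface energies
np65 = wulff_construction("Fe", surfaces, energies,
                          size=65,   # target number of atoms
                          structure="bcc", rounding="closest",
                          latticeconstant=2.83)
\end{lstlisting}\end{widetext}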
To augment the search space, \glspl{NP} found with the aforementioned methods were also annealed at $1200$~K for $20$~ps, quenched down to $300$~K over another $20$~ps and finally relaxed using gradient-descent minimization.
The region of configuration space corresponding to \glspl{NP} displays a rather complex \gls{PES}, due to the coexistence of diverse atomic motifs not encountered in the bulk: small surfaces, edges and vertices. To make this problem tractable, we used the iterative training approach~\cite{deringer_2017} combined with the \gls{GA} to incrementally improve the accuracy in this region of configuration space. Figure~\ref{fig:NP_potentials} shows the energies of the \glspl{NP} created by and calculated with our new \gls{GAP} and the two reference potentials, compared to the energies calculated using \gls{DFT} and referenced to the energy of the \gls{bcc} bulk.
The energies calculated with the two reference potentials differ significantly from the \gls{DFT} energy, while our \gls{GAP} predicts the energies with good accuracy, regardless of which potential was used to generate the \glspl{NP}. The \glspl{RMSE} are given in the legend.
Note how the Mendelev \gls{EAM} predicts energies that are too low for all the low-energy \glspl{NP}, and how the Dragoni \gls{GAP} predicts a number of \glspl{NP} to be lower in energy than the \gls{bcc} bulk material, again a sign of extrapolation outside of the training set, as seen for \gls{hcp} surfaces.
Note that none of the \glspl{NP} in \fig{fig:NP_potentials} belong to the \gls{GAP} training set, and thus this test gives a clear indication of the ability of our \gls{GAP} to accurately generate and predict iron \glspl{NP}, a particularly challenging modeling task.
\begin{figure}[t]
\includegraphics[width=\linewidth,keepaspectratio]{fig/Dragoni_vs_GAP_NPs--paper_FINAL.png}
\caption{Energy for \glspl{NP} generated and computed with the \gls{GAP} potential developed in this work, the \gls{GAP} potential by Dragoni \textit{et al.}~\cite{Dragoni2018GAP} and the \gls{EAM} potential by Mendelev \textit{et al.}~\cite{Mendelev2003}, compared to \gls{DFT}. Shapes indicate the potential used to generate the \glspl{NP} and colors indicate the potential used to calculate the energies. The \glspl{NP} were generated using the random condensation method (for details, see text). None of the \glspl{NP} were used in the training of our \gls{GAP}. The inset shows the data at lower energies more clearly.}
\label{fig:NP_potentials}
\end{figure}
In \fig{fig:NP_potentials} we show the accuracy of the energy predictions of our \gls{GAP} and the two reference potentials for a large number of \glspl{NP} over a wide range of stabilities. This is meant to ensure that our GAP can model small \glspl{NP} as well as ones that are far from the lowest energy for their size, as might be observed at the elevated temperatures of a catalytic process.
The inability of the \gls{bcc} reference potentials to correctly describe \glspl{NP} can be seen for example in the isolated group of points for the Dragoni \gls{GAP} at the bottom of the plot: all the \glspl{NP} in this group were created with the Dragoni \gls{GAP} and the performance on these is clearly different than on the \glspl{NP} created with the Mendelev \gls{EAM}.
In contrast to the two reference potentials, our \gls{GAP} performs very well on all the \glspl{NP} shown here, regardless of which potential they were created with or their relative stability. This is reflected in the \gls{RMSE} value, which is about $30$ times lower than for the reference potentials. More importantly, there is no region in the plot where our \gls{GAP} has substantial errors. This is particularly important for \gls{NP} modeling, when we use the \gls{GAP} to generate \glspl{NP} with a few hundred or thousands of atoms, which cannot be directly validated with \gls{DFT} due to CPU cost.
In the search for stable \glspl{NP}, the common practice is to generate low-energy \glspl{NP} within a range of sizes which, for small \glspl{NP}, is measured in terms of the number of atoms. The energies are then used to construct a convex hull of \gls{NP} stability. Because of the high computational cost of \gls{DFT} calculations, the known convex hulls reach only up to a size of $30$ atoms at most~\cite{Bobadova-Parvanova2002,Kohler2005,Ma2007,Akturk2016}. More comprehensive convex hulls, up to a size of $100$ atoms, have been computed using the Finnis-Sinclair~\cite{FinnisSinclair,Sutton1990} \gls{EAM}~\cite{Elliott2009,Liu2016,cambridge_energy_landscape}. With our GAP, we have reconstructed the convex hull of the lowest-energy \glspl{NP} for each size up to a size of $200$ atoms, using the search methods detailed above. The \gls{GA} was only used up to a size of $100$ atoms, due to its comparatively high computational cost.
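Operationally, the hull construction simply keeps the lowest-energy candidate per size; a minimal sketch:
\begin{widetext}\begin{lstlisting}
# Minimal sketch: lowest-energy NP per size from a pool of candidates
# produced by the different search methods.
def lowest_energy_hull(candidates):
    """candidates: iterable of (n_atoms, energy, structure) tuples."""
    hull = {}
    for n, e, s in candidates:
        if n not in hull or e < hull[n][0]:
            hull[n] = (e, s)
    return dict(sorted(hull.items()))
\end{lstlisting}\end{widetext}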
To validate the \gls{GAP} results, the energies of the \glspl{NP} from the \gls{CELD}~\cite{cambridge_energy_landscape}, as well as of the \glspl{NP} in the convex hull found in this work, were recalculated using \gls{DFT} up to a size of $200$ atoms. The total energies for both are shown in \fig{fig:convex_hull_DFT}. In the curve for the convex hull from this work, empty circles indicate \glspl{NP} that are higher in energy than the \gls{CELD} \gls{NP} of the same size, and full circles those that are lower in energy.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{fig/convex_hull--DFT--paper_WITH_INSET_long.png}
\caption{\gls{DFT} total energy convex hull of the \glspl{NP} from the Cambridge Energy Landscape Database~\cite{cambridge_energy_landscape} and the \glspl{NP} discovered in this work. Full circles indicate \glspl{NP} in the convex hull that were improved by this work ($90$ of $98$ particles). Snapshots show a selection of \glspl{NP} along the convex hull.}
\label{fig:convex_hull_DFT}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{fig_nps/character.png}
\caption{(Top) Fraction of surface sites that resemble [(100), (110), (120), (112) and (111)] surface motifs from pristine \gls{bcc}, \gls{fcc} and \gls{hcp} surfaces more than the others for all \glspl{NP} in our convex hull. (Bottom) Log-scaled fraction of surface sites that resemble the reference surfaces most for four example \glspl{NP}, two crystalline and two amorphous ones. A video with the panels corresponding to every \gls{NP} is available on Zenodo~\cite{zenodo_nps}.}
\label{fig:character}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{fig_nps/surface_sites.png}
\caption{\texttt{cl-MDS} representation (low-dimensional embedding) of the surface sites on the \glspl{NP} in our convex hull clustered by $k$-medoids into ten characteristic motifs. The snapshots show the medoids representing the clusters. Encoded in the color is the atomic \gls{GAP} energy of each surface site. The inset shows the same map color coded according to the $k$-medoids clusters.}
\label{fig:mds}
\end{figure}
\begin{figure*}[p]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{fig_nps/all_nps.png}
\caption{Snapshots of all \glspl{NP} from our convex hull, shown in \fig{fig:convex_hull_DFT}. Atoms are colored according to the ten most characteristic surface site motifs according to the $k$-medoids clustering shown in \fig{fig:mds}. The medoids for the ten motifs are shown as well.}
\label{fig:all_nps}
\end{figure*}
In the size range from 3 up to 100 atoms, where \gls{CELD} data is available, $90$ out of $98$ \glspl{NP} in our convex hull are lower in energy than the \gls{CELD} ones. Most of these were found using the annealing method, but Fe$_{59}$ and Fe$_{65}$ were constructed with the Wulff method~\cite{Wulff1901} (subsequently relaxed using our \gls{GAP}).
Among these, the two particles created using the Wulff method stand out from the curve with particularly low energies (e.g., see the snapshot at $65$ atoms in \fig{fig:convex_hull_DFT}).
The fact that we found more stable \glspl{NP} despite relying on a less sophisticated search method than Liu \textit{et al.}~\cite{Liu2016}, in terms of the number of \glspl{NP} that could be sampled, can be attributed to the much higher accuracy of our potential compared to the Finnis-Sinclair \gls{EAM}. This is especially true for very small \glspl{NP}, where the error of the Finnis-Sinclair \gls{EAM} is the highest. (E.g., see the energies for the \gls{CELD} convex hull shown in the SM Fig.~\ref{fig_s:Cambridge_FS_vs_GAP}.)
For the \gls{DFT} data reported in the literature, the structures of the \glspl{NP} are unavailable. We were thus unable to compute the total energies with our \gls{GAP} for comparison. The published binding energies, on the other hand, strongly depend on the exchange-correlation functional used (e.g., BLYP by Ma \textit{et al.}~\cite{Ma2007}, BLYP/SDD by Akt\"urk \textit{et al.}~\cite{Akturk2016}) and are not easily comparable to our results. Hence, the convex hulls derived by Ma \textit{et al.}{} and Akt\"urk \textit{et al.}{} were omitted from \fig{fig:convex_hull_DFT}.
In the remainder of this section we try to uncover trends in the structure of these \glspl{NP}, in particular regarding surface features. We first analyze the similarity between \gls{NP} surface motifs and selected [(100), (110), (120), (112) and (111)] surface motifs present in pristine \gls{bcc}, \gls{fcc} and \gls{hcp} surfaces. As a first step, we identify surface atoms in the \glspl{NP} with a rolling-sphere algorithm as implemented in \texttt{ase\_tools}~\cite{ase_tools}. Then, the SOAP descriptors characterizing the environment of these surface atoms within a $4$~\AA{} sphere are computed, as well as the \gls{SOAP} descriptors of the atoms in the reference surfaces. Finally, we calculate the \gls{SOAP} kernels between \gls{NP} and surface descriptors, yielding a measure of similarity between $0$ and $1$. These kernels are used to rank the surface ``character'' of each \gls{NP} as a histogram counting the number of motifs of each type divided by the total number of surface sites on that \gls{NP}. The results of this analysis are given in \fig{fig:character} (top) for the overall \gls{bcc}/\gls{fcc}/\gls{hcp} character, whereas the bottom panel of the figure shows four examples further resolving the surface character for Fe$_{59}$, Fe$_{65}$, Fe$_{100}$ and Fe$_{200}$.
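A minimal sketch of this similarity analysis is shown below, using the \texttt{dscribe} package as a stand-in descriptor engine (keyword names vary between versions); the kernel follows the normalized dot-product form with exponent $\zeta$ standard in the \gls{SOAP} literature.
\begin{widetext}\begin{lstlisting}
# Minimal sketch: SOAP vectors for NP surface sites compared with
# reference surface motifs via a normalized dot-product kernel.
import numpy as np
from dscribe.descriptors import SOAP

soap = SOAP(species=["Fe"], r_cut=4.0, n_max=8, l_max=8, periodic=False)

def soap_kernel(p, q, zeta=4):
    p = p / np.linalg.norm(p)
    q = q / np.linalg.norm(q)
    return np.dot(p, q) ** zeta        # similarity between 0 and 1

# p_site = soap.create(np_atoms, centers=[site_index])[0]
# best = max(refs, key=lambda ref: soap_kernel(p_site, ref_vectors[ref]))
\end{lstlisting}\end{widetext}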
From the figure we infer that, except for very small \glspl{NP} with just a couple dozen atoms, the highly symmetric Fe$_{59}$ and Fe$_{65}$ corresponding to magic numbers, and the also highly symmetric Fe$_{76}$ and Fe$_{78}$, the motif distribution in these small \glspl{NP} is close to random, oscillating around an equal distribution of \gls{bcc}, \gls{fcc} and \gls{hcp} sites up to $N_\text{atoms}=100$. Beyond that, there are slightly more \gls{fcc} sites and slightly fewer \gls{hcp} sites. Given this degree of disorder, classifying surface sites in small Fe \glspl{NP} in terms of the crystalline surface motifs is not very useful. Instead, we resort to a motif classification scheme that draws the classes directly from the database of structures. We do this using $k$-medoids, a data-clustering technique that separates data points into classes (``clusters'') according to their similarity~\cite{bauckhage_2015}. The most representative data point in each cluster is called a medoid. In our case, a collection of medoids provides a catalogue of representative motifs found in our database~\cite{caro_2018c}. We used the \texttt{fast-kmedoids} library for the $k$-medoids computation~\cite{fast_kmedoids} and \texttt{cl-MDS} to graphically represent the clustering via low-dimensional embedding~\cite{hernandez-leon_2022}. The results are shown in \fig{fig:mds}.
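A minimal sketch of this clustering step is given below, using \texttt{scikit-learn-extra} as a stand-in for the \texttt{fast-kmedoids} library; the kernel-induced distance is a standard choice for \gls{SOAP}-based clustering.
\begin{widetext}\begin{lstlisting}
# Minimal sketch: k-medoids clustering of surface sites from a SOAP
# kernel matrix K via the induced distance d = sqrt(2 - 2k).
import numpy as np
from sklearn_extra.cluster import KMedoids

D = np.sqrt(np.clip(2.0 - 2.0 * K, 0.0, None))   # K: SOAP kernel matrix
km = KMedoids(n_clusters=10, metric="precomputed",
              random_state=0).fit(D)
labels = km.labels_             # cluster index of each surface site
motifs = km.medoid_indices_     # representative site of each cluster
\end{lstlisting}\end{widetext}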
In the figure we classified the surface sites of all the lowest-energy \glspl{NP} for a given number of atoms, from $3$ to $200$ atoms, into 10 data clusters. The size of the clusters decreases with their number, i.e., cluster 1 represents the most common surface motifs and cluster 10 the least common. The color encodes the local \gls{GAP} energy of the surface atoms in the main map, and the cluster index in the figure inset, for better reference. We observe ``buried'' (almost sub-surface) motifs to be the most stable (clusters 6, 7 and 8, $\approx -7.5$~eV/atom). Then, motifs with a six-fold surface coordination (i.e., they appear to be at the center of a hexagon on the surface) but embedded within the \gls{NP} facet are contained in clusters 1 ($\approx -7.2$~eV/atom) and 2 ($\approx -6.9$~eV/atom). Six-fold coordinated motifs that are raised further from the surrounding atoms are higher in energy, and contained within cluster 5 ($\approx -6.5$~eV/atom). Five-fold coordinated motifs are similar in energy to the latter, with the central atom similarly raised, and belong to clusters 3 and 4. Finally, clusters 9 and 10 contain just a handful of motifs found in the extremely small \glspl{NP}.
The atomic \gls{GAP} energy of bulk \gls{bcc} iron is $\approx -8.25$~eV/atom. The energy difference between this bulk value and the \gls{GAP} energy of a less stable motif is directly related to the cohesive energy that could be gained by, e.g., increasing the coordination of the less stable motif. Therefore, we expect the less stable motifs to readily passivate while in contact with a surrounding medium, e.g., by forming strong bonds with and effectively capturing hydrogen atoms. In contrast, some of the motifs (especially those in clusters 6, 7 and 8) may be too stable to interact with adsorbates via covalent interactions. The sites with intermediate atomic \gls{GAP} energies might be the most interesting from the catalytic point of view, e.g., because of their potential to adsorb or desorb reactants as a function of applied external bias. We will explore the precise relationship between the adsorption energy of typical adsorbates and atomic \gls{GAP} energies in subsequent work on Fe \gls{NP} reactivity.
Finally, a gallery of all the \glspl{NP} in our convex hull database is given in \fig{fig:all_nps}, with each surface site colored according to the data cluster to which it belongs (the reference motifs are also shown in the figure). We can easily observe that, except for the highly symmetric \glspl{NP} at very small sizes and magic numbers $59$ and $65$, as well as the stability island between $76$ and $78$ atoms, the distribution of surface motifs is highly irregular. That is, there is no obvious facet formation in these \glspl{NP}. Tests that we carried out for a significantly larger \gls{NP} with a few thousand atoms, generated using the condensation method, also showed a lack of significant facet formation. This contrasts with the very clear facet formation in other metal \glspl{NP}, for instance (111) facets in Pt \glspl{NP}, as we have recently observed using very similar methodology~\cite{kloppenburg_2023}. A possible explanation for this is that structural disorder in iron \glspl{NP} is driven by the interplay between the formation of the stable \gls{fcc} surface facets and the formation of the stable \gls{bcc} bulk motifs. Since the bulk motif will nucleate facet formation with the same crystal structure and vice versa, this may lead to non-trivial dynamics which in turn result in highly disordered \glspl{NP}. Indeed, it has been shown experimentally that nanostructured Fe, e.g., Fe thin films on a substrate, can be grown in the \gls{fcc} structure even at room temperature~\cite{li_1994,keavney_1995}.
\section{Code and data availability}
The GAP is available for free on Zenodo~\cite{zenodo_gap} and can be used with QUIP/GAP, LAMMPS via the QUIP interface, ASE via Quippy, and TurboGAP.
Incidentally, we note an improvement in computational efficiency of our \gls{GAP} over the previous state-of-the-art Dragoni \gls{GAP} by a factor of approx. $4$. This speedup can be attributed mostly to the use of SOAP descriptor compression~\cite{darby_2022,darby_2022b} in our \gls{GAP}, as available from the \texttt{soap\_turbo} descriptor~\cite{soap_turbo}. When used with the TurboGAP \gls{MD} engine~\cite{Caro2019}, better speedups can usually be achieved.
To facilitate further work in this area, we have made the structures of the \glspl{NP} derived in this work available to the community. A full database is available for download on Zenodo~\cite{zenodo_nps}, including the energies computed with the reference potentials and the Finnis-Sinclair \gls{EAM}.
\section{Summary}
In summary, we created a generally applicable \gls{GAP} \gls{ML} potential for the iron system which is stable in the whole configuration space and performs well for a wide range of applications, from bulk to nanostructured iron. While it cannot beat previously existing specialized potentials in every case, it can be used reliably for most problems, including the study of systems where two or more Fe phases coexist. We found it to be the most accurate for \glspl{NP} from among the potentials considered. We have derived a series of low-energy Fe \glspl{NP} and made these structures available for further use. The \gls{GAP} potential itself, which in addition to accuracy also achieves a sizeable speedup over the previous state-of-the-art potential, has also been made freely available. We hope that this will enable and stimulate further work in this field, in particular with regard to catalytic applications of low-dimensional iron structures.
Some limitations remain from the implicit treatment of the magnetic states, especially regarding surface energies and elastic constants. These could be addressed by training a \gls{GAP} including an explicit description of atomic magnetic moments. To this end, the necessary methodology and infrastructure to treat magnetism explicitly within the \gls{GAP} framework needs to be developed.
\begin{acknowledgments}
The authors are grateful to the Academy of Finland for financial support under projects \#321713 (R.~J. \& M.~A.~C.) and \#330488 (M.~A.~C.), and CSC -- IT Center for Science as well as Aalto University's Science-IT Project for computational resources.
\end{acknowledgments}
\input{output}
\clearpage \beginsupplement
\section{Supplemental material}
This supplemental material contains miscellaneous technical details pertaining to our manuscript, such as input parameters for reproducing our calculations and complementary figures.
\subsection{GAP fit command}\label{subsec:gap_fit}
The following command for gap\_fit was used to train our \gls{GAP}:
\begin{widetext}\begin{lstlisting}
gap_fit atoms_filename=train_all_pbe_tagged.xyz core_param_file=core_pot.xml \
core_ip_args={IP Glue} \
gap={ distance_2b Z1=26 Z2=26 cutoff=5.0 n_sparse=40 covariance_type=ard_se \
delta=1. theta_uniform=0.5 sparse_method=uniform add_species=F: \
angle_3b Z_center=26 Z1=26 Z2=26 cutoff=3.0 n_sparse=200 covariance_type=pp \
delta=0.01 theta_uniform=4.0 sparse_method=uniform add_species=F: \
soap_turbo l_max=8 alpha_max={{8}} atom_sigma_r={{0.4}} atom_sigma_t={{0.4}} \
atom_sigma_r_scaling={{0.}} atom_sigma_t_scaling={{0.}} zeta=4 \
rcut_hard=5.0 rcut_soft=4.5 basis="poly3gauss" scaling_mode="polynomial" \
amplitude_scaling={{1.0}} n_species=1 species_Z={26} central_index=1 \
radial_enhancement={{1}} compress_file="compress.dat" \
central_weight={{1.0}} config_type_n_sparse={...} delta=0.1 f0=0.0 \
covariance_type=dot_product sparse_method=cur_points add_species=F } \
default_sigma={0.0002 0.02 0.02 0.02} energy_parameter_name=free_energy \
force_parameter_name=NULL virial_parameter_name=NULL sparse_jitter=1.0e-8 \
e0=-3.406721695 do_copy_at_file=F sparse_separate_file=T gp_file=iron.xml
\end{lstlisting}\end{widetext}
where the number of sparse configurations per config type is given in \tab{tab_s:db_composition}. Trivial compression~\cite{Caro2019} was used with a compression file. \texttt{e0} was chosen as half the energy of a dimer at the cutoff distance of $5$~\AA{}.
\begin{table*}[htb]
\centering
\caption{Number of sparse configurations and sigma parameters for each configuration type used in the training of the \gls{GAP}.}
\begin{tabular}{ l | c | c | c | c }
\hline
config type & config\_type\_n\_sparse & number of configs in DB & $\sigma$ energy (eV/atom) & $\sigma$ virial (eV/atom) \\
\hline
dimer & $44$ & $44$ & $0.001$ & $0.1$ \\
trimer & $232$ & $232$ & $0.001$ & $0.1$ \\
bcc cell & $275$ & $4196$ & $0.001$ & $0.1$ \\
fcc cell & $275$ & $6165$ & $0.001$ & $0.1$ \\
hcp cell & $275$ & $1040$ & $0.001$ & $0.1$ \\
bcc FM elastic & $800$ & $1918$ & $10^{-5}$ & $0.001$ \\
fcc AFM elastic & $322$ & $322$ & $10^{-5}$ & $0.001$ \\
hcp NM elastic & $399$ & $399$ & $10^{-5}$ & $0.001$ \\
bcc rattle & $100$ & $2200$ & $10^{-5}$ & $0.001$ \\
fcc rattle & $100$ & $409$ & $10^{-5}$ & $0.001$ \\
hcp rattle & $100$ & $175$ & $10^{-5}$ & $0.001$ \\
nanoparticles & $500$ & $1664$ & $0.001$ & $0.1$ \\
nucleation cluster & $25$ & $331$ & $0.001$ & $0.1$ \\
bcc FM surface & $127$ & $127$ & $0.001$ & $0.1$ \\
fcc FMLS surface & $216$ & $216$ & $0.001$ & $0.1$ \\
hcp NM surface & $72$ & $72$ & $0.001$ & $0.1$ \\
thin slab & $38$ & $38$ & $0.025$ & $2.5$ \\ %
transition & $25$ & $42$ & $0.001$ & $0.1$ \\
vacancy interstitial & $25$ & $26$ & & \\
vacancy migration & $25$ & $42$ & & \\
melt & $75$ & $292$ & $0.001$ & $0.1$ \\
simple cubic & $50$ & $596$ & $0.025$ & $2.5$ \\
diamond & $75$ & $475$ & $0.025$ & $2.5$ \\
\hline
\end{tabular}
\label{tab_s:db_composition}
\end{table*}
Furthermore, the database has been pre-processed by removing all but $25\%$ of the forces and scaling the rest by $0.05$, but to a minimum of $0.1$~eV/\AA{}. Energy and virial regularization is done on a per-structure basis with the parameters given in \tab{tab_s:db_composition}.
Dimer and trimer structures in the database were limited to energies of $100$ and $250$~eV/atom, respectively, to avoid impacting the accuracy of the potential in more stable regions of phase space. As a result, the smallest inter-atomic distance of any dimer in the database is $0.7$~\AA{}.
\subsection{Stability plot}
\fig{fig_s:Stability} shows an extension of \fig{fig:Stability} to larger atomic volumes. For the Dragoni \gls{GAP}, this reveals a minimum in the \gls{hcp} curve around $22$~\AA{}$^3$/atom which is almost as low in energy as the global \gls{bcc} minimum. This has been fixed in the fracture \gls{GAP} by Zhang \textit{et al.}{}~\cite{Zhang2022}, based on the Dragoni \gls{GAP} (see \fig{fig_s:Stability_zhang22}).
For both potentials the predictions for diamond and \gls{sc} structures are not accurate, but high enough in energy to be unproblematic.
\begin{figure}
\includegraphics[width=\linewidth,keepaspectratio]{fig_s/stability_plot--paper_UNIFIED-LONG.png} \\
\caption{Plots corresponding to Fig.~\ref{fig:Stability} over a larger range of atomic volumes.}
\label{fig_s:Stability}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth,keepaspectratio]{fig_ad/stability_plot--paper--GAP_Zhang_2022-LONG.png} \\
\caption{Stability of different crystal phases for the fracture \gls{GAP}~\cite{Zhang2022}. The \gls{hcp} minimum around $22$~\AA{}$^3$/atom has been fixed.}
\label{fig_s:Stability_zhang22}
\end{figure}
\subsection{Cell parameters}
The cell parameters given in \tab{tab_s:cell_params} were obtained by relaxing the \gls{DFT} minimum structures with the respective potentials. Hence, other local minima in the cell parameter space might exist.
\begin{table*}[htb]
\centering
\caption{Cell parameters for the different magnetic configurations of \gls{fcc}, relaxed with \gls{DFT} and the three potentials. Values missing from the table indicate that no stable structure was found close to the \gls{DFT} cell parameters.}
\begin{tabular}{ l | c c c c }
\hline
& DFT & this work & Dragoni GAP~\cite{Dragoni2018GAP} & Mendelev EAM~\cite{Mendelev2003} \\
\hline
bcc FM $a$ (\AA{}) & $2.825$ & $2.829$ & $2.834$ & $2.855$ \\
\hline
fcc AFM $a$ (\AA{}) & $3.418$ & $3.418$ & $3.416$ & \\
fcc AFM $c$ (\AA{}) & $3.674$ & $3.684$ & $4.042$ & \\
fcc FMLS $a$ (\AA{}) & $3.474$ & $3.484$ & & \\
fcc FMHS $a$ (\AA{}) & $3.634$ & & $3.611$ & $3.658$ \\
\hline
hcp NM $a$ (\AA{}) & $2.455$ & $2.457$ & $2.563$ & $2.621$ \\
hcp NM $c$ (\AA{}) & $3.884$ & $3.886$ & $4.332$ & $4.146$ \\
hcp NM $c \slash a$ & $1.582$ & $1.582$ & $1.690$ & $1.582$ \\
hcp NM $c \slash a$ ($\sqrt{8 \slash 3}$) & $0.969$ & $0.969$ & $1.035$ & $0.969$ \\
\hline
\end{tabular}
\label{tab_s:cell_params}
\end{table*}
\subsection{Surface energies}
Fig.~\ref{fig_s:surface_energies_Mendelev} shows a plot equivalent to \fig{fig:SurfaceEnergies}. Instead of the Dragoni \gls{GAP}, this plot shows the Mendelev \gls{EAM} as reference potential, compared with our \gls{GAP} and the \gls{DFT} values.
\begin{figure}
\centering
\includegraphics[width=0.66\linewidth,keepaspectratio]{fig_s/surface_energies--paper_FINAL_EAM_Mendelev_2003.png}
\caption{Surface energies of a wide range of surface indices for the \gls{EAM} by Mendelev \textit{et al.}{} and our \gls{GAP}, compared with the \gls{DFT} values.}
\label{fig_s:surface_energies_Mendelev}
\end{figure}
\subsection{Thermal expansion}
During equilibration and averaging of the thermal expansion calculations (see \fig{fig:thermal_exp}) the target temperature and pressure were maintained using Nos\'e-Hoover and Parrinello-Rahman dynamics~\cite{Melchionna1993,Melchionna2000,Holian1990,DiTolla1993}, with the barostat bulk modulus set to $100$~GPa and the thermostat and barostat characteristic timescales set to $6$~fs and $10$~fs, respectively.
The short timescales were chosen for computational efficiency, but do not affect the results as compared to longer timescales. This can be seen in \fig{fig_s:thermal_expansion_params}, which compares sets of different characteristic timescales for thermostat and barostat.
The parameter sets (6, 10) and (60, 100) were run for the times stated in the main text. (100, 1000) was equilibrated for $10$~ps ($10$ times longer) and averaged over $2$~ps at each temperature.
While the oscillations in temperature and pressure increase for larger parameter values, the different characteristic timescales clearly do not impact the average values at any temperature.
\begin{figure}
\centering
\includegraphics[width=\linewidth,keepaspectratio]{fig_s/thermal_expansion_compare--paper.png}
\caption{The thermal expansion plot, with two additional curves for our \gls{GAP}, computed with longer characteristic timescales. The curves for the Mendelev \gls{EAM} and Dragoni \gls{GAP} were removed for clarity.}
\label{fig_s:thermal_expansion_params}
\end{figure}
\subsection{Melting temperature}
Fig.~\ref{fig_s:Tmelt_references} shows the results for the Mendelev \gls{EAM}~\cite{Mendelev2003} potential. Each line represents a separate \gls{MD} run at the temperature indicated by the color of the line.
Multiple calculations were run at each temperature with different random seeds to account for random thermal fluctuations ($5$ trajectories in the temperature range from $1750$ to $1775$~K and $1$ at all other temperatures shown).
\begin{figure}
\includegraphics[width=\linewidth,keepaspectratio]{fig_s/steinhardt-Mendelev-1675..1850-paper.png} \\
\caption{Two-phase trajectories to determine the melting temperature for the \gls{EAM} potential by Mendelev \textit{et al.}{}~\cite{Mendelev2003}.}
\label{fig_s:Tmelt_references}
\end{figure}
\subsection{Phase diagram}
To label the phase diagram for our \gls{GAP}, shown in \fig{fig:phase_diagram}, the \gls{XRD} spectra of the lowest-energy structure at each $(p, T)$ point were computed~\cite{ref_debye} and compared to the reference spectra shown in \fig{fig_s:ref_spectra}~(a), representing perfect crystalline structures.
\fig{fig_s:ref_spectra}~(b) shows the spectra for a slice through the phase diagram at $p = 1$~bar. Up to $1500$~K all structures are \gls{bcc}; above that temperature they are liquid.
\begin{figure}
{(a) reference structures} \\
\includegraphics[width=\linewidth,keepaspectratio]{fig_s/XRD_test-REFERENCE_PLOTS-n128-cut100-res0.1.png} \\
{(b) $p = 1$~bar} \\
\includegraphics[width=\linewidth,keepaspectratio]{fig_s/XRD_p_0.png}
\caption{Example \gls{XRD} spectra for (a) the reference structures of different crystalline phases and (b) spectra at $p = 1$~bar over the whole temperature range.}
\label{fig_s:ref_spectra}
\end{figure}
\subsection{Genetic algorithm settings}\label{subsec:GA_settings}
The following settings were used for the \gls{GA}~\cite{Weal2021} to find low-energy \glspl{NP} in Sec.~\ref{sec:nanoparticles}:
\begin{widetext}\begin{lstlisting}
# general settings
pop_size = 100
generations = 2500
no_offspring_per_generation = 20
# offspring creation using the mating and mutation procedures
creating_offspring_mode = "Either_Mating_and_Mutation"
crossover_type = "CAS_weighted"
mutation_types = [['random', 0.5], ['random_50', 0.5]]
chance_of_mutation = 0.35
# epoch criterion
epoch_settings = {'epoch mode': 'same population', 'max repeat': 5}
# new cluster creation
r_ij = 4.9
cell_length = r_ij * (sum([float(noAtoms) for noAtoms in \
list(cluster_makeup.values())]) ** (1.0/3.0))
vacuum_to_add_length = 5.0
# predation scheme
predation_information = {'Predation Operator': 'Energy',
'mode': 'comprehensive',
'minimum_energy_diff': 0.1 / 1000,
'type_of_comprehensive_scheme': 'fitness'}
# fitness scheme
energy_fitness_function = {'function': 'exponential', 'alpha': 3.0}
fitness_information = {'Fitness Operator': 'Energy',
'fitness_function': energy_fitness_function}
\end{lstlisting}\end{widetext}
\subsection{Finnis-Sinclair}
To showcase the capabilities of the Finnis-Sinclair \gls{EAM} for \glspl{NP}, we computed the energies of the \glspl{NP} in the \gls{CELD} using \gls{DFT} and the three potentials used in this work.
\fig{fig_s:Cambridge_FS_vs_GAP} shows the errors of the interatomic potentials compared to \gls{DFT}. Only our \gls{GAP} yields acceptable accuracy over the whole size range of \glspl{NP}.
The Dragoni \gls{GAP} shows good accuracy for the largest \glspl{NP}, but fully diverges towards smaller \glspl{NP}. This is likely connected to the spurious surface energies the Dragoni \gls{GAP} yields for \gls{hcp} surfaces.
The Mendelev and Finnis-Sinclair \glspl{EAM} on the other hand show an almost constant offset from the \gls{DFT} values, only slightly increasing towards smaller \glspl{NP}.
\begin{figure}
\centering
\includegraphics[width=\linewidth,keepaspectratio]{fig_s/cambridge_hull-DFT_energies.xyz--iron.xml--alpha_1-incl_Dragoni-incl_Mendelev.png}
\caption{Energies for the \glspl{NP} from the Cambridge Energy Landscape Database: computed with our \gls{GAP}, the Finnis-Sinclair \gls{EAM}, Dragoni \gls{GAP} and Mendelev \gls{EAM} compared to \gls{DFT}.}
\label{fig_s:Cambridge_FS_vs_GAP}
\end{figure}
\end{document}
\section{Introduction}
Dark nebulae, such as the ones described by \citet{lynds}, have been relatively unexplored in the optical due to the large extinction of these regions. Lynds catalogued these dark nebulae into 6 opacity classes based upon their visual appearance in the National Geographic Palomar Observatory Sky Survey images.\\
Polarimetry is a good way to probe the magnetic field structures \citep{davis} in dark clouds \citep{joshi1985, Jones1989,Myers,andersson}.
Polarimetric observations of the stars in the line of sight through the dark nebulae/molecular cloud would help to probe the magnetic field structure in the cloud and along the line of sight \citep{lazarian}. For example, \citet{eswaraiah} have studied the dark globule LDN 1225 using optical polarimetry and infrared photometry to understand the extinction and magnetic field properties of the region.\\
Infrared polarimetry would be even more appropriate in regions of high extinction. Earlier work \citep[e.g.][]{wilking, Kwon_2016} using infrared polarimetry concluded that it is a valuable tool for measuring the dichroic polarization of the background stars and those embedded within the dense clouds.\\
\citet{CF} formulated methods to estimate the magnetic field strength in molecular clouds. The method is independent of the Zeeman effect, which is difficult to detect in molecular clouds due to the small frequency splitting in the presence of weak magnetic fields. The estimation of the mean field strength is based on the knowledge of the mean gas density, the line-of-sight velocity dispersion, and the position angle dispersion (obtained from polarisation measurements). The position angle dispersion is connected to the dispersion in the orientation of the magnetic field in the plane of the sky. The magnetic field strength estimated using this method is found to be accurate when the polarisation angle fluctuations are small \citep{ostriker}.\\
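In its commonly used form, the resulting estimate of the plane-of-sky magnetic field strength reads
\begin{equation}
B_\mathrm{pos} \simeq Q\,\sqrt{4\pi\rho}\,\frac{\sigma_v}{\sigma_\theta},
\end{equation}
where $\rho$ is the mean gas density, $\sigma_v$ the line-of-sight velocity dispersion, $\sigma_\theta$ the dispersion of the polarisation position angles (in radians), and $Q$ a correction factor of order $0.5$ suggested by numerical simulations \citep{ostriker}.\\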
\cite{Kwon_2016} discuss the observations of GGD 27 in the Lynds 291 molecular cloud complex in the constellation of Sagittarius using the SIRPOL instrument on the IRSF telescope at Sutherland, South Africa. They study the complex structure of the magnetic field and derive the magnetic field strength based on their polarimetric observations.\\
Lynds dark nebula 1340 (hereafter referred to as L1340), a cloud of opacity class 5, located at $\alpha=$ 2h32m, $\delta=$ $73^{\circ} 00'$ ($\ell=$ 130$^{\circ}$.07, $b=$ 11$^{\circ}$.6), has been studied in a series of papers \citep[e.g.][and references therein]{Kun1994,Kun,2016ApJ...822...79K} by M. Kun and collaborators. \citet{Kun1994} have studied L1340 as a star-forming complex in Cassiopeia using optical photometry as well as radio ($^{13}$CO and C$^{18}$O) molecular emission. Using objective prism spectroscopy, they detected 13 H$\alpha$ emission-line stars. \citet{Ganesh2001} also studied this cloud using infrared photometry with the PRLNIC3 instrument on the 1.2~m Mount Abu IR telescope. \citet{2016ApJ...822...79K, Kun} studied the region in the optical \& infrared using the photometric data from the SDSS, WISE, and Spitzer surveys to determine the young stellar populations present in the region. They conclude that it is an isolated molecular cloud of 3700 $M_{\odot}$
with an intermediate star formation efficiency $\sim$ $3\%$.\\
In this paper, we present an infrared polarisation study of the L1340 region using a near-infrared imaging polarimeter (H band). Section \ref{2} briefly describes the observational procedure followed. Section \ref{3} describes the data reduction and analysis steps for deriving infrared polarisation measurements from the observations. Section \ref{4} describes the supplementary data sets used. Section \ref{sec:results} discusses the results obtained. A concluding summary is provided in Section \ref{6}.
\section{Observations}
\label{2}
\subsection{Telescope and instrument}
Physical Research Laboratory (PRL) operates a 1.2 m telescope at its Mount Abu Infrared Observatory (MIRO). MIRO is located at $24^{\circ} 39^{\prime} 9^{\prime\prime}$ (N) latitude, $72^{\circ} 46^{\prime} 47^{\prime\prime}$ (E) longitude at 1680~m altitude. The observatory location favours good conditions for IR observations, with a median seeing $\sim$ $1^{\prime\prime}$ in the visual band.
PRL's 1.2 m f/13 telescope is equipped with a Near-IR Camera \& Spectrograph (NICS) serving as one of the general purpose back-end instruments. The wavelength bands covered are J, H, and Ks. The imaging detector is a Teledyne H1RG array with 1K x 1K pixels of 18 $\mu$m size each. In imaging mode, the instrument has a square FOV of $8^\prime$ x $8^\prime$ with a spatial scale of $0.5^{\prime\prime}$ per pixel. The capabilities of the instrument were enhanced with the addition \citep{aarthy} of a polarimetric module (NICSPol), between the telescope optics \& NICS. The technique of polarimetry is implemented using a 25.0 mm x 25.0 mm wire-grid polarizer (WGP) module at room temperature fixed in a motorised rotator originally used as a field rotator for a small telescope (Pyxis LE field rotator model from Pyxis Instruments). With this arrangement, the wire-grid serves as both a modulator \& an analyzer. The clear circular field of view at the image plane is $\sim$ $3.9^\prime$ in diameter due to vignetting by the WGP. Images are taken at 4 angular positions ($0^{\circ}$, $45^{\circ}$, $90^{\circ}$ \& $135^{\circ}$) of the wire-grid polarizer.
\subsection{Observational procedure}
The observations of L1340, with particular emphasis on RNO 8 and the surrounding area, were carried out using NICSPol. All observations were completed during dark nights at multiple epochs in November 2017, with appropriate polarisation standard stars observed each night to facilitate the standardization procedure.
The relatively small FOV of NICSPol (3.9$^{\prime}$ diameter) meant that multiple pointings were required for complete coverage of the field. We covered the region of L1340 from $\ell=$ 130.08$^{\circ}$ - 130.24$^{\circ}$, $b=$ 11.44$^{\circ}$ - 11.60$^{\circ}$, an effective FOV of 9.6$^\prime$ x 9.6$^\prime$, with 6 pointings (spread over different central coordinates) taken in succession. Images were acquired at the 4 wire-grid position angles. With individual exposure times of 50 sec and three exposures per pointing per position angle, the effective exposure time was 150 sec per position angle per pointing. Standard polarized and unpolarized stars were observed multiple times during the night, in all filter bands, to obtain redundant standard-star coverage over the night.
\section{Data reduction and analysis}
\label{3}
The observed data were reduced and analyzed using standard IRAF routines.
This involved a series of steps in the sequence of basic image reduction, astrometry, photometry \& polarimetry.
We created sky frames for each pointing by median-combining the frames of all the other observed pointings at a given WGP position.
The minor shifts in the 4 object frames for different WGP position angles were corrected using the \textsc{imshift} task.\\
Astrometric solutions were fitted on the combined fits images using the \textsc{Astrometry.net} software\footnote{\url{http://nova.astrometry.net}}.\\
After fitting astrometric solutions, we used the 2MASS All-Sky Point Source Catalog for this region as input to the `sky2xy' command\footnote{part of the wcstools package} and obtained the image coordinates for further photometry. Photometry utilised \textsc{iraf} routines, making use of aperture photometry with \textsc{phot} \citep{stetson} followed by PSF photometry with \textsc{pstselect, psf, \& allstar}.
\begin{table*}
\centering
\caption{Observational results of polarized standard stars. The standard values for \textit{p} $\&$ $\theta$ are taken from the NOT $\&$ UKIRT lists. The observed values are listed as \textit{p}$_{obs}$ $\&$ $\theta_{obs}$. The offset for position angles is given by $\theta_{off}$.}
\begin{tabular}{|c | c |c | c | c | c | c | c |}
\hline
\Tstrut\Bstrut Star & Date & Filter & \textit{p} & $\theta$ & \textit{p}$_{obs}$ & $\theta_{obs}$ & $\theta_{off}$=$\theta_{obs}-\theta$ \\
\Tstrut\Bstrut & & & (percent) & ($^{\circ}$) & (percent) & ($^{\circ}$) & ($^{\circ}$) \\ \hline
\Tstrut\Bstrut HD204827 & 16/11/2017 & J &2.83 $\pm$ 0.07 & 61.1 & 2.83 $\pm$ 0.33 & -3.19 & -64.29 $\pm$ 6.55 \\
\Tstrut\Bstrut & 16/11/2017 & H & & & 1.11 $\pm$ 0.49 & -1.19 & -62.29 $\pm$ 5.47 \\
\Tstrut\Bstrut & 17/11/2017 & J & 2.83 $\pm$ 0.07 & 61.1 & 3.39 $\pm$ 0.49 & -16.72 & -77.82 $\pm$ 16.1 \\
\Tstrut\Bstrut & 17/11/2017 & H & & & 1.84 $\pm$ 0.33 & -4.94 & -66.04 $\pm$ 5.5\\ \hline
\Tstrut\Bstrut HD283809 & 16/11/2017 & J & 3.81 $\pm$ 0.07 & 57 $\pm$ 1 & 3.26 $\pm$ 0.69 & -13.09 & -70.09 $\pm$ 14.5 \\
\Tstrut\Bstrut & 16/11/2017 & H & 2.59 $\pm$ 0.07 & 58 $\pm$ 1 & 1.32 $\pm$ 0.69 & -6.27 & -64.27 $\pm$ 7.2 \\ \hline
\end{tabular}
\label{observations_1}
\end{table*}
\subsection{Polarisation calculation}
The measured magnitudes were converted to flux units for polarisation calculations.
After obtaining the intensity measurements of the stars at each position angle of the WGP, the polarimetric analysis was carried out using the Stokes method by applying the formulae below:
\begin{equation}
I = \frac{F_{0}+F_{45}+F_{90}+F_{135}}{4}
\label{I}
\end{equation}
\begin{equation}
Q = \frac{F_{0}-F_{90}}{2}
\label{Q}
\end{equation}
\begin{equation}
U = \frac{F_{45}-F_{135}}{2}
\label{U}
\end{equation}
\noindent where $F_0$, $F_{45}$, $F_{90}$, and $F_{135}$ are the fluxes at 0$^\circ$, 45$^\circ$, 90$^\circ$, and 135$^\circ$ position angles of the WGP.\\
Using the Stokes vectors, I, Q \& U, we derived the degree of polarisation (P) \& position angle of polarisation ($\theta$), as formulated below:
\begin{equation}
P = \frac{\sqrt{Q^{2}+U^{2}}}{I}
\label{PF}
\end{equation}
\begin{equation}
\theta = \frac{1}{2}\tan^{-1}\left(\frac{U}{Q}\right)
\label{theta}
\end{equation}
The error estimates for P \& $\theta$ were derived using standard error propagation:
\begin{equation}
\sigma_{P} = \frac{1}{I}\sqrt{\frac{Q^{2}{\sigma_{Q}}^{2} + U^{2}{\sigma_{U}}^{2}}{Q^{2}+U^{2}} + \frac{Q^{2}+U^{2}}{I^{2}}{\sigma_{I}}^{2}}
\label{sigmaPF}
\end{equation}
\begin{equation}
\sigma_{\theta} = \frac{1}{2}\sqrt{\frac{Q^{2}{\sigma_{U}}^{2} + U^{2}{\sigma_{Q}}^{2}}{(Q^{2}+U^{2})^2}}~\mathrm{rad}
\label{sigmatheta}
\end{equation}
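For concreteness, the following is a minimal numerical sketch of Eqs. (\ref{I})--(\ref{sigmatheta}); the flux values and their errors are made-up placeholders, not measurements from this work:
\begin{verbatim}
# Stokes-method polarimetry from fluxes at the four WGP angles
# (illustrative values; sF are the corresponding assumed flux errors).
import numpy as np

F = {0: 1000.0, 45: 1080.0, 90: 940.0, 135: 905.0}
sF = {k: 10.0 for k in F}

I = (F[0] + F[45] + F[90] + F[135]) / 4.0
Q = (F[0] - F[90]) / 2.0
U = (F[45] - F[135]) / 2.0

sI = np.sqrt(sum(s**2 for s in sF.values())) / 4.0
sQ = np.sqrt(sF[0]**2 + sF[90]**2) / 2.0
sU = np.sqrt(sF[45]**2 + sF[135]**2) / 2.0

P = np.sqrt(Q**2 + U**2) / I
theta = 0.5 * np.arctan2(U, Q)              # radians

sP = (1.0 / I) * np.sqrt((Q**2 * sQ**2 + U**2 * sU**2) / (Q**2 + U**2)
                         + (Q**2 + U**2) / I**2 * sI**2)
sTheta = 0.5 * np.sqrt((Q**2 * sU**2 + U**2 * sQ**2) / (Q**2 + U**2)**2)

print(f"P = {100*P:.2f} +/- {100*sP:.2f} %")
print(f"theta = {np.degrees(theta):.1f} +/- {np.degrees(sTheta):.1f} deg")
\end{verbatim}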
\subsection{Polarisation calibration}
The data are calibrated with observed polarisation standards, i.e. unpolarised \& polarised standard stars. The unpolarised stars are used to account for any instrumental polarisation present in the system. Since the NICSPol module is the first element after the telescope and before any asymmetric reflection in the instrument, we do not expect any instrumental polarisation. The results of the standard star observations are discussed in the instrument paper of NICSPol \citep{aarthy}. The instrumental polarisation contributes close to 1\% uncertainty in the polarisation fraction. The observation of polarised standard stars allows for the conversion of our polarisation position angles to celestial coordinates (Table \ref{observations_1}). Several standards were observed in the J and H filters on multiple nights of the observing runs. The standards were taken from the lists maintained by UKIRT\footnote{\url{https://about.ifa.hawaii.edu/ukirt/calibration-and-standards/unpolarized-standard-stars/}} \& the Nordic Optical Telescope\footnote{\url{http://www.not.iac.es/instruments/turpol/std/hpstd.html}}.
\subsection{IR polarisation values and large errors}
\label{largeIR}
All the stars retained from our observations and analysis satisfy the criteria $P/\sigma_{P} > 3$ and degree of polarisation $P < 30\%$. We have dropped sources with $P > 30\%$ since they were all faint or were at the edge of the circular field of NICSPol, where the photometry would not be reliable.
For some of the stars, we obtain a degree of polarisation larger than theoretically expected to arise from purely interstellar extinction at IR wavelengths \citep{Jones1989}.
However, we note that at the individual source level, similar numbers are seen in the case of MIMIR \citep{mimir} and SIRPOL \citep{Hatano2013} data sets in other directions. In the case of stars sitting in or towards reflection nebulae, one may see a much higher degree of polarisation \citep{Jones1989}.
The large values may be attributed to a few observational limitations in our case: non-simultaneity of images at 4 angles, lower SNR, and significant background from the warm WGP. Of all these points, it appears that the varying infrared sky background (over the period of the observations of the 4 WGP angles) may be the most significant contributor to the uncertainty in the photometry in the individual frames.
It has been noted in \cite{ss} that with low S/N, the derived polarisation values tend to be overestimated. This would be especially applicable to the fainter stars. A correction has been suggested by them for reducing the overestimated polarisation values, expressed as
\begin{equation}
P_{corr} = \sqrt{P_{o}^{2} - \delta P^{2}}
\end{equation}
where $P_{o}$ is the observed degree of polarisation in our results and $\delta P$ the corresponding measurement uncertainty.
We considered the above correction and found that it did not make a significant change in our results and hence do not include this correction in the final tables and figures presented in this work.
\section{Supplementary Data}
\label{4}
\subsection{\label{sec:2massgaia}2MASS and Gaia}
The 2MASS and Gaia data have been used in this study to understand the variation of polarisation with NIR colour and with distance along the line of sight. The 2MASS All-Sky Catalog of Point Sources \citep{2MASS} has a total of 765 sources detected within the 10$^{\prime}$ search radius for the L1340 cloud. \\\\
The Gaia EDR3 data
\citep{baileredr3,gaia2016b, gaia2020a} have been downloaded from the Vizier\footnote{\url{http://vizier.u-strasbg.fr/}} database. The distance estimates obtained from inverting the parallax have been dealt with in a self-consistent manner in this data release, using a Bayesian inference approach.
A total of 1003 Gaia sources were detected in the 10$^{\prime}$ search radius centered on LDN 1340. Of these, 686 sources are seen matched in both Gaia and 2MASS.\\
The recently released Gaia DR3 data \citep{gaia2022, 2022j} was also utilised for further analysis which included the proper motion data and extinction values A$_G$ for the stars within the L1340 cloud.\\
Figure \ref{distJK} uses this combined catalog to show the colour vs distance diagram for all the 2MASS and Gaia sources. Figure \ref{allpol} shows the H band image (2MASS) of the field covered by our NICSPol observations in the equatorial coordinate system. The 2MASS image of the L1340 region was used after mosaicing individual 2MASS tiles using \textsc{montage} tool. Our coverage of the field with NICSPol is incomplete due to the circular fields of the individual NICSPol tiles as discussed in the previous section. In both Figures (\ref{distJK}, \ref{allpol}), the individual stars measured for polarisation are shown by separate symbols and will be discussed in section \ref{sec:results}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{distedr3vsJK_new.pdf}
\caption{2MASS $J-K_s$ color vs Gaia distance. Blue-filled squares show the stars for which polarisation is measured in the H band. Stars of the RNO 8 area are shown as red open squares. A vertical dashed line is marked at 825 pc, denoting the distance to the cloud as per \citet{Kun}. }
\label{distJK}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Hpol_wise4_annotated.jpeg}
\caption{H band polarisation vectors (thin solid lines) plotted on the H filter 2MASS image of L1340. The thick black vectors are for stars corresponding to the RNO 8 clump, a small nebulous cluster within the L1340 cloud. We emphasize the highly ordered pattern of polarisation vectors for the stars within this clump. The solid line at the bottom left corner indicates a 10\% degree of polarisation value. The contours in red are from the WISE (band 4) image of the same area.}
\label{allpol}
\end{figure}
\subsection{WISE data}
\begin{figure}
\includegraphics[width=\columnwidth]{rno8_wise_galactic_purple_withcore.jpeg}
\caption{H band polarisation vectors overlaid for the RNO 8 stars on the RGB colour composite of the WISE 4,3,2 bands. The black circle marks the core of RNO 8. Further discussion is in the text.
}
\label{allpolwise}
\end{figure}
The WISE mission \citep{2010AJ_Wright_WISE} maps the interstellar dust over the whole Galaxy. This is from the presence of PAH emission features in the W1 and W3 filters, 3.4 $\mu$m and 12 $\mu$m respectively. The other two filters, W2 and W4 (4.6 $\mu$m and 22 $\mu$m respectively) measure the continuum emission from the grains (small and large). We note that the 22$\mu$m filter is expected to see both `stochastic emission from small grains and the Wien tail of thermal emission from large grains' \citep{2010AJ_Wright_WISE}.\\
The WISE band images for the L1340 cloud at 3.4 $\mu$m, 4.6 $\mu$m, 12 $\mu$m, and 22 $\mu$m were imported into {\sc{ds9}}
\footnote{Data downloaded from \url{https://irsa.ipac.caltech.edu/Missions/wise.html}} in RGB mode. The colour scaling and levels were chosen to bring out the dust features. This nebulosity is very nicely evident around the RNO 8 area (marked by a black circle in Fig. \ref{allpolwise}) in the RGB colour-composite (with W2, W3 and W4 represented as blue, green, and red colours). The 22 $\mu$m band was also used to mark contours over the cloud (Fig \ref{allpol}), revealing the clumpy structure of the dust towards and in the L1340 region.
\section{Results \& Discussion}
\label{sec:results}
In the region covered by our NICSPol observations, we have 84 stars in the H band. We cross-identified the polarisation measurements of these stars with the 2MASS, Gaia EDR3 and DR3 surveys to obtain distance information and thereby characterise them. In total, we have 76 stars in the H band with 2MASS and Gaia counterparts. The cross-match of the different surveys was done in \textsc{TOPCAT} with the Sky algorithm, using a search radius of $2^{\prime\prime}$.
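The cross-match can also be reproduced outside \textsc{TOPCAT}; a minimal sketch with \textsc{astropy}, in which the coordinate arrays are placeholders, is:
\begin{verbatim}
# Nearest-neighbour sky cross-match with a 2 arcsec tolerance,
# analogous to the TOPCAT Sky match (catalogue arrays are placeholders).
import astropy.units as u
from astropy.coordinates import SkyCoord

pol = SkyCoord(ra=[37.67549, 37.65248] * u.deg,
               dec=[73.00384, 73.00647] * u.deg)       # NICSPol detections
gaia = SkyCoord(ra=[37.67551, 37.65240, 37.60990] * u.deg,
                dec=[73.00382, 73.00649, 72.98741] * u.deg)

idx, sep2d, _ = pol.match_to_catalog_sky(gaia)
good = sep2d < 2 * u.arcsec                            # keep matches within 2"
print(idx[good], sep2d[good].to(u.arcsec))
\end{verbatim}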
\begin{table*}
\setlength\tabcolsep{4pt}
\caption{
NICSPol linear polarization measurements (p$_{H}$) \& polarization position angle ($\theta$) of the RNO 8 stars. The H magnitudes and distances to individual stars from 2MASS \& Gaia survey data are also included. The central star of RNO 8 is shown in bold in this table and discussed further in the text.
}
\centering
\begin{tabular}{| c | c | r | r | r | r | c | c |}
\hline
\multicolumn{1}{|c|}{RA} &
\multicolumn{1}{c|}{DEC} &
\multicolumn{1}{|c|}{p$_{H}$} &
\multicolumn{1}{c|}{$\theta_{H}$} &
\multicolumn{1}{c|}{Hmag} &
\multicolumn{1}{c|}{rgeo} &
\multicolumn{1}{c|}{Proper} &
\multicolumn{1}{|c|}{SIMBAD}\\
\multicolumn{1}{|c|}{(deg)} &
\multicolumn{1}{c|}{(deg)} &
\multicolumn{1}{c|}{(percent)} &
\multicolumn{1}{c|}{($^{\circ}$)} &
\multicolumn{1}{c|}{(mag)} &
\multicolumn{1}{c|}{(pc)} &
\multicolumn{1}{c|}{motion} &
\multicolumn{1}{|c|}{identification}\\
\hline
37.67549 & 73.00384 & 10 $\pm$ 1 & 133$\pm$5 & 12.94 $\pm$ 0.03 & 893$^{935}_{848}$ & 2.4 & YSO \\[0.2cm]
37.65248 & 73.00647 & 4 $\pm$ 1 & 127$\pm$7 & 13.14 $\pm$ 0.03 & 808$^{848}_{765}$ & 2.3 & YSO\\[0.2cm]
37.65338 & 72.99905 & 7 $\pm$ 1 &122$\pm$3 & 12.51 $\pm$ 0.04 & 876$^{906}_{850}$ & 2.2 & T Tauri\\[0.2cm]
37.60990 & 72.98741 & 5 $\pm$ 1 &127$\pm$7 & 13.71 $\pm$ 0.05 & 1101$^{1450}_{884}$ & 2.1 & -\\[0.2cm]
37.65020 & 73.00396 & 13 $\pm$ 3 &134$\pm$5 & 14.76 $\pm$ 0.07 & 1170$^{1598}_{948}$ & 2.3 & -\\[0.2cm]
37.63104 & 73.00655 & 13 $\pm$ 2 &119$\pm$5 & 14.90 $\pm$ 0.06 & 1689$^{2246}_{1134}$ & 3.0 & YSO\\[0.2cm]
37.64659 & 72.99383 & 9 $\pm$ 1 &131$\pm$4 & 13.50 $\pm$ 0.03 & 842$^{918}_{769}$ & 2.3 & -\\[0.2cm]
\textbf{37.63532} & \textbf{72.98826} & \textbf{16 $\pm$ 1} & \textbf{94$\pm$1} & \textbf{12.23 $\pm$ 0.03} & \textbf{5458$^{8269}_{3812}$} & 2.3 & \textbf{T Tauri}\\[0.1cm]
\hline
\end{tabular}
\label{RNO8_table}
\end{table*}
\subsection{Color \& distance information}
Figure \ref{distJK} shows the distribution of the 2MASS colours and the corresponding Gaia distances. The distance to the cloud is quoted as 825$^{+110}_{-80}$ pc from the photometric and spectroscopic study of L1340 by \citet{Kun}. We show this by a dashed line in the figure.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{JKHK3.pdf}
\caption{Color-color diagram using 2MASS colors for the stars in L1340. The stars detected in the H band with NICSPol are marked with blue squares over the smoothScatter density plot of the 2MASS stars. The red open squares mark the stars of RNO8 cluster, and the green open square is for the central star of RNO8.}
\label{ccdia}
\end{figure}
We see that this estimate is quite robust, since there is a sharp increase in the number of sources, along with a reddening in colour, beyond this distance.
The 2MASS two-colour diagram (CCD) is shown in Figure \ref{ccdia}. The stars observed by NICSPol are shown as blue squares over a `smoothScatter' distribution of the 2MASS detections towards the L1340 cloud. The RNO 8 stars are overlaid as large red squares. From the colour-colour diagram it is evident that these stars are redder than the other stars in the CCD.
\subsection{NIR Polarimetry}
The position angles of the stars in the RNO 8 group show a preferential direction along the Galactic plane, visible from their alignment nearly parallel to the grid lines in Figure \ref{allpolwise} (see also Table \ref{RNO8_table}).
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{magvsPF.pdf}
\caption {Polarisation in the H band as a function of the corresponding 2MASS H magnitudes.}
\label{magPF}
\end{figure}
The polarisation values are plotted as a function of the 2MASS H band photometry in Figure \ref{magPF}.
The polarisation values and the corresponding uncertainties for the fainter stars have been discussed in section \ref{largeIR}.
To investigate the orientation of the magnetic field in the direction of the L1340 molecular cloud, we created linear polarization maps for the H wavelength band. The map in Figure \ref{allpol} shows a vector plot overlaying the polarisation vectors (degree of polarisation \& position angle) for the H band on the 2MASS H filter image. The overall orientation of most of the polarisation vectors is nearly aligned with the Galactic plane. There are large departures in some of the stars away from the nebulosity marked by the 22 $\mu$m WISE band 4 contours.\\
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{distedr3vsPFPA_n.pdf}
\caption{Distance (in pc from Gaia) vs polarisation percentage (lower panel) and polarisation position angle (upper panel) for H band measurements. Stars in the vicinity of RNO 8 are shown with larger symbols. }
\label{distPF}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{JKvsPFPA_H_n.pdf}
\caption{Polarisation in the $H$ band (percentage in lower panel and angle in upper panel) vs $J-K_s$ colour (from 2MASS). The central star of RNO 8 is shown with a green open circle.}
\label{JKpfpa}
\end{figure}
The distribution of the position angle with distance for the H band is plotted in Figure \ref{distPF}. Two dominant angles (PA $=$ 130$^\circ$ and 70$^\circ$) are seen in the H band histogram (Fig. \ref{angleHist}). These angles are marked in the position angle panels of Figures \ref{distPF} and \ref{JKpfpa} as horizontal lines. For the stars located at the distance of the L1340 cloud, there is a large scatter in the polarization position angle covering the extent of the two dominant angles.\\
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{Hband_thetahist.pdf}
\caption{Position angle distribution for the stars measured in the H band. A Gaussian fit is overlaid on the panel. Two dominant angles can be seen in the plot.}
\label{angleHist}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width = 0.45\textwidth]{COplot.png}
\caption{Line-of-sight velocity profile from the CO data of the \citet{dame} composite survey.}
\label{COspec}
\end{figure}
The plot in Figure \ref{COspec} shows the line-of-sight velocity (V$_{lsr}$) profile for the Lynds cloud. The distinctive peak is at a velocity of $-14.92$ km/s, and a comparatively smaller peak is at $-1.944$ km/s. A similar velocity has been quoted for the L1340 cloud in \cite{Kun1994}, using CO spectra taken in the field containing LDN 1340 with the Nagoya University 4 m telescope, from which a characteristic peak at a radial velocity of $-14.5$ km/s was estimated. Using the revised kinematic distance code of \cite{reid}\footnote{\url{http://bessel.vlbi-astrometry.org/revised_kd_2014}}, we could estimate the distances corresponding to these V$_{lsr}$. The distance corresponding to the taller peak comes out to be 0.82$^{+0.51}_{-0.49}$ kpc. Unfortunately, the distance to the smaller peak could not be estimated from its V$_{lsr}$ using the same code.
\subsection{The RNO 8 cloud core}
Figures \ref{allpol} \& \ref{allpolwise} show the RNO 8 stellar group between $\alpha$ = 37.60$^{\circ}$ - 37.70$^{\circ}$, $\delta$ = 72.98$^{\circ}$ - 73.00$^{\circ}$ ($l$ = 130.12$^{\circ}$ $\&$ $b$ = 11.51$^{\circ}$). These are red and nebulous objects with associated groups of fainter stars \citep{cohen,RNO1}. From the proper motion plot in Fig. \ref{pmplot}, it is clear that the stars in this clump share a similar proper motion and hence are considered member stars of the RNO 8 clump for the analysis in this paper.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{propermotionplot.pdf}
\caption{Proper motions of the stars belonging to the L1340 cloud. Red crosses mark all stars with polarisation quoted in the H band; blue crosses mark the RNO 8 clump stars. }
\label{pmplot}
\end{figure}
The star identified by \citet{Kun} at the core of the RNO 8 nebulosity ($\alpha$ = 37.635 $\&$ $\delta$ = 72.988) is at a distance of 5~kpc as per the corresponding value from the Gaia data. The RNO 8 region has been well studied photometrically \& spectroscopically by Kun et al., and one major result is that the central star is a candidate embedded intermediate-mass young star of the optical nebulosity RNO 8. They suggest it to be a young star of 2 - 2.5 M$_{\odot}$ \citep{Kun}. The `renormalised unit weight error' ($ruwe$) parameter has a threshold value of 1.4 for reliable astrometry; the central star has $ruwe$ $>$ 1.4, which makes the Gaia distance unreliable for this star.\\
Table \ref{RNO8_table} lists the polarisation values for the stars of the RNO 8 group, along with the corresponding 2MASS H magnitudes and the distance and proper motion information from Gaia.
\subsubsection{Polarisation distribution with distance \& colour:}
\label{sec:pf}
An indirect estimate of the distance to the cloud can be devised from the polarimetric data. A uniform distribution of material along the line of sight will exhibit an increasing trend in extinction, following the relation $<A_{v}/{L}>$ $\approx$ 1.8 mag kpc$^{-1}$ \citep{whittet}. The presence of a cloud enhances the number density of material along the line of sight through which the background starlight passes. The starlight reaching us will therefore show an increment in the degree of polarisation at the distance of the cloud, because of the enhanced dust column \citep{eswaraiah}.
Looking at the lower panel of Figure \ref{distPF}, the stars of RNO 8 (marked with red open squares) for which polarisation is measured seem to show an increase in the degree of polarisation with distance into the cloud, assuming that the cloud starts just before the dotted line (at 825~pc) in the figure. However, given the small-number statistics, we cannot conclude that this increase in polarisation degree is significant in our case. These stars also have consistent angles, with very small dispersion, as seen in the upper panel of Figure \ref{distPF}. We interpret this to mean that within the RNO 8 cloud core the magnetic field orientation remains uniform.
In Fig \ref{JKpfpa}, we have plotted the 2MASS $J-K_s$ colour vs degree of polarisation in the lower panel. The upper panel shows the variation of position angle. We see that the stars in RNO 8 (marked by red open squares) do not show significant variation in the angle of polarisation over a large range in colour. These stars are identified as YSO and T Tauri (SIMBAD identification - see Table \ref{RNO8_table}) in the work by \citet{Kun}.
Since we do not see any variation in position angle for these stars (apart from the star in the direction of the centre of the core), it appears that they do not affect the overall magnetic field orientation despite being embedded in the dust.\\
\subsubsection{Estimation of magnetic field strength:}
The plane-of-sky magnetic field can be calculated by combining polarimetric observations with other parameters known from photometric data. The magnetic field strength as formulated by Chandrasekhar and Fermi (CF model) \citep{CF} depends on the mean local density of the cloud $\rho$, the line-of-sight velocity dispersion $\delta v$, and the dispersion in the position angle of polarisation $\delta\theta$. The equation can be expressed as,
\begin{equation}
B_{p}=\mathcal{Q} \sqrt{4 \pi \rho} \frac{\delta v_{\operatorname{los}}}{\delta \theta}
\label{mag}
\end{equation}
\citet{ostriker} performed numerical simulations for the estimation of magnetic field strength and compared the results with the CF model, suggesting a value $\mathcal{Q}$ $\sim$ 0.5 for angle dispersions $<$ 25$^{\circ}$.
We use the values of $n(H_{2})$ and velocity dispersion based upon the $C^{18}O$ observations from the earlier work of \citet{Kun1994} on this cloud. They quote the velocity dispersion and mean cloud density for three cores A, B, and C within L1340. Based on the RNO 8 coordinates, we use the numbers for core B of the cloud, which contains the RNO 8 cluster: $n(H_{2})$ = 830 $cm^{-3}$ and $\Delta v$ = 0.9 $kms^{-1}$. Using these values with our position angle dispersion of $5.17^{\circ}$, the $\Vec{B}$ strength in the plane of the sky is calculated to be $\approx$ 42 $\mu G$ for the RNO 8 core. This value is in accordance with the magnetic field strengths of $\sim$ 20 - 200 $\mu G$ found in other molecular clouds \citep{Kwon_2016}.
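The following sketch reproduces the order of magnitude of this estimate from Eq. (\ref{mag}). The mean molecular weight and the conversion of $\Delta v$ to a velocity dispersion are our own assumptions for illustration, not values quoted above:
\begin{verbatim}
# Chandrasekhar-Fermi estimate, Eq. (mag): B = Q*sqrt(4*pi*rho)*dv/dtheta.
# Assumptions (ours): mean molecular weight mu = 2.33 per particle, and the
# quoted Delta v = 0.9 km/s treated as a FWHM (sigma_v = Delta v / 2.355).
import numpy as np

m_H = 1.6726e-24                 # g
n_H2 = 830.0                     # cm^-3 (core B)
mu = 2.33                        # assumed
rho = n_H2 * mu * m_H            # g cm^-3

sigma_v = 0.9e5 / 2.355          # cm/s (assumed FWHM -> sigma conversion)
Qfac = 0.5                       # correction factor suggested by Ostriker

for dtheta_deg in (5.17, 12.0):  # without / with the central star
    dtheta = np.radians(dtheta_deg)
    B = Qfac * np.sqrt(4 * np.pi * rho) * sigma_v / dtheta   # Gauss
    print(f"dtheta = {dtheta_deg:5.2f} deg  ->  B ~ {B*1e6:5.1f} microGauss")
# prints ~43 and ~18 microGauss, consistent with the quoted 42 and 18 muG
\end{verbatim}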
The Gaia distance to the star in the direction of the centre of the RNO 8 core is highly uncertain. However, since it is identified as a T Tauri star (shown in bold font in Table \ref{RNO8_table}), we may consider it to be part of the cloud complex. Including the polarisation measurement for this star increases the dispersion in the polarisation position angle to 12$^{\circ}$. From equation \ref{mag}, the calculated $\Vec{B}$ strength is then found to be 18 $\mu G$.
In such a case, the mean magnetic field strength decreases as the dispersion of the magnetic field vector direction increases, due to varying cloud structure (or multiple clouds) along the line of sight giving rise to different polarisation position angles. The consequences of dispersion in the magnetic field direction are discussed in \cite{ostriker}.
\section{Conclusions}
\label{6}
Near-infrared polarisation in the H band over the L1340 molecular cloud has been presented for a FOV of 9.6$^{\prime} \times$ 9.6$^{\prime}$, using PRL's 1.2 m telescope with the NICSPol instrument. We highlight a few important results, for the RNO 8 region in particular:
\begin{itemize}
\item[1.] NIR polarimetry was performed on a region within L1340, and measurements for a total of 76 stars are reported in the H band along with their 2MASS photometry and Gaia distance and other astrophysical parameters where available.
\item[2.] The distribution of the position angle of polarisation is very consistent and is almost aligned with the Galactic magnetic field, which implies that the polarisation arises from dichroic extinction by dust grains aligned with the magnetic field permeating the Galaxy.
\item[3.] The magnetic field strength for the RNO 8 region within the L1340 cloud was estimated using the Chandrasekhar-Fermi method. A 42 $\mu$G magnetic field was estimated, which is similar to the strength present in other such areas.
\item[4.] Considering the star in the direction of the centre as part of the RNO 8 core reduces the magnetic field strength to $\sim$ 18 $\mu$G, due to the increase in position angle dispersion. \\
\end{itemize}
One important conclusion from the work presented here is that single-beam polarimetry is not very effective at measuring the relatively low interstellar polarisation. For such cases, particularly with the rapidly varying IR sky background, both the ordinary and extraordinary components should be measured simultaneously; this would make the polarisation measurement independent of the sky signal.
\section*{Acknowledgements}
We acknowledge the support provided by the observatory and technical staff at MIRO (Mt. Abu Infrared Observatory), PRL during observations. We are grateful to the night operators present at MIRO for their assistance during the observation run. We thank our colleagues in the Astronomy \& Astrophysics Division, PRL, for useful discussions and comments. We acknowledge the anonymous referee for all the valuable points which improved the quality of the paper.\\
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} \footnote{\url{https://www.cosmos.esa.int/gaia}}, processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC)\footnote{\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}}. Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.\\
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.\\
This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France.\\
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration
|
{
"arxiv_id": "2302.13769",
"language": "en",
"timestamp": "2023-02-28T02:29:48",
"url": "https://arxiv.org/abs/2302.13769",
"yymm": "2302"
} | \section{Introduction}
According to the general principles of (local) quantum field theory (QFT) \cite{Bogolyubov:1959bfo},
observables in a spacelike region (i.e. in Euclidean space) can have singularities only for negative values of their argument $Q^2$.
However, for large $Q^2$ values, these observables are usually represented as power expansions in the running coupling constant (couplant)
$\alpha_s(Q^2)$,
which has a ghostly singularity, the so-called Landau pole, at $Q^2 = \Lambda^2$. Therefore, to restore the analyticity of the considered expansions,
this pole in the strong couplant should be removed.
The strong couplant $\alpha_s(Q^2)$ obeys the renormalization group equation
\be
L\equiv \ln\frac{Q^2}{\Lambda^2} = \int^{\overline{a}_s(Q^2)} \, \frac{da}{\beta(a)},~~ \overline{a}_s(Q^2)=\frac{\alpha_s(Q^2)}{4\pi}\,
\label{RenGro}
\ee
with some boundary condition and the QCD $\beta$-function:
\be
\beta(\ar) ~=~ -\sum_{i=0} \beta_i \overline{a}_s^{i+2}
=-\beta_0 \overline{a}_s^{2} \, \Bigl(1+\sum_{i=1} b_i \ar^i \Bigr),~~ b_i=\frac{\beta_i}{\beta_0^{i+1}}\,, ~~
\ar(Q^2)=
\beta_0\,\overline{a}_s(Q^2)\,,
\label{beta}
\ee
where
\be
\beta_0=11-\frac{2f}{3},~~\beta_1=102-\frac{38f}{3},~~\beta_2=\frac{2857}{2}-\frac{5033f}{18}+\frac{325f^2}{54},~~
\label{beta_i}
\ee
for $f$ active quark flavors. At present the first five coefficients, i.e. $\beta_i$ with $i\leq 4$, are exactly known \cite{Baikov:2016tgj}.
In our present consideration we will need only $0 \leq i\leq 2$.
Note that in Eq. (\ref{beta})
we have added the first coefficient of the QCD $\beta$-function to the $\ar$ definition, as is usually done in the case of
analytic couplants (see, e.g., Refs. \cite{ShS}-\cite{Bakulev:2010gm}).
So, already at leading order (LO), where $\ar(Q^2)=\ar^{(1)}(Q^2)$, we have from Eq. (\ref{RenGro})
\be
\ar^{(1)}(Q^2) = \frac{1}{L}\, ,
\label{asLO}
\ee
i.e. $\ar^{(1)}(Q^2)$ does contain a pole at $Q^2=\Lambda^2$.
In a timelike region ($q^2 >0$) (i.e., in Minkowski space), the definition of a running couplant turns out to be quite difficult.
The reason for the problem is that, strictly speaking, the expansion of perturbation theory (PT) in QCD cannot be defined directly in
this region.
Since the early days of QCD, much effort has been made to determine the appropriate Minkowski coupling parameter needed to describe
important timelike processes such as $e^+e^-$-annihilation into hadrons, quarkonia and $\tau$-lepton decays into hadrons. Most of the
attempts (see, for example, \cite{Pennington:1981cw}) have been based on the analytical continuation of strong couplant from the deep Euclidean region,
where perturbative QCD calculations can be performed, to the Minkowski space, where physical measurements are made.
In other developments, analytical expressions for a LO couplant were obtained \cite{Krasnikov:1982fx} directly in Minkowski space,
using an integral transformation from the spacelike to the timelike mode from the Adler D-function.
In Refs. \cite{ShS,MSS} an efficient approach was developed to eliminate the Landau singularity without introducing extraneous infrared regulators,
such as the gluon effective mass
(see, e.g., \cite{GayDucati:1993fn}).
This method is based on a dispersion relation that relates the new analytic couplant $A_{\rm MA}(Q^2)$ to the spectral function $r_{\rm pt}(s)$
obtained in the PT framework.
In LO this gives
\be
A^{(1)}_{\rm MA}(Q^2)
= \frac{1}{\pi} \int_{0}^{+\infty} \,
\frac{ d s }{(s + t)} \, r^{(1)}_{\rm pt}(s),~~ r^{(1)}_{\rm pt}(s)= {\rm Im} \; a_s^{(1)}(-s - i \epsilon) \,.
\label{disp_MA_LO}
\ee
The \cite{ShS,MSS} approach follows the corresponding results \cite{Bogolyubov:1959vck} obtained in the framework of Quantum Electrodynamics.
Similarly, the analytical images of a running coupling in the Minkowski space are defined using another linear operation:
\be
U^{(1)}_{\rm MA}(s)
= \frac{1}{\pi} \int_{s}^{+\infty} \,
\frac{ d\sigma }{\sigma} \, r^{(1)}_{\rm pt}( \sigma) \, .
\label{disp_MAt_LO}
\ee
So, we repeat once again: the spectral function in the dispersion relations (\ref{disp_MA_LO}) and (\ref{disp_MAt_LO}) is taken directly from PT,
and the analytical couplants $A_{\rm MA}(Q^2)$ and $U_{\rm MA}(Q^2)$ are restored using the corresponding dispersion relations. This approach is usually
called the {\it Minimal Approach} (MA) (see, e.g., \cite{Cvetic:2008bn})
or the {\it Analytical Perturbation Theory} (APT) \cite{ShS,MSS}.
\footnote{An overview of other similar approaches can be found in \cite{Bakulev:2008td}, including approaches \cite{Nesterenko:2003xb} that are close to APT.}
Thus, MA QCD is a very convenient approach that combines the analytical properties of QFT quantities and the results
obtained in the framework of perturbative QCD, leading to the appearance of the MA couplants $A_{\rm MA}(Q^2)$ and $U_{\rm MA}(s)$, which are close to the usual
strong couplant $a_s(Q^2)$ in the limit of large $Q^2$ values and completely different from $a_s(Q^2)$ for small $Q^2$ values,
i.e. for $Q^2 \sim \Lambda^2$.
A further APT development is the so-called fractional APT (FAPT) \cite{BMS1,Bakulev:2006ex,Bakulev:2010gm}, which extends the construction principles
described above to PT series, starting from non-integer powers of the couplant. In the framework of QFT, such series arise for quantities that have
non-zero anomalous dimensions.
Compact expressions for quantities within the FAPT framework were obtained mainly in LO, but this approach was also used in higher orders,
mainly by re-expanding the corresponding couplants in powers of the LO couplant, as well as using some approximations.
In this short paper, we give an overview of the main properties of
MA couplants in the FAPT framework, obtained in Refs. \cite{Kotikov:2022sos,KoZe23}
using the so-called $1/L$-expansion. Note that for an ordinary couplant, this expansion is applicable only for large $Q^2$ values, i.e. for $Q^2 \gg \Lambda^2$.
However, as shown in \cite{Kotikov:2022sos,KoZe23}, the situation is quite different in the case of analytic couplants,
and this $1/L$-expansion is applicable for all values of the argument. This is due to the fact that the non-leading expansion corrections vanish not only
at $Q^2 \to \infty$, but also at $Q^2 \to 0$,
\footnote{The absence of high-order corrections for $Q^2 \to 0$ was also discussed in Refs. \cite{ShS,MSS}.}
which leads only to nonzero (small) corrections in the region $Q^2 \sim \Lambda^2$.
Below we consider the representations for the MA couplants and their (fractional) derivatives obtained in \cite{Kotikov:2022sos,KoZe23} and valid in
principle in any PT order. However, in order to avoid cumbersome formulas, but at the same time to show the main features of the approach
obtained in \cite{Kotikov:2022sos,KoZe23}, we confine ourselves to considering only the first three PT orders.
\section{Strong couplant}
\label{strong}
As shown in the Introduction, the strong couplant $a_s(Q^2)$ obeys the renormalized group equation (\ref{RenGro}).
When $Q^2 \gg \Lambda^2$, Eq. (\ref{RenGro}) can be solved by iterations in the form of a $1/L$-expansion
(we give the first three terms of the expansion in accordance with the reasoning in the introduction),
which can be represented in the following compact form
\be
a^{(1)}_{s,0}(Q^2) = \frac{1}{L_0},~~
a^{(i+1)}_{s,i}(Q^2) =
a^{(1)}_{s,i}(Q^2) + \sum_{m=2}^i \, \delta^{(m)}_{s,i}(Q^2)
\,,~~(i=0,1,2,...)\,,
\label{as}
\ee
where
\be
L_k=\ln t_k,~~t_k=\frac{1}{z_k}=\frac{Q^2}{\Lambda_k^2}\,.
\label{L}
\ee
The corrections $\delta^{(m)}_{s,k}(Q^2)$ are represented as follows
\be
\delta^{(2)}_{s,k}(Q^2) = - \frac{b_1\ln L_k}{L_k^2} ,~~
\delta^{(3)}_{s,k}(Q^2) = \frac{1}{L_k^3} \, \Bigl[b_1^2(\ln^2 L_k-\ln L_k-1)+b_2\Bigr]\,.
\label{ds}
\ee
As shown in Eqs. (\ref{as}) and (\ref{ds}), in any PT
order, the couplant $\ar(Q^2)$ contains its dimensional transmutation parameter
$\Lambda$, which is related to the normalization of $\alpha_s(M_Z^2)$,
where $\alpha_s(M_Z)=0.1176$ in PDG20 \cite{PDG20}.\\
{\bf $f$-dependence of the couplant $\ar(Q^2)$.}~~
The coefficients $\beta_i$ (\ref{beta_i}) depend on the number $f$ of active quarks
that change the couplant $\ar(Q^2)$ at thresholds $Q^2_f \sim m^2_f$, where some the additional quark comes enters the game $Q^2 > Q^2_f$.
Here $m_f$ is the $\overline{MS}$ mass of the $f$ quark, e.g.,
$m_b=4.18^{+0.03}_{-0.02}$ GeV and $m_c=1.27 \pm 0.02$ GeV from PDG20 \cite{PDG20}.
\footnote{Strictly speaking, the quark masses in the $\overline{MS}$ scheme depend on $Q^2$ and $m_f=m_f(Q^2=m_f^2)$. The $Q^2$-dependence is rather slow and
will not be discussed in this paper.}
Thus, the couplant $a_s$ depends on $f$, and this $f$-dependence can be taken into account in $\Lambda$, i.e. it is $\Lambda^f$ that contributes to the above
Eqs. (\ref{RenGro}) and (\ref{as}).
Relationships between $\Lambda_{i}^{f}$ and $\Lambda_{i}^{f-1}$, i.e.
the so-called matching conditions
between $a_s(f,Q_f^2)$ and $a_s(f-1,Q_f^2)$
are known up to the four-loop order \cite{Chetyrkin:2005ia} in the $\overline{MS}$ scheme and are usually used
for $Q_f^2=m_f^2$, where these relations have the simplest form (see e.g. \cite{Enterria} for a recent review).
Here we will not consider the $f$-dependence of $\Lambda_{i}^{f}$ and $a_s(f,M_Z^2)$, since we mainly consider the range of small $Q^2$ values and therefore use
$\Lambda_{i}^{f=3}$.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.58\textwidth]{ashow123q}
\caption{\label{fig:as1352}
The results for $a^{(i+1)}_{s,i}(Q^2)$ with $i=0,1,2$.
}
\end{figure}
In Fig. \ref{fig:as1352} one can see that the strong couplants $a^{(i+1)}_{s,i}(Q^2)$ become singular at $Q^2=\Lambda_i^2$.
The values of $\Lambda_0$ and $\Lambda_i$ $(i\geq 1)$ are very different.
We use results taken from a recent Ref. \cite{Chen:2021tjz}, where $\Lambda_i^{f=3}$
$(i=0,1,2)$ were obtained in the following form
\be
\Lambda_0^{f=3}=142~~ \mbox{MeV},~~\Lambda_1^{f=3}=367~~ \mbox{MeV},~~\Lambda_2^{f=3}=324~~ \mbox{MeV}
\,.
\label{Lambdas}
\ee
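As an illustration, the $1/L$-expansion (\ref{as}), (\ref{ds}) is straightforward to evaluate numerically. A minimal sketch for $f=3$, using the $\Lambda_i$ values (\ref{Lambdas}), is:
\begin{verbatim}
# 1/L-expansion of the normalized couplant a = beta0 * alpha_s/(4 pi),
# Eqs. (as) and (ds), for f = 3 active flavors.
import numpy as np

f = 3.0
beta0 = 11 - 2*f/3
beta1 = 102 - 38*f/3
beta2 = 2857/2 - 5033*f/18 + 325*f**2/54
b1, b2 = beta1/beta0**2, beta2/beta0**3

def a_s(Q2, Lam2, order):            # order = 1, 2 or 3 (LO, NLO, NNLO)
    L = np.log(Q2 / Lam2)
    a = 1/L
    if order >= 2:
        a += -b1*np.log(L)/L**2
    if order >= 3:
        a += (b1**2*(np.log(L)**2 - np.log(L) - 1) + b2)/L**3
    return a

Lam = {1: 0.142, 2: 0.367, 3: 0.324}   # GeV: Lambda_0, Lambda_1, Lambda_2
for i in (1, 2, 3):
    print(i, a_s(10.0**2, Lam[i]**2, i))   # couplant at Q = 10 GeV
# the three orders give nearly equal couplants at large Q^2, while each
# a_s diverges as Q^2 -> Lambda_i^2 (L -> 0): the Landau pole.
\end{verbatim}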
\section{Fractional derivatives}
Following \cite{Cvetic:2006mk,Cvetic:2006gc},
we introduce the derivatives (in the $i$-th order of PT)
\be
\tilde{a}^{(i)}_{n+1}(Q^2)=\frac{(-1)^n}{n!} \, \frac{d^n a^{(i)}_s(Q^2)}{(dL)^n} \, ,
\label{tan+1}
\ee
which are very convenient in the case of the analytical QCD (see, e.g., \cite{Kotikov:2022JETP}).
The series of derivatives $\tilde{a}_{n}(Q^2)$ can successfully replace the corresponding series of $\ar$-powers. Indeed, each derivative reduces the power of $\ar$ by one, but is accompanied by an additional $\beta$-function $\sim \ar^2$. Thus, each application of a derivative yields an additional power of $\ar$, and it is therefore indeed possible to use series of derivatives instead of series of $\ar$-powers.
In LO, the derivatives $\tilde{a}_{n}(Q^2)$ coincide exactly with $\ar^{n}$. Beyond LO, the relationship between $\tilde{a}_{n}(Q^2)$ and $\ar^{n}$ was established in \cite{Cvetic:2006gc,Cvetic:2010di} and extended to the fractional case $n \to \nu$, with non-integer $\nu$, in Ref. \cite{GCAK}.
Now consider the $1/L$-expansion of $\tilde{a}^{(k)}_{\nu}(Q^2)$. We can raise the results (\ref{as}) and (\ref{ds}) to the $\nu$-th power and then restore $\tilde{a}^{(k)}_{\nu}(Q^2)$ using the relations between $\tilde{a}_{\nu}$ and $\ar^{\nu}$ obtained in \cite{GCAK}.
This operation is carried out in detail in Appendix B to \cite{Kotikov:2022sos} (see also Appendix A to \cite{Kotikov:2022vnx}).
Here we present only the final results, which have the following form
\footnote{The expansion (\ref{tdmp1N}) is similar to those used in Refs. \cite{BMS1,Bakulev:2006ex} for the expansion of
${\bigl({a}^{(i+1)} _{s,i}(Q^2)\bigr)}^ {\nu}$ in terms of powers of $a^{(1)}_{s,i}(Q^2)$.}:
\bea
\z\tilde{a}^{(1)}_{\nu,0}(Q^2)={\bigl(a^{(1)}_{s,0}(Q^2)\bigr)}^{\nu} = \frac{1}{L_0^{\nu}},~
\tilde{a}^{(i+1)}_{\nu,i}(Q^2)=\tilde{a}^{(1)}_{\nu,i}(Q^2) + \sum_{m=1}^{i}\, C_m^{\nu+m}\, \tilde{\delta}^{(m+1)}_{\nu,i}(Q^2),~~\nonumber\\
\z\tilde{\delta}^{(m+1)}_{\nu,i}(Q^2)=
\hat{R}_m \, \frac{1}{L_i^{\nu+m}},~~C_m^{\nu+m}=\frac{\Gamma(\nu+m)}{m!\Gamma(\nu)}\,,
\label{tdmp1N}
\eea
where
\be
\hat{R}_1=b_1 \Bigl[\hat{Z}_1(\nu)+ \frac{d}{d\nu}\Bigr],~~
\hat{R}_2=b_2 + b_1^2 \Bigl[\frac{d^2}{(d\nu)^2} +2 \hat{Z}_1(\nu+1)\frac{d}{d\nu} + \hat{Z}_2(\nu+1 )\Bigr]
\,
\label{hR_i}
\ee
and $\hat{Z}_j(\nu)$ $(j=1,2)$ are combinations of the Euler $\Psi$-functions and their derivatives.
The representation (\ref{tdmp1N}) of the $\tilde{\delta}^{(m+1)}_{\nu,i}(Q^2)$ corrections as $\hat{R} _m$-operators is very important and allows us
to similarly present high-order results for the ($1/L$-expansion) of analytic couplants.
\section{MA coupling}
We first show the LO results, and then go beyond LO following our results (\ref{tdmp1N}) for the ordinary strong couplant obtained in the previous section.\\
{\bf LO.}~~
The LO MA couplant $A^{(1)}_{{\rm MA},\nu,0}$
has the following form \cite{BMS1}
\be
A^{(1)}_{{\rm MA},\nu,0}(Q^2) = {\left( a^{(1)}_{s,0}(Q^2)\right)}^{\nu} - \frac{{\rm Li}_{1-\nu}(z_0)}{\Gamma(\nu)}=
\frac{1}{L_0^{\nu}}- \frac{{\rm Li}_{1-\nu}(z_0)}{\Gamma(\nu)} \equiv \frac{1}{L_0^{\nu}}-\Delta^{(1)}_{\nu,0}\,,
\label{tAMAnu}
\ee
where
\be
{\rm Li}_{\nu}(z)=\sum_{m=1}^{\infty} \, \frac{z^m}{m^{\nu}}= \frac{z}{\Gamma(\nu)} \int_0^{\infty}
\frac{ dt \; t^{\nu -1} }{(e^t - z)}
\label{Linu}
\ee
is the Polylogarithm.
The LO MA couplant $U^{(1)}_{{\rm MA},\nu,0}$ in the Minkowski space
has the
form \cite{Bakulev:2006ex}
\be
U^{(1)}_{{\rm MA},\nu,0}({\rm s})
=\frac{\sin[(\nu-1)\,g_0(s)]
}{\pi(\nu-1)(\pi^2+ L^2_{s,0})^{(\nu-1)/2}}
\, ,\, (\nu>0)\, ,
\label{mainexpr}
\ee
where
\be
L_{s,i}=\ln\dfrac{s}{\Lambda_i^2},~~g_i(s)= \arccos\left(\frac{L_{s,i}}{\sqrt{\pi^2+ L^2_{s,i}}}\right) \, .
\label{Ls}
\ee
For $\nu=1$ we recover the famous Shirkov-Solovtsov results \cite{ShS}:
\be
\hspace{-0.5cm} A^{(1)}_{\rm MA,0}(Q^2) \equiv A^{(1)}_{\rm MA,\nu=1,0}(Q^2)
\hspace{-0.5cm} A^{(1)}_{\rm MA,0}(Q^2) \equiv A^{(1)}_{\rm MA,\nu=1,0}(Q^2)
=\frac{1}{L_0}- \frac{z_0}{1-z_0},~ U^{(1)}_{\rm MA,0}(s) \equiv U^{(1)}_{\rm MA,\nu=1,0}(s)=\frac{g_0(s)}{\pi}\, .
\label{tAM1}
\ee
Note that the results (\ref{tAM1}) can also be obtained directly from the integral forms (\ref{disp_MA_LO}) and (\ref{disp_MAt_LO}), as was done in Ref. \cite{ShS}.\\
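As a numerical cross-check (our own sketch, not code from the original papers), the closed forms (\ref{tAM1}) can be compared with the dispersion integrals (\ref{disp_MA_LO}) and (\ref{disp_MAt_LO}), using the LO spectral function $r^{(1)}_{\rm pt}(s)=\pi/(L_s^2+\pi^2)$ that follows from Eq. (\ref{asLO}):
\begin{verbatim}
# LO Minimal Analytic couplants: closed forms vs dispersion integrals.
import numpy as np
from scipy.integrate import quad

Lam2 = 0.142**2                       # Lambda_0^2 in GeV^2, Eq. (Lambdas)

def A_closed(Q2):                     # Euclidean couplant, Eq. (tAM1)
    L = np.log(Q2 / Lam2)
    z = Lam2 / Q2
    return 1.0 / L - z / (1.0 - z)

def U_closed(s):                      # Minkowski couplant, Eq. (tAM1)
    Ls = np.log(s / Lam2)
    return np.arccos(Ls / np.sqrt(np.pi**2 + Ls**2)) / np.pi

def r_pt(s):                          # Im a_s^(1)(-s - i eps)
    return np.pi / (np.log(s / Lam2)**2 + np.pi**2)

def A_disp(Q2):                       # Eq. (disp_MA_LO)
    val, _ = quad(lambda s: r_pt(s) / (s + Q2), 0.0, np.inf, limit=500)
    return val / np.pi

def U_disp(s0):                       # Eq. (disp_MAt_LO)
    val, _ = quad(lambda s: r_pt(s) / s, s0, np.inf, limit=500)
    return val / np.pi

for Q2 in (1e-6, 1.0001 * Lam2, 1.0, 100.0):
    print(f"Q2={Q2:10.4g}  A_closed={A_closed(Q2):8.5f}"
          f"  A_disp={A_disp(Q2):8.5f}")
print("U at s = 1 GeV^2:", U_closed(1.0), U_disp(1.0))
\end{verbatim}
Both couplants stay finite for all values of the argument: $A^{(1)}_{\rm MA}(Q^2 \to \Lambda^2)=1/2$, and both $A^{(1)}_{\rm MA}$ and $U^{(1)}_{\rm MA}$ tend to 1 as the scale tends to zero, in contrast with the Landau pole of Eq. (\ref{asLO}).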
{\bf Beyond LO.}
Following Eqs. (\ref{tAMAnu}) and (\ref{mainexpr}) for the LO analytic couplants,
we consider
the derivatives of the
MA couplants, as
\be
\tilde{A}_{{\rm MA},n+1}(Q^2)=\frac{(-1)^n}{n!} \, \frac{d^n A_{\rm MA}(Q^2)}{(dL)^n},~~\tilde{U}_{{\rm MA},n+1}(s)=\frac{(-1)^n}{n!} \, \frac{d^n U_{\rm MA}(s)}{(dL_s)^n} \, .
\label{tanMA+1}
\ee
By analogy with the ordinary couplant,
using the results (\ref{tdmp1N})
we have for MA analytic couplants $\tilde{A}^{(i+1)}_{{\rm MA},\nu,i}$ and $\tilde{U}^{(i+1)}_{{\rm MA},\nu,i}$ the following expressions:
\bea
&&\tilde{A}^{(i+1)}_{{\rm MA},\nu,i}(Q^2) = \tilde{A}^{(1)}_{{\rm MA},\nu,i}(Q^2) + \sum_{m=1}^{i} \, C^{\nu+m}_m \tilde{\delta}^{(m+1)}_{{\rm A},\nu,i}(Q^2), \nonumber \\
&&\tilde{U}^{(i+1)}_{{\rm MA},\nu,i}(s) = \tilde{U}^{(1)}_{{\rm MA},\nu,i}(s) + \sum_{m=1}^{i} \, C^{\nu+m}_m \tilde{\delta}^{(m+1)}_{{\rm U},\nu,i}(s),
\label{tAiman}
\eea
where $\tilde{A}^{(1)}_{{\rm MA},\nu,i}$ and $\tilde{U}^{(1)}_{{\rm MA},\nu,i}$ are given in Eqs. (\ref{tAMAnu}) and (\ref{mainexpr}), respectively, and
\be
\tilde{\delta}^{(m+1)}_{{\rm A},\nu,i}(Q^2)= \tilde{\delta}^{(m+1)}_{\nu,i}(Q^2) - \hat{R}_m \left( \frac{{\rm Li}_{-\nu-m+1}(z_i)}{\Gamma(\nu+m)}\right),~
\tilde{\delta}^{(m+1)}_{{\rm U},\nu,i}(s)= \hat{R}_m \Bigl(\tilde{U}^{(1)}_{{\rm MA},\nu+m,i}(s)\Bigr)\, ,
\label{tdAman}
\ee
and $\tilde{\delta}^{(m+1)}_{\nu,i}(Q^2)$ and $\hat{R}_m$
are given in Eqs. (\ref{tdmp1N}) and (\ref{hR_i}), respectively.\\
The analytical results for the MA analytic couplants $\tilde{A}^{(i+1)}_{{\rm MA},\nu,i}$ and $\tilde{U}^{(i+1)}_{{\rm MA},\nu,i}$
can be found in Refs. \cite{Kotikov:2022sos} and \cite{KoZe23}, respectively. Here we present only the results for the case $\nu=1$:
\bea
&&A^{(i+1)}_{{\rm MA},i}(Q^2)\equiv \tilde{A}^{(i+1)}_{{\rm MA},\nu=1,i}(Q^2) = A^{(1)}_{{\rm MA},i}(Q^2) + \sum_{m=1}^{i} \, \tilde{\delta}^{(m+1)}_{{\rm A},\nu=1,i}(Q^2),~~\nonumber \\
&&U^{(i+1)}_{{\rm MA},i}(s)\equiv \tilde{U}^{(i+1)}_{{\rm MA},\nu=1,i}(s) = U^{(1)}_{{\rm MA},i}(s) + \sum_{m=1}^{i} \, \tilde{\delta}^{(m+1)}_{{\rm U},\nu=1,i}(s)
\label{tAiman.1}
\eea
where $A^{(1)}_{{\rm MA},i}(Q^2)$ and $ U^{(1)}_{{\rm MA},i}(s)$ are shown in Eq. (\ref{tAM1}) and
\bea
\z \tilde{\delta}^{(m+1)}_{{\rm A},\nu=1,i}(Q^2)
= \tilde{\delta}^{(m+1)}_{\nu=1,i}(Q^2)- \frac{P_{m,1}(z_i)}{m!} \, ,\nonumber\\
\z \tilde{\delta}^{(2)}_{{\rm U},\nu=1,i}({\rm s})= \frac{b_1}{\pi(\pi^2+L_{s,i}^2)^{1/2}}\Bigl\{g_i\cos(g_i)-\Bigl[1+G_i\Bigr]\sin(g_i)\Bigr\},~~
\nonumber\\
\z \tilde{\delta}^{(3)}_{{\rm U},\nu=1,i}({\rm s})=\frac{1}{2\pi(\pi^2+L_{s,i}^2)}\Biggl(b_2\sin(2g_i) +b_1^2\Bigl[G_i^2-g_i^2-1\Bigr]\sin(2g_i)\Biggr)
\label{tdAmanA}
\eea
with
\bea
\z G_i({\rm s})=\frac{1}{2}\,\ln\left(\pi^2+L_{s,i}^2\right),~~P_{1,\nu}(z)=b_1\Bigl[\overline{\gamma}_{\rm E}
{\rm Li}_{-\nu}(z)+{\rm Li}_{-\nu,1}(z)\Bigr],~~\overline{\gamma}_{\rm E}=\gamma_{\rm E}-1,~~ \nonumber \\
\z P_{2,\nu}(z)=b_2 \,{\rm Li}_{-\nu-1}(z) + b_1^2\Bigl[{\rm Li}_{-\nu-1,2}(z) + 2\overline{\gamma}_{\rm E}
{\rm Li}_{-\nu-1,1}(z)
+ \Bigl(\overline{\gamma}^2_{\rm E}-
\zeta_2\Bigr) \, {\rm Li}_{-\nu-1}(z) \Bigr]\,,
\label{Pkz}
\eea
where $\gamma_{\rm E}$ is the Euler constant, and
\be
{\rm Li}_{n,k}(z)= \sum_{m=1}^{\infty} \, \frac{z^m \, \ln^k m}{m^n},~~
{\rm Li}_{-1}(z)= \frac{z}{(1-z)^2},~~{\rm Li}_{-2}(z)= \frac{z(1+z)}{(1-z)^3}
\, .
\label{Lii.1}
\ee
In Fig. \ref{fig:A123} one can see that $A^{(i+1)}_{\rm MA,i}(Q^2)$ and $U^{(i+1)}_{\rm MA,i}(s)$ are very close to each other for $i=0$ and $i=2$.
The differences between the LO and NNLO results are nonzero only for $Q^2 \sim \Lambda^2$.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.58\textwidth]{LogAU13}
\caption{\label{fig:A123}
The results for $A^{(i+1)}_{\rm MA,i}(Q^2)$ and $U^{(i+1)}_{\rm MA,i}(Q^2)$ with $i=0,2$.}
\end{figure}
\section{Conclusions}
In this short paper, we have demonstrated the results obtained in our recent papers \cite{Kotikov:2022sos,KoZe23}. In particular,
Ref. \cite{Kotikov:2022sos} contains $1/L$-expansions of $\nu$-derivatives of the strong couplant $a_s$ expressed as combinations of
the $\hat{R}_m$ (\ref{hR_i}) operators applied to the LO couplant $a_s^{(1)}$.
Applying the same operators to the $\nu$-derivatives of the LO MA couplants $A_{\rm MA}^{(1)}$ and $U_{\rm MA}^{(1)}$, various
representations were obtained for $\nu$-derivatives of MA couplants,
i.e. $\tilde{A}_{\rm MA,\nu}^{(i)}$ and $\tilde{U}_{\rm MA,\nu}^{(i)}$ in each $i$-order of PT.
All results are presented in \cite{Kotikov:2022sos,KoZe23} up to the 5th order of PT,
where the corresponding QCD $\beta$-function coefficients are well known
(see \cite{Baikov:2016tgj}). In this paper, we have limited ourselves to the first three orders in order to exclude the most cumbersome
results obtained for the last two PT orders.
High-order corrections are negligible in both asymptotics: $Q^2 \to 0$ and $Q^2 \to \infty$, and are nonzero in a neighborhood of the point $Q^2 =\Lambda^2$.
Thus, in fact, they represent only minor corrections to LO MA couplants $A_{\rm MA}^{(1)}(Q^2)$ and $U_{\rm MA}^{(1)}(Q^2)$.
This proves the possibility of expansions of high-order couplants $A_{\rm MA}^{(i)}(Q^2)$ and $U_{\rm MA}^{(i)}(Q^2)$ via the
LO couplants $A_{\rm MA}^{(1)}(Q^2)$ and $U_{\rm MA}^{(1)}(Q^2)$, which was done in
Ref. \cite{Bakulev:2010gm}.\\
{\bf Acknowledgments}~
This work was supported in part by the Foundation for the Advancement
of Theoretical Physics and Mathematics “BASIS”.
One of us (A.V.K.) thanks the Organizing Committee of the XXIV International Seminar on High Energy Physics "From quarks to
galaxies: clearing up the dark sides" (November 22-24, Protvino, Russia)
for the invitation.
|
{
"arxiv_id": "2302.13684",
"language": "en",
"timestamp": "2023-02-28T02:27:35",
"url": "https://arxiv.org/abs/2302.13684",
"yymm": "2302"
} | \section{Abstract}
The \textbf{stopp} \texttt{R} package deals with spatio-temporal point processes, which may occur in Euclidean space or on some specific linear network, such as the roads of a city.
The package contains functions to summarize, plot, and perform different kinds of analyses on point processes, mainly following the methods proposed in recent papers in the scientific literature. The main topics of such works, and of the package in turn, include
modeling, statistical inference, and simulation issues on spatio-temporal point processes on Euclidean space and linear networks, with a focus on their local characteristics. We contribute to the existing literature by collecting many of the most widespread methods for the analysis of spatio-temporal point processes into a unique package, which is intended to welcome many further proposals and extensions.
\section{Introduction}
Modelling real problems through space-time point processes is crucial in many scientific and engineering fields such as environmental sciences, meteorology, image analysis, seismology, astronomy, epidemiology and criminology.
The growing availability of data is a challenging opportunity for the scientific research, aiming at more detailed information through the application of statistical methodologies suitable for describing complex phenomena.
The aim of the present work is to contribute to the existing literature by gathering many of the most widespread methods for the analysis of spatio-temporal point processes into a unique package, which is intended to host many further extensions.
The \textbf{stopp} \citep{R} package provides code, related to methods and models, for analysing complex spatio-temporal point processes, as proposed in the papers \cite{siino2018joint,siino2018testing,adelfio2020some,dangelo2021assessing,dangelo2021local,d2022locally}.
The main topics include modelling, statistical inference, and simulation issues for spatial and spatio-temporal point processes, as well as point processes on linear networks and other non-Euclidean spaces.
The context of application is very broad, as the proposed methods are of interest in describing any phenomenon with a complex spatio-temporal dependence.
Some examples include seismic events \citep{dangelo2021locall}, GPS data \citep{dangelo2021inhomogeneous}, crimes \citep{dangelo2021self}, and traffic accidents.
Moreover, local methods and models can be applied to different scientific fields and could be suitable for all those phenomena for which it makes sense to hypothesize interdependence in space and time.
The main dependencies of the \textbf{stopp} package are \textbf{spatstat} \cite{spatstat}, \textbf{stpp} \cite{gabriel2009second}, and \textbf{stlnpp} \cite{moradi2020first}.
In the purely spatial context, \textbf{spatstat} is by far the most comprehensive open-source toolbox for analysing spatial point patterns, focused mainly on two-dimensional point patterns. We exploit many functions from this package when needing purely spatial tools while performing spatio-temporal analyses.
Turning to the spatio-temporal context, \textbf{stpp} represents the main reference of statistical tools for analyzing the global and local second-order properties of spatio-temporal point processes, including estimators of the space-time inhomogeneous $K$-function and pair correlation function. The package is documented in the paper \cite{gabriel:rowlingson:diggle:2013}.
While \textbf{stpp} allows for the simulation of Poisson, inhibitive and clustered patterns, the \textbf{stppSim} \citep{stppSim} package generates artificial spatio-temporal point patterns through the integration of microsimulation and agent-based models.
Moreover, \textbf{splancs} \citep{splancs} fosters many tools for the analysis of both spatial and spatio-temporal point patterns \citep{rowlingson1993splancs,bivand2000implementing}.
Moving to spatio-temporal point patterns on linear networks, the package \textbf{stlnpp} provides tools to visualise and analyse such patterns using the first- and second-order summary statistics developed in \cite{moradi2020first,mateu2020spatio}.
Other worth-to-mention packages dealing with spatio-temporal point pattern analysis include \textbf{etasFLP} \cite{chiodi:adelfio:14}, mainly devoted to the estimation of the components of an ETAS (Epidemic Type Aftershock Sequence) model for earthquake description with the non-parametric background seismicity estimated through FLP (Forward Likelihood Predictive) \cite{adelfio2020including}, and
\textbf{SAPP},
the Institute of Statistical Mathematics package \citep{ogata2006timsac84,ogata2006statistical}, which provides functions for the statistical analysis of series of events and seismicity.
Finally, we highlight some \texttt{R} packages that implement routines to simulate and fit log-Gaussian Cox processes (LGCPs). In particular, the package \textbf{stpp} implements code to simulate spatio-temporal LGCP with a separable and non-separable covariance
structure for the Gaussian Random Field (GRF). The package \textbf{lgcp} \cite{taylor:davies:barry:15} implements code to fit LGCP models using the method of moments and Bayesian inference for spatial, spatio-temporal, multivariate and aggregated point processes. Furthermore, the minimum contrast method is used to estimate parameters, assuming a separable structure of the covariance of the GRF. Neither package handles non-separable (and anisotropic) correlation structures for the covariance of the GRF.
The outline of the paper is as follows.
First, we set the notation of spatio-temporal point processes, both occurring on Euclidean space and on linear networks. Then, we introduce the main functions for handling point processes objects, data, and simulations from different point process models. We then move to the Local Indicators of Spatio-Temporal Association functions, recalling their definition on the spatio-temporal Euclidean space and introducing the new functions to compute the LISTA functions on linear networks. Then, we illustrate how to perform a local test for assessing the local differences in two point patterns occurring on the same metric space. Hence, the functions available in the package for fitting models are illustrated, including separable Poisson process models on both the Euclidean space and networks, global and local non-separable inhomogeneous Poisson processes and LGCPs. Then, methods to perform global and local diagnostics on both models for point patterns on planar and linear network spaces are presented. The paper ends with some conclusions.
\section{Spatio-temporal point processes and their second-order properties}
\label{sec:stpp}
We consider a spatio-temporal point process with no multiple points as a random countable subset $X$ of $\mathbb{R}^2 \times \mathbb{R}$, where a point $(\textbf{u}, t) \in X$ corresponds to an event at $ \textbf{u} \in \mathbb{R}^2$ occurring at time $t \in \mathbb{R}$.
A typical realisation of a spatio-temporal point process $X$ on $\mathbb{R}^2 \times \mathbb{R}$ is a finite set $\{(\textbf{u}_i, t_i)\}^n_{
i=1}$ of distinct points within a
bounded spatio-temporal region $W \times T \subset \mathbb{R}^2 \times \mathbb{R}$, with area $\vert W\vert > 0$ and length $\vert T\vert > 0$, where $n \geq 0$ is not fixed in
advance.
In this context, $N(A \times B)$ denotes the number of points of a set $(A \times B) \cap X$, where $A \subseteq W$ and $B \subseteq T$. As usual \citep{daley:vere-jones:08}, when $N(W \times T) < \infty $ with probability 1, which holds e.g. if $X$ is defined on a bounded set, we call $X$ a finite spatio-temporal point process.
For a given event $(\textbf{u}, t)$, the events that are close to $(\textbf{u}, t)$ in both space and time, for each spatial distance $r$ and time lag $h$, are given by the corresponding spatio-temporal cylindrical neighbourhood of the event $(\textbf{u}, t)$, which can be expressed by the Cartesian product as
$$
b((\textbf{u}, t), r, h) = \{(\textbf{v}, s) : \vert \vert\textbf{u} - \textbf{v}\vert \vert \leq r, \vert t - s \vert \leq h\} , \quad \quad
(\textbf{u}, t), (\textbf{v}, s) \in W \times T,
$$
where $ \vert \vert \cdot \vert \vert$ denotes the Euclidean distance in $\mathbb{R}^2$. Note that $b((\textbf{u}, t), r, h)$ is a cylinder with centre $(\textbf{u}, t)$, radius $r$, and height $2h$.
Product densities $\lambda^{(k)}, k \in \mathbb{N} \text{ and } k \geq 1 $, arguably the main tools in the statistical analysis of point processes, may be defined through the so-called Campbell Theorem (see \cite{daley:vere-jones:08}), which constitutes an essential result in spatio-temporal point process theory, stating that, given a spatio-temporal point process $X$, for any non-negative function $f$ on $( \mathbb{R}^2 \times \mathbb{R} )^k$
\begin{equation*}
\mathbb{E} \Bigg[ \sum_{\zeta_1,\dots,\zeta_k \in X}^{\ne} f( \zeta_1,\dots,\zeta_k)\Bigg]=\int_{\mathbb{R}^2 \times \mathbb{R}} \dots \int_{\mathbb{R}^2 \times \mathbb{R}} f(\zeta_1,\dots,\zeta_k) \lambda^{(k)} (\zeta_1,\dots,\zeta_k) \prod_{i=1}^{k}\text{d}\zeta_i,
\label{eq:campbell0}
\end{equation*}
where $\neq$ indicates that the sum is over distinct values. In particular, for $k=1$ and $k=2$, these functions are respectively called the \textit{intensity function} $\lambda$ and the \textit{(second-order) product density} $\lambda^{(2)}$.
Broadly speaking, the intensity function describes the rate at which the events occur in the given spatio-temporal region, while the second-order product densities are used for describing spatio-temporal variability and correlations between pair of points of a pattern. They represent the point process analogues of the mean function and the covariance function of a real-valued process, respectively.
Then, the first-order intensity function is defined as
\begin{equation*}
\lambda(\textbf{u},t)=\lim_{\vert \text{d}\textbf{u} \times \text{d}t\vert \rightarrow 0} \frac{\mathbb{E}[N(\text{d}\textbf{u} \times \text{d}t )]}{\vert \text{d}\textbf{u} \times \text{d}t\vert },
\end{equation*}
where $\text{d}\textbf{u} \times \text{d}t $ defines a small region around the point $(\textbf{u},t)$ and $\vert \text{d}\textbf{u} \times \text{d}t\vert $ is its volume. The second-order intensity function is given by
\begin{equation*}
\lambda^{(2)}((\textbf{u},t),(\textbf{v},s))=\lim_{\vert \text{d}\textbf{u} \times \text{d}t\vert ,\vert \text{d}\textbf{v} \times \text{d}s\vert \rightarrow 0} \frac{\mathbb{E}[N(\text{d}\textbf{u} \times \text{d}t )N(\text{d}\textbf{v} \times \text{d}s )]}{\vert \text{d}\textbf{u} \times \text{d}t\vert \vert \text{d}\textbf{v} \times \text{d}s\vert }.
\end{equation*}
Finally, the pair correlation function
$g((\textbf{u},t),(\textbf{v},s))=\frac{ \lambda^{(2)}((\textbf{u},t),(\textbf{v},s))}{\lambda(\textbf{u},t)\lambda(\textbf{v},s)}$
can be interpreted formally as the standardised probability density that an event occurs in each of two small volumes, $\text{d}\textbf{u} \times \text{d}t$ and $\text{d}\textbf{v} \times \text{d}s$, in the sense that for a Poisson process, $g((\textbf{u},t),(\textbf{v},s))=1.$
In this package, the focus is on second-order characteristics of spatio-temporal point patterns, with an emphasis on the $K$-function \citep{ripley:76}.
This is a measure of the distribution of the inter-point distances and captures the spatio-temporal dependence of a point process.
A spatio-temporal point process is second-order intensity reweighted stationary and isotropic if its intensity function is bounded away from zero and its pair correlation function depends only on the spatio-temporal difference vector $(r,h)$, where $r= \vert \vert \textbf{u}-\textbf{v} \vert \vert $ and $h= \vert t-s \vert$ \citep{gabriel2009second}.
For a second-order intensity reweighted stationary, isotropic spatio-temporal point process, the space-time inhomogeneous $K$-function takes the form
\begin{equation}
K(r,h)=2 \pi \int_{-h}^{h} \int_0^{r} g(r',h')r'\text{d}r'\text{d}h'
\end{equation}
where $g(r,h)=\lambda^{(2)}(r,h)/(\lambda(\textbf{u},t)\lambda(\textbf{v},s)), r=\vert \vert\textbf{u}-\textbf{v}\vert \vert,h= \vert t-s \vert$ \citep{gabriel2009second}.
The simplest expression of an estimator of the spatio-temporal $K$-function is given as
\begin{equation}
\hat{K}(r,h)=\frac{1}{\hat{\lambda}^2 \vert W \vert \vert T \vert}\sum_{i=1}^n \sum_{j > i} I( \vert \vert \textbf{u}_i-\textbf{u}_j \vert \vert \leq r, \vert t_i-t_j \vert \leq h).
\label{eq:k}
\end{equation}
For a homogeneous Poisson process $\mathbb{E}[\hat{K}(r,h)]=\pi r^2 h$, regardless of the intensity $\lambda$.
The $K$-function can be used as a measure of spatio-temporal clustering and interaction \citep{gabriel2009second,moller2012aspects}.
Usually, $\hat{K}(r,h)$ is compared with the theoretical $\mathbb{E}[\hat{K}(r,h)]=\pi r^2 h$. Values $\hat{K}(r,h) > \pi r^{2} h$ suggest clustering, while $\hat{K}(r,h) < \pi r^2 h$ points to a regular pattern.
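As a quick numerical illustration of this comparison (a sketch only: \texttt{STIKhat} of the \textbf{stpp} package, used later for diagnostics, estimates the space-time $K$-function and its theoretical counterpart; the converter name \texttt{as.stpp} is an assumption, in line with the conversion utilities described in the next section):
\begin{example}
## sketch: compare the estimated K-function with pi * r^2 * h
> hpp <- rstpp(lambda = 200, nsim = 1, seed = 1, verbose = FALSE)
## as.stpp is assumed among the converters to the stpp class
> K <- stpp::STIKhat(as.stpp(hpp))
> stpp::plotK(K)
\end{example}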
Point processes on linear networks have recently been considered to analyse events occurring on particular network structures, such as traffic accidents on a road network.
Spatial patterns of points along a network of lines are indeed found in many applications.
The network might reflect a map of railways, rivers, electrical wires, nerve fibres, airline routes,
irrigation canals, geological faults or soil cracks \citep{baddeley2020analysing}. Observations of interest could be the locations of
traffic accidents, bicycle incidents, vehicle thefts or street crimes, and many others.
%
A linear network $ L=\cup_{i=1}^{n} l_{i} \subset \mathbb{R}^{2} $ is commonly taken as a finite union of line segments $l_i\subset \mathbb{R}^{2}$ of positive length.
A line segment is defined as $l_i=[u_i,v_i]=\{ku_i+(1-k)v_i: 0 \leq k \leq 1\}$, where $u_i,v_i \in \mathbb{R}^2$ are the endpoints of $l_i$. For any $i \ne j$, the intersection of $l_i$ and $l_j$ is either empty or an endpoint of both segments.\\
A spatio-temporal linear network point process is a point process on the product space $L \times T$, where $L$ is a linear network and $T$ is a subset (interval) of $\mathbb{R}$.
We hereafter focus on a spatio-temporal point process $X$ on a linear network $L$ with no
overlapping points $(\textbf{u},t)$, where $\textbf{u} \in L$ is the location of an event and $t \in T (T \subseteq \mathbb{R}^+)$
is the corresponding time occurrence of $\textbf{u}$. Note that the temporal state-space $T$ might be
either a continuous or a discrete set. A realisation of $X$ with $n$ points is represented by
$\textbf{x} = \{(\textbf{u}_i ,t_{i} )\}_{i = 1}^{n}$, where $(\textbf{u}_i ,t_{i} ) \in L \times T$.
A spatio-temporal disc with centre
$(\textbf{u},t) \in L \times T$, network radius $r > 0$ and temporal radius $h > 0$ is defined as
$b((\textbf{u},t ),r,h) = \{(\textbf{v},s ) : d_L (\textbf{u},\textbf{v}) \leq r , \vert t - s \vert \leq h\},
(\textbf{u}, t), (\textbf{v}, s) \in L \times T $
where $\vert \cdot \vert$ is a numerical distance, and $d_L(\cdot,\cdot)$ stands for the appropriate distance in the network, typically taken as the shortest-path distance between any two points. The cardinality of any subset $A \subseteq L \times T$, $N(X \cap A) \in \{0,1,\dots\}$, is the number of points of $X$ restricted to $A$, whose expected value is denoted by
$\nu(A) = \mathbb{E}[N(X \cap A)],
A \subseteq L \times T,$
where $\nu$, the intensity measure of $X$, is a locally finite product measure on $L\times T$ \citep{baddeley2006stochastic}.
We now recall Campbell's theorem for point processes on linear networks \citep{cronie2020inhomogeneous}.
Assuming that the product densities/intensity functions $\lambda^{(k)}$ exist, for any non-negative measurable function $f(\cdot)$ on the product space $(L \times T)^k$, we have
\begin{equation}
\mathbb{E} \Bigg[ \sum_{\zeta_1,\dots,\zeta_k \in X}^{\ne} f( \zeta_1,\dots,\zeta_k)\Bigg]=\int_{(L \times T)^k}
f(\zeta_1,\dots,\zeta_k) \lambda^{(k)} (\zeta_1,\dots,\zeta_k) \prod_{i=1}^{k}\text{d}\zeta_i.
\label{eq:campbelL}
\end{equation}
If $X$ has an intensity function $\lambda(\cdot,\cdot)$, Equation \eqref{eq:campbelL} with $k=1$ reduces to
$\mathbb{E}[N(X \cap A)] =\int_{A} \nu(\text{d}(\textbf{u},t )) =
\int_{A} \lambda(\textbf{u},t)\text{d}_2(\textbf{u},t), A \subseteq L \times T,$
where $\text{d}_2 (\textbf{u},t)$ denotes integration over $L \times T$.
%
The second-order Campbell's theorem is obtained from \eqref{eq:campbelL} with $k=2$
\begin{equation}
\mathbb{E} \Bigg[ \sum_{(\textbf{u},t),(\textbf{v},s)\in X}^{\ne} f\big((\textbf{u},t),(\textbf{v},s)\big)
\Bigg] =
\int_{L \times T} \int_{L \times T} f\big((\textbf{u},t),(\textbf{v},s)\big) \lambda^{(2)}\big((\textbf{u},t),(\textbf{v},s)\big)\text{d}_2(\textbf{u},t)\text{d}_2(\textbf{v},s).
\label{eq:campbell}
\end{equation}
Assuming that $X$ has a second-order product density function $\lambda^{(2)} (\cdot,\cdot)$, we then obtain
\begin{equation*}
\mathbb{E}[N(X \cap A)N(X \cap B)] =
\int_{A} \int_{B}
\lambda^{(2)} ((\textbf{u},t ),(\textbf{v},s ))d_2 (\textbf{u},t)d_2 (\textbf{v},s ), \quad A,B \subseteq L \times T.
\end{equation*}
Finally, an important result concerns the conversion of the integration over $L \times T$ to that over $\mathbb{R} \times \mathbb{R}$ \citep{rakshit2017second}.
For any measurable function $f: L \times T \rightarrow \mathbb{R}$
\begin{equation}
\int_{L \times T} f(\textbf{u},t)\text{d}_2(\textbf{u},t)=\int_0^{\infty} \int_0^{\infty} \sum_{\substack{ (\textbf{u},t)\in L \times T:\\
d_L(\textbf{u},\textbf{v})=r,\\
|t-s|=h }} f(\textbf{u},t) \text{d}r\text{d}h.
\label{eq:change}
\end{equation}
Letting $f(\textbf{u},t) = \eta(d_L(\textbf{u},\textbf{v}), \vert t-s\vert)$
then
$$
\int_{L \times T} \eta(d_L(\textbf{u},\textbf{v}), \vert t-s\vert) \text{d}_2(\textbf{u},t)= \int_0^{\infty} \int_0^{\infty} \eta(r,h)M((\textbf{u},t),r,h)\text{d}r \text{d}h
$$
where $M((\textbf{u},t),r,h)$ is the number of points lying exactly at the shortest-path distance $r \geq 0$ and the time distance $h \geq 0$ away from $(\textbf{u},t)$.
\section{Main functions for handling point processes objects, data, and simulations}\label{sec:main}
The \texttt{stp} function creates a \texttt{stp} object as a dataframe with three columns: \texttt{x}, \texttt{y}, and \texttt{t}. If the linear network \texttt{L}, of class \texttt{linnet}, is also provided, a \texttt{stlp} object is created instead.
The methods for this class of objects (1) print the main information on the spatio-temporal point pattern stored in the \texttt{stp} object, that is, the number of points, the enclosing spatial window, and the temporal time period; (2) print the summary statistics of the spatial and temporal coordinates of the spatio-temporal point pattern stored in the \texttt{stp} object; and (3) plot the point pattern stored in the \texttt{stp} object given in input, in a three-panel plot representing the 3D plot of the coordinates and the marginal spatial and temporal coordinates.
\begin{example}
> set.seed(12345)
> rpp1 <- stpp::rpp(lambda = 200, replace = FALSE)
> is.stp(rpp1)
[1] FALSE
> stp1 <- stp(cbind(rpp1$xyt[, 1], rpp1$xyt[, 2], rpp1$xyt[, 3]))
> is.stp(stp1)
[1] TRUE
> stp1
Spatio-temporal point pattern
208 points
Enclosing window: rectangle = [0.0011366, 0.9933775] x [0.0155277, 0.9960438] units
Time period: [0.004, 0.997]
\end{example}
Some functions are implemented to convert the \texttt{stp} and \texttt{stlp} classes to those of the \textbf{stpp} and \textbf{stlnpp} packages, and vice-versa.
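For instance, the network classes can be converted back and forth as follows (a minimal sketch using the converters \texttt{as.stlp} and \texttt{as.stlpp}, which also appear in the examples of the following sections):
\begin{example}
## simulate on a network with stlnpp, then convert (sketch)
> X <- stlnpp::rpoistlpp(.2, a = 0, b = 5, L = stlnpp::easynet)
> x <- as.stlp(X)
> X2 <- as.stlpp(x)
\end{example}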
Moreover, the package is furnished with the \texttt{greececatalog} dataset in the \texttt{stp} format, containing the catalog of
Greek earthquakes of magnitude at least 4.0 from 2005 to 2014,
analysed by means of local log-Gaussian Cox processes in \cite{dangelo2021locall}
and \cite{d2022locally}.
Data come from the Hellenic Unified Seismic Network (H.U.S.N.).
The same data have been analysed in \cite{siino2017spatial} by hybrids of Gibbs models,
and more recently by \cite{gabriel2022mapping}.
\begin{example}
> plot(greececatalog, tcum = TRUE)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{Art2.pdf}
\caption{Plots of Greek data.}
\label{fig:p2}
\end{figure}
A dataset of crimes that occurred in Valencia, Spain, in 2019 is also available, together with the linear
network of class \texttt{linnet} of the Valencian roads, named \texttt{valenciacrimes} and \texttt{valencianet}, respectively.
Finally, the linear network of class \texttt{linnet} of the roads of Chicago (Illinois, USA) close to the University of Chicago is also available.
It represents the linear network of the Chicago dataset published and analysed in \cite{ang2012geometrically}. The network adjacency
matrix is stored as a sparse matrix.
Moving to simulations, the \texttt{rstpp} function creates a \texttt{stp} object, simulating a spatio-temporal Poisson point pattern, following either a homogeneous or inhomogeneous intensity.
\begin{example}
> h1 <- rstpp(lambda = 500, nsim = 1, seed = 2, verbose = TRUE)
> plot(h1, tcum = TRUE)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{Art3.pdf}
\caption{Simulated homogeneous point pattern.}
\label{fig:p3}
\end{figure}
\begin{example}
> inh <- rstpp(lambda = function(x, y, t, a) {exp(a[1] + a[2]*x)}, par = c(2, 6),
nsim = 1, seed = 2, verbose = TRUE)
> plot(inh, tcum = TRUE)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{Art4.pdf}
\caption{Simulated inhomogeneous point pattern.}
\label{fig:p4}
\end{figure}
The \texttt{rstlpp} function creates a \texttt{stlp} object instead, simulating a spatio-temporal Poisson point pattern
on a linear network.
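A minimal call might look as follows (a sketch only: we assume \texttt{rstlpp} mirrors the interface of \texttt{rstpp}, with the linear network passed through an \texttt{L} argument as in the other network functions of the package):
\begin{example}
## hypothetical call, assuming an interface analogous to rstpp
> hn <- rstlpp(lambda = 300, nsim = 1, seed = 2, L = chicagonet)
\end{example}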
Furthermore, the \texttt{rETASp} function creates a \texttt{stp} object, simulating a spatio-temporal ETAS (Epidemic Type Aftershock Sequence) process.
It follows the generating scheme for simulating a pattern from an ETAS process \citep{ogata:1988likelihood} with conditional intensity function (CIF) as in \cite{adelfio2020including}.
The \texttt{rETASlp} function creates a \texttt{stlp} object, simulating a spatio-temporal ETAS process on a linear network. The simulation scheme previously introduced is adapted so that the spatial locations of events are constrained to a linear network; it was first introduced and employed for simulation studies in \cite{dangelo2021assessing}.
\section{Local Indicators of Spatio-Temporal Association functions}
\label{sec:lista}
Local Indicators of Spatio-Temporal Association (LISTA) are a set of functions that are individually associated with each one of the points of the point pattern, and can provide information about the local behaviour of the pattern.
This operational definition of local indicators was introduced by \cite{anselin:95} for the spatial case, and extended by \cite{siino2018testing} to the spatio-temporal context.\\
If $\lambda^{(2)i}(\cdot,\cdot)$ denotes the local version of the spatio-temporal product density for the event $(\textbf{u}_i,t_i)$,
then, for fixed $r$ and $h$, it holds that
\begin{equation}
\hat{\lambda}^{(2)}_{\epsilon,\delta}(r,h)=\frac{1}{n-1}\sum_{i=1}^n\hat{\lambda}^{(2)i}_{\epsilon,\delta}(r,h),
\label{eq:op}
\end{equation}
where $
\hat{\lambda}^{(2)i}_{\epsilon,\delta}(r,h)=\frac{n-1}{4\pi r \vert W \times T \vert}\sum_{j\ne i}\kappa_{\epsilon,\delta}( \vert \vert \textbf{u}_i-\textbf{v}_j \vert \vert -r, \vert t_i-s_j \vert -h),
$
with $r>\epsilon>0$ and $h>\delta>0$, and $\kappa$ a kernel function with spatial and temporal bandwidths $\epsilon$ and $\delta$, respectively.
Any second-order spatio-temporal summary statistic that satisfies the operational definition in \eqref{eq:op}, which means that the sum of spatio-temporal local indicator functions is proportional to the global statistic, can be called a LISTA statistic \citep{siino2018testing}.\\
In \cite{adelfio2020some}, local versions of both the homogeneous and inhomogeneous spatio-temporal $K$-functions on the Euclidean space are introduced.
Defining an estimator of the overall intensity by $\hat{\lambda}=n/(\vert W \vert \vert T \vert)$, they propose the local version of \eqref{eq:k} for the i-th event $(\textbf{u}_i,t_i)$
\begin{equation}
\hat{K}^i(r,h)=\frac{1}{\hat{\lambda}^2 \vert W \vert \vert T \vert}\sum_{(\textbf{u}_i,t_i)\ne (\textbf{v},s)} I( \vert \vert \textbf{u}_i-\textbf{v} \vert \vert\leq r,\vert t_i-s\vert \leq h)
\label{eq:kl}
\end{equation}
and the inhomogeneous version
\begin{equation}
\hat{K}^i_{I}(r,h)=\frac{1}{ \vert W \vert \vert T \vert}\sum_{(\textbf{u}_i,t_i)\ne (\textbf{v},s)} \frac{I(||\textbf{u}_i-\textbf{v} \vert \vert \leq r,\vert t_i-s\vert \leq h)}{\hat{\lambda}(\textbf{u}_i,t_i)\hat{\lambda}(\textbf{v},s)},
\label{eq:kinhl}
\end{equation}
with $(\textbf{v},s)$ being the spatial and temporal coordinates of any other point.
The authors extended the spatial weighting approach of \cite{veen2006assessing} to spatio-temporal local second-order statistics, proving that the inhomogeneous second-order statistics behave as the corresponding homogeneous ones, that is, that the expectation of both \eqref{eq:kl} and \eqref{eq:kinhl} is equal to $\pi r^2 h$.
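Indeed, summing the homogeneous local estimators \eqref{eq:kl} over all points recovers the global estimator \eqref{eq:k} up to a constant, in agreement with the operational definition \eqref{eq:op}:
$$\sum_{i=1}^n \hat{K}^i(r,h)=\frac{1}{\hat{\lambda}^2 \vert W \vert \vert T \vert}\sum_{i=1}^n\sum_{j \ne i} I( \vert \vert \textbf{u}_i-\textbf{u}_j \vert \vert \leq r, \vert t_i-t_j \vert \leq h)=2\hat{K}(r,h),$$
since each unordered pair of points is counted twice in the double sum.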
\subsection{LISTA on linear networks}
The functions \texttt{localSTLKinhom} and \texttt{localSTLginhom} implement the inhomogeneous LISTA functions proposed in \cite{dangelo2021local}.
The \textit{local spatio-temporal inhomogeneous $K$-function} for the $i$-th event $(\boldsymbol{u}_i,t_i)$ on a linear network
is $$\hat{K}^i_{L,I}(r,h)=\frac{1}{ \vert L \vert \vert T \vert}\sum_{(\boldsymbol{u}_i,t_i)\ne (\boldsymbol{v},s)} \frac{I\{ d_L(\boldsymbol{u}_i,\boldsymbol{v})<r,\vert t_i-s\vert <h\} }{\hat{\lambda}(\boldsymbol{u}_i,t_i)\hat{\lambda}(\boldsymbol{v},s)M((\boldsymbol{u}_i,t_i),d_L(\boldsymbol{u}_i,\boldsymbol{v}),\vert t_i-s\vert )},$$
and the corresponding \textit{local pair correlation function} (pcf)
$$\hat{g}^i_{L,I}(r,h)=\frac{1}{ \vert L \vert \vert T \vert}\sum_{(\boldsymbol{u}_i,t_i)\ne (\boldsymbol{v},s)} \frac{\kappa( d_L(\boldsymbol{u}_i,\boldsymbol{v})-r)\kappa(\vert t_i-s\vert -h) }{\hat{\lambda}(\boldsymbol{u}_i,t_i)\hat{\lambda}(\boldsymbol{v},s)M((\boldsymbol{u}_i,t_i),d_L(\boldsymbol{u}_i,\boldsymbol{v}),\vert t_i-s\vert )},$$
with normalisation factor
$$D(X) = \frac{n-1}{ \vert L \vert \vert T \vert}\sum_{i=1}^n\sum_{j \ne i}\frac{1}{\hat{\lambda}(\textbf{u}_i,t_i)\hat{\lambda}(\textbf{u}_j,t_j)}.$$
This leads to the unbiased estimators $\frac{1}{D(X)}\hat{K}^i_{L,I}(r,h)$ and
$\frac{1}{D(X)}\hat{g}^i_{L,I}(r,h)$.
The homogeneous versions \citep{dangelo2021assessing} can be obtained by weighting the second-order
summary statistics (either K or pcf) by a constant intensity
$\hat{\lambda}=n/( \vert L \vert \vert T \vert)$, giving
$$\hat{K}_L^i(r,h)=\frac{1}{\hat{\lambda}^{2} \vert L \vert \vert T \vert}\sum_{(\boldsymbol{u}_i,t_i)\ne (\boldsymbol{v},s)} \frac{I\{ d_L(\boldsymbol{u}_i,\boldsymbol{v})<r,\vert t_i-s\vert <h\} }{M((\boldsymbol{u}_i,t_i),d_L(\boldsymbol{u}_i,\boldsymbol{v}),\vert t_i-s\vert )},$$
and
$$\hat{g}_L^i(r,h)=\frac{1}{\hat{\lambda}^{2} \vert L \vert \vert T \vert}\sum_{(\boldsymbol{u}_i,t_i)\ne (\boldsymbol{v},s)} \frac{\kappa( d_L(\boldsymbol{u}_i,\boldsymbol{v})-r)\kappa(\vert t_i-s\vert -h) }{M((\boldsymbol{u}_i,t_i),d_L(\boldsymbol{u}_i,\boldsymbol{v}),\vert t_i-s\vert )}.$$
These can be computed with the functions \texttt{localSTLKinhom} and \texttt{localSTLginhom}, by supplying a \texttt{lambda} vector of constant intensity values, the same for each point.
The proposed functions are the local counterparts of \texttt{STLKinhom} and \texttt{STLginhom} by \cite{moradi2020first}, available in the \textbf{stlnpp} package \citep{stlnpp}.
\begin{example}
> set.seed(10)
> X <- stlnpp::rpoistlpp(.2, a = 0, b = 5, L = stlnpp::easynet)
> lambda <- density(X, at = "points")
> x <- as.stlp(X)
> k <- localSTLKinhom(x, lambda = lambda, normalize = TRUE)
## select an individual point
> j = 1
> k[[j]]
## plot the lista function and compare it with its theoretical value
> inhom <- list(x = k[[j]]$r, y = k[[j]]$t, z = k[[j]]$Kinhom)
> theo <- list(x = k[[j]]$r, y = k[[j]]$t, z = k[[j]]$Ktheo)
> diff <- list(x = k[[j]]$r, y = k[[j]]$t, z = k[[j]]$Kinhom - k[[j]]$Ktheo)
> oldpar <- par(no.readonly = TRUE)
> par(mfrow = c(1, 3))
> fields::image.plot(inhom, main= "Kinhom", col = hcl.colors(12, "YlOrRd", rev = FALSE),
xlab = "Spatial distance", ylab = "Temporal distance")
> fields::image.plot(theo, main = "Ktheo", col = hcl.colors(12, "YlOrRd", rev = FALSE),
xlab = "Spatial distance", ylab = "Temporal distance")
> fields::image.plot(diff, main = "Kinhom - Ktheo", col = hcl.colors(12, "YlOrRd", rev = FALSE),
xlab = "Spatial distance", ylab = "Temporal distance")
> par(oldpar)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.9\textwidth]{Art5.pdf}
\caption{Observed vs theoretical K-function.}
\label{fig:p5}
\end{figure}
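For the homogeneous versions, it suffices to replace the density-based estimate in the example above with a constant vector (a sketch: \texttt{volume} applied to a \texttt{linnet} object in \textbf{spatstat.geom} returns the total network length, and the simulated time interval is $[0,5]$):
\begin{example}
## constant intensity n / (|L||T|), one value per point
> n <- length(lambda)
> lam_hom <- rep(n / (spatstat.geom::volume(stlnpp::easynet) * 5), n)
> k_hom <- localSTLKinhom(x, lambda = lam_hom, normalize = TRUE)
\end{example}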
\subsection{Local test for assessing the second-order differences between two point patterns}\label{sec:test}
The function \texttt{localtest} performs the permutation test of the local structure of spatio-temporal point pattern data, proposed in \cite{siino2018testing}.
The network counterpart is also implemented, following \cite{dangelo2021assessing}.
This test detects local differences in the second-order structure of two observed point patterns $\textbf{x}$ and $\textbf{z}$
occurring on the same space-time region.
This procedure was first introduced in \cite{moraga:montes:11} for the purely spatial case, and then extended to
the spatio-temporal context by \cite{siino2018testing}. Finally,
the test was made suitable also for spatio-temporal point patterns
with spatial domain coinciding with a linear network by \cite{dangelo2021assessing}.
In general, for each point $(\textbf{u},t)$ in the spatio-temporal observed
point pattern $\textbf{x}$, we test
$$
\begin{cases}
\mathcal{H}_{0}: & \text{no difference in the second-order local structure of } (\textbf{u},t) \quad \text{ w.r.t. } \quad \{ \{ \textbf{x} \setminus (\textbf{u},t) \} \cup \textbf{z} \}\\
\mathcal{H}_{1}: & \text{significant difference in the second-order local structure of } (\textbf{u},t) \quad \text{ w.r.t. } \quad \{ \{ \textbf{x} \setminus (\textbf{u},t) \} \cup \textbf{z} \}
\end{cases}$$
The sketch of the test is as follows:
\begin{enumerate}
\item Set $k$ as the number of permutations
\item For each point $(\textbf{u}_i,t_i) \in \textbf{x}, i = 1, \ldots, n$:
\begin{itemize}
\item Estimate the LISTA function $\hat{L}^{(i)}(r,h)$
\item Compute the local deviation test
$$T^i=\int_{0}^{t_0} \int_{0}^{r_0} \Big(
\hat{L}^{(i)}(r,h)- \hat{L}^{-(i)}_{H_0}(r,h)
\Big)^2 \text{d}r \text{d}h,$$
where $\hat{L}^{-(i)}_{H_0}(r,h)$
is the LISTA function for the $i$-th point,
averaged over the $j=1,\dots,k$ permutations
\item Compute a $p$-value as
$p^i=\sum_{j=1}^{k} \textbf{1}(T^{i,j}_{H_0} \geq T^i)/k$
\end{itemize}
\end{enumerate}
The test ends providing a vector $p$ of $p$-values, one for each point
in $\textbf{x}$.
If the test is performed for spatio-temporal point patterns as in
\cite{siino2018testing}, that is, on an object of class \texttt{stp}, the LISTA
functions $\hat{L}^{(i)}$ employed are the local $K$-functions of
\cite{adelfio2020some}, computed by the function \texttt{KLISTAhat}
of the \textbf{stpp} package \citep{gabriel:rowlingson:diggle:2013}.
%
If the function is applied to a \texttt{stlp} object, that is, on two spatio-temporal
point patterns observed on the same linear network \texttt{L},
the local $K$-functions
used are the ones proposed in \cite{dangelo2021assessing}, documented
in \texttt{localSTLKinhom}.
%
Details on the performance of the test are found in \cite{siino2018testing} and
\cite{dangelo2021assessing} for Euclidean and network spaces, respectively.
Alternative LISTA functions that can be employed to run the test are \texttt{LISTAhat} of \textbf{stpp} and \texttt{localSTLginhom} of \textbf{stopp}, that is, the pcfs on Euclidean space and
linear networks respectively.
The methods for this class of objects (1) print the main information on the result of the local permutation test performed with \texttt{localtest} on either a \texttt{stp} or \texttt{stlp} object: whether the local test was run on point patterns lying on a linear network or not, the number of points in the background \texttt{X} and alternative \texttt{Z} patterns, and the number of points in \texttt{X} which exhibit local differences in the second-order structure with respect to \texttt{Z}, according to the performed test;
and (2) plot the result of the local permutation test performed with \texttt{localtest}, highlighting the points of the background pattern \texttt{X} which exhibit local differences in the second-order structure with respect to \texttt{Z}, according to the previously performed test. The remaining points of \texttt{X} are also represented. The plot also shows the underlying linear network, if the local test has been applied to point patterns occurring on the same linear network, that is, if \texttt{localtest} has been applied to a \texttt{stlp} object. In the following, we provide an example with two point patterns, both occurring on the unit cube.
\begin{example}
## background pattern
> set.seed(12345)
> X <- rstpp(lambda = function(x, y, t, a) {exp(a[1] + a[2]*x)}, par = c(.05, 4),
nsim = 1, seed = 2, verbose = TRUE)
## alternative pattern
> set.seed(12345)
> Z <- rstpp(lambda = 25, nsim = 1, seed = 2, verbose = TRUE)
## run the local test
> test <- localtest(X, Z, method = "K", k = 9, verbose = FALSE)
> test
Test for local differences between two
spatio-temporal point patterns
--------------------------------------
Background pattern X: 17
Alternative pattern Z: 20
1 significant points at alpha = 0.05
> plot(test)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art14.pdf}
\caption{Output of the local test.}
\label{fig:p14}
\end{figure}
\section{Model fitting}\label{sec:models}
The description of the observed point pattern intensity is a crucial issue when dealing with spatio-temporal point pattern data, and specifying a statistical model is far more effective than analysing the data only through summary statistics. Formulating and fitting a statistical model to the data allows one to take into account effects that could otherwise introduce distortion in the analysis \citep{baddeley2015spatial}. In this section, we outline the main functions to fit different specifications of inhomogeneous spatio-temporal Poisson process models.
\subsubsection{Spatio-temporal Poisson point processes with separable intensity}
When dealing with intensity estimation for spatio-temporal point processes, it is quite common to assume that the intensity function $\lambda(\textbf{u},t)$ is separable \citep{diggle2013statistical,gabriel2009second}. Under this assumption, the intensity function is given by the product
\begin{equation}
\lambda(\textbf{u},t)={\lambda}(\textbf{u}){\lambda}(t)
\label{eq:sep}
\end{equation}
where ${\lambda}(\textbf{u})$ and ${\lambda}(t)$ are non-negative functions on $W$ and $T$, respectively \citep{gabriel2009second}.
Under this assumption, any non-separable effects are interpreted as second-order, rather than first-order. Suitable estimates of $\lambda(\textbf{u})$ and $\lambda(t)$ in \eqref{eq:sep} depend on the characteristics of each application. The functions implemented here use a combination of a parametric spatial point pattern model, potentially depending on the spatial coordinates and/or spatial covariates, and a parametric log-linear model for the temporal component. Non-parametric kernel estimates are also legitimate choices, but they are not yet implemented.
The spatio-temporal intensity is therefore obtained by multiplying the purely spatial and purely temporal intensities, previously fitted separately. The resulting intensity is normalised to make the estimator unbiased, so that the expected number of points is
$$\mathbb{E}\bigg[ \int_{W \times T} \hat{\lambda}(\textbf{u},t)d_2(\textbf{u},t) \bigg] = \int_{W \times T} \lambda(\textbf{u},t)d_2(\textbf{u},t)=n,$$
and the final intensity function is obtained as
$$\hat{\lambda}(\textbf{u},t)=\frac{\hat{\lambda}(\textbf{u})\hat{\lambda}(t)}{\int_{W \times T} \hat{\lambda}(\textbf{u},t)d_2(\textbf{u},t)}.$$
The function \texttt{sepstppm} fits such a separable spatio-temporal Poisson process model.
The function \texttt{plot.sepstppm} shows the fitted intensity, displayed both in space and in space and time.
\begin{example}
> df1 <- valenciacrimes[valenciacrimes$x < 210000 & valenciacrimes$x > 206000
& valenciacrimes$y < 4377000 & valenciacrimes$y > 4373000, ]
> mod1 <- sepstppm(df1, spaceformula = ~x * y, timeformula = ~ crime_hour + week_day)
\end{example}
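The fitted intensity can then be displayed through the plot method mentioned above (assuming standard S3 dispatch on the fitted object):
\begin{example}
> plot(mod1)
\end{example}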
For linear network point patterns, non-parametric estimators of the intensity function $\lambda(\cdot,\cdot)$ have been proposed \citep{mateu2020spatio}, capturing any variation of the distribution of the process over its state-space $L \times T$.
A kernel-based intensity estimator for spatio-temporal linear network point processes, based on the first-order separability assumption and considered in \cite{moradi2020first}, is available in the \textbf{stlnpp} package.
The functions \texttt{sepstlppm} and \texttt{plot.sepstlppm} implement the network counterparts of the spatio-temporal Poisson point process with separable intensity and fully parametric specification.
\begin{example}
> mod1 <- sepstlppm(valenciacrimes[1:2500, ], spaceformula = ~x,
timeformula = ~ crime_hour + week_day, L = valencianet)
\end{example}
\subsubsection{Global inhomogeneous spatio-temporal Poisson processes through a quadrature scheme}
For a non-separable spatio-temporal specification, we assume that the template model is a Poisson process, with a parametric intensity or rate function
\begin{equation}
\lambda(\textbf{u}, t; \theta), \quad \textbf{u} \in
W,\quad t \in T, \quad \theta \in \Theta.
\label{eq:pois}
\end{equation}
The log-likelihood of the template model is
$$\log L(\theta) = \sum_i \log
\lambda(\textbf{u}_i, t_i; \theta) - \int_W\int_T
\lambda(\textbf{u}, t; \theta) \text{d}t\text{d}u$$
up to an additive constant, where the sum is over all points $(\textbf{u}_i, t_i)$
of the spatio-temporal point process $X$.
We might consider intensity models of log-linear form
\begin{equation}
\lambda(\textbf{u}, t; \theta) = \exp(\theta Z(\textbf{u}, t) + B(\textbf{u},t )), \quad
\textbf{u} \in W,\quad t \in T
\label{eq:glo_mod}
\end{equation}
where $Z(\textbf{u}, t)$ is a vector-valued covariate function, and $B(\textbf{u}, t)$ is a scalar offset.
In point process theory, the variables $Z(\textbf{u}, t)$ are referred to as spatio-temporal covariates. Their values are assumed to be knowable, at least in principle, at each location in the spatio-temporal window.
For inferential purposes, their values must be known at each point of the data point pattern and at least at some other locations.
This is why we first implemented the dependence of the intensity function $\lambda(\textbf{u}, t; \theta)$ on the space and time coordinates.\\
The \texttt{stppm} function fits a Poisson process model to an observed spatio-temporal point pattern stored in a \texttt{stp} object, assuming the template model \eqref{eq:pois}.
Estimation is performed by fitting a \texttt{glm} using a spatio-temporal version of the quadrature scheme by \cite{berman1992approximating}.
We use a finite quadrature approximation
to the log-likelihood. Renaming the data points as $\textbf{x}_1,\dots ,
\textbf{x}_n$ with $(\textbf{u}_i,t_i) = \textbf{x}_i$ for $i = 1, \dots , n$,
we then generate $m$ additional 'dummy points' $(\textbf{u}_{n+1},t_{n+1}),
\dots , (\textbf{u}_{m+n},t_{m+n})$ to
form a set of $n + m$ quadrature points (where $m > n$).
%
Then we determine quadrature weights $a_1, \dots , a_{n+m}$
so that a Riemann sum can approximate the integral in the log-likelihood
$$ \int_W \int_T \lambda(\textbf{u},t;\theta)\text{d}t\text{d}u \approx \sum_{k = 1}^{n + m}a_k\lambda(\textbf{u}_{k},t_{k};\theta)$$
where $a_k$ are the quadrature weights such that
$\sum_{k = 1}^{n + m}a_k = l(W \times T)$ where $l$ is the Lebesgue measure.
%
Then the log-likelihood of the template model can be approximated by
$$ \log L(\theta) \approx \sum_i \log \lambda(\textbf{x}_i; \theta) +\sum_j(1 - \lambda(\textbf{u}_j,t_j; \theta))a_j=\sum_je_j \log \lambda(\textbf{u}_j, t_j; \theta) + (1 - \lambda(\textbf{u}_j, t_j; \theta))a_j$$
where $e_j = 1\{j \leq n\}$ is the indicator that equals $1$ if
$(\textbf{u}_j,t_j)$ is a data point. Writing $y_j = e_j/a_j$, this becomes
$$ \log L(\theta) \approx
\sum_j
a_j
(y_j \log \lambda(\textbf{u}_j, t_j; \theta) - \lambda(\textbf{u}_j, t_j; \theta))
+
\sum_j
a_j.$$
%
Apart from the constant $\sum_j a_j$, this expression is formally equivalent
to the weighted log-likelihood of
a Poisson regression model with responses $y_j$ and means
$\lambda(\textbf{u}_j,t_j; \theta) = \exp(\theta Z(\textbf{u}_j,t_j) +
B(\textbf{u}_j,t_j))$.
%
This expression is
maximised by \texttt{stppm} using standard GLM software.
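Schematically, the fit amounts to a weighted Poisson regression, which might be coded as follows (an illustrative sketch, not the internal code of \texttt{stppm}; \texttt{quad} is a hypothetical data frame of quadrature points):
\begin{example}
## quad: coordinates (x, y, t), weights a, indicator e (1 = data, 0 = dummy)
> quad$yy <- quad$e / quad$a
> fit <- glm(yy ~ x, family = poisson(), weights = a, data = quad)
## non-integer responses raise a harmless warning; coef(fit) estimates theta
\end{example}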
%
In detail, we define the spatio-temporal quadrature scheme by considering a
spatio-temporal
partition of $W \times T$ into cubes $C_k$ of equal volume $\nu$,
assigning the weight $a_k=\nu/n_k$
to each quadrature point (dummy or data) where $n_k$ is the number of
points that lie in the same cube as the point $u_k$ \citep{raeisi2021spatio}.
%
The number of dummy points should be sufficient for an accurate estimate of the
likelihood. Following \cite{baddeley2000non} and \cite{raeisi2021spatio},
we start with a number of dummy points $m \approx 4 n$, increasing it until
$\sum_k a_k = l(W \times T)$.
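The cube-count weights can be obtained, for instance, as follows (illustrative; \texttt{quad}, the break vectors \texttt{bx}, \texttt{by}, \texttt{bt}, and the cube volume \texttt{nu} are hypothetical):
\begin{example}
## partition W x T into cubes of equal volume nu and set a_k = nu / n_k
> cube <- interaction(cut(quad$x, bx), cut(quad$y, by), cut(quad$t, bt),
    drop = TRUE)
> nk <- ave(rep(1, nrow(quad)), cube, FUN = sum)
> quad$a <- nu / nk
\end{example}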
The \texttt{AIC.stppm} and \texttt{BIC.stppm} functions return the $AIC = 2k - 2 \log(\hat{L})$ and $BIC = k\log{n} - 2 \log(\hat{L})$ of a point process
model fitted through the
function \texttt{stppm} applied to an observed
spatio-temporal point pattern of class \texttt{stp}.
%
As the model returned by \texttt{stppm} is fitted through a quadrature scheme,
the log-likelihood is computed through the quantity
$$- \log{L(\hat{\theta}; \boldsymbol{x})} = \frac{D}{2} + \sum_{j = 1}^{n}I_j\log{w_j}+n(\boldsymbol{x}).$$
\begin{example}
## Homogeneous
> set.seed(2)
> ph <- rstpp(lambda = 200, nsim = 1, seed = 2, verbose = TRUE)
> hom1 <- stppm(ph, formula = ~ 1)
> hom1
Homogeneous Poisson process
with Intensity: 202.093
Estimated coefficients:
(Intercept)
5.309
## plot(hom1) won't show any plot, due to the constant intensity
> coef(hom1)
(Intercept)
5.308728
## Inhomogeneous
> set.seed(2)
> pin <- rstpp(lambda = function(x, y, t, a) {exp(a[1] + a[2]*x)}, par = c(2, 6),
nsim = 1, seed = 2, verbose = TRUE)
1.
> inh1 <- stppm(pin, formula = ~ x)
> inh1
Inhomogeneous Poisson process
with Trend: ~x
Estimated coefficients:
(Intercept) x
2.180 5.783
> plot(inh1)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{Art6.pdf}
\caption{Output of the model fitting.}
\label{fig:p6}
\end{figure}
\subsubsection{Local inhomogeneous spatio-temporal Poisson processes through local log-likelihood}
The \texttt{locstppm} function fits a Poisson process model to an observed spatio-temporal
point pattern stored in a \texttt{stp} object, that is, a Poisson model with
a set of parameters $\theta_i$ for each point $i$.
We assume that the template model is a Poisson process, with a parametric
intensity or rate function $\lambda(\textbf{u}, t; \theta_i)$ with space
and time locations $\textbf{u} \in W$, $t \in T$ and parameters $\theta_i \in \Theta$.
Estimation is performed through the fitting of a \texttt{glm} using a localised version of the quadrature scheme by \cite{berman1992approximating}, first introduced
in the purely spatial context by \cite{baddeley:2017local}, and in the spatio-temporal
framework by \cite{d2022locally}.
The local log-likelihood associated with the spatio-temporal location
$(\textbf{v},s)$ is given by
$$\log L((\textbf{v},s);\theta) = \sum_i w_{\sigma_s}(\textbf{u}_i - \textbf{v}) w_{\sigma_t}(t_i - s) \log
\lambda(\textbf{u}_i, t_i; \theta) - \int_W \int_T
\lambda(\textbf{u}, t; \theta) w_{\sigma_s}(\textbf{u} - \textbf{v}) w_{\sigma_t}(t - s) \text{d}t \text{d}u$$
where $w_{\sigma_s}$ and $w_{\sigma_t}$ are weight functions, and
$\sigma_s, \sigma_t > 0$ are the smoothing bandwidths. It is not
necessary to assume that $w_{\sigma_s}$ and $w_{\sigma_t}$
are probability densities. For simplicity, we shall consider only kernels of fixed
bandwidth, even though spatially adaptive kernels could also be used.
%
Note that if the template model is the homogeneous Poisson process with intensity
$\lambda$, then the local
likelihood estimate $\hat{\lambda}(\textbf{v}, s)$
reduces to the kernel estimator of the point process intensity with
kernel proportional to $w_{\sigma_s}w_{\sigma_t}$.
%
We now use an approximation similar to
$\log L(\theta) \approx
\sum_j
a_j
(y_j \log \lambda(\textbf{u}_j, t_j; \theta) - \lambda(\textbf{u}_j, t_j; \theta))
+
\sum_j
a_j,$
but for the local log-likelihood associated
with each desired location $(\textbf{v},s) \in W \times T$, that is:
$$\log L((\textbf{v},s); \theta) \approx
\sum_j
w_j(\textbf{v},s)a_j
(y_j \log \lambda(\textbf{u}_j,t_j; \theta) - \lambda(\textbf{u}_j,t_j; \theta))
+
\sum_j
w_j(\textbf{v},s)a_j ,$$
where $w_j(\textbf{v},s) = w_{\sigma_s}(\textbf{v} - \textbf{u}_j)
w_{\sigma_t}(s - t_j)$.
%
Basically, for each
desired location $(\textbf{v},s)$,
we replace the vector of quadrature weights $a_j$ by
$a_j(\textbf{v},s)= w_j(\textbf{v},s)a_j$ where
$w_j (\textbf{v},s) = w_{\sigma_s}(\textbf{v} - \textbf{u}_j)w_{\sigma_t}(s - t_j)$,
and use the GLM software to fit the Poisson regression.
%
The local likelihood is defined at any location $(\textbf{v},s)$ in continuous space.
In practice, it is sufficient to
consider a grid of points $(\textbf{v},s)$.
%
We refer to \cite{d2022locally} for further discussion on bandwidth selection
and on computational costs.
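In code, each local fit only changes the weights of the global Poisson regression sketched in the previous section (illustrative; \texttt{v1}, \texttt{v2}, \texttt{s0} denote a hypothetical grid location and \texttt{sigma\_s}, \texttt{sigma\_t} the bandwidths):
\begin{example}
## separable Gaussian kernel weights w_j(v, s)
> w <- dnorm(quad$x - v1, sd = sigma_s) * dnorm(quad$y - v2, sd = sigma_s) *
    dnorm(quad$t - s0, sd = sigma_t)
> fit_vs <- glm(yy ~ x, family = poisson(), weights = a * w, data = quad)
\end{example}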
\begin{example}
> inh00_local <- locstppm(pin, formula = ~ 1)
> inh00_local
Homogeneous Poisson process
with median Intensity: 7.564067
Summary of estimated coefficients
V1
Min. :3.981
1st Qu.:7.291
Median :7.564
Mean :7.316
3rd Qu.:7.669
Max. :7.854
> inh01_local <- locstppm(pin, formula = ~ x)
> inh01_local
Inhomogeneous Poisson process
with Trend: ~x
Summary of estimated coefficients
V1 V2
Min. :1.282 Min. :0.7667
1st Qu.:2.634 1st Qu.:4.5470
Median :3.059 Median :5.0662
Mean :3.082 Mean :5.0373
3rd Qu.:3.528 3rd Qu.:5.5636
Max. :4.709 Max. :6.9729
\end{example}
\subsubsection{Log-Gaussian Cox process estimation through (locally weighted) joint minimum contrast}
In the Euclidean context, LGCPs are one of the most prominent clustering models. By specifying the intensity of the process and the moments of the underlying GRF, it is possible to estimate both the first and second-order characteristics of the process.
Following the inhomogeneous specification in \cite{diggle:moraga:13}, a LGCP for a generic point in space and time has the intensity
\begin{equation*}
\Lambda(\textbf{u},t)=\lambda(\textbf{u},t)\exp(S(\textbf{u},t))
\end{equation*}
where $S$ is a Gaussian process with $\mathbb{E}(S(\textbf{u},t))=\mu=-0.5\sigma^2$, so that $\mathbb{E}(\exp{S(\textbf{u},t)})=1$, and with covariance $\mathbb{C}(S(\textbf{u}_i,t_i),S(\textbf{u}_j,t_j))=\sigma^2 \gamma(r,h)$ under the stationarity assumption, with $\gamma(\cdot)$ the correlation function of the GRF, and $r$ and $h$ some spatial and temporal distances. Following \cite{moller1998log}, the first-order product density and the pair correlation function of an LGCP are $\mathbb{E}(\Lambda(\textbf{u},t))=\lambda(\textbf{u},t)$ and $g(r,h)=\exp(\sigma^2\gamma(r,h))$, respectively.
The \texttt{stlgcppm} function estimates a local log-Gaussian Cox process (LGCP), following the locally weighted minimum contrast procedure introduced in \cite{d2022locally}.
Three covariances are available: separable exponential, Gneiting, and De Iaco-Cesare.
If both the \texttt{first} and \texttt{second} arguments are set to \texttt{"global"}, a log-Gaussian Cox process is fitted by means of the joint minimum contrast procedure proposed in \cite{siino2018joint}.
We may consider a separable structure for the covariance function of the GRF \citep{brix2001spatiotemporal} that has exponential form for both the spatial and the temporal components,
\begin{equation}
\mathbb{C}(r,h)=\sigma^2\exp \bigg(\frac{-r}{\alpha}\bigg)\exp\bigg(\frac{-h}{\beta}\bigg),
\label{eq:cov}
\end{equation}
where $\sigma^2$ is the variance, $\alpha$ is the scale parameter for the spatial distance and $\beta$ is the scale parameter for the temporal one.
The exponential form is widely used in this context and nicely reflects the decaying correlation structure with distance or time.\\
Moreover, we may consider a non-separable covariance for the GRF, useful to describe
more general situations.
Following the parametrisation in \cite{schlather2015analysis}, the Gneiting covariance function \citep{gneiting2006geostatistical} can be written as
$$
\mathbb{C}(r,h) = (\psi(h) + 1)^{ - d/2} \varphi \bigg( \frac{r}{\sqrt{\psi(h) + 1}} \bigg) \qquad r \geq 0, \quad h \geq 0,
$$
where $\varphi(\cdot)$ is a completely monotone function associated with
the spatial structure, and $\psi(\cdot)$ is a positive function with a
completely monotone derivative associated with the temporal
structure of the data. For example, the choice $d = 2$,
$\varphi(r)=\sigma^2 \exp ( - (\frac{r}{\alpha})^{\gamma_s})$ and
$\psi(h)=((\frac{h}{\beta})^{\gamma_t} + 1)^{\delta/\gamma_t}$
yields the parametric family
\begin{equation}
\mathbb{C}(r,h) = \frac{\sigma^2}{((\frac{h}{\beta})^{\gamma_t} + 1)^{\delta/\gamma_t}} \exp \Biggl( - \frac{(\frac{r}{\alpha})^{\gamma_s}}{((\frac{h}{\beta})^{\gamma_t} + 1)^{\delta/(2\gamma_t)}} \Biggr),
\label{eq:nonsep}
\end{equation}
where $\alpha > 0$ and $\beta > 0$ are scale parameters of space and time, $\delta$ takes values in $(0, 2]$, and $\sigma^2$ is the variance.\\
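As a direct transcription of \eqref{eq:nonsep} into code (illustrative only; the argument names are ours):
\begin{example}
> gneiting_cov <- function(r, h, sigma2, alpha, beta, gamma_s, gamma_t, delta) {
    psi <- ((h / beta)^gamma_t + 1)^(delta / gamma_t)
    sigma2 / psi * exp(-(r / alpha)^gamma_s / sqrt(psi))
  }
\end{example}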
Another parametric covariance implemented belongs to the De Iaco-Cesare family \citep{de2002fortran,de2002nonseparable}; there is a wealth of further covariance families that could equally be used for our purposes.
%
Following \cite{siino2018joint}, the second-order
parameters $\boldsymbol{\psi}$ are found by minimising
$$M_J\{ \boldsymbol{\psi}\}=\int_{h_0}^{h_{max}} \int_{r_0}^{r_{max}} \phi(r,h) \{\nu[\hat{J}(r,h)]-\nu[J(r,h;\boldsymbol{\psi})]\}^2 \text{d}r \text{d}h,$$
where $\phi(r, h)$ is a weight that depends on the space-time
distance and $\nu$ is a transformation function.
%
They suggest $\phi(r,h)=1$ and $\nu$ as
the identity function, while $r_{max}$ and $h_{max}$ are selected as 1/4
of the maximum observable spatial and temporal distances.\\
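A discretised version of the contrast for the separable covariance \eqref{eq:cov}, with $\phi \equiv 1$ and $\nu$ the identity, might be minimised as follows (a sketch; \texttt{g\_hat} is a hypothetical matrix of empirical pair correlation values over equispaced grids \texttt{rgrid} and \texttt{hgrid}):
\begin{example}
## contrast between empirical and theoretical g(r,h) = exp(C(r,h))
> Mj <- function(par) {
    C <- par[1] * exp(-outer(rgrid, hgrid, function(r, h) r / par[2] + h / par[3]))
    sum((g_hat - exp(C))^2) * diff(rgrid)[1] * diff(hgrid)[1]
  }
> fit_par <- optim(c(1, 1, 1), Mj)   ## par = (sigma^2, alpha, beta)
\end{example}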
%
Following \cite{d2022locally}, we can fit a localised version of the LGCP,
that is, obtain a
vector of parameters $\boldsymbol{\psi}_i$ for each point $i$, by
minimising
$$M_{J,i}\{ \boldsymbol{\psi}_i \}=\int_{h_0}^{h_{max}}\int_{r_0}^{r_{max}}
\phi(r,h) \{ \nu[\bar{J}_i(r,h)]-\nu[J(r,h;\boldsymbol{\psi})]\}^2 \text{d}r \text{d}h,
\qquad \text{where} \qquad
\bar{J}_i(r,h)= \frac{\sum_{j=1}^{n}\hat{J}_j(r,h)w_j}{\sum_{j=1}^{n}w_j}$$
is the average of the local functions
$\hat{J}_j(r,h)$, weighted by some point-wise kernel estimates.
In particular, we consider $\hat{J}_i(\cdot)$ as the local
spatio-temporal pair correlation function \citep{gabriel:rowlingson:diggle:2013} documented in \texttt{LISTAhat}.
The \texttt{print} and \texttt{summary} functions give the main information on the fitted model. In the case of local parameters (both first- and second-order), the summary function contains information on their distributions.
Next, we present an example with a complex seismic point pattern.
\begin{example}
> data("greececatalog")
\end{example}
If both first and second arguments are set to "global", a log-Gaussian Cox process is fitted by means of the joint minimum contrast.
\begin{example}
> lgcp1 <- stlgcppm(greececatalog, formula = ~ 1, first = "global", second = "global")
> lgcp1
Joint minimum contrast fit
for a log-Gaussian Cox process with
global first-order intensity and
global second-order intensity
--------------------------------------------------
Homogeneous Poisson process
with Intensity: 0.00643
Estimated coefficients of the first-order intensity:
(Intercept)
-5.046
--------------------------------------------------
Covariance function: separable
Estimated coefficients of the second-order intensity:
sigma alpha beta
6.989 0.225 156.353
--------------------------------------------------
Model fitted in 1.014 minutes
\end{example}
If first = "local", local parameters for the first-order intensity are provided. In this case, the summary function contains information on their distributions.
\begin{example}
> lgcp2 <- stlgcppm(greececatalog, formula = ~ x, first = "local", second = "global")
> lgcp2
Joint minimum contrast fit
for a log-Gaussian Cox process with
local first-order intensity and
global second-order intensity
--------------------------------------------------
Inhomogeneous Poisson process
with Trend: ~x
Summary of estimated coefficients of the first-order intensity
(Intercept) x
Min. :-6.400 Min. :-0.90689
1st Qu.:-2.526 1st Qu.:-0.38710
Median : 2.333 Median :-0.26876
Mean : 2.153 Mean :-0.26744
3rd Qu.: 5.070 3rd Qu.:-0.06707
Max. :16.323 Max. : 0.10822
--------------------------------------------------
Covariance function: separable
Estimated coefficients of the second-order intensity:
sigma alpha beta
2.612 0.001 36.415
--------------------------------------------------
Model fitted in 3.634 minutes
\end{example}
The \texttt{plot} function shows the fitted intensity, displayed
both in space (by means of a density kernel smoothing) and in space and time. In the case of local covariance parameters, the function returns the mean of the random intensity, displayed both in space (by means of a density kernel smoothing) and in space and time.
The \texttt{localsummary.stlgcppm} function breaks up the contribution of the local estimates to the fitted intensity, by plotting the overall intensity and the density kernel smoothing of some artificial intensities obtained by imputing the quartiles of the local parameters' distributions.
Finally, the \texttt{localplot.stlgcppm} function plots the local estimates. In the case of local covariance parameters, the function displays the local estimates of the chosen covariance function.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art7.pdf}
\caption{Output of the \texttt{localsummary} function.}
\label{fig:p7}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art8.pdf}
\caption{Estimated local coefficients.}
\label{fig:p75}
\end{figure}
\section{Diagnostics}\label{sec:diag}
Inhomogeneous second-order statistics can be constructed and used for assessing the goodness-of-fit of fitted first-order intensities.
Nevertheless, it is widespread practice in the statistical analysis of spatial and spatio-temporal point pattern data to primarily compare the data with a homogeneous Poisson process, which is generally the null model in applications. Indeed, when dealing with diagnostics in point processes, two steps are often needed: the transformation of the data into residuals (by thinning or rescaling \citep{schoenberg2003multidimensional}) and the use of tests to assess the consistency of the residuals with the homogeneous Poisson process \citep{adelfio:schoenberg:09}. Usually, second-order statistics estimated for the residual process (i.e. the result of a thinning or rescaling procedure) are analysed.
Essentially, to each observed point a weight inversely proportional to the conditional intensity at that point is given. This method was adopted by \cite{veen2006assessing} in constructing a weighted version of the $K$-function of \cite{ripley1977markov}; the resulting weighted statistic is in many cases more powerful than residual methods \citep{veen2006assessing}. \\
The spatio-temporal inhomogeneous version of the $K$-function in \eqref{eq:k} is given by \cite{gabriel2009second} as
\begin{equation}
\hat{K}_{I}(r,h)=\frac{ \vert W \vert \vert T \vert }{n(n-1)}\sum_{i=1}^n \sum_{j > i} \frac{I( \vert \vert \textbf{u}_i-\textbf{u}_j \vert \vert \leq r,\vert t_i-t_j\vert \leq h)}{\hat{\lambda}(\textbf{u}_i,t_i)\hat{\lambda}(\textbf{u}_j,t_j)},
\label{eq:kinh}
\end{equation}
where $\lambda(\cdot,\cdot)$ is the first-order intensity at an arbitrary point.
We know that $\mathbb{E}[\hat{K}_{I}(r,h)]=\pi r^2 h$, that is the same as the expectation of $\hat{K}(r,h)$ in \eqref{eq:k}, when the intensity used for the weighting is the true generator model.
This is a crucial result that allows the use of the weighted estimator $\hat{K}_{I}(r,h)$ as a diagnostic tool, for assessing the goodness-of-fit of spatio-temporal point processes with generic first-order intensity functions.
Indeed, if the weighting intensity function is close to the true one $\lambda(\textbf{u},t)$, the expectation of $\hat{K}_{I}(r,h)$ should be close to $\mathbb{E}[\hat{K}(r,h)]=\pi r^2 h$ for the Poisson process. For instance, values of $\hat{K}_{I}(r,h)$ greater than $\pi r^{2} h$ indicate that the fitted model is not appropriate, since the distances computed among points exceed the Poisson theoretical ones.
The \texttt{globaldiag} function performs global diagnostics of a model fitted for the first-order intensity of a spatio-temporal point pattern, using the spatio-temporal inhomogeneous $K$-function \citep{gabriel2009second} documented by the function \texttt{STIKhat} of the \textbf{stpp} package \citep{stpp}.
It can also perform global diagnostics of a model fitted for the first-order intensity of a spatio-temporal point pattern on a linear network, by means of the spatio-temporal inhomogeneous $K$-function on a linear network \citep{moradi2020first} documented by the function \texttt{STLKinhom} of the \textbf{stlnpp} package \citep{stlnpp}.
In both cases, the function returns the plots of the inhomogeneous $K$-function weighted by the provided intensity to diagnose, its theoretical value, and their difference.
\begin{example}
> globaldiag(greececatalog, lgcp1$l)
[1] "Sum of squared differences = 318213525081.852"
> globaldiag(greececatalog, lgcp2$l)
[1] "Sum of squared differences = 147029066885.741"
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art9.pdf}\\
\vspace{-.5cm}
\includegraphics[width=\textwidth]{Art10.pdf}
\caption{Output of the global diagnostics for the two fitted LGCPs.}
\label{fig:p9}
\end{figure}
Moving to the local diagnostics, \cite{adelfio2020some} derived the expectation of the local
inhomogeneous spatio-temporal K-function, under the Poisson case:
$\mathbb{E}[\hat{K}^i(r,h) ]= \pi r^2 h.$
%
Moreover, they found that when the local estimator is weighted by the true
intensity function,
its
expectation, $\mathbb{E}[\hat{K}_{I}^i(r,h)]$, is the same as the expectation of
$\hat{K}^i(r,h)$.
%
These results motivate the usage of the local estimator
$\hat{K}_{I}^i(r,h)$ as a diagnostic tool for assessing the
goodness-of-fit of spatio-temporal point processes with any
generic first-order intensity function $\lambda$.
%
Indeed, if the estimated intensity function
used for weighting in our proposed LISTA functions is
the true one, then the LISTA functions should behave as the
corresponding ones of a homogeneous Poisson process,
resulting in small discrepancies between the two.
%
Therefore, this function computes such discrepancies
by means of the $\chi_i^2$ values, obtained following the expression
$$ \chi_i^2=\int_0^{r_{max}} \int_0^{h_{max}} \Bigg(
\frac{\big(\hat{K}^i_{I}(r,h)- \mathbb{E}[\hat{K}^i(r,h) ]
\big)^2}{\mathbb{E}[\hat{K}^i(r,h) ]}
\Bigg) \text{d}h \text{d}r ,$$
one for each point in the point pattern.
%
Basically, departures of the LISTA functions $\hat{K}_{I}^i(r,h)$ from
the corresponding Poisson expected value directly suggest the unsuitability of
the intensity function $\lambda(\cdot)$ used in the weighting of the
LISTA functions for that specific point. Such a point can be referred to as an \textit{outlying point}.
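On a discrete $(r,h)$ grid, each $\chi_i^2$ reduces to a sum of squared standardised differences (illustrative sketch; \texttt{Kinh} and \texttt{Ktheo} stand for hypothetical matrices holding the weighted local estimate and its Poisson expectation):
\begin{example}
> chi2_i <- function(Kinh, Ktheo, dr, dh) sum((Kinh - Ktheo)^2 / Ktheo) * dr * dh
\end{example}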
%
Given that \cite{dangelo2021local} proved the same results for the network case,
that is,
$\mathbb{E}[\hat{K}_{L}^i(r,h) ]= rh$ and
$\mathbb{E}[\hat{K}_{L,I}^i(r,h) ]=\mathbb{E}[\hat{K}_{L}^i(r,h) ]$
when $\hat{K}_{L,I}^i(r,h)$ is weighted by the true intensity function,
we implemented the same above-mentioned diagnostics procedure to work on
intensity functions fitted on spatio-temporal point patterns occurring on
linear networks.
%
Note that the Euclidean procedure is implemented by means of the
local K-functions of
\cite{adelfio2020some}, documented in
\texttt{KLISTAhat} of the \textbf{stpp} package \citep{gabriel:rowlingson:diggle:2013}.
The network case uses the local K-functions on networks \citep{dangelo2021local},
documented
in \texttt{localSTLKinhom}.
The \texttt{localdiag} function performs local diagnostics of a model fitted for the first-order intensity of a spatio-temporal point pattern, by means of the local spatio-temporal inhomogeneous $K$-function \citep{adelfio2020some} documented by the function \texttt{KLISTAhat} of the \textbf{stpp} package \citep{gabriel:rowlingson:diggle:2013}.
It returns the points identified as outlying following the diagnostics procedure on individual points of an observed point pattern, as introduced in \cite{adelfio2020some}.
The points resulting from the local diagnostic procedure provided by this function can be inspected via the \texttt{plot}, \texttt{print}, \texttt{summary}, and \texttt{infl} functions.
\texttt{localdiag} is also able to perform local diagnostics of a model fitted for the first-order intensity of a spatio-temporal point pattern on a linear network, by means of the local spatio-temporal inhomogeneous $K$-function on linear networks \citep{dangelo2021assessing} documented by the function \texttt{localSTLKinhom}.
It returns the points identified as outlying following the diagnostics procedure on individual points of an observed point pattern, as introduced in \cite{adelfio2020some} and applied in \cite{dangelo2021local} to the linear network case.
\begin{example}
> set.seed(12345)
> stlp1 <- rETASlp(cat = NULL, params = c(0.078915 / 2, 0.003696, 0.013362, 1.2,
0.424466, 1.164793),
betacov = 0.5, m0 = 2.5, b = 1.0789, tmin = 0, t.lag = 200,
xmin = 600, xmax = 2200, ymin = 4000, ymax = 5300,
iprint = TRUE, covdiag = FALSE, covsim = FALSE, L = chicagonet)
> res <- localdiag(stlp1, intensity = density(as.stlpp(stlp1), at = "points"))
> res
Points outlying from the 0.95 percentile
of the analysed spatio-temporal point pattern on a linear network
--------------------------------------------------
Analysed pattern X: 65 points
4 outlying points
> plot(res)
> infl(res)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art12.pdf}
\caption{Output of the local diagnostics via the \texttt{plot.localdiag} function.}
\label{fig:p12}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=.95\textwidth]{Art13.pdf}
\caption{Output of the local diagnostics via the \texttt{infl.localdiag} function.}
\label{fig:p13}
\end{figure}
\section{Conclusions}\label{sec:concl}
This work has introduced the \textbf{stopp} \texttt{R} package, which deals with spatio-temporal point processes occurring either on the Euclidean space or on some specific linear networks, such as streets of a city.
The package includes functions for summarising, plotting, and performing various analyses on point processes; these functions mostly follow the approaches proposed in a few recent works in the scientific literature. Modelling, statistical inference, and simulation of spatio-temporal point processes on Euclidean space and linear networks, with a focus on their local properties, are the core topics of such research and, in turn, of the package.
To start with, we set the notation for spatio-temporal point processes occurring on both Euclidean spaces and linear networks. After that, we went over the main methods implemented in the \textbf{stopp} package for dealing with point process objects, data, and simulations. After recalling the definition of the Local Indicators of Spatio-Temporal Association (LISTA) functions, we introduced the new functions that compute the LISTAs on linear networks. We then illustrated the functions to run a local test to evaluate the local differences between two point patterns occurring on the same metric space. Moreover, many examples of the models included in the package were provided, including models for separable Poisson processes on both Euclidean space and networks, global and local non-separable inhomogeneous Poisson processes, and LGCPs. Finally, techniques for performing both global and local diagnostics on such models (but not limited to those only) for point patterns on planar and linear network spaces were presented.
The package tools are not exhaustive. This work represents the creation of a toolbox for different kinds of spatio-temporal analyses to be performed on observed point patterns, following the growing stream of literature on point process theory.
The presented work contributes to the existing literature by framing many of the most widespread methods for the analysis of spatio-temporal point processes into a unique package, which is intended to foster many further extensions.
|
{
"arxiv_id": "2302.13694",
"language": "en",
"timestamp": "2023-02-28T02:27:46",
"url": "https://arxiv.org/abs/2302.13694",
"yymm": "2302"
} | \section{Introduction}
Deformable Linear Objects (DLOs) are a class of objects that are characterized by two main features: deformability, which refers to the fact that the object is not a rigid body and its geometry can change, and linearity, which stands for the fact that the object is elongated and the ratio of its length to its width is substantial~\cite{survey}. Objects of this type are ubiquitous both in everyday life and in industry, where one can find ropes, cables, pipes, sutures, etc.
While the manipulation of rigid bodies is already solved for a wide range of objects~\cite{trends_and_challenges}, manipulating DLOs is still unsolved even for everyday objects such as cables and hoses. Due to the ubiquity of the DLOs, manipulating them poses a complex and vital challenge, which has been in the scope of researchers for over three decades~\cite{oldest_dlo_manip}. The interest in this topic has grown over the last few years, as the automatic wiring harness assembly is crucial for car manufacturers~\cite{wiring_harness_assembly}, as well as automatic completion of surgical sutures, which could help surgeons~\cite{surgical_sutures}.
\rev{To perform such tasks autonomously, robotic manipulators need to perceive the configuration of the manipulated object, as this is crucial for calculating the adequate control signal.}
To do so, accurate and real-time DLO tracking is necessary. However, state-of-the-art DLO tracking algorithms are relatively slow and do not meet the real-time requirements of control systems. Additionally, they cannot correctly handle manipulation sequences that contain occlusions~\cite{random_matrices} and self-intersections~\cite{cdcpd2}, or they make many assumptions about the model of the tracked object~\cite{apetit_siciliano, tomizuka_gmm}.
\begin{figure}[th]
\centering
\includegraphics[width=0.98\linewidth]{img/main_pic_mask.png}
\vspace{-0.4cm}
\caption{Using the proposed DLO tracking algorithm, one can transform the DLO mask into a low-dimensional B-spline representation in real-time.}
\label{fig:scheme}
\vspace{-0.4cm}
\end{figure}
This paper proposes a fast, non-iterative method for estimating a DLO's shape using a walk through the object's mask and B-spline regression. The proposed algorithm takes as input the mask of the DLO and returns a sequence of control points of a B-spline curve that approximates the shape of the tracked DLO. Our solution can deterministically identify the shape of a DLO in an HD image within $\SI{40}{\milli\second}$ while handling non-trivial scenarios, like occlusions, self-intersections, and multiple DLOs in the scene. The general scheme of the proposed approach is presented in Figure~\ref{fig:scheme}.
The main contribution of this work is twofold:
\begin{itemize}
\item a novel deterministic real-time DLO tracking algorithm that can handle occlusions, self-intersections, and multiple DLOs in the scene, requires no prior knowledge about the tracked DLO, and is faster and more accurate than state-of-the-art solutions for the assumed quality of the output shape,
\item a dataset of real and artificial 2D and 3D videos and images of several different DLOs, on which we verified the proposed method and which we share with the community for objective performance evaluation and to encourage the development of real-time DLO tracking\footnote{\url{https://github.com/PPI-PUT/cable_observer/tree/master}}.
\end{itemize}
\section{Related Work}
\subsection{DLO representation}
In the literature, there are several ways to represent the geometric shape of the DLO. The most straightforward one is to represent it as a sequence of points~\cite{RL_DLO_manipulation, fiducial}. However, more complex models are usually necessary for accurate cable modeling and tracking, like a B-spline model with multiple chained random matrices, proposed in~\cite{random_matrices}. A similar approach, but using Bezier curves and rectangle chains, was proposed in~\cite{bezier}, while in~\cite{nurbs} NURBS curves were used.
In our research, we use a B-spline representation (similar to the one used in~\cite{bezier, nurbs}) as it is flexible and enables one to accurately track the shape of a generic DLO while being compact, relatively easy, and cheap to work with. Using this representation, one can build more complex models, which consider the kinematics and dynamics of the DLO~\cite{gianluca_ICPS, cosserat_rod}.
\subsection{DLO tracking}
DLO tracking requires transforming the data gathered with sensors into the chosen representation.
While there are attempts to use data from tactile sensors~\cite{FEM}, the most successful way to perceive the DLO shape is to use vision and depth sensors. One of the most straightforward approaches to DLO shape tracking is to use the fiducial markers located along the DLO, and track them~\cite{fiducial} or use them to estimate the shape of a DLO~\cite{bretl}. A similar approach was presented in~\cite{tangled}, where colors denote consecutive rope segments.
The most common approach is to create a model of the DLO and use images or point clouds as measurements to modify its parameters and track the object deformation iteratively. One example of this approach is the modified expectation-maximization (EM) algorithm proposed in~\cite{abbeel_probabilistic}, which is used to update the predefined DLO model based on the registered deformations and simulation in a physics engine. Similarly, in~\cite{apetit_siciliano}, FEM methods were used to track the deformation of the predefined model, while in~\cite{tomizuka_gmm} a Structure Preserved Registration algorithm with the object represented as a Mixture of Gaussians was used. The authors of~\cite{bezier} performed DLO tracking using a Recursive Bayesian Estimator on a Spatial Distribution Model, built with a Bezier curve and a chain of rectangles.
Due to the iterative and often probabilistic character of the model updates, these methods usually have problems tracking rapidly deforming objects and require an appropriate model and accurate initialization.
To mitigate the slow initialization problem, authors of~\cite{random_matrices} used the Euclidean minimum spanning tree and the Breadth-first search method to speed up initialization. However, it still takes hundreds of milliseconds to obtain the DLO shape estimate.
A much faster EM-based tracking approach, which utilizes a coherent point drift method extended with some geometric-based regularization and physically and geometrically inspired constraints, was presented in~\cite{cdcpd2}.
An instance segmentation method for multiple DLOs, which can also serve for tracking, was initially proposed in~\cite{ariadne} and extended using Deep Learning solutions in~\cite{ariadne_plus} and~\cite{fastdlo}.
The solution presented in this paper utilizes a similar idea to the one presented in~\cite{ariadne_plus, fastdlo}. However, using skeletons instead of super-pixel graphs reduces the computational complexity~\cite{ariadne_plus}, and the lack of Deep Learning in our solution facilitates better generalization without sacrificing performance~\cite{ariadne_plus, fastdlo}. In our work, we do not try to model the DLO but instead quickly provide a compact representation of the DLO state\rev{ that is consistent and can be tracked between frames}. Thus, the proposed method can aid the existing model-based methods with accurate and real-time structured measurements of the system state.
\section{DLO Tracking}
\subsection{Problem Formulation}
\label{sec:problem_formulation}
The problem considered in this paper is to track the DLO on a video sequence. By tracking, we understand transforming consecutive video frames of the DLO's binary mask, obtained from the selected segmentation algorithm, into a 1D curve resembling the object's shape\rev{, whose representation should be consistent between frames}.
In this paper, we do not consider the image segmentation problem, similarly to~\cite{random_matrices, cdcpd2, abbeel_probabilistic}. Instead, we focus on shape tracking only, assuming that for homogeneously colored cables the mask is given by any color-based segmentation algorithm, while in more challenging scenarios a state-of-the-art deep learning method~\cite{ariadne_plus} is used.
\subsection{Proposed Method}
In this section, we introduce our proposed novel approach to fast DLO tracking, called DLOFTBs, which, by using walks on the DLO mask's skeleton, enables rapid fitting of a B-spline curve to the masked image of the DLO. The general scheme of the proposed algorithm is presented in Figure~\ref{fig:solution}. To transform the mask image into a B-spline curve, four main processing steps are performed: (i) morphological open \& skeletonization, (ii) walk along the skeleton segments, (iii) filtering and ordering of segments, and finally, (iv) B-spline fitting. In the following subsections, we describe each of these steps in detail.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{img/tracking_solution.pdf}
\caption{The general scheme of DLOFTBs. The mask image is transformed through several processing stages to obtain B-spline representation of its shape.}
\label{fig:solution}
\vspace{-0.35cm}
\end{figure*}
\subsubsection{Morphological open \& skeletonization}
\label{sec:skel}
The first operation we perform on the mask of the DLO is a $3 \times 3$ (the smallest possible) morphological open. We use it to remove false-positive pixels, which are common because of imperfect segmentation. After that, one of the most essential steps in mask processing is performed: skeletonization~\cite{skeleton_used}.
This operation takes the mask image as an input and creates its skeleton, i.e., a thin version of the mask, which lies in the geometric centers of the DLO segments, preserves its topology, and reduces its width to a single pixel.
This significantly reduces the amount of information about the pixels representing the DLO while preserving the crucial information encoded in central pixels along the DLO. Moreover, using the skeleton, one can easily find the crucial parts of the DLO mask, such as segment endpoints -- pixels with only one neighbor or branching -- pixels with more than two neighbors. While the segment endpoints will constitute starting points for the walks on segments, the branching points are crucial while performing a walk, as they require the walker to choose one of several possible paths.
To avoid this inconvenience, we propose removing branching points and postponing the decision to make connections between segments for further processing. By doing so, we can simplify the segment walk algorithm considerably.
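For illustration, a minimal sketch of this preprocessing stage is given below, assuming the mask is a boolean NumPy array; the choice of \texttt{scipy} and \texttt{scikit-image} routines is ours, as the paper does not prescribe an implementation.
\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_opening, convolve
from skimage.morphology import skeletonize

def preprocess_mask(mask):
    """3x3 morphological open, skeletonization, branch-point removal."""
    opened = binary_opening(mask.astype(bool), structure=np.ones((3, 3)))
    skel = skeletonize(opened)
    # Count the 8-connected neighbors of every skeleton pixel
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    neighbors = convolve(skel.astype(int), kernel, mode='constant')
    # Remove branching points (more than two neighbors), so that every
    # remaining segment can be traversed unambiguously
    skel[(neighbors > 2) & skel] = False
    # Segment endpoints: remaining skeleton pixels with one neighbor
    neighbors = convolve(skel.astype(int), kernel, mode='constant')
    endpoints = np.argwhere((neighbors == 1) & skel)
    return skel, endpoints
\end{verbatim}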
\subsubsection{Walk algorithm}
\label{sec:walk}
Having the skeleton prepared and the segment endpoints determined, we can perform a walk along each segment. To do so, we start from a random segment endpoint and go pixel by pixel till the end of the segment, collecting the subsequent pixel coordinates. Such a traversal is always possible and unambiguous, as we removed all pixels with more than two neighbors in the previous step.
After each walk, we remove two points from the set of endpoints that were the beginning and end of the considered segment.
Next, we draw another segment endpoint and perform a walk, repeating this procedure until no segment endpoints remain. As a result, we obtain a set of paths, i.e., ordered lists of pixels representing all segments.
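A minimal sketch of the walk is given below, assuming the branch-free skeleton and endpoint list produced by the previous stage; border handling is omitted for brevity, and the helper names are ours.
\begin{verbatim}
def walk_segments(skel, endpoints):
    """Traverse each skeleton segment pixel by pixel between endpoints."""
    skel = skel.copy()
    remaining = {tuple(p) for p in endpoints}
    paths = []
    while remaining:
        current = remaining.pop()
        skel[current] = False
        path = [current]
        while True:
            r, c = current
            nxt = None
            # At most one unvisited 8-neighbor exists, since branching
            # points were removed during preprocessing
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr or dc) and skel[r + dr, c + dc]:
                        nxt = (r + dr, c + dc)
            if nxt is None:
                break
            skel[nxt] = False
            path.append(nxt)
            current = nxt
        remaining.discard(current)  # the other endpoint of this segment
        paths.append(path)
    return paths
\end{verbatim}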
\subsubsection{Filtering and ordering of segments}
\label{sec:order}
In this step, we first filter out segments shorter than $p$ pixels, which are likely to represent artifacts of the mask or of the skeletonization procedure. While this may also remove a short part of the actual DLO, it will not affect the resultant path significantly, as the removed part will be treated as an occlusion and handled by our algorithm at the next stage.
In order to fit a B-spline effectively to a set of segments, we need to order them. As a result of the previous processing step, we have an unordered set of ordered lists of pixels. To order them, we need to find the pairs of segment endpoints that are most likely to connect to each other. While there are many possible criteria and algorithms for deciding about connections, we decided to use a criterion that takes into account both the distance and the orientation of the endpoints, defined by
\begin{equation}
\label{eq:J}
J = m J_d + (1 - m) J_o,
\end{equation}
where $J_d$ is the Euclidean distance between segment endpoints, $J_o$ is a criterion related to the mutual orientation of the segment endpoints, and $m \in [0; 1]$ is a linear mixing factor. While the definition of $J_d$ is rather straightforward, the exact formula of $J_o$ is given by
\begin{equation}
J_o = |\pi - \phi_1 - \phi_2|,
\end{equation}
where $\phi_1, \phi_2$ are approximated orientations of the segment endpoints.
Using the criterion $J$~\eqref{eq:J}, one can decide about the pairs of segment endpoints. The most accurate solution would be to check all possible pairing schemes and find the one with the lowest $J$. However, this is also the most computationally expensive one, as it requires checking $(2s - 1)(2s - 3)\ldots 1$ pairings, where $s$ is the number of segments. To limit the computational burden, we decided to use a potentially less accurate but much faster greedy approach. Thus, we need to choose only $s-1$ connections out of the $s(2s-1)$ pairs of endpoints, taking into account the endpoints already taken.
Interestingly, our algorithm can be easily modified to track multiple cables as well. In the single-cable case, we make all possible connections between segment endpoints, except the last one, in the order determined by the criterion~$J$. To make it work for multiple cables, we have to be more conservative and make a connection only if the criterion value is smaller than some user-defined threshold $J_{th}$. Thus, we perform the connections in the greedy order as in the single-cable case, but when we reach the limit $J_{th}$, the algorithm stops. As a result, we obtain several segment sequences instead of one, as in the single-cable case. Finally, the B-spline fitting phase (described in the next point) is performed multiple times, separately for each resultant segment sequence, yielding multiple B-splines that represent the shapes of the cables in the image.
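The sketch below illustrates the greedy pairing under the criterion $J$; the union-find bookkeeping that prevents closing a chain of segments into a loop is our assumption about how the connections can be maintained, and the endpoint orientations $\phi$ are assumed to be estimated beforehand from the last few pixels of each segment.
\begin{verbatim}
import numpy as np

def endpoint_cost(p1, phi1, p2, phi2, m=0.05):
    """Mixed criterion J = m * J_d + (1 - m) * J_o defined above."""
    J_d = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))
    J_o = abs(np.pi - phi1 - phi2)
    return m * J_d + (1 - m) * J_o

def greedy_connect(ends, J_th=np.inf, m=0.05):
    """Greedily pair endpoints (segment_id, position, phi) by rising J.

    With J_th = inf exactly s - 1 connections are made (single DLO);
    a finite threshold yields several chains (multiple DLOs)."""
    candidates = []
    for i in range(len(ends)):
        for j in range(i + 1, len(ends)):
            if ends[i][0] != ends[j][0]:  # skip same-segment pairs
                J = endpoint_cost(ends[i][1], ends[i][2],
                                  ends[j][1], ends[j][2], m)
                candidates.append((J, i, j))
    candidates.sort()
    root = list(range(max(e[0] for e in ends) + 1))  # segments 0..n-1
    def find(a):
        while root[a] != a:
            a = root[a]
        return a
    used, links = set(), []
    for J, i, j in candidates:
        if J > J_th:
            break
        ri, rj = find(ends[i][0]), find(ends[j][0])
        if i in used or j in used or ri == rj:  # endpoint taken or loop
            continue
        root[ri] = rj
        used.update((i, j))
        links.append((i, j))
    return links
\end{verbatim}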
\subsubsection{B-spline fitting}
\label{sec:fit}
To fit the B-spline to the sequence of segments, we need an argument for the B-spline, i.e., the vector $t$ of the relative positions of the pixels on the curve we want to define. To this end, we calculate the distances along the segments, as well as the Euclidean distances between consecutive segments, and concatenate them into a single vector, whose cumulative sum serves as the B-spline argument $t$.
Using the Euclidean distance between segments, we introduce an estimate of the distance along the DLO (we do not have access to the true one, as parts of the DLO are occluded). This procedure prevents sudden changes of the pixels' positions in terms of the B-spline argument $t$.
Moreover, we need to define the number and positions of the knots. In the proposed solution, we define the knots as a sequence of $k$ equidistant, in terms of element number, elements of the vector $t$, as this ensures that the Schoenberg--Whitney conditions~\cite{Schoenberg1953} are met.
Finally, one can fit two B-splines, one for each axis, using the prepared argument vector $t$, knots, and $x$ and $y$ coordinates of the ordered pixels.
We used cubic splines, as higher continuity is unnecessary for the considered problem.
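A sketch of this fitting stage is given below, using \texttt{scipy.interpolate.splrep} with explicitly supplied interior knots; the paper does not name a library, so this choice is an assumption, and the input is the already ordered chain of segments.
\begin{verbatim}
import numpy as np
from scipy.interpolate import splrep, splev

def fit_bspline(segments, n_knots=25):
    """Fit per-axis cubic B-splines to an ordered chain of segments.

    segments: list of (N_i, 2) pixel arrays, already ordered; gaps
    between consecutive segments are bridged by Euclidean distances."""
    pts = np.concatenate(segments, axis=0).astype(float)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate(([0.0], np.cumsum(steps)))  # curve parameter
    # Interior knots at equidistant (in element number) entries of t,
    # which keeps the Schoenberg-Whitney conditions satisfied
    idx = np.linspace(0, len(t) - 1, n_knots, dtype=int)[1:-1]
    tck_x = splrep(t, pts[:, 0], t=t[idx], k=3)
    tck_y = splrep(t, pts[:, 1], t=t[idx], k=3)
    return tck_x, tck_y, t[-1]

# Evaluation at 200 points along the curve:
# u = np.linspace(0, t_max, 200)
# x, y = splev(u, tck_x), splev(u, tck_y)
\end{verbatim}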
\subsubsection{3D data}
Even though the proposed DLO tracking algorithm is meant to work on images, it can be easily extended to work with 3D data obtained from an RGBD sensor.
In this case, we deal with the mask in the same way as for the 2D case till the moment of the B-spline fitting. Given a sequence of segments in the 2D space, we augment it with the corresponding depth coordinates and then perform the B-spline fitting. Thus, we obtain three B-splines, each representing the value of a different coordinate ($x$, $y$, $z$) with respect to the curve length estimate.
\section{Experiments}
To perform \rev{all} experiments, we used a single core of an Intel Core i7-9750H CPU and the following heuristically chosen set of parameters of our algorithm: $m=0.05$, $p=10$, and $k=25$, which are the mixing factor of the segment-connection criterion, the segment-length threshold, and the number of knots, respectively.
\subsection{Datasets}
To evaluate the proposed cable tracking method (DLOFTBs), we conducted several experiments, which show the performance of the proposed algorithm on 4 datasets:
\subsubsection{RGB real}
\label{sec:rgb_real}
7 sequences of RGB images ($\approx900$ frames in total), collected with the Intel RealSense D435 camera, of a single cable being manipulated by two UR3 manipulators.
\subsubsection{RGBD real}
\label{sec:rgbd_real}
10 sequences of RGBD images ($\approx2500$ frames in total), collected with the Kinect Azure, of the single cable being manipulated by a human.
\subsubsection{RGBD artificial}
\label{sec:rgbd_art}
5 sequences of artificially created RGBD images ($\approx1400$ frames in total), generated from a reference curve evolving in time. This dataset allows us to compare directly to the reference curve instead of to the mask.
\subsubsection{Ariadne+}
\label{sec:ariadne}
The test set taken from~\cite{ariadne_plus}, which consists of 62 images of multiple cables. We enriched this dataset with manual annotations of the cable shapes to facilitate direct comparison between curve shapes.
\subsection{Performance criteria}
Assessing the quality of DLO shape tracking is not a trivial task~\cite{shape_similarity}, especially when the only ground-truth data available is the mask of the DLO (datasets 1 and 2).
For datasets 1 and 2, we use two Mean Minimal Distance (MMD) criteria, which build upon the idea of the Modified Hausdorff Distance~\cite{mhd} and are defined by
\begin{equation}
\mathcal{L}_1 = \MMD(\mathcal{M}, \mathcal{C}_d) \quad\text{and}\quad \mathcal{L}_2 = \MMD(\mathcal{C}_d, \mathcal{M}),
\end{equation}
where
\begin{equation}
\MMD(X, Y) = \frac{1}{|X|}\sum_{x \in X} \min_{y \in Y} d(x, y),
\end{equation}
where $d(x, y)$ is a Euclidean distance between $x$ and $y$, $\mathcal{M}$ is a set of pixels belonging to a mask, while $\mathcal{C}_d$ is a set of points on the predicted curve $\mathcal{C}$.
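For reference, the MMD criterion amounts to nearest-neighbor queries, which can be implemented, e.g., with a k-d tree:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def mmd(X, Y):
    """MMD(X, Y): mean over x in X of the distance to the closest y."""
    return cKDTree(Y).query(X)[0].mean()

# L1 = mmd(mask_pixels, curve_points)
# L2 = mmd(curve_points, mask_pixels)
\end{verbatim}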
In turn, for datasets 3 and 4 we have access to the mathematical curve representing the reference shape $\mathcal{C}_r$. Therefore, we can formulate a much more accurate measure of performance, which builds upon the Fr\'echet distance~\cite{frechet} and is defined by
\begin{equation}
\mathcal{L}_3(\mathcal{C}_d, \mathcal{C}_{r_d}) = \frac{F(\mathcal{C}_d, \mathcal{C}_{r_d}) + F(\mathcal{C}_{r_d}, \mathcal{C}_d)}{2},
\end{equation}
where $\mathcal{C}_{r_d}$ is a discretized version of the reference path, and where
\begin{equation}
\begin{split}
F(X, Y) = \frac{1}{|X|}\sum_{i = 0}^{|X| - 1} \min_{w \in [0; 1]} d(&X(i), (1-w) Y(k(i)) \\
& + w Y(k(i)+1)),
\end{split}
\end{equation}
where $k(i)$ satisfies $D_Y(k(i)) \leq D_X(i) \leq D_Y(k(i)+1)$ and is monotonically non-decreasing, where $D_X(i)$ is a normalized distance along $X$ curve at $i$-th discretization point. This function allows for fair alignment of curves despite possible differences in parameterization.
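A possible implementation of the one-sided term $F$ is sketched below; the inner minimization over $w$ is the standard point-to-segment projection, which has a closed form.
\begin{verbatim}
import numpy as np

def arc_param(C):
    """Normalized cumulative arc length of a discrete curve C (N, d)."""
    d = np.concatenate(
        ([0.0], np.cumsum(np.linalg.norm(np.diff(C, axis=0), axis=1))))
    return d / d[-1]

def F(X, Y):
    """Match each X(i) to the Y segment bracketing its arc length."""
    DX, DY = arc_param(X), arc_param(Y)
    k = np.clip(np.searchsorted(DY, DX, side='right') - 1, 0, len(Y) - 2)
    a, b = Y[k], Y[k + 1]
    ab = b - a
    # Closed-form minimizer over w in [0, 1]
    w = np.clip(np.einsum('ij,ij->i', X - a, ab)
                / np.maximum(np.einsum('ij,ij->i', ab, ab), 1e-12), 0, 1)
    return np.linalg.norm(X - (a + w[:, None] * ab), axis=1).mean()

def L3(Cd, Cr):
    return 0.5 * (F(Cd, Cr) + F(Cr, Cd))
\end{verbatim}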
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{img/merged_plots_real_2d_dataset.jpg}
\caption{Comparison of 2D DLO tracking algorithms. Only our approach was able to handle occlusions and self-intersections of the tracked cable.}
\label{fig:2d_tracking}
\vspace{-0.15cm}
\end{figure}
\subsection{2D videos of a single cable}
\begin{table*}[!th]
\setlength{\tabcolsep}{0.5pt}
\centering
\caption{Performance of the proposed DLO tracker on several 2D videos of dual-arm manipulation.}
\begin{tabular}{cc|ccccccc}
\hline
Algorithm & Scenario & 0 & 1 & 2 & 3 & 4 & 5 & 6\\
\hline
\multirow{3}{*}{DLOFTBs}
& $\mathcal{L}_1$ [px] & \textbf{4.82$\pm$0.05} & \textbf{6.15$\pm$0.13} & \textbf{2.45$\pm$0.09} & \textbf{2.88$\pm$0.17} & \textbf{4.97$\pm$0.30} & \textbf{2.92$\pm$0.08} & \textbf{4.45$\pm$0.04}\\
& $\mathcal{L}_2$ [px] & 0.88$\pm$2.35 & 0.39$\pm$0.12 & 1.67$\pm$3.13 & 7.08$\pm$14.58 & 0.56$\pm$0.03 & 0.72$\pm$0.07 & 0.99$\pm$0.02\\
& Time [ms] & \textbf{38.2$\pm$6.4} & \textbf{34.8$\pm$4.4} & \textbf{30.0$\pm$5.2} & \textbf{33.2$\pm$10.0} & \textbf{39.5$\pm$4.0} & \textbf{26.0$\pm$2.0} & \textbf{35.1$\pm$2.0} \\
\hline
\multirow{3}{*}{Ariadne+}
& $\mathcal{L}_1$ [px] & 8.03$\pm$4.38 & 12.45$\pm$16.80 & 30.01$\pm$32.57 & 29.55$\pm$0.64 & 54.85$\pm$71.67 & 117.60$\pm$38.23 & --\\
& $\mathcal{L}_2$ [px] & 0.51$\pm$0.8 & 1.99$\pm$4.71 & 1.22$\pm$3.18 & 47.59$\pm$1.55 & 0.74$\pm$0.23 & 2.63$\pm$6.60 & --\\
& Time [ms] & 973.9$\pm$35.3 & 956.0$\pm$61.3 & 974.2$\pm$30.5 & 935.5$\pm$37.0 & 962.5$\pm$31.0 & 923.3$\pm$30.9 & --\\
\hline
\multirow{3}{*}{FastDLO}
& $\mathcal{L}_1$ [px] & 9.17$\pm$12.8 & 8.98$\pm$10.32 & 29.3$\pm$33.2 & 29.1$\pm$47.5 & 177.6$\pm$16.6 & 173.7$\pm$6.5 & 171$\pm$13\\
& $\mathcal{L}_2$ [px] & \textbf{0.36$\pm$0.01} & \textbf{0.37$\pm$0.01} & \textbf{0.36$\pm$0.01} & \textbf{0.36$\pm$0.02} & \textbf{0.37$\pm$0.01} & \textbf{0.36$\pm$0.01} & \textbf{0.36$\pm$0.02}\\
& Time [ms] & 86.3$\pm$6.3 & 97.6$\pm$10.2 & 62.2$\pm$6.5 & 64.9$\pm$9.5 & 96.4$\pm$9.1 & 60.8$\pm$9.0 & 88.3$\pm$3.6 \\
\hline
& Frames &
\includegraphics[width=0.12\textwidth,valign=c]{img/2d_real/small/0.png} &
\includegraphics[width=0.12\textwidth,valign=c]{img/2d_real/small/1.png} &
\includegraphics[width=0.12\textwidth,valign=c]{img/2d_real/small/2.png} &
\includegraphics[width=0.12\textwidth,valign=c]{img/2d_real/small/3.png} & \includegraphics[width=0.12\textwidth,valign=c]{img/2d_real/small/4.png} & \includegraphics[width=0.12\textwidth,valign=c]{img/2d_real/small/5.png} &
\includegraphics[width=0.12\textwidth,valign=c]{img/2d_real/small/6.png} \\
\end{tabular}
\label{tab:2d_L1_L2}
\vspace{-0.4cm}
\end{table*}
In the first stage of the experimental evaluation, we evaluated DLOFTBs on the \textit{RGB real} dataset (see Section~\ref{sec:rgb_real}), with masks generated using hue-based segmentation, and compared it with the learned approaches Ariadne+~\cite{ariadne_plus} and FastDLO~\cite{fastdlo}. We used cables with different widths and lengths and tested all algorithms on challenging setups, shown in Table~\ref{tab:2d_L1_L2}, including self-intersection (\textit{scenario 6}) and occlusions (\textit{scenarios 2--6}).
The obtained results show that the proposed algorithm achieves the most stable behavior, outperforms all baselines in terms of criterion $\mathcal{L}_1$ and processing time, and achieves similar results in terms of criterion $\mathcal{L}_2$. Huge values of $\mathcal{L}_1$ for the baselines indicate that, unlike DLOFTBs, they are unable to cover the whole cable with the predicted spline (the extreme case is Ariadne+, which was unable to generate any curve for \textit{scenario 6}). In turn, small values of $\mathcal{L}_2$ in almost all cases ensure that the predicted splines do not cover empty areas. Our method achieves a relatively big value of $\mathcal{L}_2$ only for \textit{scenario 3}, in which large parts of the cable are outside the camera's field of view; therefore, even reasonable and plausible curves generated by the proposed method result in the growth of $\mathcal{L}_2$. The behavior of the algorithms for some sample challenging frames is presented in Figure~\ref{fig:2d_tracking}. Even though both baselines were provided with a very clean mask of the tracked cable, they were unable to handle occlusions and self-intersections, whereas DLOFTBs handled them perfectly.
\begin{table*}[!b]
\vspace{-0.1cm}
\setlength{\tabcolsep}{1pt}
\centering
\caption{Comparison of the DLOFTBs with CDCPD2 algorithm on several 3D videos of a human manipulating the cable.}
\begin{tabular}{cc|cccccccccc}
\hline
\multicolumn{2}{c}{Scenario} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\
\hline
\multirow{3}{*}{CDCPD2~\cite{cdcpd2}}
& $\mathcal{L}_1$ & 6.26 & 7.1 & 6.21 & 5.73 & 15.6 & 5.49 & 6.03 & 5.79 & 19.0 & 11.19\\
& $\mathcal{L}_2$ & 2.73 & 2.85 & 6.55 & 3.87 & 16.6 & 2.83 & 2.76 & 2.5 & 12.3 & 3.79\\
& Time [ms] & 81 & 110 & 125 & 120 & 87 & 85 & 91 & 92 & 92 & 103\\
\multirow{3}{*}{DLOFTBs}
& $\mathcal{L}_1$ & \textbf{5.05} & \textbf{5.31} & \textbf{4.41} & \textbf{4.5} & \textbf{4.07} & \textbf{3.79} & \textbf{4.69} & \textbf{4.51} & \textbf{4.79} & \textbf{5.23}\\
& $\mathcal{L}_2$ & \textbf{1.08} & \textbf{1.04} & \textbf{5.75} & \textbf{2.71} & \textbf{2.08} & \textbf{0.88} & \textbf{1.21} & \textbf{1.03} & \textbf{1.58} & \textbf{0.68}\\
& Time [ms] & \textbf{26} & \textbf{36} & \textbf{38} & \textbf{33} & \textbf{29} & \textbf{30} & \textbf{19} & \textbf{18} & \textbf{23} & \textbf{26}\\
\multicolumn{2}{c|}{Sample frame} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w00.png} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w01.png} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w02.png} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w03.png} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w04.png} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w05.png} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w06.png} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w07.png} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w08.png} &
\includegraphics[width=0.08\textwidth,valign=c]{img/3d_real/small/w09.png}\\
\hline
\end{tabular}
\label{tab:3d_L1_L2}
\end{table*}
\subsection{2D masks of multiple cables}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{img/merged_plots_ariadne_dataset_straight_small.jpg}
\caption{Comparison of multiple DLOs tracking methods on Ariadne+ dataset.}
\label{fig:multiple_cables}
\vspace{-0.5cm}
\end{figure}
In this experiment, we evaluated the ability of DLOFTBs to identify multiple cables at once in the masked image and compared it directly with the Ariadne+ and FastDLO algorithms~\cite{ariadne_plus, fastdlo} on the augmented version of the Ariadne+ test set (Section~\ref{sec:ariadne}), segmented using the DeepLabV3+ network for all algorithms. The results of this comparison can be found in Table~\ref{tab:ariadne}. We outperformed Ariadne+ and FastDLO in terms of algorithm execution time and the accuracy of the DLO shape reconstruction, and scored second in the number of wrongly identified DLOs. The relatively high number of redundant curves fitted by DLOFTBs is a result of the extremely noisy masks generated by DeepLabV3+ (see the 3rd column of Figure~\ref{fig:multiple_cables}). In Figure~\ref{fig:multiple_cables} we present a qualitative analysis of the algorithms' behavior on 3 challenging images. Ariadne+ has severe problems with handling complex backgrounds and bends at the intersections of cables, while FastDLO cannot solve the intersection in the 2nd image properly and produces a wavy shape for the left cable in the 1st image. In turn, DLOFTBs generates the most accurate solutions for the first two images; however, if the mask is very noisy (3rd image), it fits curves into elongated false-positive regions of the mask.
\begin{table}[htbp]
\vspace{-0.15cm}
\setlength{\tabcolsep}{3pt}
\centering
\caption{Comparison of DLOFTBs and Ariadne+ algorithms on multiple cable detection benchmark.}
\vspace{-0.1cm}
\begin{tabular}{c|cccc}
\hline
Algorithm & $\mathcal{L}_3$ & \# missing & \# redundant & Time [ms]\\
\hline
Ariadne+ & 45.06 & 9 & \textbf{16} & 421.3\\
FastDLO & 51.55 & \textbf{3} & 75 & 64.3\\
DLOFTBs & \textbf{27.17} & \textbf{3} & 33 & \textbf{39.2}\\
\hline
\end{tabular}
\label{tab:ariadne}
\vspace{-0.25cm}
\end{table}
\subsection{3D video sequences}
To accurately compare the proposed approach with another state-of-the-art method, we made our algorithm work with 3D data, for which the CDCPD2 algorithm~\cite{cdcpd2} was designed.
\subsubsection{Real data}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{img/seq9white_new_small.jpg}
\caption{DLOFTBs (green) and CDCPD2 algorithm (red) on real-data 3D cable tracking sequence. With the fast movement of the cable and insufficiently large cable model updates, CDCPD2 could not track the moving cable. In contrast, our approach accurately tracks the cable shape throughout the sequence.}
\label{fig:seq9}
\vspace{-0.35cm}
\end{figure*}
In our experiments, to accurately compare the precision of shape tracking, we adjusted the video frame rate to enable each algorithm to process it at its own pace, and we report the times needed to process a single frame.
Furthermore, because the performance of the CDCPD2 method is strongly related to the cable length estimate, we tested it for several different lengths for each scenario and reported only the best result, while our algorithm required no parameter tuning.
In Table~\ref{tab:3d_L1_L2} we present mean values of the $\mathcal{L}_1$ and $\mathcal{L}_2$ errors and mean algorithm running times for 10 scenarios of cable being manipulated by a human, and sample frames for each scenario from the \textit{RGBD real} dataset (see Section~\ref{sec:rgbd_real}).
DLOFTBs achieves lower errors than CDCPD2 for all considered scenarios and criteria, and its running times are about 3 times shorter.
While for many scenarios values of the criteria are rather comparable, there are some cases where the proposed approach outperforms the CDCPD2 by a large margin (scenarios 4, 8, 9). In these cases, the CDCPD2 algorithm lost the track of the cable shape due to the fast movements of the cable (scenarios 8 and 9) or the complexity of the initial shape (scenario 4).
In Figure~\ref{fig:seq9} we present a sample tracking sequence in which our algorithm is able to keep track of the cable movements and deformations, while for CDCPD2 the changes are too significant to follow. Moreover, for the last images in the sequence, CDCPD2 produces a wavy shape, which does not reflect the actual cable shape but does not increase the performance error measures significantly.
\subsubsection{Artificial data}
To expose the aforementioned types of errors and accurately measure the quality of tracking, we need to utilize the $\mathcal{L}_3$ criterion.
To do so, we used \textit{RGBD artificial} dataset (see Section~\ref{sec:rgbd_art}), which also includes challenging cases like high cable curvature (\textit{scenarios 0, 1}), self-intersections (\textit{scenarios 1, 2}) and rapid cable moves (\textit{scenarios 3, 4}).
The results of this comparison are presented in Table~\ref{tab:3d_L3}.
Also in this experiment, our proposed approach outperforms the CDCPD2 algorithm; moreover, the use of the more accurate criterion emphasizes the differences between the compared methods. DLOFTBs achieves mean $\mathcal{L}_3$ values that are from 6 to 20 times smaller than those achieved by CDCPD2.
Minimal mean values of $\mathcal{L}_3$ show that our approach is, on average, more accurate than the best predictions made by the CDCPD2 in 4 out of 5 scenarios. Moreover, maximal mean values show that throughout 3 out of 5 scenarios, DLOFTBs does not produce any significantly wrong measurements (max mean $\mathcal{L}_3 > 10$), while CDCPD2 does so for all scenarios.
In Figure~\ref{fig:seq_spline} we present a part of \textit{scenario 2} in which the cable was recovering from a self-intersection. Our algorithm was able to accurately track the cable throughout the whole process. In contrast, CDCPD2 collapsed when the cable was occluding itself a moment before the untangling and lost track for the rest of the sequence.
\begin{table}[!b]
\vspace{-0.15cm}
\setlength{\tabcolsep}{2.5pt}
\centering
\caption{Comparison of DLOFTBs and CDCPD2 algorithms on several artificially generated 3D cable manipulation scenarios.}
\begin{tabular}{cc|ccccc}
\hline
\multicolumn{2}{c}{Scenario} & 0 & 1 & 2 & 3 & 4\\
\hline
\multirow{3}{*}{CDCPD2}
& mean $\mathcal{L}_3$ & 7.94 & 10.62 & 26.8 & 23.9 & 30.4\\
& max mean $\mathcal{L}_3$ & 128 & 75 & 198 & 82 & 100\\
& min mean $\mathcal{L}_3$ & 1.96 & 2.96 & 2.02 & 2.82 & 2.48\\
\multirow{3}{*}{DLOFTBs}
& mean $\mathcal{L}_3$ & \textbf{1.59} & \textbf{4.22} & \textbf{1.99} & \textbf{2.42} & \textbf{2.47}\\
& max mean $\mathcal{L}_3$ & \textbf{2.6} & \textbf{38.1} & \textbf{47.7} & \textbf{5.75} & \textbf{6.91}\\
& min mean $\mathcal{L}_3$ & \textbf{1.23} & \textbf{1.57} & \textbf{1.1} & \textbf{1.48} & \textbf{1.34}\\
\multicolumn{2}{c|}{Sample frame} &
\includegraphics[width=0.05\textwidth,valign=c]{img/3d_art/w00.png} &
\includegraphics[width=0.05\textwidth,valign=c]{img/3d_art/w01.png} &
\includegraphics[width=0.05\textwidth,valign=c]{img/3d_art/w02.png} &
\includegraphics[width=0.05\textwidth,valign=c]{img/3d_art/w03.png} &
\includegraphics[width=0.05\textwidth,valign=c]{img/3d_art/w04.png} \\
\hline
\end{tabular}
\label{tab:3d_L3}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{img/seq2_white_new.png}
\caption{DLOFTBs (green) and CDCPD2 algorithm (red) on artificially generated data of 3D cable tracking sequence. In the complex case of deforming the cable from the self-intersection into the basic shape, our approach accurately tracks the deformation throughout the whole sequence. In contrast, the CDCPD2 started with accurate tracking but degraded significantly after the last moment when the cable was self-intersecting.}
\label{fig:seq_spline}
\vspace{-0.45cm}
\end{figure}
\section{Conclusions}
This paper proposes a novel approach to DLO tracking on 2D and 3D images and videos called DLOFTBs.
Using a segmented mask of the cable, it precisely fits a B-spline representation of the cable's shape within tens of milliseconds. The experimental analysis showed that DLOFTBs is accurate and can handle demanding cases like occlusions, self-intersections, or even multiple DLOs at a time.
Moreover, it outperforms the state-of-the-art DLO tracking algorithms CDCPD2~\cite{cdcpd2}, Ariadne+~\cite{ariadne_plus}, and FastDLO~\cite{fastdlo} in all considered scenarios in terms of tracking quality, identification of multiple cables, and algorithm runtime. Furthermore, the proposed solution does not require any training, thus does not depend on training data, and, unlike CDCPD2, does not need any prior information about the DLO.
Our method was extensively tested against both algorithmic and learned methods. A weakness of learning-based approaches is that they have substantial problems with generalization and do not work on data outside the distribution of the training set. Our approach does not suffer from this issue, which makes it better suited for robotics. We claim that there is still room for non-deep-learning approaches, which generalize better and are fully explainable.
\addtolength{\textheight}{-10cm}
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13725",
"language": "en",
"timestamp": "2023-02-28T02:28:27",
"url": "https://arxiv.org/abs/2302.13725",
"yymm": "2302"
} | \section{Introduction and definition of the model}
\label{intro}
The study of rod-like molecules has a long history in statistical mechanics.
Onsager \cite{o49} argued that they should show orientational (nematic) order
at sufficiently high densities. Although his original argument was related to a
continuum system, rods placed on lattices were considered later, such as in
the approximate study by Flory \cite{f56}, somewhat later modified by Zwanzig
\cite{z63}.
These lattice models of rods, where each rod ($k$-mer) is composed of
$k$ consecutive sites aligned in one of the directions of the lattice, have been
much studied in the literature, but the only case where an exact solution was
found is that of dimers ($k=2$) on two-dimensional lattices in the full
lattice limit, when all lattice sites are occupied by rod monomers \cite{ktf61}. The full-lattice dimer phase is isotropic. On the square lattice, Ghosh and Dhar \cite{gd07} found that nematic ordering appears for rods
with $k \ge 7$ at intermediate densities; at low and high densities
the system is isotropic. The model has been studied extensively after the
pioneering work by Ghosh and Dhar, mainly using computational simulations. With
increasing rod density, the first transition, from the isotropic to the nematic
phase, is continuous, and its universality class for two-dimensional lattices
was determined \cite{m08}. The second transition, where the system leaves
the nematic phase and enters the high-density isotropic phase, is more difficult
to investigate through simulations: due to the high density of rods, the moves
are very rare. An alternative simulational procedure allowed for more efficient
simulations in the high-density region \cite{k13}, and recently arguments
were presented in favor of the possibility of this high-density transition
being discontinuous on the square lattice \cite{s22}. Finally, the entropy
of this system of rods in the limit of the full lattice, a generalization of
the classical problem of the entropy of dimers to general size $k$ of the
rods, was also the subject of contributions in the literature. It was
estimated using transfer matrix techniques for trimers on the square lattice
by Ghosh et al \cite{g07}; computational simulations were used to produce
estimates, also on the square lattice, for values of $k$ between 2 and 10
\cite{p21}; and transfer matrix techniques were applied for
$2 \le k \le 10$, leading to rather precise estimates of the full-lattice
entropies \cite{r23}. An analytic approximation to this problem is
the exact solution of the problem on the Husimi lattice \cite{rso22}, a
generalization of the solution on the Bethe lattice \cite{drs11}.
These solutions on the central region of treelike lattices may be seen as
improved generalizations of simple mean-field approximations.
In this paper we address a generalization of the athermal problem of trimers
on the square lattice mentioned above. If we allow the trimers to bend, they
may be in two configurations, and we may associate the energy 0 with the
extended configuration and the energy $\epsilon$ with the angular one, so that
the model becomes thermal. The entropy per site of the system is then a function of
the statistical weight of the bends $\omega=\exp[-\epsilon/(k_BT)]$, where we
still consider the full lattice limit. Besides the case $\omega=0$, which
corresponds to straight trimers and was discussed above, two other particular
cases of this problem have already been studied and discussed in the
literature. In the limit $\omega \to \infty$ all trimers are in the angular
configuration, and transfer matrix calculations on the square lattice were
used by Frob{\"o}se, Bonnemeier, and J{\"a}ckle to estimate the entropy
\cite{f95}. When straight and angular $k$-mers are equiprobable ($\omega=1$),
similar techniques were used for $k$ in the range between 2 and 7 in the
full lattice limit \cite{d03}.
In the two limiting situations of the model already studied
and described above ($\omega=0$ and $\omega \to \infty$), the model is athermal;
here we consider the general case on the square lattice, using transfer
matrix techniques to study the thermodynamic properties in the full lattice
limit. This is done by defining the model on strips of the square lattice of
finite width $L$ in the horizontal $x$ direction, extending in the vertical
direction $y$. As discussed in more detail in \cite{r23}, the initial condition
at $y=0$ is fixed and the transverse boundary condition is periodic, so
that we are solving the model on cylinders with perimeter $L$. We then
proceed by defining a transfer matrix for the model; in the thermodynamic
limit, where the height of the lattice diverges, the properties of the model
are determined by the leading eigenvalue of this matrix. Among them, we
calculate in particular the entropy per site $s(\omega)$. Finally, we extrapolate
the results for a finite set of widths $L$ to the two-dimensional limit
$L \to \infty$ using finite-size scaling techniques.
In the next section, \ref{method}, we describe the definition of the transfer
matrix for the model and the method used to obtain the thermodynamic properties
of the system from it. In section \ref{results} we present the results and
compare them with previous ones where appropriate. Final comments and discussion
are presented in section \ref{final}.
\section{Definition of the transfer matrix and calculation of the entropy}
\label{method}
Consider a strip of transverse width $L$, with periodic boundary conditions
in the transverse direction, such that every lattice site is occupied by a
monomer which belongs to a straight or angular trimer. The statistical weight of
a straight trimer is unitary, and the Boltzmann factor associated with an
angular trimer is equal to $\omega$. A possible configuration of a section
of a strip with width $L=4$ is shown in figure \ref{fig1}.
\begin{figure}
\centering
\includegraphics{rede.eps}
\caption{Section of a lattice with $L=4$.}
\label{fig1}
\end{figure}
The transfer matrix is defined in a similar way as in previous studies
\cite{f95,g07,r23}. The states are defined by the configurations of the $L$
vertical edges of the lattice crossed by the horizontal reference lines $R_i$
shown in the figure. Each time the transfer matrix is applied, $L$ sites are
added to the system, so that the row of the transfer matrix is defined by
the configuration of the reference line $R_i$ and its column by the
configuration of the reference line $R_{i+1}$. A state may be represented by a
vector with $L$ components. If the vertical edge which corresponds to a
component is not occupied by a trimer bond, the component is equal to zero;
otherwise, it is equal to the number of monomers of the trimer already
included (1 or 2). For example, the state associated with the reference line $R_1$
is $(0,0,0,0)$, and the state corresponding to $R_4$ is $(0,0,1,2)$. The
combinatorial problem of building the transfer matrix is the following:
given the state of reference line $R_i$, find all possible outputs for
reference line $R_{i+1}$. To build the transfer matrix, we start with the
state $(0,0,\ldots,0)$ and find all its possible outputs. Then we proceed to find
the second generation of output states. This iterative procedure ends when no
new states are generated. In general, a larger matrix could be defined, and
the one we are considering is a particular block of this more general matrix;
but, as discussed in \cite{r23}, there is evidence that the dominant eigenvalue
is contained in the block we are considering.
For clarity, let us present in some detail the building of the transfer matrix
for $L=3$. We should notice that rotation and reflection symmetries are present
in the system, so we may use them to reduce the size of the matrix. In figure
\ref{fig2} we see the possible outputs if we start with the state $(0,0,0)$:
1) we may place an angular trimer and start a vertical trimer at the
three sites above, generating a state such as $(2,1,0)$; the associated statistical
weight is $\omega$, and there are six ways to do this;
2) a horizontal trimer may be placed on the three sites; the statistical weight
is unitary, the multiplicity is three, and the output state is $(0,0,0)$;
3) three vertical trimers may be started at the sites; the statistical weight is
unitary, the multiplicity is one, and the output state is $(1,1,1)$.
The only output state of $(1,1,1)$ is the state $(2,2,2)$, and this new state has
$(0,0,0)$ as its only output state; the multiplicities and statistical weights are
unitary. Using the order of the four states as they appear in the
text above, we therefore have the following transfer matrix for the model on the $L=3$ strip:
\begin{equation}
T_3=
\begin{pmatrix}
3&6\omega&1&0\\
\omega&1&0&0\\
0&0&0&1\\
1&0&0&0
\end{pmatrix}
.
\end{equation}
\begin{figure}
\centering
\includegraphics[scale=0.7]{tm.eps}
\caption{Output states of the state $(0,0,0)$ for $L=3$.}
\label{fig2}
\end{figure}
For periodic boundary conditions in the longitudinal direction of the strip,
we find that the canonical partition function on a lattice of width $L$
and length $M$ may be written as:
\begin{equation}
Z=\Tr T_L^M=\sum_i \lambda_{i,L}^M,
\nonumber
\end{equation}
where the $\lambda_{i,L}$ are the eigenvalues of the transfer matrix $T_L$.
Therefore, in the thermodynamic limit $M \to \infty$, the free energy of
the model will be determined by the leading eigenvalue of the transfer matrix
$\lambda_{1,L}$:
\begin{equation}
f_L(T)=-\frac{k_BT}{L}\ln \lambda_{1,L}(\omega)
\end{equation}
and therefore the dimensionless entropy per site is given by
\begin{eqnarray}
s_L(\omega)&=&-\frac{1}{k_B}\frac{\partial f_L}{\partial T} \nonumber \\
&=&\frac{1}{L}\left(\ln \lambda_{1,L}-\omega \ln \omega
\frac{1}{\lambda_{1,L}}
\frac{\partial \lambda_{1,L}}{\partial\omega}\right).
\label{entrop}
\end{eqnarray}
In general, the transfer matrices are quite sparse, and this favors using
numerical procedures related to the power method to determine their leading
eigenvalue.
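As an illustration of this stage, the sketch below applies a power iteration to the $L=3$ matrix of the previous section and evaluates the entropy of Eq. \ref{entrop} through a numerical derivative; this is only an illustration of the procedure, not the code used for the results reported here, and it assumes $\omega>0$.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix

def T3(omega):
    """Transfer matrix for L = 3 given above."""
    return csr_matrix([[3.0, 6 * omega, 1, 0],
                       [omega, 1, 0, 0],
                       [0, 0, 0, 1],
                       [1, 0, 0, 0]])

def leading_eigenvalue(T, tol=1e-13, max_iter=100000):
    """Power method for the dominant eigenvalue of a sparse matrix."""
    v = np.full(T.shape[0], 1.0 / T.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        w = T @ v
        lam_new = np.linalg.norm(w)
        v = w / lam_new
        if abs(lam_new - lam) < tol * lam_new:
            break
        lam = lam_new
    return lam_new

def entropy(omega, L=3, h=1e-7):
    """Entropy per site of Eq. (entrop), for omega > 0."""
    lam = leading_eigenvalue(T3(omega))
    dlam = (leading_eigenvalue(T3(omega + h))
            - leading_eigenvalue(T3(omega - h))) / (2 * h)
    return (np.log(lam) - omega * np.log(omega) * dlam / lam) / L
\end{verbatim}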
As the width $L$ grows, the number of states $N_S$ increases, and this
sets an upper limit on the widths we were able to consider, due to the
limitations of the computational resources at our disposal. Already taking
the symmetries into account, these numbers are displayed in table \ref{stm}. We
considered widths up to $L=19$.
\begin{table}
\begin{center}
\begin{tabular}{c c c c c c}
\hline
\hline
$L$&$N_S$& & & $L$&$N_S$\\
\hline
3 & 4 & & &12 & 7643 \\
4 & 21 & & &13 & 62415 \\
5 & 39 & & &14 & 173088 \\
6 & 32 & & &15 & 160544 \\
7 & 198 & & &16 & 1351983 \\
8 & 498 & & &17 & 3808083 \\
9 & 409 & & &18 & 3594014 \\
10& 3210& & &19 & 30615354\\
11& 8418& & & & \\
\hline
\hline
\end{tabular}
\caption{Number of states (size of the transfer matrices) $N_S$ for
the widths $L$ of the strips.}
\label{stm}
\end{center}
\end{table}
\section{Results}
\label{results}
As described above, since we may calculate the largest eigenvalue of the
transfer matrix as a function of $\omega$, we can obtain the entropy per
site $s(\omega)$ numerically using Eq. \ref{entrop}. As happens for
straight rods in the case $\omega=0$ \cite{r23},
the finite-size behavior of these entropies can be separated into three sets of
values $\{s_L\}$, depending on the remainder $R$ of the division of $L$ by $k$,
where $k=3$ is the number of monomers in the chain. This means that all entropy
results should be grouped into three sets of values, with $R=$ 0, 1, and 2. Each
of these sets tends separately to a thermodynamic limit $s_{\infty}$ when
$L\to\infty$, following a finite-size scaling behavior of the form
\begin{equation}
s_L^{(R)}(\omega)=s_{\infty}^{(R)}(\omega)+\frac{A(\omega)}{L^2}+o(L^{-2}),
\label{relesc}
\end{equation}
also observed in the rigid chains limit and discussed in more detail in
\cite{r23}.
In the presentation of the results, it is convenient to define the parameter
\begin{equation}
\Omega=\frac{\omega}{1+\omega},
\end{equation}
whose range is between the values $\Omega=0$ (straight chains) and $\Omega=1$
(angular chains). We have used the BST method \cite{henkel} to extrapolate
the entropy values, obtaining estimates valid in the limit $L\to\infty$. The
BST method was applied considering the relation \ref{relesc}, which implies
setting the value of its free parameter $w$ to 2. Doing so, we
determine the curves shown in figure \ref{extraent}, considering the BST
extrapolation within each set of values determined by the remainder $R$.
In that figure we show the entropy as a function of the parameter $\Omega$.
The values obtained in each of those sets are quite close; actually,
the differences among them are within the uncertainty
furnished by the BST extrapolation procedure.
\begin{figure}[b]
\begin{center}
\includegraphics[scale=0.65]{fig3.eps}
\caption{Extrapolated entropy values (a) and their estimated errors (b) as
functions of $\Omega$, calculated using Eqs. \ref{avg}.}
\label{extraent}
\end{center}
\end{figure}
The circles shown in plot (a) of figure \ref{extraent} are the results of the
final extrapolation, obtained by considering each of the sets statistically
independent and, for a given value of $\Omega$, taking the value $s_i$ and its
error $\sigma_i$ evaluated for each set, so that the final result of the
extrapolation, $\overline{s}_{\infty}$, and its error, $\Delta s_{\infty}$,
are given below:
\begin{eqnarray}
\overline{s}_{\infty}&=&\frac{\sum_i s_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}\nonumber\\
\Delta s_{\infty}&=&\sqrt{\frac{1}{\sum_i 1/\sigma_i^2}}.
\label{avg}
\end{eqnarray}
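These weighted averages amount to a few lines of code; for instance:
\begin{verbatim}
import numpy as np

def combine(s, sigma):
    """Inverse-variance weighted mean and its error, as in the
    equations above."""
    w = 1.0 / np.asarray(sigma, float) ** 2
    mean = np.sum(w * np.asarray(s, float)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))
\end{verbatim}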
In plot (b) of figure \ref{extraent} the errors associated with the estimates
of the entropies are depicted; we notice that, in general, they are smaller
in the region of higher values of $\Omega$, where angular trimers predominate.
In order to better evaluate this behavior, table \ref{tavg} shows some of
the final extrapolated entropy values and their errors. We also highlight three
special cases of this problem.
The case $\Omega=0$, previously mentioned as the limit where we have
only rigid trimers, was already studied by Ghosh {\it et al.} \cite{g07} and
Rodrigues {\it et al.} \cite{r23}. In those works, this case was studied using
transfer matrices calculated for strips of widths up to $L=27$ and $L=36$,
respectively. Rodrigues {\it et al.} managed to reach such large widths
using a different method to determine the transfer matrix, called the {\it Profile
Method}, which renders, in general, matrices with smaller dimensions. Still, our
result, obtained from the extrapolation of data considering widths up to $L=19$,
agrees with the more precise estimate of reference \cite{r23}. Another particular
case mentioned before is $\Omega=1/2$, which means
$\omega=1$. This situation represents a system of straight and angular trimers
with equal statistical weights in the full lattice limit. A previous result
obtained in \cite{d03} was calculated with a transfer matrix approach considering
widths up to $L=12$ and is also consistent with the present value exhibited in
table \ref{tavg} for $\Omega=1/2$. Finally, our results do not include the case
$\Omega=1$, since the numerical calculation of the entropy from the general case
studied here displays strong fluctuations as $\Omega\to 1$.
This case should be considered separately, as it was by Frob{\"o}se
{\it et al.} \cite{f95}, whose result is shown in table \ref{tavg}.
\begin{table}[htbp!]
\begin{center}
\begin{tabular}{ccccc}
\hline
\hline
$\Omega$& & Extrapolated values& & Other results\\
\hline
0.0& &0.158539(37) & & 0.15850494(19)\cite{r23}\\
0.1& &0.23572(41) & & \\
0.2& &0.319592(49) & & \\
0.3& &0.37486(11) & & \\
0.4& &0.4036371(27) & & \\
0.5& &0.4119467(46) & & 0.412010(20)\cite{d03}\\
0.6& &0.4052574(28) & & \\
0.7& &0.38767859(85)& & \\
0.8& &0.36172496(57)& & \\
0.9& &0.3278335(26) & & \\
1.0& & & & 0.276931500(50)\cite{f95}\\
\hline
\hline
\end{tabular}
\caption{Some values of the final extrapolated entropy as determined by Eqs.
\ref{avg}. Smaller errors are found for larger values of $\Omega$, where the
errors coming from the BST method are minimal. The third column lists
the values of the entropy calculated in previous studies for the special
cases of straight chains ($\Omega=0$), a mixed trimer gas ($\Omega=1/2$), and
angular trimers ($\Omega=1$).}
\label{tavg}
\end{center}
\end{table}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[scale=0.65]{fig_limites.eps}
\caption{Same curve as in figure \ref{extraent}, but emphasizing three asymptotic regimes of the entropy as a function of $\Omega$. The curves $s_1$, $s_2$, and $s_3$ are given explicitly in Eqs. \ref{eq7}, \ref{eq8}, and \ref{eq9}. Each of these regimes is related to one of the special cases mentioned in Table \ref{tavg}.}
\label{asym}
\end{center}
\end{figure}
It is also interesting to find out how the curve shown in
figure \ref{extraent} behaves close to the three special points
$\Omega=0$, $1/2$, and $1$. This is illustrated in figure \ref{asym}.
We have found three power-law behaviors,
each one with a particular exponent. In the vicinity of $\Omega=0$, which
corresponds to a system with only straight chains, we see that the entropy
can be approximated as a function of $\Omega$ by
\begin{equation}
s_1 \approx s_{\Omega=0}+A\Omega^{3/2},
\label{eq7}
\end{equation}
where $s_{\Omega=0}$ is the value of the entropy for the rigid-trimer case
and $A$ is a coefficient numerically estimated as $A \approx 2.67$. On the
other hand, close to the point $\Omega=1$ (angular trimers only), we found
that the entropy behaves as
\begin{equation}
s_2=s_{\Omega=1}+B(1-\Omega)^{4/5},
\label{eq8}
\end{equation}
where $s_{\Omega=1}$ is the numerical value of the entropy at $\Omega=1$ and
$B\approx 0.32$. Finally, when we look at the function $s(\Omega)$ in the
neighborhood of $\Omega=1/2$, where straight and angular trimers have
equal weights, we get a quadratic relation for the entropy
\begin{equation}
s_3=s_{\Omega=1/2}-C(1/2-\Omega)^2,
\label{eq9}
\end{equation}
with $C \approx 0.7$.
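Fits of this kind can be reproduced with a standard least-squares routine applied to the extrapolated entropies; the sketch below uses the values of table \ref{tavg} near $\Omega=0$, keeping in mind that the asymptotic form only holds close to that point, so the fitted amplitude approaches the reported $A \approx 2.67$ only as the fitting window shrinks.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Extrapolated entropies near Omega = 0 (from the table above)
Omega = np.array([0.0, 0.1, 0.2, 0.3])
s = np.array([0.158539, 0.23572, 0.319592, 0.37486])

def s1(Om, s0, A):
    # Asymptotic form s1 = s0 + A * Omega^(3/2) near the rigid limit
    return s0 + A * Om ** 1.5

(s0, A), _ = curve_fit(s1, Omega, s, p0=[0.16, 2.7])
print(f"s0 = {s0:.4f}, A = {A:.2f}")
\end{verbatim}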
The fact that the maximum of the entropy is located exactly at $\Omega=1/2$,
i.e., $\omega=1$, is not really surprising, since this case favors no
particular arrangement (straight or angular), therefore corresponding to the
situation where the number of configurations is maximum. Obviously, when we
move away from this point we favor one of the configurations. Since the case
of rigid trimers is the one with the smallest number of configurations for
trimers occupying all sites of the lattice, this explains the asymmetry of the
curve around the point $\Omega=1/2$.
\section{Final discussion and conclusion}
\label{final}
The entropy of trimers fully occupying the square lattice had already been
estimated in three particular cases: when they are straight, composed of three
monomers on collinear sites \cite{g07,p21,r23}; when they are angular, so that
the two edges joining the monomers are perpendicular \cite{f95}; and for a
mixture of equiprobable straight and angular chains \cite{d03}. Here we
generalize the problem by studying a canonical ensemble of trimers in straight
and angular configurations, associating a unitary statistical weight with the
first configuration and a weight $\omega$ with the second one. Using
transfer matrix methods, we thus obtain estimates of the entropy $s(\omega)$ for
$\omega$ in the range $[0,\infty]$, which are consistent with the earlier
results in the literature.
The estimates are obtained from the numerical diagonalization of the transfer
matrices of the model on strips of the square lattice with finite widths $L$ and
periodic boundary conditions in the transverse direction. The precision of the
estimates is essentially determined by the largest width of the strips we are
able to manage, since we use finite-size scaling techniques to extrapolate
the results on strips of finite widths to the two-dimensional limit
$L \to \infty$. As usual, the size of the transfer matrices grows exponentially
with $L$ (see table \ref{stm}), and this leads to the upper limit $L=19$
reached in this study. The computational part is divided into two stages: first,
the transfer matrix has to be built, which is a combinatorial problem
involving logical and integer variables only; in the second stage, the leading
eigenvalue of the transfer matrix, for a given value of the statistical weight
$\omega$, has to be determined. As the transfer matrix is very sparse, the
determination of the dominant eigenvalue is done using a variant of the
power method.
The extrapolation of the entropies on strips of finite widths $L$ to the
two-dimensional limit $L \to \infty$ was done using the BST method
\cite{henkel}, supposing that the asymptotic finite-size scaling behavior
of the results is of the form given by Eq. \ref{relesc}. This is discussed
in some detail for straight trimers in \cite{g07}. Although in many cases
the data are consistent with this behavior, as was already found for
straight $k$-mers for other values of $k$ \cite{r23}, it would still be
very interesting to have results for larger widths, in order to confirm this
asymptotic behavior. In particular, the amplitude $A$ in Eq. \ref{relesc},
as discussed in \cite{g07}, is related to the central charge of the phase, and
the present results do not allow precise estimates of it.
Finally, we used an
alternative procedure for extrapolating the results for finite strips,
adapted from the one employed by Frob{\"o}se et al. \cite{f95} in their study
of angular trimers. It relies on the observation mentioned
above that the results for strips of widths $L$ should be divided into three
groups, according to the value of the remainder $R$ of the division of $L$ by 3.
We notice, in general, that the results for the entropies of the group $R=0$
approach the asymptotic value from above, while those for $R=1,2$
approach it from below. We therefore consider the largest 3 or 6 widths $L$
and define the extrapolated entropy to be the mean value of the entropies
for these widths. The error is given by the standard deviation of this
set of entropies. For brevity, we do not present these results here,
but they were consistent with the ones obtained using the BST method.
As is clear from the discussion above, it would be interesting to have
results for larger strips. One possibility we are considering is to apply
to this problem an alternative way of defining the transfer matrix, which was
already used in \cite{r23} for straight $k$-mers with considerable success, in
the sense that it led, in that athermal problem, to smaller matrices,
and therefore made it possible to solve that problem for larger widths than
the ones accessible with the conventional procedure. In this grand-canonical
transfer matrix procedure, at each application of the transfer matrix a variable
number of trimers is added to the system. We are presently working
on this problem.
\section{Acknowledgments}
It is a great pleasure for us to be part of this commemoration of Prof. Silvio
Salinas' 80th birthday. We thank him for the many occasions in our
academic and personal lives on which he has been not only the great teacher
and researcher we all know, but also a close friend. PS acknowledges partial
financial support from CONICET and SECYT-UNC, and thanks the Universidade
Federal Fluminense for its hospitality during the period when part of this
manuscript was discussed and planned. This work used computational
resources from CCAD-UNC, which is part of
SNCAD-MinCyT, Argentina.
\section{From Galois to Hilbert and Noether}
\label{sec:1}
\subsection{Introduction}
Galois theory turns the collection of all number fields into a profinite
group, the absolute Galois group $\mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})$ of~${\mathbf Q}$.
The study of this group and of
its representations has been a cornerstone of number theory for more than a century.
Yet, even such a basic question as the following one remains wide-open to this day:
do all finite groups appear as quotients of $\mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})$?
This is the so-called ``inverse Galois problem''.
The same question can be asked about the absolute Galois group of an
arbitrary field~$k$: given a finite group~$G$ and a field~$k$,
does there exist a Galois field extension~$K$ of~$k$ such that $\mathrm{Gal}(K/k)\simeq G$?
Obviously, the answer is in the negative for some fields~$k$ that have a small absolute Galois group
(e.g.\ the fields $\C$ and ${\mathbf R}$, trivially; or the field ${\mathbf Q}_p$, as its absolute Galois group is prosolvable).
When~$k$ is a number field, a positive answer is known when~$G$
is solvable (Shafarevich, see \cite[Chapter~IX, \textsection6]{neukirchschmidtwingberg}
and the references therein), when~$G$ is a symmetric or an alternating group (Hilbert~\cite{hilbertorig}),
when~$G$ is a sporadic group other than~$M_{23}$
(Shih,
Fried,
Belyi,
Matzat,
Thompson,
Hoyden--Siedersleben,
Zeh--Marschke,
Hunt,
Pahlings, see \cite{mallematzat}),
when~$G$ belongs to various infinite families of non-abelian simple groups of Lie type (e.g.\ the groups $\mathrm{PSL}_2({\mathbf F}_{\mkern-2mup})$, according to Shih, Malle, Clark, Zywina; see \cite{zywina});
but the problem remains open over~${\mathbf Q}$ even for such a small group as $\mathrm{PSL}_3({\mathbf F}_{8})$
(see \cite{zywinasmall}, to be complemented with \cite{dieulefaitfloritvila}).
Several variants or generalisations of the inverse Galois problem have been considered in the literature.
Here is one of them. Given a number field~$k$, we denote the set of its places by~$\Omega$
and the completion of~$k$ at $v \in \Omega$ by~$k_v$.
\begin{problem}[Grunwald]
\label{pb:grunwald}
Let~$k$ be a number field
and $S\subset \Omega$ be a finite subset.
Let~$G$ be a finite group.
For each $v \in S$, let~$K_v$ be a Galois field extension of~$k_v$
such that the group $\mathrm{Gal}(K_v/k_v)$ can be embedded into~$G$.
Does there exist a Galois field extension~$K$ of~$k$ such that $\mathrm{Gal}(K/k)\simeq G$
and such that for all $v \in S$, the completion of~$K/k$ at any place of~$K$ lying above~$v$
is isomorphic to $K_v/k_v$?
\end{problem}
The Grunwald--Wang theorem,
which was proved by Wang~\cite{wanggrunwald} following the work of Grunwald~\cite{grunwald}
and which has an interesting history (see
\cite[Chapter~X, footnote on p.~73]{artintate} and
\cite[Chapter~VIII, §2, p.~234, Notes]{milneCFT}),
gives a complete answer
to Problem~\ref{pb:grunwald}
when~$G$ is abelian,
via class field theory.
In particular, the answer to Grunwald's problem is negative for $G={\mathbf Z}/8{\mathbf Z}$ and $k={\mathbf Q}$
(see Proposition~\ref{prop:wang} below),
but it is positive, for any number field~$k$ and any finite abelian group~$G$,
as soon as~$S$ does not contain
any place dividing~$2$.
For an arbitrary finite group~$G$, the Grunwald problem
is expected to have a positive answer whenever~$S$ does not contain any place dividing
the order of~$G$. This is the ``tame'' Grunwald problem,
a terminology coined in \cite{dlan}.
Other variants include embedding problems (given a Galois field extension~$\ell/k$,
a finite group~$G$
and
a surjection $\phi:G \twoheadrightarrow \mathrm{Gal}(\ell/k)$, can one embed~$\ell/k$ into a Galois
field extension~$K/k$ such that $G\simeq \mathrm{Gal}(K/k)$, the map~$\phi$
being identified with the restriction map
$\mathrm{Gal}(K/k) \twoheadrightarrow \mathrm{Gal}(\ell/k)$?)\ or
the question of resolving the inverse Galois problem with
additional constraints, such as the constraint
that a given finite collection of elements of~$k$ be norms from~$K$
(a problem raised in \cite{freiloughrannewton}).
\subsection{Torsors and Galois extensions}
\label{subsec:torsorsandgalois}
Let us start by reformulating the inverse Galois problem in terms of torsors.
Hereafter, a \emph{variety} over a field~$k$ is a separated scheme of finite type over~$k$
(which may be disconnected or otherwise reducible) and~$\bar k$ denotes an algebraic closure of~$k$.
\begin{defn}
\label{def:torsor}
Let $\pi:Y\to X$ be a finite morphism between varieties over a field~$k$.
Let~$G$ be a finite group acting on~$Y$, in such a way that~$\pi$ is $G$\nobreakdash-equivariant
(for the trivial action of~$G$
on~$X$).
We say that~$\pi$ is a \emph{$G$\nobreakdash-torsor},
or that~$Y$ is a $G$\nobreakdash-torsor over~$X$,
if~$\pi$ is étale and~$G$
acts simply transitively on the fibres of the map $Y(\bar k)\to X(\bar k)$ induced by~$\pi$.
\end{defn}
When~$G$ is a finite group acting on a variety~$Y$, we denote by $Y/G$ the quotient variety,
characterised by the universal property of quotients,
when it exists.
Let us recall that the quotient~$Y/G$ exists if~$Y$ is quasi-projective;
the projection
$\pi:Y \to Y/G$ is then finite and surjective; it is étale if the action of~$G$
on~$Y$ is free (by which we mean
that~$G$
acts freely on the set~$Y(\bar k)$); and in the affine case, if $Y=\mathrm{Spec}(A)$, then $Y/G=\mathrm{Spec}(A^G)$ (see
\cite[Chapter~II, §7 and Chapter~III, §12]{mumford}).
It is easy to see that a finite $G$\nobreakdash-equivariant morphism $\pi:Y\to X$
is a $G$\nobreakdash-torsor if and only if~$G$ acts freely on~$Y$
and~$\pi$ induces an isomorphism
$Y/G \myxrightarrow{\,\sim\,} X$.
Thus, in particular, a Galois field extension~$K/k$ with Galois group~$G$
is the same thing
as an irreducible $G$\nobreakdash-torsor over~$k$ (that is, over~$\mathrm{Spec}(k)$);
and this, in turn, is the same thing as an irreducible variety of dimension~$0$, over~$k$,
endowed with a simply transitive
action of~$G$.
This rewording leads to a slight change in perspective, first emphasised by
Hilbert and Noether:
in order to solve the inverse Galois problem for~$G$,
we can now start with any irreducible quasi-projective variety~$Y$ endowed with a free action of~$G$;
setting $X=Y/G$, we obtain a
$G$\nobreakdash-torsor $\pi:Y\to X$; it is then enough
to look for rational points $x \in X(k)$ such that the fibre~$\pi^{-1}(x)$
is irreducible. Indeed, this fibre is in any case a $G$\nobreakdash-torsor over~$k$.
\begin{rmk}
\label{rmk:pushtorsor}
Given a subgroup $H \subseteq G$, any $H$\nobreakdash-torsor $Y\to X$
gives rise to a
$G$\nobreakdash-torsor
$Y'\to X$.
Namely, if~$H$ acts on the left on~$Y$,
we let it act on the right on
$G \times Y$ by $(g,y) \cdot h=(gh,h^{-1}y)$
and observe that $Y'=(G \times Y)/H$ inherits a free left action of~$G$,
turning the projection $Y' \to X$
into a $G$\nobreakdash-torsor.
The variety~$Y'$ is
(canonically) a disjoint union indexed by~$G/H$ of varieties
each of which is (non-canonically, in general) isomorphic, over~$X$, to~$Y$.
Conversely, if $Y' \to X$ is a $G$\nobreakdash-torsor and~$X$ is connected,
then any connected component~$Y$ of~$Y'$ is an $H$\nobreakdash-torsor over~$X$
for some subgroup~$H$ (namely, the stabiliser of~$Y$), and $Y'$ coincides with $(G \times Y)/H$.
All in all, when~$X$ is connected, the data of a $G$\nobreakdash-torsor $Y'\to X$ together with the choice
of a connected component of~$Y'$ is equivalent to the data of a subgroup $H \subseteq G$ and of a
connected $H$\nobreakdash-torsor $Y\to X$.
In particular, we see that if~$G$ is a finite group and~$k$ is a field,
the data of a $G$\nobreakdash-torsor over~$k$ together with the choice of a connected component
is equivalent to the data of a Galois field extension~$K/k$ endowed with an embedding $\mathrm{Gal}(K/k) \hookrightarrow G$.
\end{rmk}
\subsection{Hilbert's irreducibility theorem}
When the base of the $G$\nobreakdash-torsor $\pi:Y\to X$
is an open subset of~$\P^1_k$, with~$k$ a number field, and its total space~$Y$ is irreducible, the existence of
rational points $x \in X(k)$
such that the fibre $\pi^{-1}(x)$
is irreducible
is guaranteed by Hilbert's irreducibility theorem, which we state next.
\begin{thm}[Hilbert]
\label{thm:irredhilbertP1}
Let~$k$ be a number field.
Let $X \subseteq \P^1_k$ be a dense open subset. Let $\pi: Y \to X$ be an irreducible étale covering
(i.e.\ a finite étale morphism from an irreducible variety).
There exists $x \in X(k)$ such that $\pi^{-1}(x)$ is irreducible.
\end{thm}
Theorem~\ref{thm:irredhilbertP1}
is classically
formulated in the following equivalent way: given an irreducible two-variable polynomial $f(s,t)$
with coefficients in a number field~$k$, there exist infinitely many
$t_0 \in k$ such that $f(s,t_0)$ is an irreducible one-variable polynomial with coefficients in~$k$.
In fact, the set of such~$t_0$
is not just infinite:
asymptotically, it contains $100\%$ of the elements of~$k$, when they are ordered by height
(see \cite[\textsection13.1, Theorem~3]{serremw}).
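For instance (to illustrate the statement), take $f(s,t)=s^2-t$ over $k={\mathbf Q}$:
the specialisation $f(s,t_0)=s^2-t_0$ is irreducible over~${\mathbf Q}$ precisely when~$t_0$
is not a square in~${\mathbf Q}$, and the squares do form a density-zero subset of~${\mathbf Q}$
when its elements are ordered by height.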
A proof of Theorem~\ref{thm:irredhilbertP1}
can be found in \cite[\textsection9.2, \textsection9.6]{serremw}, where the next corollary
is also established.
\begin{cor}
\label{cor:irredhilbertPn}
Same statement,
with~$X$ now a dense open subset of~$\P^n_k$ for some $n\geq 1$.
\end{cor}
Combining Corollary~\ref{cor:irredhilbertPn} with
the remarks of~\textsection\ref{subsec:torsorsandgalois} leads to an observation, originating from Hilbert's work,
which is
extremely effective for the inverse Galois problem. Before stating it in Corollary~\ref{cor:cortohilbert} below,
let us
recall that a variety~$X$ over a field~$k$ is said to be \emph{rational}
if it is birationally equivalent to an affine space, i.e.\ if it contains a dense open subset isomorphic
to a dense open subset of an affine space; when~$X$ is irreducible and reduced, this
means that its function field~$k(X)$ is a purely transcendental extension of~$k$.
\begin{cor}
\label{cor:cortohilbert}
Let~$k$ be a number field. Let~$G$ be a finite group.
If there exist an irreducible quasi-projective variety~$Y$ over~$k$ and a faithful action of~$G$ on~$Y$ such that
the quotient $Y/G$ is rational, then the inverse Galois problem admits a positive
solution for~$G$ over~$k$.
\end{cor}
\begin{proof}
As~$G$ acts faithfully on~$Y$, it acts freely on a dense open subset of~$Y$, say~$V$.
By choosing~$V$ small enough,
we may assume that $V/G$ is isomorphic to an open subset of~$\P^n_k$.
Corollary~\ref{cor:irredhilbertPn} can now be applied to the projection $V\to V/G$.
\end{proof}
\begin{example}
The order~$3$ automorphism of~$\P^1_k$ given, in homogeneous coordinates,
by $[x:y] \mapsto [y:y-x]$ induces a faithful action of $G={\mathbf Z}/3{\mathbf Z}$ on~$\P^1_k$.
The quotient $\P^1_k/G$ is rational since it is a unirational curve (Lüroth's theorem).
Thus, any number field admits a cyclic extension of degree~$3$.
\end{example}
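This example can be made explicit (a short computation, in the affine coordinate $t=x/y$):
the automorphism acts on~$t$ by $t\mapsto 1/(1-t)$, and summing~$t$ over its orbit
gives the invariant function
\begin{align*}
s \;=\; t+\frac{1}{1-t}+\frac{t-1}{t} \;=\; \frac{t^3-3t+1}{t^2-t}\rlap.
\end{align*}
As~$s$ has degree~$3$ when viewed as a morphism $\P^1_k\to\P^1_k$, the inclusion
$k(s)\subseteq k(t)^G$ is an equality, so that the quotient $\P^1_k/G$ is explicitly rational.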
Ekedahl proved the following useful generalisation of Hilbert's irreducibility theorem.
We recall that~$\Omega$ denotes the set of places of a number field~$k$.
\begin{thm}[\cite{ekedahl}]
\label{th:ekedahl}
Let $\pi:Y \to X$ be a finite étale morphism between geometrically irreducible varieties
over a number field~$k$.
Let $S \subset \Omega$ be a finite subset.
If $X(k)\neq\emptyset$,
then there exists a nonempty open
subset ${\mathscr U} \subset \prod_{v \in \Omega\setminus S} X(k_v)$
such that for all $x \in X(k) \cap {\mathscr U}$,
the fibre $\pi^{-1}(x)$ is irreducible.
\end{thm}
In the above statement, we view~$X(k)$ as diagonally embedded
into $\prod_{v \in \Omega \setminus S}X(k_v)$,
which we endow with the product of the $v$\nobreakdash-adic topologies.
When~$X$ is rational, the set $X(k) \cap {\mathscr U}$ is automatically nonempty,
by the weak approximation property for affine spaces. In general, though, this set can be empty.
A proof of Theorem~\ref{th:ekedahl}, at least in the Galois case\footnote{It
can be checked that Theorem~\ref{th:ekedahl}, in the Galois case,
still holds when~$Y$ is only assumed to be irreducible, instead of geometrically irreducible. Under this weaker
assumption on~$Y$,
the Galois case does imply the general case; hence the validity of Theorem~\ref{th:ekedahl} as stated (and even slightly
more generally than stated,
since this weaker assumption on~$Y$ also suffices in the non-Galois case).}, which is the only one that we shall use
(we use it in the proof of Proposition~\ref{prop:versalrationalimpliesgrunwald} below),
can be found in \cite[Theorem~1.3]{ekedahl}.
\subsection{Noether's problem: statement}
The following problem, raised by Emmy Noether, takes on particular importance
in view of Corollary~\ref{cor:cortohilbert}.
\begin{problem}[Noether]
\label{pb:noether}
Let~$G$ be a finite group and~$k$ be a field.
Choose an embedding $G \hookrightarrow S_n$ for some $n\geq 1$.
Let~$G$ act on~${\mathbf A}^n_k$ through
this embedding by permuting the
coordinates. Is the quotient ${\mathbf A}^n_k/G$ rational over~$k$?
\end{problem}
By Corollary~\ref{cor:cortohilbert}, when~$k$ is a number field, a positive answer to Noether's problem for~$G$ over~$k$
implies a positive answer
to the inverse Galois problem for~$G$ over~$k$.
Beyond this implication,
Noether's problem has become a central problem in the study of rationality,
and has been the focus of
much research for its own sake.
\begin{example}
\label{ex:noethersn}
Noether's problem has a positive answer, over any field, for the symmetric group $G=S_n$.
Indeed, for $G=S_n$, the quotient ${\mathbf A}^n_k/G$ is in fact isomorphic
to ${\mathbf A}^n_k$, as the ring
$k[x_1,\dots,x_n]^{S_n}$
of symmetric polynomials coincides with the polynomial ring in the elementary symmetric
polynomials. Thus, in particular,
every number field admits a Galois field extension with group~$S_n$, for every $n\geq 1$.
\end{example}
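Concretely, for $n=2$: the morphism ${\mathbf A}^2_k \to {\mathbf A}^2_k$ given by
$(x_1,x_2)\mapsto (x_1+x_2,\,x_1x_2)$ induces an isomorphism
${\mathbf A}^2_k/S_2 \myxrightarrow{\,\sim\,} {\mathbf A}^2_k$,
as $k[x_1,x_2]^{S_2}=k[x_1+x_2,\,x_1x_2]$.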
\begin{example}
Noether's problem has a positive answer, for any $n \geq 1$ and any embedding $G \hookrightarrow S_n$, when~$G$ is
an abelian group of exponent~$e$ and~$k$ is
a field
that contains the $e$th
roots of unity
and
whose characteristic does not divide~$e$.
In particular, it has a positive answer for all abelian groups over~$\C$.
This is a theorem of Fischer~\cite{fischer}.
\end{example}
\begin{example}
\label{ex:noethera5}
Noether's problem has a positive answer, over any field, for the
alternating group $G=A_5$. This is a theorem of Maeda~\cite{maeda}.
On the other hand, as soon as $n \geq 6$, Noether's problem for $G=A_n$ is still open, over any field.
\end{example}
Noether knew that her problem has a positive answer for small groups
(namely, for all subgroups of~$S_4$).
In general, however, its answer is often negative,
as we discuss in some detail in~\textsection\ref{subsec:noether:counter} below.
\subsection{Versal torsors}
For some $G$\nobreakdash-torsors $\pi:Y\to X$, the existence of rational points $x \in X(k)$
such that $\pi^{-1}(x)$ is irreducible is not only a sufficient condition for a positive answer
to the inverse Galois problem for~$G$ over~$k$, but it is also necessary.
These are the \emph{versal} torsors.
\begin{defn}
Let~$G$ be a finite group, let~$k$ be a field and let~$X$ be a variety over~$k$.
A~$G$\nobreakdash-torsor $\pi:Y\to X$ is \emph{weakly versal}
if for any field extension $k'/k$ with~$k'$ infinite,
every $G$\nobreakdash-torsor over~$k'$ can be realised as the fibre of~$\pi$ above a $k'$\nobreakdash-point
of~$X$.
It is \emph{versal} if for any dense open subset $U \subseteq X$, the induced $G$\nobreakdash-torsor
$\pi^{-1}(U) \to U$ is weakly versal.
\end{defn}
\begin{example}
\label{ex:noetherisversal}
Choose an embedding $G \hookrightarrow S_n$ for some $n\geq 1$ and let~$G$ act on~${\mathbf A}^n_k$ through
this embedding by permuting the
coordinates. Let $Y$ be the open subset of~${\mathbf A}^n_k$ consisting of the points whose coordinates are all
pairwise distinct.
Then~$G$ acts freely on~$Y$ and
it can be checked,
as a consequence of Hilbert's Theorem~90\footnote{In this generality, this theorem is due to Speiser~\cite{speiser}.},
according to which the Galois cohomology set $H^1(k',\mathrm{GL}_n)$
is a singleton for any field~$k'$,
that
the resulting torsor $\pi:Y\to X=Y/G$ is versal (see \cite[Example~5.5]{garibaldimerkurjevserre}).
\end{example}
\begin{example}
\label{ex:noetherisversalsln}
Choose an embedding $G \hookrightarrow \mathrm{SL}_n(k)$ for some $n\geq 1$ and let~$G$ act on the algebraic
group $\mathrm{SL}_n$ over~$k$ through this embedding by right multiplication.
This action is free and it can again be checked, as a consequence of Hilbert's Theorem~90, that
the resulting torsor $\pi:\mathrm{SL}_n\to \mathrm{SL}_n/G$ is versal.
\end{example}
\begin{rmk}
\label{rmk:noname}
Two varieties~$V$ and~$W$ over~$k$ are called \emph{stably birationally equivalent}
if $V \times {\mathbf A}^r_k$ and $W \times {\mathbf A}^s_k$ are birationally equivalent
for some $r$, $s$.
It can be shown that for any finite group~$G$, the varieties~${\mathbf A}^m_k/G$ and~$\mathrm{SL}_n/G$ appearing in Examples~\ref{ex:noetherisversal}
and~\ref{ex:noetherisversalsln},
for all values of~$m$, $n$ and all possible choices of embeddings $G \hookrightarrow S_m$ and $G \hookrightarrow \mathrm{SL}_n(k)$,
all fall into the same stable birational equivalence class of varieties over~$k$.
This is the so-called ``no-name lemma'', see \cite[Corollary~3.9]{ct-sansuc-rationality}.
\end{rmk}
The notion of versality, in the context of these notes\footnote{Outside of the context discussed here,
the notion of versality notably also gives rise
to the definition of the ``essential dimension'' of a finite group~$G$ over a field~$k$---this is the minimal
dimension of a versal $G$\nobreakdash-torsor defined over~$k$---which is interesting in its own right
and has been the focus of many works (see \cite{buhlerreichstein,berhuyfavi,reichsteinicm,merkurjevsurveyed,merkurjevedbis,reichstein13}).
Even determining the essential dimension of ${\mathbf Z}/8{\mathbf Z}$ over~${\mathbf Q}$
is a highly nontrivial task (see~\cite{florenceed}).}, is motivated by the following observation,
which is an improved version of
Corollary~\ref{cor:cortohilbert}:
\begin{prop}
\label{prop:versalrationalimpliesgrunwald}
Let~$k$ be a number field. Let $S_0 \subset \Omega$ be a finite subset.
Let~$G$ be a finite group.
Suppose that there exist an irreducible smooth quasi-projective variety~$Y$ over~$k$ and a free action of~$G$ on~$Y$ satisfying the following two conditions:
\begin{enumerate}[(i)]
\setlength\itemsep{.11em}
\item
the variety $X=Y/G$ satisfies weak approximation off~$S_0$, i.e.\ the diagonal embedding
$X(k) \hookrightarrow \prod_{v \in \Omega\setminus S_0}X(k_v)$ has dense image;
\item
the $G$\nobreakdash-torsor $\pi:Y\to X$ is weakly versal.
\end{enumerate}
Then Grunwald's problem admits a positive
answer for~$G$ over~$k$, for any finite subset $S \subset \Omega$ disjoint from~$S_0$.
\end{prop}
\begin{proof}
We shall need the following classical lemma, proved in
\cite[Proposition~3.5.74]{poonenqpoints}
and whose statement holds for any finite étale morphism~$\pi$.
\begin{lem}[Krasner]
\label{lem:krasner}
For $v \in \Omega$,
the isomorphism class of the variety
$\pi^{-1}(x_v)$ over~$k_v$
is a locally constant function of $x_v \in X(k_v)$ with respect to the $v$\nobreakdash-adic
topology.
\end{lem}
Fix Galois field extensions $K_v/k_v$ for $v \in S$ as in Problem~\ref{pb:grunwald}
and
choose embeddings $\mathrm{Gal}(K_v/k_v) \hookrightarrow G$ for $v \in S$.
By Remark~\ref{rmk:pushtorsor},
these choices give rise to $G$\nobreakdash-torsors over~$k_v$ for $v \in S$.
By weak versality, the latter come from $k_v$\nobreakdash-points $x_v \in X(k_v)$.
Lemma~\ref{lem:krasner}
provides, for every $v \in S$, a neighbourhood ${\mathscr U}_v \subset X(k_v)$
of~$x_v$
such that
$\pi^{-1}(x_v')$
and $\pi^{-1}(x_v)$
are isomorphic,
as varieties over~$k_v$,
for all $x_v' \in {\mathscr U}_v$.
In particular, by Remark~\ref{rmk:pushtorsor} again,
for all $v \in S$ and all $x_v'\in{\mathscr U}_v$,
the fibre $\pi^{-1}(x_v')$ is isomorphic, over~$k_v$, to a disjoint union of copies of $\mathrm{Spec}(K_v)$.
The weak versality of~$\pi$ also implies that $X(k)\neq\emptyset$.
Theorem~\ref{th:ekedahl} therefore provides
a nonempty open subset
${\mathscr U}^0 \subset \prod_{v \in \Omega\setminus (S\cup S_0)} X(k_v)$ such that $\pi^{-1}(x)$ is irreducible
for all $x \in X(k)\cap {\mathscr U}^0$.
Let ${\mathscr U}=\big(\prod_{v \in S}{\mathscr U}_v\big) \times {\mathscr U}^0$.
As the variety~$X$ satisfies weak approximation off~$S_0$,
the set $X(k)\cap {\mathscr U}$ is nonempty.
We fix $x \in X(k)\cap {\mathscr U}$.
The fibre $\pi^{-1}(x)$ is now an irreducible $G$\nobreakdash-torsor
(i.e.\ $\mathrm{Spec}(K)$ for some Galois field extension~$K/k$ with Galois group~$G$)
whose scalar extension from~$k$ to~$k_v$, for each $v \in S$, is a disjoint union of copies of~$\mathrm{Spec}(K_v)$.
This proves the proposition.
\end{proof}
As smooth rational varieties satisfy weak approximation, one can apply
Proposition~\ref{prop:versalrationalimpliesgrunwald}
with $S_0=\emptyset$ whenever the variety $Y/G$ is rational and the torsor $Y \to Y/G$ is weakly versal.
In view of Example~\ref{ex:noetherisversal}, we deduce:
\begin{cor}
\label{cor:noetherimpliesgrunwald}
Given a finite group~$G$ and a number field~$k$,
a positive answer to Noether's problem for~$G$ and~$k$ implies a positive answer to Grunwald's problem for~$G$
and~$k$, for any $S \subset \Omega$.
\end{cor}
Corollary~\ref{cor:noetherimpliesgrunwald} was first established by Saltman (see \cite[Theorem~5.1, Theorem~5.9]{saltmangeneric}).
As an example of an application, Corollary~\ref{cor:noetherimpliesgrunwald} implies that over any number field~$k$,
Grunwald's problem has a positive answer for~$S_n$ and for~$A_5$,
without the need to exclude any place from $S \subset \Omega$
(see
Example~\ref{ex:noethersn} and Example~\ref{ex:noethera5}).
\subsection{Noether's problem: some counterexamples}
\label{subsec:noether:counter}
The hope that a positive solution to the inverse Galois problem might in general come
from a positive solution to Noether's problem turned out, however, to be too optimistic.
Indeed, Noether's problem seems to have a negative solution more often than not,
as we briefly discuss below.
\subsubsection{Counterexamples among abelian groups}
Noether's problem has a negative answer even for cyclic groups over~${\mathbf Q}$.
Swan and Voskresenski\u{\i}
discovered, at the end of the 1960's, the counterexample ${\mathbf Z}/47{\mathbf Z}$ over~${\mathbf Q}$
(see \cite{swan47}, \cite{voskbirlin}). An even smaller counterexample, the group ${\mathbf Z}/8{\mathbf Z}$ over~${\mathbf Q}$,
was then exhibited by Lenstra~\cite{lenstrainvent}.
As Saltman~\cite{saltmangeneric} later observed,
Corollary~\ref{cor:noetherimpliesgrunwald}
provides a direct proof that Noether's problem
admits a negative answer for~${\mathbf Z}/8{\mathbf Z}$ over~${\mathbf Q}$.
Indeed,
by this corollary,
it suffices to show that Grunwald's problem has a negative answer for $G={\mathbf Z}/8{\mathbf Z}$, $k={\mathbf Q}$ and $S=\emptyset$,
and this is exactly what Wang had done in the 1940's:
\begin{prop}[Wang]
\label{prop:wang}
In a cyclic field extension $K/{\mathbf Q}$
of degree~$8$,
the prime~$2$ cannot be inert.
In other words, the completion of a cyclic field extension $K/{\mathbf Q}$ of degree~$8$
at a place dividing~$2$ cannot
be the unramified extension of~${\mathbf Q}_2$ of degree~$8$.
\end{prop}
An elementary proof can be found in \cite[p.~29, end of~\textsection5]{swansurvey}.
Further work on Noether's problem for abelian groups, by Endo, Miyata,
Voskresenski\u{\i} and Lenstra, led to a complete characterisation, by Lenstra~\cite{lenstrainvent}, of
the stable rationality of the quotient ${\mathbf A}^n_k/G$ appearing in Problem~\ref{pb:noether} (and even
of its rationality, in the case where~$G$ acts through its regular representation), when~$G$ is a finite
abelian group
and~$k$ is an arbitrary field. This characterisation is in terms of
the arithmetic of cyclotomic number fields.
For cyclic groups over~${\mathbf Q}$, it reads as follows (see \cite[\textsection3]{lenstracyclic}):
\begin{thm}[Lenstra]
\label{th:lenstra}
Let~$n \geq 1$ be an integer.
Let~$G={\mathbf Z}/n{\mathbf Z}$ act faithfully on~${\mathbf A}^n_{\mathbf Q}$ by cyclically
permuting the coordinates.
The following conditions are equivalent:
\begin{enumerate}
\item The variety ${\mathbf A}^n_{\mathbf Q}/G$ is rational.
\item The variety ${\mathbf A}^n_{\mathbf Q}/G$ is stably rational.
\item
The integer~$n$ is not divisible by~$8$, and for every prime factor~$p$ of~$n$,
if $s$ denotes the $p$\nobreakdash-adic valuation of~$n$,
the cyclotomic ring ${\mathbf Z}\big[\zeta_{(p-1)p^{s-1}}\big]$ contains an element whose norm is equal to~$p$ or to~$-p$.
\end{enumerate}
\end{thm}
We recall that a variety is said to be \emph{stably rational} if its product with an affine space of large enough
dimension is rational.
Stable rationality is known
to be strictly weaker than rationality in general, even over~$\C$, see \cite{bctssd}.
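To illustrate condition~(3) in the simplest case, take $n=3$: then~$n$ is not divisible by~$8$,
and for $p=3$, $s=1$, the cyclotomic ring ${\mathbf Z}\big[\zeta_{2}\big]={\mathbf Z}$ contains
the element~$3$, of norm~$3$; thus ${\mathbf A}^3_{\mathbf Q}/({\mathbf Z}/3{\mathbf Z})$ is rational.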
Theorem~\ref{th:lenstra} when~$n$ is a prime number is due to
Voskresenski\u{\i}~\cite{voskprime}. Even when~$n$ is prime, determining whether condition~(3)
of Theorem~\ref{th:lenstra} does or does not hold for a given~$n$ is in general a hard problem;
for instance, it is only recently that this condition was shown to fail for $n=59$ (see \cite[Added remark~3.2]{hoshicomputer}).
Even more recently, based on Theorem~\ref{th:lenstra}, on a height estimate due to Amoroso and Dvornicich~\cite{amorosodvornicich}
and on
extensive computer calculations run by Hoshi~\cite{hoshicomputer}, among other tools,
Plans~\cite{plans} was able to give a complete answer to Noether's problem for cyclic groups over~${\mathbf Q}$:
\begin{thm}[Plans]
\label{th:plans}
The conditions of Theorem~\ref{th:lenstra} are equivalent to the following:
\begin{enumerate}
\setcounter{enumi}{3}
\item The integer~$n$ divides
$2^2 \cdot 3^m \cdot 5^2 \cdot 7^2 \cdot 11 \cdot 13 \cdot 17 \cdot 19 \cdot 23 \cdot 29 \cdot 31 \cdot 37 \cdot 41 \cdot 43 \cdot 61 \cdot 67 \cdot 71$
for some integer $m \geq 0$.
\end{enumerate}
In particular, Noether's problem has a negative answer over~${\mathbf Q}$ for $G={\mathbf Z}/p{\mathbf Z}$ for all but finitely many
prime numbers~$p$.
\end{thm}
\subsubsection{Counterexamples over~$\C$}
\label{subsubsec:counterexamplesoverc}
For non-abelian groups, Noether's problem has a negative answer even over~$\C$. Saltman~\cite{saltmannoether}
gave the first counterexamples over~$\C$.
His results were then generalised by Bogomolov~\cite{bogomolovbrnr},
who established the following theorem
(see \cite[\textsection7]{ct-sansuc-rationality}
and \cite[\textsection6.6]{gilleszamuelycsa} for accounts of its proof):
\begin{thm}[Bogomolov's formula]
\label{th:bogomolovformula}
Let~$n \geq 1$ and $G \subset \mathrm{SL}_n(\C)$ be a finite subgroup.
The unramified Brauer group of the complex variety $\mathrm{SL}_n/G$ is isomorphic to
\begin{align}
\label{eq:bogomolovkernel}
{\mathrm{Ker}}\Big(H^2(G,{\mathbf Q}/{\mathbf Z}) \to \prod H^2(H,{\mathbf Q}/{\mathbf Z})\Big)\rlap,
\end{align}
where the product ranges over all \emph{bicyclic} subgroups $H\subseteq G$ (i.e.\ abelian subgroups of~$G$
that are generated by at most two elements).
\end{thm}
We recall that the Brauer group, defined by Grothendieck as $H^2_{\mathrm{\acute et}}(-,\mathbf{G}_\mathrm{m})$, is a stable birational invariant among smooth proper
varieties over a field of characteristic~$0$, and
that the \emph{unramified Brauer group} of a variety over a field of characteristic~$0$
is by definition
the Brauer group of any smooth proper variety
birationally equivalent to it; for instance, the unramified Brauer group of~${\mathbf A}^n_\C$ is trivial.
Thus, if the unramified Brauer group of a variety over~$\C$ does not vanish,
then this variety is not stably rational, a fortiori it is not rational.
The unramified Brauer group was first considered and used as a
tool for rationality questions
by Saltman~\cite{saltmangenericmatrices,saltmannoether}.
For smooth proper unirational varieties over~$\C$, it coincides with
the invariant that had earlier been employed
by Artin and Mumford~\cite{artinmumford}
to give ``elementary'' examples of complex unirational threefolds failing to be
rational.
For a thorough treatment of the Brauer group, we refer the reader to \cite{ctskobook}.
In view of Remark~\ref{rmk:noname}, Bogomolov's formula gives an easy recipe for computing the
unramified
Brauer group of the variety ${\mathbf A}^n_\C/G$ that appears in Noether's problem over~$\C$.
The kernel~\eqref{eq:bogomolovkernel} can be computed to be nonzero
for some $p$\nobreakdash-groups~$G$, thus yielding counterexamples to Noether's problem over~$\C$
(see \cite[Example~7.5]{ct-sansuc-rationality}, \cite[\textsection6.7]{gilleszamuelycsa}).
Other counterexamples over~$\C$ were later produced by Peyre~\cite{peyrenoether} based on a further
stable birational invariant
introduced by
Colliot-Thélène and Ojanguren~\cite{ctojanguren}, called
\emph{unramified cohomology of degree~$3$}. The unramified Brauer
group coincides with unramified cohomology of degree~$2$.
Many more results about Noether's problem can be found in the survey~\cite{hoshisurvey}.
\subsection{Retract rationality}
Saltman introduced a useful weakening of the notion of stable rationality:
a variety~$X$ over a field~$k$ is said to be \emph{retract rational}
if there exist an integer $n\geq 1$, a dense open subset $U \subseteq {\mathbf A}^n_k$
and a morphism $U \to X$ that admits a rational section.
Retract rationality is a stable birational invariant.
In the situation of Noether's problem, it can happen that the variety ${\mathbf A}^n_k/G$ fails to be rational
and even to be stably rational, but is nevertheless retract rational.
For instance this is so when $G={\mathbf Z}/47{\mathbf Z}$ and $k={\mathbf Q}$:
\begin{thm}[Saltman~\cite{saltmangeneric}]
\label{thm:saltmangeneric}
Taking up the notation of Problem~\ref{pb:noether},
assume that~$G$ is abelian, that~$k$ has characteristic~$0$,
and,
letting~$2^r$ denote the
highest power of~$2$ that divides the exponent of~$G$, that the cyclotomic field extension $k(\zeta_{2^r})/k$
is cyclic. Then the quotient ${\mathbf A}^n_k/G$ is retract
rational over~$k$.
\end{thm}
Theorem~\ref{thm:saltmangeneric} can in particular be applied to all finite abelian groups of odd order.
Thus, retract rationality is weaker than rationality
(compare with Theorem~\ref{th:plans}).
Nevertheless, as far as the applications to the inverse
Galois problem are concerned, it is just as good:
indeed,
as Saltman observed,
smooth retract rational varieties
over number fields are easily seen to satisfy weak approximation, so that
Proposition~\ref{prop:versalrationalimpliesgrunwald} can be applied with $S_0=\emptyset$ whenever the variety $Y/G$
is retract rational and the torsor $Y\to Y/G$ is weakly versal.
Combining this observation with Theorem~\ref{thm:saltmangeneric}
and with
Proposition~\ref{prop:versalrationalimpliesgrunwald},
we deduce, in view of Example~\ref{ex:noetherisversal},
that
Grunwald's problem has a positive answer over any number field~$k$, without excluding any place,
for all abelian groups~$G$ satisfying the assumption of Theorem~\ref{thm:saltmangeneric}---a conclusion
that already resulted from the Grunwald--Wang theorem, but whose proof now fits into the framework
of Hilbert's and Noether's general strategy, even though, according to Theorem~\ref{th:plans},
Noether's problem itself has a negative answer
for many of these groups~$G$ (perhaps even for ``almost all'' of them?).
Conversely, by the same token, Wang's negative answer to Grunwald's problem
(see Proposition~\ref{prop:wang}) implies that
when $G={\mathbf Z}/8{\mathbf Z}$ and $k={\mathbf Q}$, the quotient ${\mathbf A}^n_k/G$ fails not only to be stably rational
but also to be retract rational. Similarly, the negative answers to
Noether's problem over~$\C$ discussed in~\textsection\ref{subsubsec:counterexamplesoverc}
are in fact counterexamples to the retract rationality of the quotients ${\mathbf A}^n_\C/G$ in question.
Thus, despite the wider scope of applicability of the Hilbert--Noether method when rationality
is replaced with the weaker notion of retract rationality, further ideas are necessary to address arbitrary finite groups.
\section{Regular inverse Galois problem}
\label{sec:rig}
\subsection{Statement}
\label{subsec:rigstatement}
We saw, in~\textsection\ref{sec:1},
that Noether's problem does not always admit a positive answer,
i.e.\ the quotient variety ${\mathbf A}^n_k/G$ can fail to be rational, or stably rational, or even
retract rational. A simple way out, if one still wants to apply Hilbert's irreducibility theorem,
is to look
for rational \emph{subvarieties} of~${\mathbf A}^n_k/G$, in particular rational curves. To take advantage of the
geometry of the situation, it is natural to focus on those rational
curves whose inverse image in~${\mathbf A}^n_k$ is geometrically irreducible
and meets the locus $Y \subset {\mathbf A}^n_k$ on which~$G$ acts freely. By the versality of the torsor $Y \to Y/G$
(Example~\ref{ex:noetherisversal}),
finding such curves
is the same as solving the \emph{regular inverse Galois problem} (when~$k$ is perfect):
\begin{problem}[regular inverse Galois]
\label{pb:rig}
Let~$k$ be a field.
Let~$G$ be a finite group.
Do there exist a smooth, projective, geometrically irreducible curve~$C$ over~$k$
and a finite morphism $\pi:C\to \P^1_k$ such that the corresponding extension of function fields $k(C)/k(t)$
is Galois with $\mathrm{Gal}(k(C)/k(t))\simeq G$?
\end{problem}
When~$k$ is a perfect field, this is equivalent to asking for the existence of
a field extension of~$k(t)$ with Galois group~$G$
in which~$k$ is algebraically closed,
i.e.\ a field extension that is regular over~$k$. Following standard practice, we shall refer to such a field extension
as a \emph{regular Galois extension of~$k(t)$ with group~$G$}.
When~$k$ is a number field,
a positive answer to Problem~\ref{pb:rig} for~$k$ and~$G$ implies a positive answer to the inverse Galois
problem for~$k$ and~$G$,
by Hilbert's irreducibility theorem.
Over an arbitrary field and for an arbitrary finite group,
the inverse Galois problem and Noether's problem both have negative answers in general, as we have seen;
in contrast,
Problem~\ref{pb:rig} might well always have a positive answer.
\begin{rmk}
\label{rmk:noetherimpliesrig}
It follows from the Bertini theorem that if~$k$ is infinite and perfect,
a positive answer to Noether's problem for~$k$ and~$G$ implies a positive answer to the regular inverse Galois
problem for~$k$ and~$G$
(see \cite[Théorème~6.3]{jouanoloubertini}).
In fact, for such~$k$, one can check that the retract rationality of ${\mathbf A}^n_k/G$ already
implies a positive answer to the regular inverse Galois
problem for~$k$ and~$G$.
\end{rmk}
\subsection{Riemann's existence theorem}
A solution to the regular inverse Galois problem over~$k$ gives rise, by scalar extension, to a solution
over any field extension of~$k$. Thus, in order to find a solution over~${\mathbf Q}$, it is necessary to first
solve the problem over~$\C$ and over~$\bar {\mathbf Q}$.
The key tool for this is Riemann's existence theorem, which allows one
to transform this algebraic question into a purely topological one.
\begin{thm}[Riemann's existence theorem]
\label{th:ret}
Let~$k$ be an algebraically closed subfield of~$\C$.
Let~$X$ be a variety over~$k$. The natural functor
\begin{align*}
\Big(\text{étale coverings of~$X$}\Big)
\to \Big(\text{finite topological coverings of $X(\C)$}\Big)
\end{align*}
that maps $Y \to X$ to $Y(\C) \to X(\C)$ is an equivalence of categories.
\end{thm}
An \emph{étale covering} of~$X$ is a variety over~$k$ endowed with a finite étale morphism to~$X$.
A~topological covering is \emph{finite} if its fibres are finite. Theorem~\ref{th:ret} in the above
formulation
is proved in \cite{sga1}.
To be precise, the case where $k=\C$
is \cite[Exp.~XII, Théorème~5.1]{sga1}
and builds on Grothendieck's reworking of Serre's GAGA
theorems; the case of an arbitrary algebraically closed subfield of~$\C$
then results from it by \cite[Exp.~XIII, Corollaire~3.5]{sga1}.
\begin{cor}
\label{cor:ret}
Let~$k$ be an algebraically closed subfield of~$\C$.
Let~$X$ be a connected variety over~$k$.
Let $x \in X(k)$. For any finite group~$G$,
isomorphism classes of $G$\nobreakdash-torsors (resp.\ of connected $G$\nobreakdash-torsors)
$Y\to X$
endowed with a lift $y\in Y(k)$ of~$x$
are canonically in one-to-one correspondence with homomorphisms $\pi_1(X(\C),x) \to G$
(resp.\ with surjective homomorphisms
$\pi_1(X(\C),x) \twoheadrightarrow G$).
Changing the choice of~$y$ amounts to conjugating the homomorphism
by an element of~$G$.
\end{cor}
\begin{proof}
Indeed, this
follows from Theorem~\ref{th:ret} combined with the well-known
equivalence of categories between the category of topological
coverings of~$X(\C)$ and the category of sets endowed with an action of~$\pi_1(X(\C),x)$
(see \cite[Theorem~2.3.4]{szamuelygaloisgroups}).
The homomorphism $\pi_1(X(\C),x) \to G$ corresponding to $Y \to X$
sends $\gamma \in \pi_1(X(\C),x)$ to the unique $g \in G$ such that $\gamma y = yg$,
where we are taking the convention that the action of~$G$ on~$Y$ is a right action
and that the monodromy action of $\pi_1(X(\C),x)$ on the fibre of $Y(\C)\to X(\C)$ above~$x$
is a left action.
\end{proof}
\begin{rmk}[reminder on monodromy groups and Galois groups]
\label{rk:ret}
Let~$k$ and~$X$ be as in Corollary~\ref{cor:ret}.
Let $x \in X(\C)$.
The \emph{monodromy group}~$M$ of
an étale covering $Y\to X$ is, by definition,
the largest quotient of~$\pi_1(X(\C),x)$ through which the monodromy action of this group on
the fibre of $Y(\C) \to X(\C)$ above~$x$ factors.
Assume that~$X$ is normal and irreducible
and let $Y' \to Y \to X$
be a tower of irreducible étale coverings
such that the field extension $k(Y')/k(X)$ is a Galois
closure of $k(Y)/k(X)$.
Let $G=\mathrm{Gal}(k(Y')/k(X))$.
Then $Y' \to X$ is the normalisation of~$X$ in~$k(Y')$; as such, it receives an action of~$G$,
with respect to which it is a $G$\nobreakdash-torsor;
in addition, the surjective homomorphism $\pi_1(X(\C),x)\twoheadrightarrow G$
that Corollary~\ref{cor:ret}
associates with $Y' \to X$ and with the choice of a lift $y' \in Y'(k)$ of~$x$
induces an isomorphism $M \myxrightarrow{\,\sim\,} G$.
(Changing the choice of the lift~$y'$ amounts to composing this isomorphism with an inner automorphism.)
Thus, computing the Galois group of the Galois closure of the field extension $k(Y)/k(X)$
is tantamount to computing a monodromy group in the topological setting.
\end{rmk}
\subsection{Classifying Galois covers of the projective line over~\texorpdfstring{$\C$}{𝐂} or over~\texorpdfstring{$\bar {\mathbf Q}$}{Q̄}}
\label{subsec:classifyc}
Let us apply Theorem~\ref{th:ret} to the open subsets of the projective line.
The fundamental group
of the complement of finitely many points in~$\P^1(\C)$ is easy to describe:
\begin{prop}
\label{prop:descpi1}
Let $X \subseteq \P^1_\C$ be a dense open subset.
Write $\P^1_\C \setminus X = \{b_1,\dots,b_r\}$.
Let $x \in X(\C)$. The group $\pi_1(X(\C),x)$ admits a canonical presentation with~$r$
generators $\gamma_1,\dots,\gamma_r$ and a unique relation $\gamma_1 \cdots \gamma_r=1$,
such that~$\gamma_i$ belongs, for every $i\in\{1,\dots,r\}$,
to the conjugacy class in~$\pi_1(X(\C),x)$
of a local counterclockwise loop around~$b_i$.
\end{prop}
What the last sentence of Proposition~\ref{prop:descpi1} means is this:
if~$N_i$ denotes
a small enough open neighbourhood of~$b_i$ in~$\P^1(\C)$ that is biholomorphic to the unit disc, then
a loop contained in $N_i \setminus \{b_i\}$ and going once around~$b_i$ in the counterclockwise direction
gives rise, after choosing a path from~$x$ to a point of this loop, to an element of $\pi_1(X(\C),x)$
whose conjugacy class does not depend on the chosen path.
The content of Proposition~\ref{prop:descpi1} is that these paths can be chosen in such a way
that the~$\gamma_i$ generate~$\pi_1(X(\C),x)$ and satisfy the relation $\gamma_1\cdots\gamma_r=1$.
This is elementary and well-known.
Using Proposition~\ref{prop:descpi1}, we can draw the following corollary
from Riemann's existence theorem.
Corollary~\ref{cor:retgcovers}
completely describes $G$\nobreakdash-torsors over dense open subsets
of the projective line over algebraically closed subfields of~$\C$,
and implies a positive solution
to the regular inverse Galois problem over such fields.
(The notation $\mathrm{Ni}^\ast_r(G)$ appearing in its statement
refers to the name Nielsen,
see \cite[\textsection9.2]{volklein}, \cite[\textsection3.1]{romagnywewers}.)
\begin{cor}
\label{cor:retgcovers}
Let $k$ be an algebraically closed subfield of~$\C$.
Let $X \subseteq \P^1_k$ be a dense open subset.
Write $\P^1_\C \setminus X = \{b_1,\dots,b_r\}$.
Let~$G$ be a finite group.
Consider the set
of $r$\nobreakdash-tuples $(g_1,\dots,g_r) \in G^r$ such that
$g_1\cdots g_r=1$ and that $g_1,\dots,g_r$ generate~$G$.
Let $\mathrm{Ni}^\ast_r(G)$ denote the quotient of this set
by the action of~$G$ by simultaneous conjugation.
The set of isomorphism classes of irreducible $G$\nobreakdash-torsors
over~$X$ is canonically in bijection with~$\mathrm{Ni}^\ast_r(G)$.
\end{cor}
\begin{proof}
By Corollary~\ref{cor:ret}, isomorphism classes of irreducible $G$\nobreakdash-torsors
over~$X$ are canonically in one-to-one correspondence
with conjugacy classes of surjections $\pi_1(X(\C),x)\twoheadrightarrow G$.
Apply Proposition~\ref{prop:descpi1} to conclude.
\end{proof}
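To make the set $\mathrm{Ni}^\ast_r(G)$ concrete, here is a brute-force Python enumeration
for a toy case; the choice $G=S_3$, $r=3$ and the representation of permutations as tuples
are made purely for illustration. The script reports the $18$ admissible triples, which fall
into $3$ Nielsen classes (the three orderings of the class vector consisting of two
transpositions and a $3$\nobreakdash-cycle).
\begin{verbatim}
from itertools import permutations, product
from functools import reduce

G = list(permutations(range(3)))      # S_3, permutations as tuples
e = tuple(range(3))

def mul(p, q):                        # composition: (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

def generates(gens):                  # does <gens> equal all of S_3?
    closure, grew = {e}, True
    while grew:
        grew = False
        for a in list(closure):
            for g in gens:
                c = mul(a, g)
                if c not in closure:
                    closure.add(c)
                    grew = True
    return len(closure) == len(G)

r = 3                                 # number of branch points
tuples = [t for t in product(G, repeat=r)
          if reduce(mul, t) == e and generates(t)]

seen, n_classes = set(), 0            # quotient by simultaneous conjugation
for t in tuples:
    if t not in seen:
        n_classes += 1
        seen |= {tuple(mul(mul(h, g), inv(h)) for g in t) for h in G}

print(len(tuples), "tuples;", n_classes, "Nielsen classes")   # -> 18; 3
\end{verbatim}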
\begin{cor}
For any finite group~$G$,
the regular inverse Galois problem admits a positive answer over~$\bar {\mathbf Q}$.
\end{cor}
\begin{proof}
Let~$r$ be an integer, large enough that~$G$ can be generated by~$r-1$ elements.
Pick~$r$ points of~$\P^1(\bar {\mathbf Q})$
and let $X \subset \P^1_{\bar {\mathbf Q}}$ denote their complement.
As $\mathrm{Ni}^\ast_{r}(G)\neq\emptyset$,
Corollary~\ref{cor:retgcovers}
ensures the existence of an irreducible $G$\nobreakdash-torsor $p:Y \to X$.
As~$Y$ is normal and~$p$ is finite,
the normalisation of~$\P^1_{\bar {\mathbf Q}}$ in the function field of~$Y$
is a smooth curve~$C$ over~$\bar {\mathbf Q}$ containing~$Y$
as a dense open subset, equipped with a finite morphism $\pi:C \to \P^1_{\bar {\mathbf Q}}$
that extends~$p$.
As~$p$ is a $G$\nobreakdash-torsor, the function field extension $\bar {\mathbf Q}(C)/\bar {\mathbf Q}(t)$
is Galois with group~$G$
(see~\textsection\ref{subsec:torsorsandgalois}).
\end{proof}
\subsection{Monodromy of some non-Galois covers of the projective line}
Proposition~\ref{prop:descpi1} is also useful for computing
the monodromy of ramified covers of the complex projective line that are not necessarily Galois,
via the following result.
\begin{prop}
\label{prop:ssgsd}
Let~$C$ be a smooth, projective, irreducible curve over~$\C$,
endowed with a finite morphism $\pi:C \to \P^1_{\C}$.
Let $X \subseteq \P^1_\C$
be a dense open subset over which~$\pi$ is étale.
Fix $x \in X(\C)$ and write $\P^1_\C \setminus X = \{b_1,\dots,b_r\}$.
Let~$M$ denote the monodromy group of~$\pi$, i.e.\ the largest quotient of $\pi_1(X(\C),x)$
through which its action on~$\pi^{-1}(x)$ factors.
After choosing a bijection $\pi^{-1}(x) \simeq \{1,\dots,n\}$, we view~$M$ as a transitive subgroup of
the symmetric group~$S_n$.
There exist
$\mu_1,\dots,\mu_r \in M$ satisfying the following three properties:
\begin{enumerate}
\item the elements $\mu_1,\dots,\mu_r$ generate the group~$M$;
\item their product $\mu_1\cdots\mu_r$ is the identity of~$M$;
\item for each $i \in \{1,\dots,r\}$, the element $\mu_i \in S_n$ is a product
of cycles whose lengths are the ramification indices of~$\pi$ at the points of $\pi^{-1}(b_i)$.
\end{enumerate}
\end{prop}
\begin{proof}
Applying
Proposition~\ref{prop:descpi1}
and letting~$\mu_i$ denote the image of~$\gamma_i$ in~$M$, we obtain~(1) and~(2).
Property~(3) only depends on the conjugacy class of~$\gamma_i$
and is a standard calculation of the monodromy of the étale coverings of the punctured
unit disc.
\end{proof}
\begin{example}
\label{ex:rigsn}
Let~$C$ be a smooth, projective, irreducible curve over an algebraically closed subfield~$k$ of~$\C$, endowed with a morphism
$\pi:C \to \P^1_k$ of degree $n \geq 1$.
Assume that all ramification points have ramification index~$2$ and that
no two of them lie in the same fibre of~$\pi$.
Then the Galois group of a Galois closure of the function field extension
$k(C)/k(t)$ is the full symmetric group~$S_n$. Indeed, Remark~\ref{rk:ret}
and Proposition~\ref{prop:ssgsd} show that this Galois group is a transitive subgroup of~$S_n$
generated by transpositions; the only such subgroup is~$S_n$ itself.
\end{example}
\begin{rmk}
\label{rk:geomconnected}
Let~$k$ be a field of characteristic~$0$.
Let $k(t) \subseteq K \subseteq K' \subset \overline{k(t)}$ be a tower of fields,
where~$\overline{k(t)}$ denotes an algebraic closure of~$k(t)$,
where~$K/k(t)$ is a finite extension and where~$K'/k(t)$ is its Galois closure
inside~$\overline{k(t)}$.
Let us assume that~$k$ is algebraically closed in~$K$. The field~$k$ need not, in general, be algebraically closed in~$K'$.
(For example, if $k={\mathbf Q}$ and $K={\mathbf Q}(t^{1/n})$, then $K'={\mathbf Q}(\zeta_n)(t^{1/n})$, where~$\zeta_n$ denotes a primitive
$n$th root of unity.)
This pathology, however, cannot occur if the underlying topological monodromy
group is the full symmetric group, or, more generally, if it is a self-normalising subgroup of the ambient
symmetric group.
Indeed, let~$k'$ denote the algebraic closure of~$k$ in~$K'$, set $G=\mathrm{Gal}(K'/k(t))$ and $G_{{\mathrm{geom}}}=\mathrm{Gal}(K'/k'(t))$.
Letting~$\bar k$ denote the algebraic closure of~$k$ in~$\overline{k(t)}$, we
remark that $K' \otimes_{k'} \bar k$ and $K \otimes_k \bar k$ are fields and that the field extension
$K' \otimes_{k'} \bar k / \bar k(t)$
is a Galois closure of
$K \otimes_k \bar k / \bar k(t)$, so that its Galois group $G_{{\mathrm{geom}}}$
can be viewed as the topological monodromy group
associated with $K/k(t)$
(see Remark~\ref{rk:ret}).
Fix a primitive element $\alpha_1 \in K$ over~$k(t)$.
Denote by $\alpha_1,\dots,\alpha_n \in K'$ the collection of its Galois conjugates.
As~$G$ acts faithfully on the~$\alpha_i$'s, there is a sequence of inclusions
$G_{{\mathrm{geom}}} \subseteq G \subseteq S_n$. As~$k'/k$ is a Galois field extension,
the group~$G_{{\mathrm{geom}}}$ is normal in~$G$; hence,
if~$G_{{\mathrm{geom}}}$ is self-normalising in~$S_n$,
then
$G=G_{{\mathrm{geom}}}$
and~$k$ is algebraically closed in~$K'$. Thus, for instance,
if the curve~$C$ and the morphism~$\pi$ of Example~\ref{ex:rigsn} come by scalar extension
from a curve and a morphism defined over~${\mathbf Q}$, and if~$K/{\mathbf Q}(t)$ denotes the function field extension
given by the latter morphism, then a Galois closure of~$K/{\mathbf Q}(t)$ has Galois group~$S_n$.
\end{rmk}
In conjunction with Remark~\ref{rk:geomconnected},
Example~\ref{ex:rigsn} leads to many concrete examples of regular Galois extensions of~${\mathbf Q}(t)$ with group~$S_n$.
Let us recall, however, that the mere existence of
regular Galois extensions of~${\mathbf Q}(t)$ with group~$S_n$
already followed
from the positive answer to Noether's problem for~$S_n$
over~${\mathbf Q}$ (see Example~\ref{ex:noethersn} and
Remark~\ref{rmk:noetherimpliesrig}).
As Noether's problem is open for the alternating group~$A_n$ over~${\mathbf Q}$ as soon as $n \geq 6$,
it is of interest to note that Proposition~\ref{prop:ssgsd} also leads to concrete examples of regular
Galois extensions of~${\mathbf Q}(t)$ with group~$A_n$ for all values of~$n$, as we now illustrate.
\begin{example}
\label{ex:alternating}
Let~$C$ be a smooth, projective, geometrically irreducible curve over a subfield~$k$ of~$\C$, endowed with a morphism
$\pi:C \to \P^1_k$ of degree $n \geq 3$.
Assume that~$\pi$ has exactly three ramification points,
that these ramification points are rational points of~$C$ lying above $0,1,\infty \in \P^1(k)$, with ramification indices~$e_0$, $e_1$, $e_\infty$, respectively,
and that $(e_0,e_1,e_\infty)=(n,n-1,2)$ if~$n$ is even
and $(e_0,e_1,e_\infty)=(n-1,n,2)$ if~$n$ is odd.
Let~$K'/k(t)$ denote a Galois closure of
the function field extension
$k(C)/k(t)$.
We first note that~$K'$ is a regular Galois extension of~$k(t)$ with group~$S_n$.
Indeed, when~$k$ is algebraically closed, Remark~\ref{rk:ret}
and Proposition~\ref{prop:ssgsd} imply that $\mathrm{Gal}(K'/k(t))$ is a transitive subgroup of~$S_n$
that contains a cycle of order~$n-1$ and a transposition, but the only such subgroup is~$S_n$ itself (see
\cite[Lemma~4.4.3]{serretopics}); by Remark~\ref{rk:geomconnected}, the case of arbitrary~$k$ follows.
Secondly, we claim that there exists $\alpha \in k^*$
such that $\alpha t$ is a square in~$K'$.
Setting $u=\sqrt{\alpha t}$, this will imply that~$K'$ is a regular Galois extension of~$k(u)$
with group~$A_n$ (where~$u$ can now be viewed as a free variable), as desired.
To verify this claim, we note that the topological monodromy of the double cover of~$\P^1_k$
corresponding to the (unique) quadratic subextension~$L/k(t)$ of~$K'/k(t)$
is obtained by composing the topological monodromy of~$\pi$ with the signature morphism $S_n \to {\mathbf Z}/2{\mathbf Z}$.
As the local monodromy of~$\pi$ at~$1$ is given by a cycle of odd length,
it follows that~$L/k(t)$
is unramified outside of~$0$ and~$\infty$, and, hence, that $L=k\big(\sqrt{\alpha t}\mkern1mu\big)$ for some $\alpha \in k^*$
(as~$k$ is algebraically closed in~$L$).
\end{example}
For explicit equations to which Example~\ref{ex:alternating} can be applied,
see \cite[\textsection4.5]{serretopics}.
\subsection{Looking for covers over non-algebraically closed ground fields}
Now that we know that the regular inverse Galois problem has a positive answer
over~$\bar {\mathbf Q}$, we can try to
find solutions over~${\mathbf Q}$ or at least over overfields of~${\mathbf Q}$ as small as possible.
This has been achieved over the completions of~${\mathbf Q}$,
thus yielding, for all finite groups, a positive answer to the regular inverse Galois problem
over~${\mathbf R}$
(Krull and Neukirch~\cite{krullneukirch}) and over the field~${\mathbf Q}_p$ of $p$\nobreakdash-adic numbers
for every prime~$p$ (Harbater~\cite{harbaterinvgal}).
Pop~\cite{poplarge} generalised these results as follows\footnote{It is not immediately clear that
the article \cite{poplarge}
establishes
Theorem~\ref{th:pop} as we have stated it, without assuming the field to be perfect:
in our definition of the regular inverse
Galois problem, the sought-for field extension of~$k(t)$
was required to admit a \emph{smooth} projective model, which could fail over imperfect fields.
However, in any case, Theorem~\ref{th:pop} as we have stated it is proved in \cite[Théorème~1.1]{moretbaillyconstruction}.}\textsuperscript{,}\footnote{Theorem~\ref{th:pop} (at least for perfect large fields, see the previous footnote) also follows from the results of Harbater~\cite{harbaterinvgal}, see
\cite[\textsection4.5]{harbatericm}.}:
\begin{thm}[Harbater and Pop]
\label{th:pop}
The regular inverse Galois problem has a positive answer over any large field, for any finite group.
\end{thm}
By definition, a field~$k$ is \emph{large} when every smooth curve over~$k$ that has a rational point
has infinitely many of them. Examples include all fields that are complete with respect to an absolute value,
such as~${\mathbf R}$ and~${\Q_{p}}$, as well as infinite algebraic extensions of finite fields or more generally
all so-called pseudo-algebraically closed fields (fields over which every smooth geometrically connected
curve has infinitely many rational points).
The proofs of Theorem~\ref{th:pop} given by Harbater and by Pop rely, in the formal or in
the rigid analytic context over a
complete discretely valued ground
field,
on the construction, by patching, of appropriate ``topological coverings'',
and on a variant, in the corresponding context, of Riemann's existence theorem. Over~$\C$,
the underlying patching construction is presented in \cite[\textsection3.5]{szamuelygaloisgroups}.
Theorem~\ref{th:pop} had previously been established by Fried and Völklein~\cite{friedvolkleinmoduli}
in the case of pseudo-algebraically closed fields of characteristic~$0$. From this special case they
deduced the following result in positive characteristic:
\begin{thm}[Fried and Völklein]
Let~$G$ be a finite group. The regular inverse Galois problem has a positive answer for~$G$
over~${\mathbf F}_p(t)$ for all but finitely many primes~$p$.
\end{thm}
Colliot-Thélène later
shed new light on Theorem~\ref{th:pop} by recasting it
as a theorem about the existence of suitable rational curves
on the varieties ${\mathbf A}^n_k/G$ appearing in Noether's problem, and by noting that
even though these varieties can fail to be rational,
they are in any case \emph{rationally connected}, which opens the door
to applications of the theory of deformation of rational
curves on rationally connected varieties over large fields---a
theory developed, in great generality, by Kollár~\cite{kollarloc}.
Over large fields of characteristic~$0$,
a geometric proof of Theorem~\ref{th:pop} that proceeds by constructing rational curves
on~${\mathbf A}^n_k/G$ was thus given in~\cite{ctrc}.
See also
\cite{kollarfundamentalgroups}, \cite{kollarfundamentalgroups2},
\cite{moretbaillyconstruction} for generalisations.
Unfortunately,
no method for the systematic construction of rational curves on rationally
connected varieties over~${\mathbf Q}$ is known;
more generally,
the various methods on which all known proofs of Theorem~\ref{th:pop} rely
fall short
of solving any case of the regular inverse Galois problem over a given number field.
As of today, all known constructions of realisations of finite groups as regular Galois groups over~${\mathbf Q}$ exploit
more or less ad hoc ideas. One of the most successful approaches is the rigidity method,
initiated by Shih, Fried, Belyi, Matzat and Thompson in the 1970's and the 1980's, which we discuss next.
\subsection{Hurwitz spaces}
\label{subsec:hurwitzspaces}
Even though Hurwitz spaces are not necessary for the description and the implementation of the rigidity method,
their introduction makes the theory rather transparent; in addition, they are indeed indispensable
for some of its refinements.
Hurwitz spaces are moduli spaces of smooth projective irreducible covers of the
projective line. We shall consider them only in characteristic~$0$.
In addition, we shall restrict attention to the moduli space of $G$\nobreakdash-covers;
the term ``$G$\nobreakdash-cover'' is another
name for the regular Galois extensions of~$k(t)$ with group~$G$ that we have been considering
since the beginning of~\textsection\ref{sec:rig}:
\begin{defn}
\label{def:gcover}
Let~$G$ be a finite group. Let~$k$ be a field.
A \emph{$G$\nobreakdash-cover} over~$k$
is a smooth, proper, geometrically irreducible curve~$C$ over~$k$
endowed, on the one hand, with a finite morphism $\pi:C\to \P^1_k$
such that the corresponding extension of function fields $k(C)/k(t)$
is Galois, and, on the other hand,
with an isomorphism $G \myxrightarrow{\,\sim\,} \mathrm{Gal}(k(C)/k(t))$.
(In particular~$G$ acts faithfully on~$C$
and the morphism $\pi^{-1}(U) \to U$ induced by~$\pi$
is a $G$\nobreakdash-torsor for any dense open subset
$U \subset \P^1_k$ above which~$\pi$ is étale.)
\end{defn}
The group of automorphisms of any $G$\nobreakdash-cover, i.e.\ the group of automorphisms of~$C$ that respect not only
the morphism~$\pi$
but also the given isomorphism $G \myxrightarrow{\,\sim\,} \mathrm{Gal}(k(C)/k(t))$,
is the centre of~$G$. We shall assume, until the end of~\textsection\ref{sec:rig},
that~$G$ has trivial centre. This is not too serious a restriction (as any
finite group is a quotient of a finite group with trivial centre,
see \cite[Lemma~2]{friedvolkleinmoduli})
and it will ensure that our
moduli space is a variety rather than a stack
(as the objects that we want to classify have no
nontrivial automorphism).
To prepare for the statement of the next theorem, we need to introduce some notation.
When~$k$ has characteristic~$0$,
the \emph{branch locus} of $\pi:C\to\P^1_k$ is by definition the smallest
reduced
$0$\nobreakdash-dimensional subvariety~$B$ of~$\P^1_k$ such that~$\pi$ is étale over $\P^1_k\setminus B$.
Its \emph{degree} is the cardinality of $B(\bar k)$, where~$\bar k$ denotes an algebraic closure of~$k$.
For any integer $r \geq 1$, we denote by
${\mathscr U}^r \subset (\P^1_{\mathbf Q})^r$ the locus of $r$\nobreakdash-tuples with pairwise distinct components,
and by~${\mathscr U}_r$ the quotient of ${\mathscr U}^r$ by the natural
action of the symmetric group~$S_r$. Thus~${\mathscr U}_r$ is a smooth variety over~${\mathbf Q}$,
and for any field~$k$ of characteristic~$0$,
the set ${\mathscr U}_r(k)$ can be identified with the set of
reduced
$0$\nobreakdash-dimensional subvarieties of~$\P^1_k$ of degree~$r$,
i.e.\ with the set of subsets of $\P^1(\bar k)$ of cardinality~$r$ that are stable under
$\mathrm{Gal}(\bar k/k)$.
\begin{thm}[Fried and Völklein \cite{friedvolkleinmoduli}]
\label{th:hurwitzspaces}
Let~$G$ be a finite group with trivial centre
and $r\geq 1$ be an integer.
With~$G$ and~$r$,
one can canonically associate
a smooth variety~${\mathscr H}_{G,r}$ over~${\mathbf Q}$ such that for any field~$k$ of characteristic~$0$,
the set ${\mathscr H}_{G,r}(k)$ is the set of isomorphism classes of $G$\nobreakdash-covers over~$k$
whose branch locus has degree~$r$.
It is equipped with
a finite étale morphism $\rho:{\mathscr H}_{G,r} \to {\mathscr U}_r$ that maps the isomorphism class
of a $G$\nobreakdash-cover to its branch locus.
\end{thm}
The variety ${\mathscr H}_{G,r}$ is called a \emph{Hurwitz space}.
A modern approach to Theorem~\ref{th:hurwitzspaces} consists
in defining $G$\nobreakdash-covers
not just over fields, as in Definition~\ref{def:gcover},
but more generally over schemes; one then proves
that the resulting moduli
functor on the category of schemes of characteristic~$0$
is representable, by~${\mathscr H}_{G,r}$.
This is the approach adopted by
Wewers~\cite{wewersphd}, who works more generally over~${\mathbf Z}$ (with tame covers)
and without assuming that the centre
of~$G$ is trivial (thus obtaining a moduli stack ${\mathscr H}_{G,r}$). See \cite{romagnywewers}.
We note that
Hurwitz spaces were first contemplated by Hurwitz~\cite{hurwitzorig}, and, with a functorial point of view,
by Fulton~\cite{fultonhurwitz}. However, these authors only considered
(non-Galois) covers with ``simple'' ramification, i.e.\ such
that all ramification points have ramification index~$2$
and no two of them lie over the same branch point. This is insufficient for the purposes
of the regular inverse Galois problem (see Example~\ref{ex:rigsn}).
Let us come back to our motivation.
It is tautological that for any finite group~$G$ with trivial centre, the regular inverse Galois problem
admits a positive answer for~$G$ over~${\mathbf Q}$ if and only if there exists an
integer $r \geq 1$ such that ${\mathscr H}_{G,r}({\mathbf Q})\neq\emptyset$.
For the question of the existence of a rational point
on one of the varieties
${\mathscr H}_{G,r}$
to be tractable, one needs, in turn, some understanding of their geometry.
The varieties~${\mathscr H}_{G,r}$ can be described in a very explicit combinatorial fashion, at least
geometrically,
thanks to the finite étale morphism
$\rho:{\mathscr H}_{G,r} \to {\mathscr U}_r$
given by Theorem~\ref{th:hurwitzspaces}.
Let us pick up the notation $\ni^\ast_r(G)$ introduced in Corollary~\ref{cor:retgcovers}
and write
$\ni_r(G) \subseteq \ni^\ast_r(G)$ for the subset formed by the conjugacy classes of those $r$\nobreakdash-tuples
$(g_1,\dots,g_r)$ such that none of the~$g_i$'s is equal to~$1$.
It then follows from
(the proof of) Corollary~\ref{cor:retgcovers}
that
the fibre of~$\rho$
above any complex point of~${\mathscr U}_r$ can be canonically identified with~$\ni_r(G)$.
In addition, the fundamental group of ${\mathscr U}_r(\C)$ admits a down-to-earth presentation (as a quotient
of the Artin braid group by one relation) and its action on~$\ni_r(G)$ can also be made explicit
(see \cite[\textsection1.3]{friedvolkleinmoduli}).
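Concretely, with one common choice of conventions, the standard generators $Q_1,\dots,Q_{r-1}$ of the braid group act on~$\ni_r(G)$ by the so-called Hurwitz moves
\begin{align*}
Q_i\cdot(g_1,\dots,g_r)=(g_1,\dots,g_{i-1},\,g_ig_{i+1}g_i^{-1},\,g_i,\,g_{i+2},\dots,g_r)\rlap.
\end{align*}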
Thus, for instance, the task of describing the irreducible components of the variety~$({\mathscr H}_{G,r})_{\bar {\mathbf Q}}$
becomes equivalent to that of computing the orbits of a certain action of the braid group on~$\ni_r(G)$.
Unfortunately, as~$r$ increases, even this ``simple'' task quickly becomes computationally intractable on modern computers (see e.g.\ \cite{haefner}).
\subsection{The rigidity method}
\label{subsec:rigidity}
This method consists in cleverly
identifying irreducible components of~${\mathscr H}_{G,r}$ that contain rational points
for somehow ``trivial'' reasons.
To explain it, we need to refine the étale covering~$\rho$ that appears in Theorem~\ref{th:hurwitzspaces}.
\subsubsection{Algebraic local monodromy}
\label{subsubsec:alglocalmon}
Let~$k$ be a field of characteristic~$0$.
Let $\pi:C \to \P^1_k$ be a $G$\nobreakdash-cover over~$k$.
Let $b_i \in \P^1(k)$ be a rational branch point.
Let $X \subset \P^1_k$ be a dense open subset over which~$\pi$ is étale.
Under the assumption that~$k$ is a subfield of~$\C$,
we have associated with~$\pi$ and~$b_i$,
in~\textsection\ref{subsec:classifyc},
a canonical conjugacy class of~$G$,
namely the conjugacy class of the element~$g_i$ appearing
in Corollary~\ref{cor:retgcovers}.
This is the ``local monodromy'' of~$\pi$ at~$b_i$.
We recall that it is the image,
by a surjection $\pi_1(X(\C),x)\twoheadrightarrow G$ that is well-defined up to conjugation,
of the conjugacy class of a local counterclockwise loop around~$b_i$.
To make this topological definition fit in with the moduli picture of Theorem~\ref{th:hurwitzspaces}
and in particular to understand how it
behaves
with respect to the action of the group of automorphisms of~$k$,
we need to make it algebraic.
We do this as follows.
Let~$\bar k$ be an algebraic
closure of~$k$.
The completion $\bar k(t)_{b_i}$
of~$\bar k(t)$ at the discrete valuation defined by~$b_i$ is isomorphic
to the field of formal power series~$\bar k((u))$, whose algebraic
closure is the field of Puiseux series $\bigcup_{n\geq 1} \bar k((u^{1/n}))$.
By Kummer theory,
the absolute Galois group of $\bar k(t)_{b_i}$ is
canonically isomorphic to ${\hat\Z}(1)_{\bar k}=\varprojlim_{n\geq 1} \boldsymbol{\mu}_n(\bar k)$. The inclusion of fields $\bar k(t) \hookrightarrow \bar k(t)_{b_i}$ induces a continuous
homomorphism in the reverse direction,
well-defined up to conjugation,
between their absolute Galois groups;
hence it induces a continuous
homomorphism $\hat{\mathbf Z}(1)_{\bar k} \to G$,
well-defined up to conjugation by an element of~$G$.
This conjugacy class of homomorphisms $\hat{\mathbf Z}(1)_{\bar k} \to G$ is the analogue of the~$g_i$ from Corollary~\ref{cor:retgcovers}. We call it the \emph{algebraic local monodromy} of~$\pi$ at~$b_i$.
\begin{rmks}
\label{rmks:zhat1zhat}
(i)
When~$k$ is a subfield of~$\C$, one can use the generator $\zeta_n=e^{2i\pi/n}$
of~$\boldsymbol{\mu}_n(\bar k)$ to identify~${\hat\Z}(1)_{\bar k}$ with~${\hat\Z}$ as topological groups (i.e.\ disregarding Galois actions).
The algebraic local monodromy of~$\pi$ at~$b_i$
then becomes identified with
a conjugacy class of~$G$.
One verifies that through this identification, the algebraic local monodromy of~$\pi$ at~$b_i$
coincides with the conjugacy class of~$g_i$ from Corollary~\ref{cor:retgcovers}.
(ii)
The action of~$\mathrm{Gal}(\bar k/k)$ on the algebraic local monodromy of~$\pi$ at~$b_i$
plays a crucial rôle in the rigidity method.
The identification
of Remark~\ref{rmks:zhat1zhat}~(i)
loses track of it,
except when $k={\mathbf R}$. Indeed, in this case,
complex conjugation acts by multiplication by~$-1$ on $\hat{\mathbf Z}(1)_{\bar k}$,
hence maps the algebraic local monodromy of~$\pi$ at~$b_i$ to its ``inverse'';
while
on the topological side, it sends a local counterclockwise loop around~$b_i$
to a local clockwise loop around~$b_i$, and hence again it maps the conjugacy
class of~$g_i$ to the conjugacy class of~$g_i^{-1}$.
(iii)
For an arbitrary field~$k$ of characteristic~$0$,
the choice of a topological generator of the procyclic group
${\hat\Z}(1)_{\bar k}$ should be thought of as an algebraic analogue of the choice of an orientation
of the punctured unit disc (an insight of Grothendieck,
see \cite[(2.1)]{deligneweil1}),
as is illustrated by Remark~\ref{rmks:zhat1zhat}~(ii).
\end{rmks}
\subsubsection{Factoring~$\rho$}
The
natural action of $\mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})$
on the
finite set
${\mathrm{Hom}}_{\text{cont}}\Big(\hat{\mathbf Z}(1)_{\bar {\mathbf Q}},G\Big)$
of continuous homomorphisms
$\hat{\mathbf Z}(1)_{\bar {\mathbf Q}} \to G$ induces a continuous action
of $\mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})$
on the quotient of this finite set by the conjugation action of~$G$.
As
the functor
\begin{gather*}
\Big(\text{reduced $0$-dimensional varieties over~${\mathbf Q}$}\Big)
\to \Bigg(\begin{array}{c}\text{finite sets endowed with a}\\\text{continuous action of $\mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})$}\end{array}\Bigg)
\end{gather*}
that sends a variety~$Z$ to the set~$Z(\bar {\mathbf Q})$ is an equivalence of categories,
we can canonically associate with~$G$
a reduced $0$\nobreakdash-dimensional variety~${\mathscr C}_G$ over~${\mathbf Q}$
such that
\begin{align*}
{\mathscr C}_G(\bar {\mathbf Q})={\mathrm{Hom}}_{\text{cont}}\Big(\hat{\mathbf Z}(1)_{\bar {\mathbf Q}},G\Big)/(\text{conjugation by $G$})\rlap,
\end{align*}
compatibly with the natural continuous actions of~$\mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})$ on both sides.
One then has a canonical
$\mathrm{Gal}(\bar k/k)$\nobreakdash-equivariant
identification
\begin{align*}
{\mathscr C}_G(\bar k)={\mathrm{Hom}}_{\text{cont}}\Big(\hat{\mathbf Z}(1)_{\bar k},G\Big)/(\text{conjugation by $G$})
\end{align*}
for any field~$k$ of characteristic~$0$, with algebraic closure~$\bar k$.
Given a $G$\nobreakdash-cover
$\pi:C \to \P^1_k$ over
a field~$k$ of characteristic~$0$
and a rational branch point
$b_i \in \P^1(k)$,
the algebraic local monodromy of~$\pi$ at~$b_i$
defined in~\textsection\ref{subsubsec:alglocalmon}
is thus an element of ${\mathscr C}_G(k)$. More generally, if $b_i \in \P^1_k$ is an
arbitrary
branch point of~$\pi$ (i.e.\ a closed point, not necessarily rational),
applying the definition of~\textsection\ref{subsubsec:alglocalmon}
to the $G$\nobreakdash-cover over~$k(b_i)$ obtained from~$\pi$ by scalar extension
from~$k$ to~$k(b_i)$
and to the rational branch point of this $G$\nobreakdash-cover
induced by~$b_i$, we obtain an element of ${\mathscr C}_G(k(b_i))$, which we still
call the \emph{algebraic local monodromy of~$\pi$ at~$b_i$}.
Thus, if $B \subset \P^1_k$ denotes the branch locus of~$\pi$,
the collection of the algebraic local monodromies of~$\pi$ at the points of~$B$
defines a morphism $B \to {\mathscr C}_G$
of varieties over~${\mathbf Q}$. We call it the \emph{algebraic local monodromy
morphism of~$\pi$}.
Let us endow $({\mathscr C}_G)^r={\mathscr C}_G \times \dots \times {\mathscr C}_G$ with the natural action of the symmetric group~$S_r$
and denote by ${\mathscr V}_r$ the quotient of ${\mathscr U}^r \times ({\mathscr C}_G)^r$ by the diagonal action of~$S_r$.
We have thus produced an étale covering $\nu:{\mathscr V}_r \to {\mathscr U}_r$
of smooth varieties over~${\mathbf Q}$.
For any field~$k$ of characteristic~$0$,
the set ${\mathscr V}_r(k)$ can be identified with the set
of
reduced
$0$\nobreakdash-dimensional subvarieties of~$\P^1_k$ of degree~$r$
endowed with a morphism to~${\mathscr C}_G$.
Associating with each $G$\nobreakdash-cover
its branch locus together with
its algebraic local monodromy morphism
finally provides us with a morphism $\rho':{\mathscr H}_{G,r} \to {\mathscr V}_r$ such that $\rho=\nu\circ\rho'$.
Let~$\mathrm{Cl}(G)$ denote the set of conjugacy classes of~$G$.
Remark~\ref{rmks:zhat1zhat}~(i) and (the proof of)
Corollary~\ref{cor:retgcovers} together
imply the following explicit description
of the complex fibres of~$\rho'$:
\begin{prop}
\label{prop:descrhop}
Let $B \subset \P^1_\C$ be a reduced $0$\nobreakdash-dimensional subvariety of degree~$r$.
Write $B=\{b_1,\dots,b_r\}$.
Let $C=(C_1,\dots,C_r)$ be an $r$\nobreakdash-tuple of nontrivial conjugacy classes of~$G$,
viewed as a map $B(\C) \to {\mathscr C}_G(\C)$
via the identification ${\mathscr C}_G(\C)=\mathrm{Cl}(G)$ of Remark~\ref{rmks:zhat1zhat}~(i).
Then the fibre of~$\rho'$
above the point of~${\mathscr V}_r(\C)$ defined by~$B$ and~$C$ can be identified with the
quotient $\ni^C_r(G)$ of the set of $r$\nobreakdash-tuples $(g_1,\dots,g_r) \in G^r$ satisfying the following
three conditions by the action of~$G$ on this set by simultaneous conjugation:
\begin{enumerate}
\item
$g_1\cdots g_r=1$;
\item $g_1,\dots,g_r$ generate~$G$;
\item
$g_i \in C_i$ for all $i \in \{1,\dots,r\}$.
\end{enumerate}
\end{prop}
\subsubsection{Rational points of~${\mathscr C}_G$}
Viewing~$\bar {\mathbf Q}$ as a subfield of~$\C$,
Remark~\ref{rmks:zhat1zhat}~(i) also
induces an identification ${\mathscr C}_G(\bar{\mathbf Q})=\mathrm{Cl}(G)$.
Via this identification,
the natural action of $\mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})$ on~${\mathscr C}_G(\bar {\mathbf Q})$ gives rise to the action
of $\mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})$
on~$\mathrm{Cl}(G)$ given by the formula $\sigma(g) = g^{-\chi(\sigma)}$ for $\sigma \in \mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})$
and $g \in G$, where $\chi:\mathrm{Gal}(\bar {\mathbf Q}/{\mathbf Q})\twoheadrightarrow \hat{\mathbf Z}^*$ denotes the
cyclotomic character.
As a consequence,
the set ${\mathscr C}_G({\mathbf Q})$ of
rational points of~${\mathscr C}_G$
gets identified with the set of \emph{rational} conjugacy classes of~$G$ in the following
sense:
\begin{defn}
\label{def:rationalconjugacy}
A conjugacy class~$C$ of a finite group~$G$ is \emph{rational} if
for every $g \in C$ and every integer $n \geq 1$
prime to the order of~$g$,
the element $g^n$ belongs to~$C$.
\end{defn}
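For instance, every conjugacy class of a symmetric group~$S_m$ is rational: if $g \in S_m$ and~$n$ is prime to the order of~$g$, then~$g^n$ has the same cycle type as~$g$, hence is conjugate to~$g$. By contrast, neither of the two nontrivial conjugacy classes of~${\mathbf Z}/3{\mathbf Z}$ is rational, as they are exchanged by $g \mapsto g^2$.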
\subsubsection{Rational points of~${\mathscr V}_r$}
\label{subsubsec:rpvr}
Here is a simple way to exhibit rational points of~${\mathscr V}_r$.
Let $b_1,\dots,b_r \in \P^1({\mathbf Q})$ be pairwise distinct.
Let $B=\{b_1,\dots,b_r\}$.
The rational points of~${\mathscr V}_r$ lying above the rational point of~${\mathscr U}_r$ defined by~$B$
are exactly the $r$\nobreakdash-tuples of rational points of~${\mathscr C}_G$, i.e.\ they are
the $r$\nobreakdash-tuples of rational conjugacy
classes of~$G$.
\subsubsection{Rational points of~${\mathscr H}_{G,r}$}
The morphism $\rho':{\mathscr H}_{G,r} \to {\mathscr V}_r$ is an étale covering. As such, it has a degree, which is
a locally constant function on~${\mathscr V}_r$. This function is not constant on~${\mathscr V}_r$ in general---unlike
the degree of~$\rho$, which is constant, equal to the cardinality of~$\ni_r(G)$, as we
have seen in~\textsection\ref{subsec:hurwitzspaces}.
A connected component of~${\mathscr V}_r$ over which~$\rho'$ has degree~$1$,
i.e.\ over which~$\rho'$ restricts to an isomorphism,
is said to be \emph{rigid}.
It is a trivial but fruitful observation, which forms the basis of the rigidity method,
that the existence of a rational point
of a rigid connected component of~${\mathscr V}_r$ implies the existence
of a rational point of~${\mathscr H}_{G,r}$.
By Proposition~\ref{prop:descrhop}, the rigidity of a connected component
can be verified by computing
the set $\ni^C_r(G)$ associated with its complex points.
This motivates
the following definition:
\begin{defn}
An $r$\nobreakdash-tuple $C=(C_1,\dots,C_r)$ of nontrivial conjugacy classes of~$G$
is \emph{rigid} if
the set $\ni^C_r(G)$ defined in Proposition~\ref{prop:descrhop}
has cardinality~$1$.
\end{defn}
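For instance, let $G=S_3$, let~$C_1=C_2$ be the class of transpositions and~$C_3$ the class of $3$\nobreakdash-cycles. In any triple $(g_1,g_2,g_3)$ satisfying the three conditions of Proposition~\ref{prop:descrhop}, the transpositions~$g_1$ and~$g_2$ must be distinct (otherwise $g_3=(g_1g_2)^{-1}$ would be trivial rather than a $3$\nobreakdash-cycle); conversely, each of the~$6$ ordered pairs of distinct transpositions generates~$S_3$ and determines $g_3=(g_1g_2)^{-1}$, which is indeed a $3$\nobreakdash-cycle. As~$S_3$ has trivial centre, it acts freely on these~$6$ triples by simultaneous conjugation, so that $\ni^C_3(S_3)$ has cardinality~$1$ and the triple $(C_1,C_2,C_3)$ is rigid. These three classes being rational as well, Theorem~\ref{th:rigidity} below applies to~$S_3$.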
\subsubsection{Summing up}
We thus arrive at a down-to-earth condition that implies that the
rational points of~${\mathscr V}_r$ constructed in~\textsection\ref{subsubsec:rpvr}
can be lifted to rational points of~${\mathscr H}_{G,r}$.
\begin{thm}
\label{th:rigidity}
Let~$G$ be a finite group with trivial centre.
Let $r\geq 1$ be an integer.
If there exists a rigid $r$\nobreakdash-tuple
$C=(C_1,\dots,C_r)$ of nontrivial rational conjugacy classes of~$G$,
then for any pairwise distinct $b_1,\dots,b_r \in \P^1({\mathbf Q})$,
the ${\mathbf Q}$\nobreakdash-point of~${\mathscr U}_r$ defined by $\{b_1,\dots,b_r\}$ can be lifted to a
${\mathbf Q}$\nobreakdash-point of~${\mathscr H}_{G,r}$. In particular, the regular inverse Galois problem admits a positive
solution for~$G$ over~${\mathbf Q}$.
\end{thm}
Theorem~\ref{th:rigidity} represents the base case of the rigidity method. It admits many variants;
for instance, one can allow non-rational branch points. (Pro: this weakens
the condition that the prescribed conjugacy classes be rational; con: these conjugacy classes cannot be chosen independently of one another
any longer.)
Even just the above base case is already unreasonably effective: the hypothesis of
Theorem~\ref{th:rigidity}
has been shown to be satisfied,
with $r=3$, for at least~$10$ of the~$26$ sporadic simple groups,
including the monster (by Thompson) and the baby monster (by Malle and Matzat);
see \cite[Chapter~II, \textsection9]{mallematzat}.
As another example, the variant of Theorem~\ref{th:rigidity} in which $r=3$ and~$b_1$ is rational
while~$b_2$ and~$b_3$ are conjugate
quadratic points of the projective line can be applied to $\mathrm{PSL}_2({\mathbf F}_{\mkern-2mup})$ for all primes~$p$ such that~$2$ or~$3$
is not a square modulo~$p$ (see \cite[\textsection8.3.3]{serretopics}),
thus recovering the positive answer
to the regular inverse
Galois problem over~${\mathbf Q}$ for this infinite family of groups that had been obtained
by Shih using modular curves rather than the rigidity
method.
When the rigidity method is applicable, it is in principle possible to deduce from it an explicit polynomial
that realises the desired regular Galois extension of~${\mathbf Q}(t)$
(see \cite[Chapter~II, \textsection9]{mallematzat}). This has some limits in practice (e.g.\ for the
monster group, the degree of the polynomial cannot have fewer than~$20$ digits) but it leads to
interesting computational challenges (see e.g.\ \cite{barthwenz}).
\section{Grunwald's problem and the Brauer--Manin obstruction}
\label{sec:3}
\subsection{Looking for rational points}
\label{subsec:lookingforratp}
Despite its successes, the rigidity method,
discussed in~\textsection\ref{subsec:rigidity},
often fails to be applicable.
For instance, it fails, in general, for $p$\nobreakdash-groups;
indeed, the regular inverse Galois problem
is still open for most $p$\nobreakdash-groups over~${\mathbf Q}$, even though the inverse Galois problem itself is known to have a positive
answer, over any number field,
for all $p$\nobreakdash-groups---and more generally, for all solvable groups, by a celebrated theorem of Shafarevich
(see \cite[Chapter~IX, \textsection6]{neukirchschmidtwingberg}).
In other words,
after learning, in~\textsection\ref{sec:1},
that the quotient variety ${\mathbf A}^n_k/G$ can fail to be rational or even retract rational,
we now find ourselves unable, at least in practice,
to salvage the Hilbert--Noether method by constructing rational curves in~${\mathbf A}^n_k/G$
over which to apply Hilbert's irreducibility theorem,
as envisaged at the beginning of~\textsection\ref{subsec:rigstatement}. This leads us to our next question:
letting
$Y \subseteq {\mathbf A}^n_k$ denote the locus where~$G$ acts freely,
can we directly construct rational points of $Y/G$ above which the fibre of the quotient map $Y\to Y/G$
is irreducible?
An approach put forward by Colliot-Thélène consists in noting that
Ekedahl's Theorem~\ref{th:ekedahl} reduces this
question, in full generality, to the problem of finding rational points on~$Y/G$
subject to certain weak approximation conditions.
In particular, if the variety~$Y/G$ satisfies weak approximation
off a finite set of places of~$k$, then the inverse Galois problem has a positive answer for~$G$ over~$k$.
Such a weak approximation property can be proved unconditionally
in some cases; for instance, under the assumptions of the following
remarkable theorem from~\cite{neukirch-solvable}, in which no place of~$k$ is excluded:
\begin{thm}[Neukirch]
\label{th:neukirch}
Let~$k$ be a number field.
Let~$G$ be a finite solvable group, acting linearly on~${\mathbf A}^n_k$ for some $n \geq 1$.
Let $Y \subseteq {\mathbf A}^n_k$ be the locus where~$G$ acts freely.
Let $X=Y/G$.
Assume that the order of~$G$ and the number of roots of unity contained in~$k$ are coprime.
Then~$X$ satisfies weak approximation, i.e.\ the set $X(k)$ is dense in $X(k_\Omega)$.
In particular, Grunwald's problem admits a positive
answer for~$G$ over~$k$, for any finite subset $S \subset \Omega$.
\end{thm}
We recall that Grunwald's problem is Problem~\ref{pb:grunwald}.
Without the assumption on the order of~$G$, the conclusion of this theorem can fail,
as we have seen in Proposition~\ref{prop:wang}.
The validity of the weak approximation property is a problem of general interest that makes sense,
and has been studied, for arbitrary smooth varieties. As we shall now explain, the tools that have been developed
for its study on arbitrary smooth varieties turn out to be useful also in the special case of the quotient~$Y/G$.
\subsection{Brauer--Manin obstruction}
A general mechanism, introduced by Manin~\cite{maninicm}
and now called the \emph{Brauer--Manin obstruction},
explains, in some cases, why
the weak approximation property fails for certain varieties over number fields.
Let us recall it briefly. (For details,
see \cite[\textsection13.3]{ctskobook}.)
Let~$X$ be a smooth variety over a number field~$k$.
Let $\mathrm{Br}_\mathrm{nr}(X)$ denote its unramified Brauer group
(see~\textsection\ref{subsubsec:counterexamplesoverc}).
We let $X(k_\Omega)=\prod_{v \in \Omega}X(k_v)$ and endow this set with the product of the $v$\nobreakdash-adic topologies.
The \emph{Brauer--Manin set}
$X(k_{\Omega})^{\mathrm{Br}_{\mathrm{nr}}(X)}$
is the set of all
families $(x_v)_{v \in \Omega} \in X(k_\Omega)$
such that $\sum_{v \in \Omega} \inv_v \beta(x_v)=0$ for all $\beta \in \mathrm{Br}_{\mathrm{nr}}(X)$.
Here $\beta(x_v) \in \mathrm{Br}(k_v)$ denotes the evaluation of~$\beta$ at~$x_v$,
and $\inv_v:\mathrm{Br}(k_v) \hookrightarrow {\mathbf Q}/{\mathbf Z}$ is the invariant map from local class
field theory.
(To make sense of the sum, one checks that only finitely many of its terms are nonzero.)
Manin's fundamental observation is that the image of the diagonal embedding $X(k) \hookrightarrow X(k_\Omega)$
is contained in the Brauer--Manin set, as a consequence
of
the reciprocity law
of global class field theory.
Thus, we have a sequence of
inclusions
\begin{align}
X(k) \subseteq X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)} \subseteq X(k_\Omega)\rlap.
\end{align}
As $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$ is a closed subset of $X(k_\Omega)$,
the
weak approximation property,
i.e.\ the density of~$X(k)$ in $X(k_\Omega)$, can hold only if $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}=X(k_\Omega)$.
When this last equality fails,
one says that there is a Brauer--Manin obstruction to weak approximation
on~$X$.
\subsection{Reinterpreting the Grunwald--Wang theorem}
\label{subsec:reinterpretinggrunwaldwang}
Let us come back to the variety $X=Y/G$ considered in~\textsection\ref{subsec:lookingforratp}:
$G$ is a subgroup of~$S_n$, which acts on~${\mathbf A}^n_k$ by permuting the coordinates, and $Y \subseteq {\mathbf A}^n_k$ is the locus
where~$G$ acts freely.
As Grunwald's problem has a negative answer
for
$G={\mathbf Z}/8{\mathbf Z}$
and
$k={\mathbf Q}$
(see Proposition~\ref{prop:wang})
and as $Y\to X$ is a versal $G$\nobreakdash-torsor (see Example~\ref{ex:noetherisversal}),
the variety~$X$ cannot satisfy the weak approximation property in this case, according
to Proposition~\ref{prop:versalrationalimpliesgrunwald}. Hence a natural question: is there a Brauer--Manin
obstruction to weak approximation on~$X$ when $G={\mathbf Z}/8{\mathbf Z}$ and $k={\mathbf Q}$?
The answer is yes
by the following theorem,
which states, more precisely,
that the weak approximation property on~$X$ is fully controlled by the Brauer--Manin obstruction
as soon as~$G$ is abelian.
\begin{thm}[Voskresenski\u{\i}, Sansuc]
\label{th:vosksansuc}
Let~$k$ be a number field.
Let~$G$ be a finite abelian group, acting linearly on~${\mathbf A}^n_k$ for some $n \geq 1$.
Let $Y \subseteq {\mathbf A}^n_k$ be the locus where~$G$ acts freely.
Let $X=Y/G$.
The set $X(k)$ is dense in $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$.
\end{thm}
Theorem~\ref{th:vosksansuc} can be found in the literature by combining
\cite[§7.2, Theorem]{voskbook}
with
\cite[Corollaire~8.13]{sansuclinear}.
We shall explain a proof of it
in~\textsection\ref{subsec:sketch} below.
Returning to an arbitrary finite group~$G$ and keeping~$k$, $Y$ and~$X$ as above,
it is a general fact
that the density of~$X(k)$ in~$X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$ implies the existence of a finite
subset $S_0 \subset \Omega$ such that~$X$ satisfies weak approximation off~$S_0$.
When such an~$S_0$ exists,
a refinement of the arguments underlying the proof of
Proposition~\ref{prop:versalrationalimpliesgrunwald}
leads to the following conclusion
(a point of view advocated in~\cite{chernousov} and in~\cite[\textsection1.2]{harariquelques}):
fully solving Grunwald's problem for~$G$ over~$k$ is in fact equivalent to describing
the closure of~$X(k)$ inside $X(k_\Omega)$.
Thus, the Grunwald--Wang theorem, which indeed
fully solves Grunwald's problem when~$G$ is abelian, can now be
viewed, in retrospect, as the combination of Theorem~\ref{th:vosksansuc} with an explicit
computation of the Brauer--Manin set $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$ when~$G$ is abelian.
\subsection{Rationally connected varieties}
\label{subsec:rcvar}
Whether or not the abelianness hypothesis on~$G$ can be removed from Theorem~\ref{th:vosksansuc} is a fundamental
open question.
When~$X$ is an arbitrary smooth variety possessing a rational point, the set~$X(k)$
cannot be expected to be dense
in $X(k_\Omega)^{\mathrm{Br}_{\mathrm{nr}}(X)}$ without strong assumptions on the geometry of~$X$;
for instance,
Lang's conjectures predict that
for~$d$ and~$N$ such that $d-2 \geq N \geq 4$,
this density should fail
for all smooth hypersurfaces of degree~$d$ in~$\P^N$
that have a rational point (see \cite[Appendix~A]{poonenvoloch}).
The variety $X=Y/G$ that we have been considering
in~\textsection\ref{subsec:lookingforratp}
and in~\textsection\ref{subsec:reinterpretinggrunwaldwang},
despite not being
geometrically rational for an arbitrary finite group~$G$
(see \textsection\ref{subsubsec:counterexamplesoverc}), still has a reasonably
tame geometry: it is unirational and therefore belongs to the class of \emph{rationally connected varieties} according
to the following definition\footnote{To be precise, Definition~\ref{def:rc} coincides with the standard definition
(found, e.g., in \cite[Chapter~IV]{kollarbook}) only when~$X$ is proper.}.
\begin{defn}[Campana, Kollár, Miyaoka, Mori]
\label{def:rc}
A smooth variety~$X$ over a field~$k$ is said to be
\emph{rationally connected} if for any algebraically closed field extension $K/k$
and any two general $K$\nobreakdash-points $x_0,x_1 \in X(K)$, there exists a rational map $f:{\mathbf A}^1_K \dashrightarrow X_K$ over~$K$,
defined in a neighbourhood of~$0$ and~$1$, such that $f(0)=x_0$ and $f(1)=x_1$. (``General'' means that the set
of pairs $(x_0,x_1)$ satisfying the stated condition contains a dense Zariski open subset of $X(K) \times X(K)$.)
\end{defn}
Theorem~\ref{th:vosksansuc} conjecturally extends to all smooth rationally connected varieties:
\begin{conj}[Colliot-Thélène]
\label{conj:ct}
Let~$X$ be a smooth, rationally connected variety, over a number field~$k$.
The set $X(k)$ is dense in $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$.
\end{conj}
A number of known results towards this conjecture are listed in~\cite{wittenbergslc}.
Conjecture~\ref{conj:ct} would imply that all smooth rationally connected varieties satisfy
weak approximation off a finite set of places (see e.g.\ \cite[Remarks~2.4~(i)--(ii)]{wittenbergslc}).
In particular, it would imply a positive answer to the inverse Galois problem in general
(by Theorem~\ref{th:ekedahl} applied to the torsor of Example~\ref{ex:noetherisversal}).
\subsection{Determining the Brauer--Manin set}
\label{subsec:determiningbm}
As discussed in~\textsection\ref{subsec:reinterpretinggrunwaldwang}, Conjecture~\ref{conj:ct} in the case $X=Y/G$
would, more precisely,
reduce
Grunwald's problem for~$G$ over~$k$
to the computation of the Brauer--Manin set
of~$X$.
Even partial knowledge of the Brauer--Manin set can lead
to concrete results, as the following theorem illustrates:
\begin{thm}[Lucchini Arteche \cite{lucchiniunramifiedbrauer}]
\label{th:lucchiniarteche}
Let~$k$ be a number field and~$G$ be a finite group acting linearly on~${\mathbf A}^n_k$. Let $Y \subseteq {\mathbf A}^n_k$ be
the locus where~$G$
acts freely. Let $X=Y/G$. Let~$S_0$ be the set of finite places of~$k$ that divide the order of~$G$.
If~$X$ satisfies Conjecture~\ref{conj:ct}, then
Grunwald's problem admits a positive
answer for~$G$ over~$k$, for any finite subset $S \subset \Omega$ disjoint from~$S_0$.
\end{thm}
The proof of Theorem~\ref{th:lucchiniarteche} consists in studying
the evaluation of unramified Brauer classes at the local points of~$X$,
so as
to deduce, from the density of~$X(k)$ in~$X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$,
that~$X$ satisfies weak approximation off~$S_0$;
Proposition~\ref{prop:versalrationalimpliesgrunwald} then yields the desired statement.
The complete determination of~$X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$, for~$X$ as in the statement of Theorem~\ref{th:lucchiniarteche},
is in general a difficult task.
The case of a metabelian group~$G$ is investigated in \cite{demeioramified}.
In general, even the computation of~$\mathrm{Br}_\mathrm{nr}(X)$ itself is a delicate problem.
Over an algebraic closure~$\bar k$ of~$k$, one can apply Bogomolov's formula (Theorem~\ref{th:bogomolovformula}).
If the unramified Brauer group of~$X_{\bar k}$ turns out to be nontrivial, one has
to find out which classes of the finite group $\mathrm{Br}_\mathrm{nr}(X_{\bar k})$
are invariant under $\mathrm{Gal}(\bar k/k)$, and to determine
the image
of the natural map $\mathrm{Br}_{\mathrm{nr}}(X)\to \mathrm{Br}_{\mathrm{nr}}(X_{\bar k})^{\mathrm{Gal}(\bar k/k)}$; there is no general recipe for carrying this out.
The kernel of the latter map, on the other hand, is now well understood:
its quotient by the image
of the natural map $\mathrm{Br}(k) \to \mathrm{Br}_\mathrm{nr}(X)$
is finite and is described by a formula due to Harari \cite[Proposition~4]{harariquelques}.
In the most favourable cases, the combination of these formulae can lead to the conclusion
that the natural map $\mathrm{Br}(k) \to \mathrm{Br}_\mathrm{nr}(X)$ is onto, so that $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}=X(k_\Omega)$
and Grunwald's problem is then expected to have a positive solution for~$G$ over~$k$ with no restriction
on the finite subset $S \subset \Omega$. See \cite[Remarque~7]{demarchebrnr} for a concrete example.
In a different direction,
by adapting
the proof of Bogomolov's formula
to non-algebraically closed ground fields,
Colliot-Thélène~\cite[Corollaire~5.7]{ctbrnslng} showed that the natural map $\mathrm{Br}(k) \to \mathrm{Br}_\mathrm{nr}(X)$
is onto, and hence
that $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}=X(k_\Omega)$,
whenever the order of~$G$ and the number of roots of unity contained in~$k$ are coprime,
which explains, a posteriori, why the Brauer--Manin obstruction plays no rôle in Neukirch's Theorem~\ref{th:neukirch}.
\subsection{Supersolvable groups}
\label{subsec:ss}
A finite group~$G$ is said to be \emph{supersolvable} if there exists a filtration
$1 = G_0 \subseteq G_1 \subseteq \dots \subseteq G_n = G$ such
that each~$G_i$ is a normal subgroup of~$G$ and each successive
quotient $G_{i+1}/G_i$ is cyclic. All nilpotent groups
(in particular, all $p$\nobreakdash-groups) are supersolvable.
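For instance, the alternating group~$A_4$ is solvable but not supersolvable: its only nontrivial proper normal subgroup is the Klein four-group, which is not cyclic and contains no nontrivial subgroup that is normal in~$A_4$.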
We proved, in \cite{hwzceh}, that Theorem~\ref{th:vosksansuc} generalises to such groups:
\begin{thm}[Harpaz, W.]
\label{thm:hwzceh}
Let~$k$ be a number field.
Let~$G$ be a finite supersolvable group, acting linearly on~${\mathbf A}^n_k$ for some $n \geq 1$.
Let $Y \subseteq {\mathbf A}^n_k$ be the locus where~$G$ acts freely.
Let $X=Y/G$.
The set $X(k)$ is dense in $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$.
\end{thm}
A positive answer to the inverse Galois problem
for supersolvable groups results from this
(via Theorem~\ref{th:ekedahl})
but had already been established---more generally, for solvable groups---by Shafarevich, as mentioned in~\textsection\ref{subsec:lookingforratp}.
As discussed in \textsection\textsection\ref{subsec:reinterpretinggrunwaldwang}--\ref{subsec:determiningbm},
Theorem~\ref{thm:hwzceh} refines this positive
answer by bringing information about Grunwald's problem.
It may be that the strategy underlying the proof of Theorem~\ref{thm:hwzceh} can be extended to all solvable groups.
It will not, however, be of any help with
non-abelian simple groups; in fact,
to this day, no approach is known towards
Grunwald's problem for non-abelian simple groups (with the exception of~$A_5$ and~$\mathrm{PSL}_2({\mathbf F}_7)$,
for which Noether's problem itself has a positive answer; see Example~\ref{ex:noethera5}, \cite{mestrepsl2f7} and
Corollary~\ref{cor:noetherimpliesgrunwald}).
\subsection{Descent in a nutshell}
Theorem~\ref{thm:hwzceh} can be seen as a direct application of
a general tool that is useful for proving cases
of Conjecture~\ref{conj:ct}, the so-called ``descent'' method.
We now briefly discuss it. We shall illustrate it by proving
Theorem~\ref{th:vosksansuc} in~\textsection\ref{subsec:sketch}, before indicating its applicability
to other variants of the inverse Galois problem in~\textsection\ref{subsec:prescribednorms}.
To get started, we need to extend the notion of $G$\nobreakdash-torsor from the case where~$G$
is a finite abstract group (Definition~\ref{def:torsor})
to the case where~$G$ is an algebraic group (i.e.\ a group scheme over a field,
possibly disconnected or of positive dimension).
\begin{defn}
\label{def:torsoralg}
Let $\pi:Y\to X$ be a surjective morphism between smooth varieties over a field~$k$ of characteristic~$0$,
with algebraic closure~$\bar k$.
Let~$G$ be an algebraic group over~$k$, acting on~$Y$ in such a way that~$\pi$ is $G$\nobreakdash-equivariant
(for the trivial action of~$G$
on~$X$).
We say that~$\pi$ is a \emph{$G$\nobreakdash-torsor},
or that~$Y$ is a $G$\nobreakdash-torsor over~$X$,
if~$\pi$ is smooth and~$G(\bar k)$
acts simply transitively on the fibres of the map $Y(\bar k)\to X(\bar k)$ induced by~$\pi$.
\end{defn}
Unless otherwise specified, we now let~$k$ denote an arbitrary field of characteristic~$0$.
(For the correct definition of a torsor without this assumption,
see \cite[Definition~2.2.1]{skobook}.)
As usual, by a \emph{$G$\nobreakdash-torsor over~$k$} we shall mean a $G$\nobreakdash-torsor over~$\mathrm{Spec}(k)$.
When $\pi:Y\to X$ is a $G$\nobreakdash-torsor, the morphism~$\pi$ identifies~$X$ with the categorical
quotient~$Y/G$ (see \cite[Proposition~0.2, Proposition~0.1]{mumfordgit}).
\begin{example}
\label{ex:hilbert90}
Hilbert's Theorem~90, which we encountered in Example~\ref{ex:noetherisversal},
is equivalent to the following statement: for any integer $n\geq 1$,
any $\mathrm{GL}_n$\nobreakdash-torsor over~$k$ is isomorphic to~$\mathrm{GL}_n$.
As an easy consequence, any $\mathrm{SL}_n$\nobreakdash-torsor over~$k$ is isomorphic to~$\mathrm{SL}_n$.
\end{example}
\begin{defn}
Let~$X$ be a smooth variety over~$k$. Let~$G$ be an algebraic group over~$k$.
Let $\pi:Y\to X$ be a $G$\nobreakdash-torsor.
The \emph{twist} of~$Y$ by a $G$\nobreakdash-torsor~$P$ over~$k$ is the quotient
\begin{align*}
_PY=(P \times Y)/G
\end{align*}
of~$P \times Y$
by the diagonal action of~$G$, endowed with the natural morphism $_P\pi:{}_PY\to X$ induced by
the second projection $P\times Y\to Y$ and the identification $Y/G=X$. (We are not claiming that $_P\pi$ is a
$G$\nobreakdash-torsor. This is true when~$G$ is commutative, not in general. This point, which will be irrelevant
for us, is discussed in \cite[p.~21]{skobook}.)
\end{defn}
The gist of the descent method is summarised in the following conjecture.
\begin{conj}
\label{conj:descent}
Let~$X$ be a smooth variety over a number field~$k$.
Let~$G$ be a
linear algebraic group over~$k$.
Let $\pi:Y\to X$ be a $G$\nobreakdash-torsor,
with~$Y$ rationally connected.
Assume that for every twist~$Y'$ of~$Y$ by a $G$\nobreakdash-torsor over~$k$,
the set $Y'(k)$ is dense in $Y'(k_\Omega)^{\mathrm{Br}_{\mathrm{nr}}(Y')}$.
Then the set $X(k)$ is dense in $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$.
\end{conj}
Conjecture~\ref{conj:descent}, and the first significant cases in which it was established,
appeared in a series of works by Colliot-Thélène and Sansuc. See \cite{skobook} for an account.
We content ourselves with mentioning the following positive result, which in
this form can be found
in \cite[Corollaire~2.2]{hwzceh}.
\begin{thm}[Colliot-Thélène, Sansuc, Harpaz, W.]
\label{th:descenttori}
Conjecture~\ref{conj:descent} holds true if~$G$ is a torus (i.e.\ an algebraic group such that
$G_{\bar k} \simeq \mathbf{G}_{\mathrm{m},\bar k} \times \dots \times \mathbf{G}_{\mathrm{m},\bar k}$).
\end{thm}
\subsection{Sketch of proof of Theorem~\ref{th:vosksansuc}}
\label{subsec:sketch}
We shall deduce Theorem~\ref{th:vosksansuc} from
Theorem~\ref{th:descenttori}. (More precisely, descent will be applied
to a geometrically rational variety;
Theorem~\ref{th:descenttori}
in the case of such varieties is due to Colliot-Thélène and Sansuc alone, see~\cite{ctsandescent2}.)
Before starting the proof of Theorem~\ref{th:vosksansuc}, let us slightly change notation.
We now fix an embedding $G \hookrightarrow \mathrm{SL}_n(k)$ for some $n\geq 1$, let~$Y$
be the algebraic group~$\mathrm{SL}_n$ over~$k$, and let~$G$ act on~$Y$ by right multiplication. As the resulting variety $X=Y/G$
is stably birationally equivalent to the variety~$X$ of Theorem~\ref{th:vosksansuc} (see Remark~\ref{rmk:noname}),
and as the density of~$X(k)$ in $X(k_\Omega)^{\mathrm{Br}_\mathrm{nr}(X)}$ is a stable birational
invariant among smooth, rationally connected varieties (see \cite[Proposition~6.1~(iii)]{cps}
and \cite[Remark~2.4~(ii)]{wittenbergslc}),
this change of notation is harmless.
Recall that~$G$, by assumption, is a finite abelian group.
Let us view it as a constant algebraic group over~$k$.
It is easy to see that~$G$ fits into a short exact sequence
\begin{align}
1 \to G \to T \to Q \to 1
\end{align}
of algebraic groups over~$k$, where~$T$ and~$Q$ are tori
and~$Q$ is quasi-trivial, i.e.\ the character group ${\mathrm{Hom}}(Q_{\bar k},\mathbf{G}_{\mathrm{m},\bar k})$ of~$Q$
admits a basis over~${\mathbf Z}$ that is stable under the action of~$\mathrm{Gal}(\bar k/k)$ (see \cite[Proposition~4.2.1]{serretopics}).
Letting~$G$ act on~$T$ by translation,
we now consider the quotient $W=(\mathrm{SL}_n \times T)/G$
of~$\mathrm{SL}_n \times T$ by the diagonal action,
together with the morphism $\pi:W\to X=\mathrm{SL}_n/G$
induced by the first projection.
The action of~$T$ on~$\mathrm{SL}_n \times T$ by multiplication on the second factor induces an action
of~$T$ on~$W$, with respect to which~$\pi$ is a $T$\nobreakdash-torsor.
According to Theorem~\ref{th:descenttori} applied to~$\pi$,
it will suffice,
in order to complete the proof of Theorem~\ref{th:vosksansuc},
to show that for every $T$\nobreakdash-torsor~$P$ over~$k$,
the variety~$_PW$ satisfies Conjecture~\ref{conj:ct}.
We observe that $_PW=(\mathrm{SL}_n \times P)/G$, that the morphism $p:{}_PW \to P/G$
induced by the second projection is an $\mathrm{SL}_n$\nobreakdash-torsor
(with respect to the action of~$\mathrm{SL}_n$ on~$_PW$ coming from its action on
$\mathrm{SL}_n \times P$ by left multiplication on the first factor), and that
$P/G$ is a $Q$\nobreakdash-torsor over~$k$.
By Hilbert's Theorem~90 (see Example~\ref{ex:hilbert90}),
the generic fibre of~$p$ is isomorphic to~$\mathrm{SL}_n$, in particular it is rational.
It also follows from Hilbert's Theorem~90 (case $n=1$ of
Example~\ref{ex:hilbert90}), combined with Shapiro's lemma, that
any torsor under a quasi-trivial torus over~$k$ is rational; in particular, the variety~$P/G$
is rational. These two remarks
imply that~$_PW$ is itself rational over~$k$. Thus, it satisfies Conjecture~\ref{conj:ct}
for trivial reasons, and Theorem~\ref{th:vosksansuc} is proved.
\subsection{Supersolvable descent}
The ideas sketched in~\textsection\ref{subsec:sketch} are a starting point
for the proof of the following theorem, established in \cite[Corollary~3.3]{hwsupersolvable}.
\begin{thm}[Harpaz, W.]
\label{th:descentsupersolvable}
Conjecture~\ref{conj:descent} holds true if~$G$ is finite and supersolvable.
\end{thm}
Here ``supersolvable'' means that $G(\bar k)$ is supersolvable in the sense
recalled at the beginning of~\textsection\ref{subsec:ss}, except that the
filtration is now required, in addition, to be stable under the action
of~$\mathrm{Gal}(\bar k/k)$ on~$G(\bar k)$, in case this action is not trivial.
Theorem~\ref{th:descentsupersolvable} implies
Theorem~\ref{thm:hwzceh}: in the notation introduced at the beginning of~\textsection\ref{subsec:sketch},
it suffices to apply
Theorem~\ref{th:descentsupersolvable} to the $G$\nobreakdash-torsor $\mathrm{SL}_n \to \mathrm{SL}_n/G$
and to note that any twist of~$\mathrm{SL}_n$ by a $G$\nobreakdash-torsor over~$k$
is an $\mathrm{SL}_n$\nobreakdash-torsor
(through left multiplication),
hence is isomorphic to~$\mathrm{SL}_n$, by Hilbert's Theorem~90,
hence is rational and satisfies Conjecture~\ref{conj:ct}.
\subsection{Prescribed norms}
\label{subsec:prescribednorms}
Our last theorem is an application of supersolvable descent to a variant of the inverse Galois problem of
a slightly different flavour, meant to demonstrate the flexibility of descent as a tool.
\newcommand{\cite[Theorem~4.16]{hwsupersolvable}}{\cite[Theorem~4.16]{hwsupersolvable}}
\begin{thm}[\cite[Theorem~4.16]{hwsupersolvable}]
\label{th:normsfrom}
Let~$G$ be a finite group.
Let~$k$ be a number field.
Let $\alpha_1,\dots,\alpha_m \in k^*$.
If~$G$ is supersolvable, there exists a Galois field extension~$K$ of~$k$
such that $\mathrm{Gal}(K/k)\simeq G$ and $\alpha_1,\dots,\alpha_m \in N_{K/k}(K)$.
\end{thm}
The idea of the proof is to construct, in a formal and explicit way,
a $G$\nobreakdash-torsor $\pi:Y\to X$
together with invertible functions $\beta_1,\dots,\beta_m$ on~$Y$ whose norms along~$\pi$ are equal
to the constant invertible functions~$\alpha_1,\dots,\alpha_m$ on~$X$. Namely, say $m=1$ for simplicity,
embed~$G$ into~$\mathrm{SL}_n(k)$ and consider the subvariety~$Y$ of $\mathrm{SL}_n \times \prod_{g \in G} \mathbf{G}_\mathrm{m}$
consisting of all $(s,(t_g)_{g\in G})$ such that $\prod_{g \in G} t_g=\alpha_1$;
the invertible function~$\beta_1$ on~$Y$
given by projection onto the~$\mathbf{G}_\mathrm{m}$ factor corresponding to $1 \in G$
has the required norm.
One then checks that the twists of~$Y$ satisfy Conjecture~\ref{conj:ct} (despite not being rational in general, even
when they have a rational point and~$G$ is assumed abelian), so that Theorem~\ref{th:descentsupersolvable}
implies the validity of Conjecture~\ref{conj:ct}, and hence of weak approximation off a finite set of places,
for~$X$. As $X(k)\neq\emptyset$ (indeed even $Y(k)\neq\emptyset$),
it follows, by Theorem~\ref{th:ekedahl}, that there exists $x \in X(k)$ such that~$\pi^{-1}(x)$ is irreducible,
i.e.\ gives rise to a Galois field extension~$K/k$ with group~$G$.
Restricting~$\beta_1,\dots,\beta_m$ to~$\pi^{-1}(x)$ yields elements of~$K^*$ with norms~$\alpha_1,\dots,\alpha_m$.
In the case where~$G$ is abelian, Theorem~\ref{th:normsfrom} was first shown to hold
by Frei, Loughran and Newton~\cite{freiloughrannewton}, who established an asymptotic estimate for the number of such~$K$.
\bibliographystyle{amsalpha}
\section{Network Communication}
\label{sec:Communication}
Industrial control systems apply network communication to transmit periodic and event-based signals such as control signals, measurements, and alarms between field devices, digital controllers, and operator interfaces. Network communication standards such as the open platform communications unified architecture (OPC UA) standardize the communication between industrial components and many industries apply this standard.
The OPC UA standard applies a client-server communication architecture and offers a variety of transport protocols, such as TCP/IP-based protocols for reliable data transmission \cite{Galloway:Hancke:2013, Mois:etal:2015}.
In this paper, we limit the discussion to the periodic transmission of data between the plant and the control system, applying a client-server communication architecture with a TCP/IP protocol. We demonstrate such a communication architecture for the CPS-RT-APC framework in C, using internet domain stream sockets applied to a real-time simulator. We apply the functions packMultiMsg() and unpackMultiMsg(), defined in the custom-made library \textit{apcsockets}, to pack and unpack data from numerous processes into single strings, and we transmit these strings between the client and the server. We also apply this library for the initialization of the clients and servers. We do not show the source code of this library, but \cite{Kerrisk:2010} provides code examples for the initialization of such internet domain stream socket clients and servers. We apply real-time interval timers in the client module for the periodic transmission of this data, and we pass a pointer to a shared struct between the server and the real-time simulator.
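To fix ideas, the following is a minimal sketch of what a packing function with the signature used in the listings below could look like; the record format ``timestamp,status,value;'' is our assumption, as the actual wire format of \textit{apcsockets} is not shown.
\begin{lstlisting}[style=CStyle]
#include <stdio.h>
// Hypothetical sketch of packMultiMsg(); the record format
// "timestamp,status,value;" is an assumption. Records are appended
// to the caller's buffer, which the callers size to 1024 bytes.
void packMultiMsg(char *buff, int n, char ts[][1024], char stat[][1024], double *vals){
  size_t cap = 1024, used = 0;
  for(int i = 0; i < n; i++){
    int w = snprintf(buff + used, cap - used, "%s,%s,%f;", ts[i], stat[i], vals[i]);
    if (w < 0 || (size_t)w >= cap - used) break; // stop if the buffer is full
    used += (size_t)w;
  }
}
\end{lstlisting}
A matching unpackMultiMsg() would mirror this by splitting the received string on ';' and ',' (e.g., with strtok\_r()) and parsing the numerical values with atof().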
\subsection{Client}
Listing \ref{lst:client_timer_code} demonstrates how to initialize and connect a client to a server. We demonstrate how to apply an interval timer for the periodic transmission of information between the shared data and the server. The timer callback function is client() and it requires connections to the shared data and to the server. We pass these as arguments to the callback function by using the struct \lstinline{Args_client}.
We choose the timer interval of a client module to be smaller than that of a control module, to ensure a fast update of the data between the plant and the control module.
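For completeness, here is a minimal sketch of what create\_apctimer() and start\_apctimer() from the custom library \textit{apctimer} could look like, assuming POSIX per-process timers with SIGEV\_THREAD notification (consistent with the union sigval signature of the callback functions); the library's actual implementation may differ.
\begin{lstlisting}[style=CStyle]
#include <signal.h>
#include <stdlib.h>
#include <time.h>
// Hypothetical sketch of the apctimer functions; assumes POSIX timers
// (link with -lrt) that deliver each expiration to a new thread.
timer_t* create_apctimer(void (*callback)(union sigval), void *args){
  timer_t *timerid = malloc(sizeof(timer_t));
  struct sigevent sev = {0};
  sev.sigev_notify = SIGEV_THREAD;      // run the callback in a new thread
  sev.sigev_notify_function = callback; // timer callback function
  sev.sigev_value.sival_ptr = args;     // delivered as sv.sival_ptr
  timer_create(CLOCK_REALTIME, &sev, timerid);
  return timerid;
}
void start_apctimer(timer_t *timerid, double interval){
  struct itimerspec its = {0};
  its.it_value.tv_sec = (time_t)interval; // first expiry
  its.it_value.tv_nsec = (long)((interval - (time_t)interval)*1e9);
  its.it_interval = its.it_value;         // then periodic
  timer_settime(*timerid, 0, &its, NULL);
}
\end{lstlisting}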
\begin{figure}[htb]
\centering
\begin{minipage}{.47\textwidth}
\begin{lstlisting}[caption={Example code for a client module that connects to a Server at localhost with port number 43051. The main function does not show how to stop and delete the interval timer for the Client module.},label={lst:client_timer_code}, style=CStyle]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>
#include <signal.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include "apctimer.h"
#include "apcshareddata.h"
#include "apcsockets.h"
typedef struct Args_client{
int sock_conn; // Connection to socket Server
PGconn* db_conn; // Connection to PostgreSQL database (Shared Data)
}Args_client;
void client(union sigval sv){
// Init buffers for sending and receiving messages
char buff_send[1024] = {0}, buff_recv[1024] = {0};
// Dereference pointer to timer data struct
Args_client* args = (Args_client*)sv.sival_ptr;
// Dimensions {1) measurements, 2) setpoints, 3) manipulated variables}
int n[3] = {1,1,1};
// Read number of inputs and outputs from database
readMultiRecentValsInt(args->db_conn, "dim", 3, NULL, NULL, n);
// Arrays for storing numerical values, timestamps, status codes
double y[n[0]], u[n[2]];
char ts_send[n[2]][1024], stat_send[n[2]][1024]; // timestamp and status
char ts_recv[n[0]][1024], stat_recv[n[0]][1024]; // timestamp and status
// Read actuator values from the Shared Data
readMultiRecentValsFloat(args->db_conn, "actuator", n[2], ts_send, stat_send, u);
// Pack actuator data as a string.
memset(buff_send, 0, sizeof(buff_send));
packMultiMsg(buff_send, n[2], ts_send, stat_send, u);
// Send string message to the Server
send(args->sock_conn, buff_send, strlen(buff_send), MSG_NOSIGNAL);
// Init and read string message from the Server
memset(buff_recv, 0, sizeof(buff_recv));
read(args->sock_conn, buff_recv, 1024);
// Unpack sensor data from the Server into arrays
unpackMultiMsg(ts_recv, stat_recv, y, buff_recv);
// Write sensor data from the Server to the Shared Data
for(int i = 0; i < n[0]; i++)
insertIntoTableFloat(args->db_conn, "sensor", i+1, ts_recv[i], stat_recv[i], y[i]);
}
int main(int argc, char const* argv[]){
// Get the ip-address and port number of the Server
char serv_ip[128] = "127.0.0.1"; // Local-host
int portnumber = 43051; // Example of Port number
// Real-time timer interval
double interval = 1.0; // seconds
// Create connection to PostgreSQL database
char conn_str[] = "user=apc_user dbname=apc_db password=apc_password";
PGconn *db_conn = PQconnectdb(conn_str);
// Init tables in database using db_conn
// Create file descriptor for client
int client_fd = client_init();
// Connect to the Server (args: IP address, portnumber)
client_connect(client_fd, serv_ip, portnumber);
// Create Args_client-struct for the Client callback
Args_client *args = malloc(sizeof(Args_client));
args->sock_conn = client_fd;
args->db_conn = db_conn;
// Create a timer for the Client module
timer_t* clientTimer = create_apctimer(client, args);
// Start timer (args: timerid, period)
start_apctimer(clientTimer, interval);
// Stop and delete timer and free "args" (NOT SHOWN)
// Close socket
close(client_fd);
return 0;
}
\end{lstlisting}
\end{minipage}\hfill\\
\end{figure}
\subsection{Server}
We demonstrate the implementation of a server that reads sensor measurements from and writes actuator commands to a real-time simulator. We implement such a server as an infinite loop that waits for requests from a client. Listing \ref{lst:server_code_main} demonstrates a server for the CPS-RT-APC, with server() representing the infinite loop. We pass the data for process variables using the custom struct Args\_sim\_plant and use pointers to share the data of this struct between the server and the real-time simulator. Such an implementation requires mutual exclusion to synchronize the communication between the server and the real-time simulator, and we demonstrate this by locking and unlocking the code where server() manipulates the data in the shared struct.
\begin{figure}[tb]
\centering
\begin{minipage}{.46\textwidth}
\begin{lstlisting}[caption={Example code for a server connected to a real-time simulator. The server and real-time simulator apply the Args\_sim\_plant-struct as shared memory for measurements and manipulated variables.},label={lst:server_code_main}, style=CStyle]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>
#include <sys/time.h>
#include <pthread.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include "apcshareddata.h"
#include "apcsockets.h"
// Mutex for writing/reading of measurements
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
void server(Args_sim_plant* args, int sock_conn){
// Init buffers for sending and receiving messages
char buff_recv[1024] = {0}, buff_send[1024] = {0};
char ts[27]; // timestamp
// Arrays for storing numerical values, timestamps, and status codes
double y[args->n[1]], u[args->n[2]];
char ts_send[args->n[1]][1024], stat_send[args->n[1]][1024];
char ts_recv[args->n[2]][1024], stat_recv[args->n[2]][1024];
while(1){
// Clear the receive buffer and read string message from the Client
memset(buff_recv, 0, sizeof(buff_recv));
if (read(sock_conn, buff_recv, 1024) <= 0){
printf("Client disconnected...\n");
break;
}
// Unpack actuator values from the Client into arrays
unpackMultiMsg(ts_recv, stat_recv, u, buff_recv);
// Lock shared memory before writing to the shared struct
pthread_mutex_lock(&mutex);
// Write actuator values to the shared struct
for(int i = 0; i < args->n[2]; i++)
args->u[i] = u[i];
// Get measurements with timestamps and status
for(int j = 0; j < args->n[1]; j++){
genTimeStampUTC(ts);
y[j] = args->y[j];
strcpy(ts_send[j], ts);
strcpy(stat_send[j], "'ok'"); // Write status code
}
// Unlock shared memory
pthread_mutex_unlock(&mutex);
// Pack sensor data as string.
memset(buff_send, 0, sizeof(buff_send));
packMultiMsg(buff_send, args->n[1], ts_send, stat_send, y);
// Send string message to the Client
if (send(sock_conn, buff_send, strlen(buff_send), MSG_NOSIGNAL) < 0){
printf("Client disconnected...\n");
break;
}
}
}
int main(int argc, char const* argv[]){
// Create a custom struct for the Server and the Simulator callback
Args_sim_plant *argsSimulator = malloc(sizeof(Args_sim_plant));
// Fill in members of Args_sim_plant-struct (NOT SHOWN)
// Init server
int server_fd, new_fd;
// Length of queue for clients (Number chosen arbitrarily)
int client_queue = 3;
server_fd = server_init();
// Listening for clients
server_listen(server_fd, client_queue);
// Accept client communication with the Server
new_fd = server_accept(server_fd);
// Start the Server
server(argsSimulator, new_fd);
// Close socket
close(new_fd);
// Free argsSimulator (NOT SHOWN)
return 0;
}
\end{lstlisting}
\end{minipage}\hfill\\
\end{figure}
Finally, such a client-server communication architecture enables cloud computing, storage, and remote monitoring of CPS-RT-APC systems. Fig. \ref{fig:CPS_RT_APC2} illustrates how a client-server architecture for the CPS-RT-APC may enable the usage of remote computers for monitoring and cloud computing.
\begin{figure}[tb]
\centering
\includegraphics[width=0.485\textwidth]{fig/RT_APC_main_fig2.pdf}
\caption{Schematic diagram of the CPS-RT-APC software architecture with remote monitoring and cloud computing.}
\label{fig:CPS_RT_APC2}
\end{figure}
\subsection{Python Implementation}
\lstset{style=PythonStyle}
Python's socket communication mechanism is similar to the socket functions applied in the code examples in Listings \ref{lst:client_timer_code} and \ref{lst:server_code_main}, i.e., sock = socket.socket(socket.AF\_INET, socket.SOCK\_STREAM) creates a TCP/IP socket.
We apply sock.bind(server\_address) to bind a server socket to an address on the host machine and we use sock.listen() to listen for clients. We use \lstinline{sock = socket.socket()} to create client sockets and apply sock.connect((ip\_address, port)) to connect such a client to a server \cite{Hunt:2019}.
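As a minimal illustration (the address, port number, and payload merely mirror the C listings), a Python client could look as follows:
\begin{lstlisting}[style=PythonStyle]
import socket
# Minimal TCP/IP client sketch; the payload format is illustrative.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("127.0.0.1", 43051))  # connect() takes a (host, port) tuple
sock.sendall(b"ts,'ok',0.0;")       # send a packed actuator string
reply = sock.recv(1024)             # receive a packed sensor string
sock.close()
\end{lstlisting}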
\lstset{style=CStyleInline}
\section{Conclusion}
\label{sec:Conclusion}
In this paper, we demonstrate the key software principles and components applied in the implementation of cyber-physical systems for real-time advanced process control. These principles and components are 1) shared data, 2) timers and
threads for concurrent periodic execution of tasks, and 3) network
communication between the control system and the process and between the control system and the internet, e.g., for cloud computing. We demonstrate these principles and components by presenting code examples applying the C programming language for Linux operating systems.
\section{Control computations}
\label{sec:ControlComputations}
Fig. \ref{fig:timers_and_threads} outlines the fundamental tasks in a control module for the CPS-RT-APC, and we demonstrate the implementation of these tasks in this section. It is also common for such a module to include logic tasks, e.g., evaluating the state of actuators and sensors. Listing \ref{lst:timer_code_control} presents an example of such an implementation applying the callback function control(). This callback function requires the control function, the controller update function, the sampling time, and a connection to the shared data. We pass these as arguments to the callback function by using the struct Args\_control.
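For illustration, a shared-data write such as insertIntoTableFloat(), used in the listings throughout, could be sketched with libpq as follows; the table schema and column names are our assumptions, and status strings arrive pre-quoted (e.g., 'ok' including the quotes), as in the listings.
\begin{lstlisting}[style=CStyle]
#include <stdio.h>
#include <libpq-fe.h>
// Hypothetical sketch of insertIntoTableFloat(); the schema of the
// shared-data tables is an assumption here.
void insertIntoTableFloat(PGconn *conn, const char *table, int tag,
                          const char *ts, const char *status, double val){
  char query[2048];
  snprintf(query, sizeof(query),
           "INSERT INTO %s (tag, ts, status, value) VALUES (%d, '%s', %s, %f);",
           table, tag, ts, status, val);
  PGresult *res = PQexec(conn, query);
  if (PQresultStatus(res) != PGRES_COMMAND_OK)
    fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));
  PQclear(res);
}
\end{lstlisting}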
\begin{figure}[tb]
\centering
\begin{minipage}{0.47\textwidth}
\begin{lstlisting}[caption={Example code for the implementation of a callback function for a control module.},label={lst:timer_code_control}, style=CStyle]
#include <stdio.h>
#include <libpq-fe.h>
#include "apctimer.h"
#include "apcshareddata.h"
typedef struct Args_control{
void (*cfunc)(); // controller function
void (*cfunc_update)(); // controller update function
double Ts; // Sampling time
PGconn* db_conn; // Connection to PostgreSQL database
}Args_control;
void cfunc(double *u, double *y, double* z_bar, PGconn* db_conn){
/* Unpack varargin for setpoints
Compute control signal
u[0] = ...
u[1] = ... */
}
void cfunc_update(double *u, double *y, double* z_bar, PGconn* db_conn){
/* Update controller */
}
void control(union sigval sv){
int op_mode = 0;
char ts[27]; // timestamp
// Dereference sigval pointer
Args_control* args = (Args_control*)sv.sival_ptr;
// Get operation mode
readRecentValInt(args->db_conn, "opmode", 0, NULL, NULL, &op_mode);
// Run control algorithm if operation mode = 1 (automatic)
if(op_mode){
// Read setpoint and measurements from Shared Data
// Dimensions: 1) measurements, 2) setpoints, 3) manipulated variables
int n[3] = {1,1,1};
readMultiRecentValsInt(args->db_conn, "dim", 3, NULL, NULL, n);
// Measurements, setpoints, and manipulated variables
double y[n[0]], z_bar[n[1]], u[n[2]];
// Read only numerical values of sensor and setpoint
readMultiRecentValsFloat(args->db_conn, "sensor", n[0], NULL, NULL, y);
readMultiRecentValsFloat(args->db_conn, "setpoint", n[1], NULL, NULL, z_bar);
// Control algorithm
args->cfunc(u, y, z_bar, args->db_conn);
genTimeStampUTC(ts); // Get current Unix time
// Write manipulated variables to Shared Data
for(int i = 0; i < n[2]; i++)
insertIntoTableFloat(args->db_conn, "actuator",i+1, ts, "'ok'", u[i]);
// Update controller
args->cfunc_update(u, y, z_bar, args->db_conn);
}
}
int main(int argc, char const* argv[]){
// Interval of timer for Control module
double Ts = 2.0; // seconds
// Create connection to PostgreSQL database
char conn_str[] = "user=apc_user dbname=apc_db password=apc_password";
PGconn *db_conn = PQconnectdb(conn_str);
// Init tables in database using db_conn
// Create struct for controller callback function
Args_control *args = malloc(sizeof(Args_control));
// Fill in members of Args_control-struct
// Create a timer for the Controller
timer_t cntrTimer = create_apctimer(control, args);
// Start the timer. Stop and delete the timer when finished (NOT SHOWN)
// Close connection to db
PQfinish(db_conn);
// Free Args_control argument
free(args);
return 0;
}
\end{lstlisting}
\end{minipage}\hfill\\
\end{figure}
\section{Real-time simulation experiments}
\label{sec:RTSimulationExperiment}
We test the CPS-RT-APC framework with a real-time simulator of the ODE $\dot x(t) = (1/\tau)(-x(t)+Ku(t))$ with the measurement equation $y(t_k) = x(t_k)$ and the output equation $z(t) = x(t)$, where $t$ is time, $x(t)$ is the state, $u(t)$ is the manipulated variable, $y(t_k)$ is the measurement, and $z(t)$ is the controlled variable. We choose the parameters $K = 10.0$ and $\tau = 10.0$ and the simulator timer interval $T_s^{p}$ = 0.2 s. We apply a zero-order hold parameterization of the manipulated variable, $u(t) = u_k$ for $t_{k}^{p} \leq t < t_{k+1}^{p}$, with $t_{k+1}^{p} = t_{k}^{p}+T_s^{p}$ being the time increment for the simulator.
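For this first-order linear system, the zero-order hold admits the exact discretization $x(t_{k+1}^{p}) = e^{-T_s^{p}/\tau}\,x(t_{k}^{p}) + K\big(1-e^{-T_s^{p}/\tau}\big)u_k$, which may be used as a reference solution to verify the numerical ODE solver of the real-time simulator.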
We track a time-varying setpoint, $\bar{z}_k$, with the PI-control strategy $e_k = \bar{z}_k-y_k$, $P_k = K_pe_k$, $u_k = \bar{u}_k + P_k + I_k$, $I_{k+1} = I_{k}+({T_s^c}/{\tau_i})e_k$, and $I_0 = \bar{I}$, where $T_s^c$ is the interval time of the controller, and $K_p$, $\tau_i$, and $\bar{u}_k$ are the proportional gain, integral time constant, and chosen operating point for $u_k$, respectively. We choose the controller parameters $K_p = 0.2$, $\tau_i = 10.0$, $\bar{u} = 0.0$, and the interval time $T_s^c = 2.0$ s. We update a sequence of setpoints applying a real-time timer with interval $T_s^{\bar{z}} = 150.0$ s. We apply a client module with interval $T_s^{cl} = 0.5$ s and we start the timers simultaneously. We execute the control system and the simulator using the same Linux computer. Finally, we apply a PostgreSQL database system for storing the data for $y(t_k)$, $u_k$, and $\bar{z}_{k}$. Fig. \ref{fig:example_combined_plot} presents histograms of the measured time increments $\Delta t_k^{i} = t_{k+1}^i-t_{k}^i$, whose deviation from the nominal interval $T_s^i$ is the jitter, for $i\in\{p, c, cl\}$, as well as the process variables of the simulation experiment.
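For illustration, a minimal C sketch of this PI control law, written against the cfunc()/cfunc\_update() signatures of Listing \ref{lst:timer_code_control}, is given below. The static-variable bookkeeping and hard-coded gains are simplifications we introduce here; in the actual framework these would be read from the shared data.
\begin{lstlisting}[style=CStyle]
#include <libpq-fe.h> // For the PGconn type in the callback signatures
// Sketch of the PI control law (parameters as static variables for brevity)
static double I = 0.0;           // Integrator state, I_0 = I_bar = 0
static const double Kp = 0.2;    // Proportional gain
static const double taui = 10.0; // Integral time constant
static const double u_bar = 0.0; // Operating point for u
static const double Tsc = 2.0;   // Controller interval time [s]
void cfunc(double *u, double *y, double *z_bar, PGconn *db_conn){
  double e = z_bar[0] - y[0]; // e_k = z_bar_k - y_k
  u[0] = u_bar + Kp*e + I;    // u_k = u_bar + P_k + I_k
}
void cfunc_update(double *u, double *y, double *z_bar, PGconn *db_conn){
  double e = z_bar[0] - y[0];
  I += (Tsc/taui)*e;          // I_{k+1} = I_k + (Ts^c/tau_i) e_k
}
\end{lstlisting}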
\begin{figure}[tb]
\centering
\includegraphics[width=0.49\textwidth]{fig/example_ecc2023_combined_plot_converted_from_eps.pdf}
\caption{Top) histograms of jitter for the simulator, control module, and client module with mean time increments. Bottom) the setpoints, measurements, and manipulated variables for the simulation experiment.}
\label{fig:example_combined_plot}
\end{figure}
\section{Introduction}
\label{sec:Introduction}
Real-time advanced process control (RT-APC) enables the automatic, reliable, and efficient operation of complex industrial process systems. RT-APC relates to digital control methods such as proportional-integral-derivative (PID) control and model predictive control (MPC) executed periodically with real-time interval timers \cite{Aastrom:Wittenmark:1997}.
Cyber-physical systems (CPSs) combine automatic control with network communication principles for remote control and monitoring of distributed processes.
A CPS for RT-APC (CPS-RT-APC) consists of numerous modules, e.g., controllers, user interfaces, and network communication modules that exchange data periodically and concurrently \cite{Jbar:etal:2018, Mois:etal:2015}. Consequently, implementations of such systems require real-time interval timers and threading principles as well as shared data and network communication concepts \cite{Gabier:2004, Wittenmark:Aastrom:Arzen:2022}.
Additionally, interval timers and threading principles enable the construction of real-time simulators as proxies for the physical plants. Therefore, these software principles enable real-time closed-loop simulations.
Software tools for simulations and developments of real-time networked control systems exist: \cite{Ceven:etal:2003} applies \textit{TrueTime}, a MATLAB/Simulink-based simulation toolbox for real-time networked control systems, to a double tank system and \cite{Kim:Kumar:2010} describes real-time enhancements to \textit{Etherware}, a framework for
networked control systems, and applies this framework to an experimental setup of an inverted pendulum. Additionally, \cite{Bartusiak:etal:2022} describes an industry initiative called \textit{Open Process Automation} that defines a standards-based, open, secure, interoperable process automation architecture with an objective to provide the computing platforms for the improvement of industrial use of MPC and machine learning based process technologies.
However, the multidisciplinary nature of a CPS-RT-APC still makes it a challenging task to implement despite the existence of these software tools and standards, and the descriptions of the key principles of such a system in \cite{Aastrom:Wittenmark:1997, Gabier:2004, Wittenmark:Aastrom:Arzen:2022}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.485\textwidth]{fig/RT_APC_main_fig.pdf}
\caption{Schematic diagram of the CPS-RT-APC software architecture.}
\label{fig:CPS_RT_APC}
\end{figure}
In this paper, we demonstrate the key software principles and components in the implementation of a CPS-RT-APC. These software principles and components are 1) shared data, 2) timers and threads, and 3) network communication. We implement the shared data as databases, files, or shared memory. We apply the timers and threading principles for the concurrent and periodic execution of tasks. Finally, we apply network communication between the control system and the plant, and between the control system and the internet for cloud computing and remote monitoring. The novelty of our paper is the explicit demonstration of these software principles and components through code examples. We apply the C programming language in Linux operating systems (OS) and we also comment on the implementation using the high-level programming language of Python. Finally, we present a simulation experiment for a CPS-RT-APC applied to a real-time simulator.
The remaining parts of this paper are structured as follows.
Section \ref{sec:SoftwareArchitecture} provides an overview of the CPS-RT-APC software architecture. Section \ref{sec:SharedData} presents the shared data for such a framework. Section \ref{sec:TimersAndThreading} describes the real-time interval timers and threading principles. In Section \ref{sec:Communication}, we describe the client-server architecture for network communication. Sections \ref{sec:ControlComputations} and \ref{sec:PlantAndRTsimulator} present implementations of a control module and a real-time simulator.
Section \ref{sec:RTSimulationExperiment} presents a simulation experiment for the CPS-RT-APC framework. Finally, Section \ref{sec:Conclusion} presents conclusions.
\section{Real-time Simulator}
\label{sec:PlantAndRTsimulator}
In this section, we demonstrate the implementation of a real-time simulator applying a POSIX interval timer.
Listing \ref{lst:simulator} presents an example of such an implementation applying the callback function simulator(). This callback function requires a mathematical model of the plant, the state, output, and input dimensions, the model parameters and control signals, and an ordinary differential equation (ODE) solver. We pass these as arguments to the callback function through the struct Args\_sim\_plant.
\begin{figure}[tb]
\centering
\begin{minipage}{.47\textwidth}
\begin{lstlisting}[caption={Example code for the implementation of a callback function for a real-time simulator. The Main function code is not presented.},label={lst:simulator}, style=CStyle]
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <libpq-fe.h>
#include "apctimer.h"
#include "apcshareddata.h"
struct varargin{
void *args; // Vector of arguments
int nargs; // Number of arguments
};
typedef struct Args_sim_plant{
void (*ffunc)(); // x' = f(t, x, args)
void (*gfunc)(); // y = g(x, args)
struct varargin arg_vec; // argument vector for f and g
double Ts; // sampling time
double *x; // states
double *y; // measurements
double *u; // manipulated variables
int *n; // dimensions: (states, outputs, inputs)
void (*odesolver)(); // ODE solver function
int N; // Number of steps in ODE solver
}Args_sim_plant;
void ffunc(double t, double* x, double* dx, double* varargin){
/* Unpack parameters and manipulated variables from varargin */
/* Compute dx/dt
dx[0] = ....
dx[1] = .... */
}
void gfunc(double* x, double* y, double* varargin){
/* Unpack parameters and manipulated variables from varargin */
/* Compute y
y[0] = ....
y[1] = .... */
}
void odesolver( void (*ffunc)(), double t0, double tN, int N,
double* x0,int nx,double* varargin,
double (*X)[N+1], double T[N+1]){
// Compute the solution T, X
}
void simulator(union sigval sv){
// Dereference sigval pointer
Args_sim_plant* args = (Args_sim_plant*)sv.sival_ptr;
// Number of states and manipulated variables
int nx = args->n[0], nu = args->n[2];
// ODE solver configurations
int N = args->N; // Steps for ODE solver
double X[nx][N+1], T[N+1]; // Arrays for ode solution
double t0 = 0.0, tf = args->Ts; // Integration time
double *arg = args->arg_vec.args;
int nargs = args->arg_vec.nargs;
// Init vector for parameters and manipulated variables
double varargin[nargs+nu];
// Parameters
for(int i = 0; i < nargs; i++)
varargin[i] = arg[i];
// Manipulated variables
for(int k = 0; k < nu; k++)
varargin[nargs+k] = args->u[k];
// Solve ODE over timespan [t0, tf] with N steps
args->odesolver(args->ffunc, t0, tf, N, args->x, nx, varargin, X, T);
// Update the states
for(int j = 0; j < nx; j++)
args->x[j] = X[j][N];
// Compute measurements
args->gfunc(args->x, args->y, varargin);
}
int main(int argc, char const* argv[]){
// Init parameters of model
// Create Args_sim_plant-struct for the Simulator callback
// Fill in members of Args_sim_plant-struct
// Create, start, stop, and delete a timer with simulator() as callback
return 0;
}
\end{lstlisting}
\end{minipage}\hfill\\
\end{figure}
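The body of odesolver() in Listing \ref{lst:simulator} is left unspecified. The paper does not fix the numerical method, so the following is only a minimal sketch matching the signature above, assuming a fixed-step explicit Euler scheme; any explicit single-step method could be substituted.
\begin{lstlisting}[style=CStyle]
void odesolver(void (*ffunc)(), double t0, double tN, int N,
               double* x0, int nx, double* varargin,
               double (*X)[N+1], double T[N+1]){
  double dt = (tN-t0)/N; // Fixed step size
  double dx[nx];         // Buffer for dx/dt
  T[0] = t0;
  for(int j = 0; j < nx; j++) X[j][0] = x0[j];
  for(int k = 0; k < N; k++){
    double xk[nx];
    for(int j = 0; j < nx; j++) xk[j] = X[j][k];
    ffunc(T[k], xk, dx, varargin);    // dx = f(t_k, x_k, args)
    for(int j = 0; j < nx; j++)       // Explicit Euler step
      X[j][k+1] = X[j][k] + dt*dx[j];
    T[k+1] = T[k] + dt;
  }
}
\end{lstlisting}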
\section{Shared Data}
\label{sec:SharedData}
We implement shared data as a collection of tables for sensor measurements, actuator commands, and setpoint values and we use $y$, $u$, and $\Bar{z}$ to represent these, respectively. We also represent control and operational configurations as tables. We may apply database systems, files, or shared memory to implement these tables. A single table represents the historical data of a unique variable and for each row in a table, we include a time stamp, a status code/description, and a numerical value. We apply Unix time to represent the timestamp and we order the data in the tables in time. Fig. \ref{fig:shared_data} illustrates tables for sensor and actuator data as well as setpoint and control configurations.
We apply mutual exclusion (mutex) principles to prevent a module from accessing a table in the shared data when another module is using it.
We demonstrate the interaction with shared data using the PostgreSQL database system. Such a database system implements mutex-like principles; consequently, we do not demonstrate mutex principles explicitly. For these demonstrations, we apply a custom library, \textit{apcshareddata}, that wraps \textit{libpq}, the PostgreSQL C client library, into the functions insertIntoTableFloat() and readMultiRecentValsFloat() for writing to and reading from tables, respectively. We integrate these functions into the real-time timer callback functions. Hence, we demonstrate these functions in later sections of this paper where we introduce various callback functions.
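To illustrate how such a wrapper can be built on top of \textit{libpq}, a minimal sketch of insertIntoTableFloat() is shown below. The actual \textit{apcshareddata} implementation is not presented in this paper, so the SQL statement and column layout used here are assumptions.
\begin{lstlisting}[style=CStyle]
#include <stdio.h>
#include <libpq-fe.h>
// Sketch: insert one row (id, timestamp, status, value) into a table.
// The column layout is an assumption; the real schema is not shown.
int insertIntoTableFloat(PGconn* db_conn, const char* table, int id,
                         const char* ts, const char* status, double value){
  char query[256];
  snprintf(query, sizeof(query),
           "INSERT INTO %s VALUES (%d, '%s', %s, %f);",
           table, id, ts, status, value);
  PGresult* res = PQexec(db_conn, query);
  if(PQresultStatus(res) != PGRES_COMMAND_OK){
    fprintf(stderr, "insert failed: %s", PQerrorMessage(db_conn));
    PQclear(res);
    return 1;
  }
  PQclear(res);
  return 0;
}
\end{lstlisting}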
\begin{figure}[tb]
\centering
\includegraphics[width=0.485\textwidth]{fig/shared_data_main_idea.pdf}
\caption{Illustration of the tables in shared data.}
\label{fig:shared_data}
\end{figure}
\section{Software Architecture}
\label{sec:SoftwareArchitecture}
The CPS-RT-APC software architecture consists of modules that exchange data with shared data periodically. A module executes periodic tasks with a real-time interval timer and we execute the tasks of numerous modules concurrently by applying threading principles. The CPS-RT-APC framework applies a client-server network communication architecture for remote control and monitoring of physical plants. Such a client-server architecture also enables communication between the control systems and the internet, e.g., cloud computing. Additionally, interval timers, threading, and network communication principles enable the implementation of real-time simulators. We connect the servers to the physical plants and simulators, and we implement client modules with real-time interval timers for the periodic transmission of data.
Fig. \ref{fig:CPS_RT_APC} presents a schematic diagram of the CPS-RT-APC software architecture with shared data and modules for a user interface, a controller, and a client. This client module connects to either a physical plant or a real-time simulator.
\section{Timers and Threading}
\label{sec:TimersAndThreading}
Fig. \ref{fig:timers_and_threads} illustrates the concurrent and periodic execution of tasks for the control and client modules in Fig. \ref{fig:CPS_RT_APC} applying interval timers with thread invocation for the callback functions.
\begin{figure}[tb]
\centering
\includegraphics[width=0.485\textwidth]{fig/timer_loop_tasks.pdf}
\caption{Flow diagram of tasks for control and client modules executed concurrently and periodically using interval timers with thread invocation.}
\label{fig:timers_and_threads}
\end{figure}
We may implement such interval timers with \textit{hard} real-time system principles, e.g., real-time timers with high timing determinism and deadline guarantees. However, such systems rely on special-purpose software such as real-time operating systems (RTOS) \cite{Aastrom:Wittenmark:1997}. Alternatively, we may apply general-purpose operating systems (GPOS) such as the Linux OS for CPS-RT-APC systems that do not require such high timing determinism. We refer to such systems as \textit{soft} real-time systems; these experience larger and more variable delays (jitter) than hard real-time systems.
We demonstrate the implementation of soft real-time system interval timers for the periodic and concurrent execution of user-defined tasks. We demonstrate such an implementation by applying the POSIX timers API, defined in \textit{POSIX.1b real-time extensions} for Linux OS.
We configure the timers to invoke their callback functions in new threads to achieve concurrency of the timers \cite{Kerrisk:2010}.
We apply a custom library called \textit{apctimer} to wrap these POSIX interval timers into functions for creating, starting, and stopping timers. We also apply custom-made structs to pass functions and function arguments to the callback functions through the \textit{sigevent} structure.
Listing \ref{lst:timers_example} presents an example for creating, starting, stopping, and deleting a POSIX interval timer with an interval of 2.5 s. Listing \ref{lst:timers_src} shows the source code applied in Listing \ref{lst:timers_example}.
\subsection{Python implementation}
\lstset{style=PythonStyle}
We use Python's Timer class to schedule the execution of user-defined tasks. This class enables the user to schedule the execution of these tasks in separate threads, similar to the POSIX interval timers configured with thread invocation of the callback functions.
We create an instance of the Timer class as \lstinline{timer = Timer(interval, func, args=None, kwargs=None)}, where \textit{func} is the callback function and \textit{interval} is the start delay. We achieve periodic execution of the callback function by creating and starting a new Timer instance from within the callback (a Timer object can only be started once), and we apply timer.cancel() to stop a pending timer \cite{Hunt:2019}. Alternatively, we may design interval timers by applying the sleep function from the \textit{time} module together with loops. \cite{Wittenmark:Aastrom:Arzen:2022} discusses such an implementation without accumulation of time drift.
\lstset{style=CStyleInline}
\begin{figure}[tb]
\centering
\begin{minipage}{0.47\textwidth}
\begin{lstlisting}[caption={Example code for creating, starting, stopping, and deleting a timer.},label={lst:timers_example}, style=CStyle]
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include "apctimer.h"
typedef struct Args_custom{
void (*func)(); // Function to be called inside Callback function
double interval; // Interval of timer
}Args_custom;
void callback(union sigval sv){
// Dereference sigval pointer
Args_custom* args = (Args_custom*)sv.sival_ptr;
// Call the function
args->func(args->interval);
}
void printInterval(double interval){
// Print interval of timer
printf("Interval time for this timer is
}
int main(int argc, char const* argv[]){
// Real-time timer interval
double interval = 2.5; // seconds
Args_custom *timer_args = malloc(sizeof(Args_custom));
timer_args->func = printInterval;
timer_args->interval = interval;
// Create a timer that applies "callback" as the callback function
timer_t timer = create_apctimer(callback, timer_args);
// Start timer (args: timerid, period)
start_apctimer(timer, interval);
// Sleep for 1 min
sleep(60);
// Stop the timer
stop_apctimer(timer);
// Wait briefly, then delete the timer
sleep(3);
timer_delete(timer); // From POSIX.1b timer API
// Free Args_custom argument
free(timer_args);
return 0;
}
\end{lstlisting}
\end{minipage}\hfill\\
\end{figure}
\begin{figure}[tb]
\centering
\begin{minipage}{0.47\textwidth}
\begin{lstlisting}[caption={Source code in \textit{apctimer} library for POSIX.1b timer API applied in Listing \ref{lst:timers_example}.},label={lst:timers_src}, style=CStyle]
timer_t create_apctimer(void (*fnc)(), void* args){
timer_t timerid; // Timer identifier for POSIX.1b interval timer
struct sigevent sev; // Structure for notification from async. routines
memset(&sev, 0, sizeof(struct sigevent)); // Set struct members to 0
sev.sigev_notify = SIGEV_THREAD; // Config: Invoke callback function in new thread
sev.sigev_notify_function = fnc; // Callback function
sev.sigev_value.sival_ptr = args; // Arguments
// Create a real-time timer using System-wide realtime clock
if(timer_create(CLOCK_REALTIME, &sev, &timerid) == -1){
perror("timer_create");
};
return timerid;
}
int start_apctimer(timer_t timerid, double Ts){
int sec2nsec = 1000000000; // Convert seconds to nanoseconds
struct itimerspec ts;
int intpart = (int)Ts; // Integer part of interval
double decpart = Ts - intpart; // Decimal part of interval
memset(&ts, 0, sizeof(struct itimerspec)); // Set struct members to 0
// Seconds and nanoseconds for timer interval
ts.it_interval.tv_sec = intpart;
ts.it_interval.tv_nsec = (int)(decpart*sec2nsec);
// Start time for timer
ts.it_value.tv_nsec = 1; // Start time in nanoseconds. If = 0 then timer is disabled
if(timer_settime(timerid, 0, &ts, NULL) == -1){
perror("timer_settime");
return 1;
};
return 0;
}
int stop_apctimer(timer_t timerid){
struct itimerspec ts;
// Set interval and start time to 0
memset(&ts, 0, sizeof(struct itimerspec)); // Set struct members to 0
if(timer_settime(timerid, 0, &ts, NULL) == -1){
perror("timer_settime");
return 1;
};
return 0;
}
\end{lstlisting}
\end{minipage}\hfill\\
\end{figure}
\section{User Interface}
\label{sec:UserInterface}
A user interface module enables an operator to view process data through real-time plotting of measurements and manipulated variables, and to influence the control system through manipulation of setpoints and control configurations.
Such a user interface requires a connection to the shared data for exchanging information between the other modules as well as real-time interval timers for real-time plotting and an event-based mechanism for changing setpoints. Fig. \ref{fig:user_interface} illustrates a graphical user interface for a CPS-RT-APC with real-time plotting of data from two sensors and two actuators and buttons and text-fields for manipulating setpoints, tuning parameters, and the operation mode.
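For instance, the event handler behind a setpoint text-field can be a short callback that timestamps the new value and writes it to the shared data. A minimal sketch, assuming the insertIntoTableFloat() and genTimeStampUTC() helpers used in Listing \ref{lst:timer_code_control}, is given below; the callback name onSetpointChanged() is hypothetical.
\begin{lstlisting}[style=CStyle]
// Sketch: invoked when the operator confirms a new setpoint in the GUI
void onSetpointChanged(PGconn* db_conn, int setpoint_id, double value){
  char ts[27];
  genTimeStampUTC(ts); // Current Unix time as timestamp
  insertIntoTableFloat(db_conn, "setpoint", setpoint_id, ts, "'ok'", value);
}
\end{lstlisting}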
\begin{figure}[tb]
\centering
\includegraphics[width=0.485\textwidth]{fig/user_interface_idea.pdf}
\caption{Illustration of a graphical user interface for the CPS-RT-APC.}
\label{fig:user_interface}
\end{figure}
\subsection{Discussion of the results}
There is a natural delay in the control loop due to the communication layer: the measurement data from the plant must be sampled and sent to the database before the controller can read it.
|
{
"arxiv_id": "2302.13771",
"language": "en",
"timestamp": "2023-02-28T02:29:54",
"url": "https://arxiv.org/abs/2302.13771",
"yymm": "2302"
} | \section{Introduction} \label{sec:intro}
Efforts towards the miniaturization of technological components have naturally led to an increase in the research of low-dimensional materials. Due to their constrained dimensions, low-dimensional materials host interesting collective behaviours, some of which are linked to an interplay of topology and wavefunctions \cite{RevModPhys.88.035005}. The advent and rise of topological materials in the past couple of decades can be traced to peculiar exotic states of matter, such as topological edge states, that can exist within these materials \cite{RevModPhys.88.035005,Liu2019, shortcourse}. Signatures of nontrivial topologies in bulk materials are commonly studied by Angle-Resolved PhotoEmission Spectroscopy (ARPES), a powerful method to probe the momentum-space band structure of a material \cite{lv2015observation,xu2015discovery,lv2017observation,ma2018three}. A common signature of topological insulators is the gapless energy dispersion due to edge states and a nontrivial Berry phase. However, in one-dimensional systems, the dispersion is not sufficient and transport/transmission probes, which typically involve a study of the conductance through a sample, have been used. For instance, quasiparticle interference via the use of scanning tunnelling spectroscopy can be used to resolve backscattering-protected topological surface states \cite{roushan2009topological,yin2021probing}. Transmission probes can prove highly useful for studying edge states in lower-dimensional systems but cannot easily discern how the edge states are distributed along the boundaries of a system or whether a localized mode is on the left or right boundary of a one-dimensional chain \cite{ZAIMI2021127035}. Most other techniques used to probe topological materials including magnetotransport \cite{yan2017topological}, thermal conductance \cite{van2014thermal}, ultrafast dynamics \cite{marsi2018ultrafast}, and superconducting qubits for Majorana zero modes \cite{pekker2013proposal} cannot probe the spatial dependence of edge states.
This work exploits the intimate relationship between the decoherence of a qubit (two-level system) and its environment to provide a real-space diagnostic of environment state amplitudes. To be concrete, rather than striving to minimize decoherence, as is often the goal in quantum information technologies \cite{Fedorov:2004:1546-1955:132, BROADBENT20092489, 8914134, Burnett2019}, an investigation of how the environment affects the decoherence dynamics of a qubit reveals how these dynamics can be used, in reverse, to probe local properties of the qubit's surroundings. In effect, a novel quantum sensor is proposed, dubbed the decoherence probe, which utilizes as its mechanism of action the natural measurement induced from interactions with an environment to offer superior position-basis diagnostics for low-dimensional topological systems. As opposed to probes that are sensitive to the dispersion, such as ARPES \cite{lv2019angle} and our earlier work on a localized qubit \cite{ZAIMI2021127035}, the decoherence probe presented in this work accesses the entirety of real space and can be utilized to map out the spatial amplitude of populated states at a given energy.
This probe has potential for applications to low-dimensional engineered topological systems and various mesoscopic systems. For definiteness, we demonstrate the decoherence probe’s ability to study the topological edge states arising in the Su-Schrieffer-Heeger (SSH) model. The SSH model, owing to its one-dimensional nature, is widely considered the simplest model of a topological material. The model's simplicity has made it a promising platform for the design of engineered/synthetic topological materials \cite{PhysRevX.11.011015, Kiczynski2022, PSJ2022, St_Jean_2017,PhysRevLett.126.043602,PhysRevLett.128.203602,topodissip2022,PhysRevLett.124.023603}. This work demonstrates the probe's complete spatial characterization capabilities. The full system to be probed is composed of a finite SSH chain between two semi-infinite conducting leads. Semi-infinite leads model an extended environment and allow for the decoherence of the qubit. Furthermore, two semi-infinite leads are chosen as a \textit{realistic} constraint to model the response of systems where the topological material of interest is integrated within some circuit geometry \cite{Kiczynski2022, PhysRevX.11.011015}. The qubit decoherence rate is shown to provide a measure of the local density of states (LDOS) within the SSH chain, local to where the qubit was coupled. Weak coupling is considered both for the SSH-qubit and SSH-lead coupling. The former ensures an analytically tractable expression for the qubit's decay, while the latter ensures that the eigenstates of the coupled SSH chain differ from the isolated chain only up to some perturbative correction. Various open system configurations of the SSH model, including open boundary couplings such as leads or $\mathcal{PT}$-symmetric defect terms, have been previously studied in literature \cite{OSTAHIE2021127030,PhysRevB.95.115443,PhysRevA.98.013628,PhysRevA.89.062102, PhysRevLett.127.250402}.
This work is structured as follows. The composite system consisting of the SSH chain, semi-infinite leads, and qubit is introduced and detailed in Section \ref{sec:composite}. Topological edge states and their properties are briefly reviewed, followed by the detailed treatment of the qubit and the extraction of its decoherence rate. In Section \ref{sec:deco}, the composite system is studied using the decoherence of the qubit in order to obtain a full spatial characterization of SSH edge states for chains comprising both an even and odd number of sites.
\section{The Composite System} \label{sec:composite}
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{CompositeSystFig.jpg}%
\caption{Composite system geometry for the qubit probe shown here for an SSH chain of length $N=6$.}\label{tripartgraph}
\end{figure}
Consider the mesoscopic system illustrated in Fig.~\ref{tripartgraph} consisting of a finite SSH chain of length $N$, two semi-infinite leads, and a qubit (which can be viewed more generally as a two-level system).
The SSH chain is described by the Hamiltonian
\begin{align}\label{HSSH}
H_{\text{SSH}} = & \, \, \, \sum_{m=1}^{N-1} t_m \big( c^\dagger_m c_{m+1} + c_{m+1}^\dagger c_m\big),
\end{align}
where $m$ is the site index, $c_m^\dagger,\ c_m$ are the creation and annihilation operators for site $m$, and the hopping parameter $t_m$ between sites $m$ and $m+1$ alternates between two real values, $t_1$ for odd $m$ and $t_2$ for even $m$.
The left and right semi-infinite leads are described by the Hamiltonians
\begin{equation}\label{Hsemiinf}
\begin{split}
H_{L,\infty} = & \sum_{m=1}^{\infty} t \big( l_m^\dagger l_{m+1} + l_{m+1}^\dagger l_{m} \big),\\
H_{R,\infty} = & \sum_{m=1}^{\infty} t \big( r_m^\dagger r_{m+1} + r_{m+1}^\dagger r_{m} \big),
\end{split}
\end{equation}
where again $m$ is the site index and where $l$, $r$ are the respective site creation and annihilation operators for the two leads.
The double dot qubit is described by the Hamiltonian
\begin{equation}\label{HDD}
H_{\mathrm{DD}}= \epsilon_{1}d_1^\dagger d_1 +\epsilon_{2}d_2^\dagger d_2 +\tau \left( d_1^\dagger d_2 + d_2^\dagger d_1 \right) ,
\end{equation}
where $\epsilon_1$, $\epsilon_2$, and $\tau$ are assumed real and $d_1$, $d_2$ ($d_1^\dagger$, $d_2^\dagger$) are the annihilation (creation) operators for the two sites.
The subsystems are coupled together as displayed in Fig.~\ref{tripartgraph}. In what follows, the normalization of all the parameters in the Hamiltonian will be chosen such that the lead hopping parameter is set to $t=1$. The left (right) lead is coupled to the first (last) SSH site with coupling $t_L$ ($t_R$), while the second qubit site is coupled to the $n^{\mathrm{th}}$ SSH site with coupling $t_c$.
The leads serve as the extended environment through which the qubit ultimately decoheres. For simplicity, we will later set $t_L=t_R$, ensuring a symmetric qubit response over the SSH chain. Putting everything together, the full Hamiltonian for the system is
\begin{equation}\label{eq1}
\begin{split}
H &= \, H_{\mathrm{DD}}+H_{\text{SSH}}+H_{L,\infty}+H_{R,\infty} \\&
+t_c \big(c_n^\dagger d_2 + d_2^\dagger c_n\big)+t_L \big( c_1^\dagger l_1 + l_1^\dagger c_1\big)\\&
+t_R \big( c_N^\dagger r_1 + r_1^\dagger c_N\big).
\end{split}
\end{equation}
\iffalse
For the single-particle case considered here, the Hamiltonian can also be represented in matrix form. In the basis \{qubit, left lead, SSH chain, right lead\} the matrix Hamiltonian is
\begin{equation}
H=\left(\begin{array}{c|c|c|c}
H_{\mathrm{DD}} & 0 & T_c & 0 \\
\hline 0 & H_{L\infty} & T_L & 0 \\
\hline T_{c}^{\dagger} & T_L^{\dagger} & H_{\mathrm{SSH}} & T_R\\
\hline 0 & 0 & T_R^{\dagger} & H_{R\infty}
\end{array}\right),
\label{eq-matrixham}
\end{equation}
where each element is itself a matrix.
The diagonal (subsystem) elements of \eqref{eq-matrixham} are
\begin{equation}
H_{\mathrm{DD}} = \left(\begin{array}{ll}
\epsilon_{1} & \tau \\
\tau & \epsilon_{2}
\end{array}\right),
\end{equation}
\begin{equation}
H_{\mathrm{SSH}} =
\left(\begin{matrix}
0 & t_1 \\
t_1 & 0 & t_2 &\phantom{\ddots}\\
& t_2 & 0 & t_1 &\phantom{\ddots} \\
& & t_1 & 0 & \ddots & \phantom{\ddots} \\
\phantom{\ddots}&\phantom{\ddots} &\phantom{\ddots} & \ddots & \ddots & t \\
& & & & t & 0 \\
\end{matrix}\right)
\end{equation}
(where $t=t_1$ or $t_2$ for $N$ even or odd, respectively),
\begin{equation}
H_{L\infty} = \left(\begin{matrix}
\ddots & \ddots &\phantom{\ddots} &\phantom{\ddots}\\
\ddots & 0 & 1 & \phantom{\ddots} \\
& 1 & 0 & 1 \\
\phantom{\ddots} & & 1 & 0
\end{matrix}\right),
\end{equation}
\begin{equation}
H_{R\infty} = \left(\begin{matrix}
0 & 1 \\
1 & 0 & 1 & \phantom{\ddots} \\
& 1 & 0 & \ddots \\
\phantom{\ddots} & \phantom{\ddots} & \ddots & \ddots
\end{matrix}\right).
\end{equation}
The off-diagonal elements of \eqref{eq-matrixham} represent the couplings among the subsystems. The coupling matrices $T_L$ and $T_R$ are of size $\infty \times N$ and $N\times\infty$, having as their only nonzero entries $T_L^{\,(\infty,1)}=t_L$ and $T_R^{\,(N,1)}=t_R$. The qubit is coupled to an arbitrary $n^{\mathrm{th}}$ site of the SSH chain through a tunneling parameter $t_c$ acting on the second basis state, $\epsilon_2$. The $N \times N$ coupling matrix $T_c$ has element $T_c^{\,(2,n)}=t_c$ as its only nonzero entry.
\fi
\subsection{Topological Edge States of the SSH Model}
It is useful to highlight the spectral and real-space properties of the model's topological edge states, as they are the central feature chosen to demonstrate the probe's spatial characterization abilities. Analytical solutions to the model along with energy spectra are obtained and detailed in previous work \cite{ZAIMI2021127035}. The distinct phases within the SSH model are defined by a topological invariant, the winding number $\nu$, which can be computed from the closed trajectory of the bulk Hamiltonian vector $\Vec{h}(k)=(t_1+t_2 \cos{2k},\, t_2 \sin{2k}, \,0)$, expressed here in the Pauli matrix basis, sweeping through the Brillouin Zone (BZ) \cite{shortcourse}. Note that the BZ is defined here for the wavevector range $k \in (-\pi/2, \pi/2$]. The winding number is intimately tied to the staggered nature of the hopping terms and can be related to the parameter $r=t_1/t_2$.
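Explicitly, the winding number can be written as (see, e.g., \cite{shortcourse})
\begin{equation}
\nu = \frac{1}{2\pi}\int_{\mathrm{BZ}} dk\, \frac{h_x\,\partial_k h_y - h_y\,\partial_k h_x}{h_x^2+h_y^2},
\end{equation}
which counts the number of times $\Vec{h}(k)$ encircles the origin of the $(h_x,h_y)$-plane. For the $\Vec{h}(k)$ above, the trajectory is a circle of radius $t_2$ centered at $(t_1,0)$, so in the thermodynamic limit $\nu=1$ for $r<1$ and $\nu=0$ for $r>1$.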
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{SSHedgestates1.png}%
\caption{ \textbf{a)} Pair of topological edge states arising for $r<r_C$ in a finite SSH chain of even-length. \textbf{b)} Edge state for a finite SSH chain of odd-length. A single edge state is always present in the odd-length SSH chain, with localization occurring on the left (right) boundary for $r<1$ ($r>1$).}\label{sshedgegraph}
\end{figure}
For finite even-length SSH chains with $N=2M$ (where $M$ denotes the number of unit cells), the winding number is $\nu=0$ when $r>r_C=N/(N+2)$, where there are $N$ bulk states and the chain is said to be in the trivial phase. The critical parameter $r_C$ as obtained in \cite{ZAIMI2021127035,delplace_zak_2011} is a feature of finite chains and reproduces the expected topological phase transition at $r_C=1$ in the thermodynamic limit, $N \rightarrow \infty$. When $r<r_C$, the winding number becomes $\nu=1$ and the SSH chain is in a topological phase. In this configuration, there exist $N-2$ bulk states and the two remaining solutions take the form of near-zero-energy modes. From the dispersion relation, $E(k)= \pm \sqrt{t_{1}^{2}+t_{2}^{2}+2 t_{1} t_{2} \cos{2k}}$, states lying within the gap must have complex wavenumbers $k \rightarrow \pi/2 + i\kappa$ and, hence, are excitations localized at the boundaries of the open chain, forming so-called edge states. In the thermodynamic limit, edge states tend towards zero energy and get pinned at the midgap. This differs from finite open chains, where a splitting about the Fermi level gives rise to small but non-zero energies for these modes. As a direct consequence of chiral symmetry, a non-zero-energy eigenstate has equal support on both sublattices, contrary to the $E=0$ eigenstates which are confined to a single sublattice. As the edge state energies tend exponentially to zero with $N\rightarrow \infty$, one observes an apparent confinement near the boundaries of the finite chain, but with hybridization on both odd and even sites taking place in the bulk, as seen in Fig.~\ref{sshedgegraph}a.
An important difference for finite SSH chains with an odd number of sites, $N=2M+1$, is that there always exist $N-1$ bulk solutions and a single localized mode pinned to the midgap regardless of the value of $r$. Chiral symmetry, which imposes symmetry of the spectrum, forces the unpartnered eigenvalue to $E=0$ and leads to the perfect confinement of the edge state on a single sublattice of the chain, as shown in Fig.~\ref{sshedgegraph}b for two values of $r$.
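The sublattice confinement can be made explicit with a standard argument. Setting $E=0$ in $H_{\text{SSH}}\psi = E\psi$ gives $t_1\psi_{2j-1}+t_2\psi_{2j+1}=0$ at the even sites, while the boundary condition at the first site forces $\psi_{2j}=0$ for all $j$. Hence
\begin{equation}
\psi_{2j+1}=(-r)^{j}\,\psi_{1}, \qquad \psi_{2j}=0,
\end{equation}
so for $r<1$ the zero mode decays exponentially into the bulk from the left edge, with rate $|\ln r|$ per unit cell; repeating the argument from the right boundary yields the localization on the right edge for $r>1$.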
\subsection{Effective treatment of the Qubit}
We refer the reader to \cite{ZAIMI2021127035} for a more thorough discussion of the qubit described by (\ref{HDD}). The time evolution of level transitions is obtained from the off-diagonal element of the Green's function of the double dot qubit: $\left|G^{12}_{\mathrm{DD}}(t)\right|^{2}$, where $G_{\mathrm{DD}}(E)=(E-H_{\mathrm{DD}})^{-1}$. The eigenvalues and energy splitting of the isolated qubit are given by
\begin{equation}
\lambda_{\pm}=\epsilon_{0} \pm \frac{\delta}{2} \quad , \quad \delta=\sqrt{\delta_{0}^{2}+4 \tau^{2}},
\end{equation}
where $\epsilon_0 =(\epsilon_1 +\epsilon_2)/2$ is the average energy of the two basis states and $\delta_0=(\epsilon_1 - \epsilon_2)$ is the energy splitting of the uncoupled basis states. With the goal of characterizing the time evolution of the qubit embedded within an environment, an effective projective approach, detailed in \ref{appendix1} for an arbitrary environment, is used. This description captures the influence of the environment through a self-energy term added to $G_{\mathrm{DD}}$ in a new Green's function $G'_{\mathrm{DD}}$, to which one can associate the effective Hamiltonian $H'_{\mathrm{DD}}$ satisfying $G'_{\mathrm{DD}}=(E-H'_{\mathrm{DD}})^{-1}$. This technique is exact within the $2\times2$ subspace of the qubit, and is used to treat the qubit's response within the composite system, as will now be detailed.
Starting from (\ref{eq1}) and following the method in \ref{appendix1}, the SSH-lead blocks are addressed first and reduced to an $N\times N$ effective Hamiltonian $H_{\mathrm{SSH},\infty}$ for the subspace of the SSH chain reflecting the influence of the leads,
\begin{equation}\label{sshleadscombined}
H_{\mathrm{SSH}, \infty} \equiv H_{\text{SSH}} + \Sigma_L c_1^\dagger c_1 + \Sigma_R c_N^\dagger c_N.
\end{equation}
This is exactly $H_{\mathrm{SSH}}$ with the addition of self-energies taking the form of complex on-site potentials at the boundary elements of the SSH chain. The self-energies $\Sigma_{L,R}$ originate from, and account for, the left and right leads, respectively. As the leads are surface-coupled to the SSH chain,
\begin{equation}\label{GLEAD}
\Sigma_{L,R}(E)=t_{L,R}^2 G_{\infty}^S(E),
\end{equation}
where
\begin{equation}\label{GLEAD1}
G_{\infty}^S(E) = \frac{1}{2} \left(E -\mathrm{sgn}(E+2)\sqrt{E^2-4}\right)
\end{equation}
is the surface term of the semi-infinite lead's Green's function \cite{ZAIMI2021127035,datta_2005,economou}. The effective Hamiltonian for the $(N+2)\times(N+2)$ subspace of the SSH chain and qubit is now
\begin{equation}\label{2by2HSSHinfDD}
H = H_{\text{DD}} + H_{\mathrm{SSH},\infty} +t_c \big(c_n^\dagger d_2 + d_2^\dagger c_n\big),
\end{equation}
and is further reduced to the $2\times 2$ effective Hamiltonian for the qubit subspace:
\begin{equation}\label{effectivequbit}
H'_{\mathrm{DD}}= H_{\mathrm{DD}} + \Sigma^{nn}_{\mathrm{SSH},\infty}d_2^\dagger d_2,
\end{equation}
where the environment's self-energy $\Sigma^{nn}_{\mathrm{SSH,\infty}}(E)=t^2_c G^{nn}_{\mathrm{SSH,\infty}}(E)$ stems from the combined SSH-lead environment to which the qubit is coupled at a site $n$ with strength $t_c$. The superscript in $G^{nn}$ denotes the $(n,\,n)$ diagonal element of $G$. This is an important distinction compared to previous work \cite{ZAIMI2021127035}, where only the surface coupling between the qubit and the end of a chain was considered. The derivation of the diagonal elements of the Green's functions satisfying $G_{\mathrm{SSH},\infty}(E) = (E-H_{\mathrm{SSH},\infty})^{-1}$ is detailed in \ref{gnnsshinfderiv}. The four analytical forms for $G^{nn}_{\mathrm{SSH},\infty}(E)$, which depend on the parity of the SSH chain length $N$ and the parity of the sublattice site $n$, are given below. We find, for chains of odd length $N$,
\begin{equation}\label{Godd}
G^{nn}_{\mathrm{SSH},\infty}(E)= \begin{cases} \big[ E + g_{L1} +h_{R1}(t_1,t_2) \big] ^{-1} & \text{for $n$ odd}\\
\big[ E + g_{L2} +h_{R2}(t_1,t_2) \big] ^{-1} &\text{for $n$ even} \end{cases},
\end{equation}
while for SSH chains of even length $N$,
\begin{equation}\label{Geven}
G^{nn}_{\mathrm{SSH},\infty}(E)= \begin{cases} \big[ E + g_{L1} +h_{R2}(t_2,t_1) \big] ^{-1} & \text{for $n$ odd}\\
\big[ E + g_{L2} +h_{R1}(t_2,t_1) \big] ^{-1} &\text{for $n$ even} \end{cases}.
\end{equation}
The terms in $G^{nn}_{\mathrm{SSH},\infty}(E)$ are given below, adopting the shorthand notation $\sin{(Ak)}=s_A$:
\begin{align}\label{gcoeff}
g_{L1}=&\frac{t_{2}^{2} E s_{n-1}-t_{2} \Sigma_{L}\left(t_{1} s_{n-3}+t_{2} s_{n-1}\right)}{E \Sigma_{L} s_{n-1}-t_{2}\left(t_{1} s_{n+1}+t_{2} s_{n-1}\right)}, \\[10pt]
g_{L2}=&\frac{t_{1} \Sigma_L E s_{n-2}-t_1 t_{2} \left(t_{1} s_{n}+t_{2} s_{n-2}\right)}{E t_2 s_{n}-\Sigma_L \left(t_{1} s_{n-2}+t_{2} s_{n}\right)},\\[10pt]
\begin{split}
&\hspace{-6.5cm} h_{R1}(t_1, t_2) = \cr \hspace{-1cm}\frac{t_{1}^{2} E s_{N-n}-t_{1} \Sigma_{R}\left(t_{1} s_{N-n}+t_{2} s_{N-n-2}\right)}{E \Sigma_{R} s_{N-n}-t_{1}\left(t_{1} s_{N-n}+t_{2} s_{N-n+2}\right)},
\end{split}\\[10pt]
\begin{split}
&\hspace{-7.5cm}h_{R2}(t_1, t_2) =\cr \hspace{-1cm} \frac{t_{2} E \Sigma_R s_{N-n-1}-t_{1} t_2 \left(t_{1} s_{N-n-1}+t_{2} s_{N-n+1}\right)}{E t_1 s_{N-n+1}-\Sigma_R \left(t_{1} s_{N-n+1}+t_{2} s_{N-n-1}\right)}.
\end{split}
\end{align}
Note that the expressions (\ref{Godd},\ref{Geven}) are arranged such that the second terms stem from the interface between the left lead and the SSH chain, while the third terms correspond to the right lead/chain-interface.
Solving for the eigenvalues of (\ref{effectivequbit}), one finds
\begin{align}\label{lambdaprimE}
\lambda'_{\pm}(E)&=\frac{1}{2}\left(\epsilon_{1}+\epsilon_{2} +\Sigma^{nn}_{\mathrm{SSH,\infty}}(E) \pm \delta'(E)\right)\nonumber\\
&=\epsilon_{0} + \frac{\Sigma^{nn}_{\mathrm{SSH,\infty}}(E)\pm\delta'(E)}{2} ,
\end{align}
where $\delta'(E)$ is the energy splitting of the open qubit:
\begin{equation}\label{deltaprim}
\delta'(E)=\sqrt{(\delta_{0}-\Sigma^{nn}_{\mathrm{SSH,\infty}}(E))^{2}+4 \tau^{2}}.
\end{equation}
Unsurprisingly, the above expressions reduce to the isolated qubit case when $\Sigma^{nn}_{\mathrm{SSH,\infty}}(E) = 0$.
\subsection{Extracting and Interpreting the Decoherence Rate}\label{decointerp}
The Green's function associated with (\ref{effectivequbit}) is
\begin{equation}\label{greensDD}
G_{\mathrm{DD}}'(E)=\left(\begin{array}{cc} E-\epsilon_1 & -\tau \vspace{0.1cm} \\ -\tau \qquad & E-\epsilon_2 -\Sigma^{nn}_{\mathrm{SSH,\infty}}(E) \end{array}\right)^{-1},
\end{equation}
given here in matrix notation following (\ref{appendixGeff}). The decay of the qubit can again be linked to the off-diagonal elements of the time-dependent Green's function. Unlike the isolated qubit, however, the energy dependence of $\Sigma^{nn}_{\mathrm{SSH,\infty}}(E)$ in $G'_{\mathrm{DD}}(E)$ precludes a direct analytical evaluation of the Fourier transform for $G'_{\mathrm{DD}}(t)$. Qualitatively, the effective description is non-Hermitian and the self-energies lead to complex eigenvalues such that $G'_{\mathrm{DD}}(t)$ is no longer purely oscillatory and instead takes the form of a damped exponential, reflecting the decay of the qubit due to the open nature of the system. A perturbative weak-coupling approximation with $t_c^2 N \ll 1$ is used to obtain an analytical form for the time-dependent Green's function as detailed in Appendix A of \cite{PhysRevA.95.062114}. The decay rate expression extracted from $G'_{\mathrm{DD}}(t)$, taking into account a weak coupling to an arbitrary site $n$, yields
\begin{equation}\label{tauinv}
(\tau_\phi^{nn})^{-1} \approx \mathrm{min} \left(-\frac{1}{2}\mathrm{Im} \left\{ \Sigma^{nn}_\mathrm{SSH,\infty}(\lambda_\pm) \pm \delta'(\lambda_\pm) \right\} \right).
\end{equation}
Note that, as a result of the perturbative treatment, the energy-dependent quantities in $(\tau_\phi^{nn})^{-1}$ are evaluated at the eigenvalues of the isolated qubit, $\lambda_\pm$. The choice of $\lambda_+$ or $\lambda_-$ is that which gives the slowest decay, since this term dominates the long-time decoherence of the qubit.
It is useful to reflect on (\ref{tauinv}) to understand what physical quantity can be probed through a study of decoherence rates. Evidently, the first term in (\ref{tauinv}) is proportional to the negative imaginary part of $G^{nn}_{\mathrm{SSH},\infty}(E)$. This is precisely the definition of the local density of states (LDOS) at a site $n$ within the SSH chain,
\begin{equation}
d(E,n)=-\mathrm{Im}\{G^{nn}_{\mathrm{SSH},\infty}(E)\}.
\end{equation} The term $-\mathrm{Im}\{\Sigma^{nn}_{\mathrm{SSH},\infty}(\lambda_\pm) \}$ in \eqref{tauinv} is therefore directly proportional to the LDOS in the SSH chain at the energies of the isolated qubit. The second term, $\delta'(\lambda_\pm)$, is more complex. It depends on the LDOS, but a linear proportionality between $\mathrm{Im} \{ \delta' \}$ and $\mathrm{Im}\{\Sigma^{nn}_{\mathrm{SSH},\infty}\}$ depends on the relationship between the self-energy and the qubit parameters. Factoring out $\Sigma^{nn}_{\mathrm{SSH},\infty}$ and expanding the root in (\ref{deltaprim}) for $\Sigma^{nn}_{\mathrm{SSH},\infty} \gg \delta_0, \tau$ yields
\begin{align}\label{simped}
\delta'&=\Sigma^{nn}_{\mathrm{SSH},\infty} \bigg( 1 - \frac{\delta_0}{ \Sigma^{nn}_{\mathrm{SSH},\infty}} + \mathcal{O}\Big(\big(\Sigma^{nn}_{\mathrm{SSH},\infty}\big)^{-2}\Big) \bigg) \nonumber\\
&\approx \Sigma^{nn}_{\mathrm{SSH},\infty},
\end{align}
with small corrections of order $\delta_0$. We can then write to a good approximation that $ - \mathrm{Im} \{ \delta'\} \approx - \mathrm{Im} \{ \Sigma^{nn}_{\mathrm{SSH},\infty}\}$. Note that (\ref{simped}) leads to a nearly perfect cancellation for the appropriate upper or lower sign in (\ref{tauinv}) for a given $\delta'(\lambda_\pm)$, showing that this near-cancellation is crucial in extracting the slower of the two competing decay rates. Choosing qubit parameters $\delta_0$ and $\tau$ small with respect to $\Sigma^{nn}_{\mathrm{SSH},\infty}$, we obtain, for the appropriate $\lambda_+$ or $\lambda_-$ which minimizes the decay rate, an interesting conclusion:
\begin{align}\label{tauDOS}
(\tau_\phi^{nn})^{-1} & \propto -\mathrm{Im} \{ G^{nn}_{\mathrm{SSH},\infty}(\lambda_\pm)\} = d(\lambda_\pm,\,n) .
\end{align}
This useful interpretation of the decoherence rate as proportional to the LDOS is conceptually reasonable; couplings to densely populated sectors of the environment's energy spectrum should lead to faster decoherence than couplings to an empty or sparse sector.
\section{Scanning Decoherence Probe} \label{sec:deco}
The decoherence rate (\ref{tauinv}) depends on the self-energy $\Sigma_{\mathrm{SSH},\infty}^{nn}(\lambda_\pm)$ and so is a function of both the eigenvalues $\lambda_\pm$ of the qubit and the coupling site $n$ between the qubit and the SSH chain. This, paired with the proportionality between $(\tau_\phi^{nn})^{-1}$ and the LDOS, can be used to extract the amplitude profiles of states residing within the SSH chain. Tuning one of the qubit frequencies to the midgap effectively couples the qubit to edge states, and decoherence rates can be used to obtain spatial characteristics of these states.
The probe can be implemented in two ways. The first is a scanning probe approach where the qubit is coupled to a site $n$ and initialized such that it possesses an eigenvalue, say $\lambda_-$, tuned to an energy of interest in the SSH chain. The decoherence rate of the qubit is measured as it naturally decoheres due to its environment coupling. This process can be repeated at all sites along the chain, reinitializing the qubit at $\lambda_-$ every time, and yields a spatial profile of states in the SSH chain at energy $\lambda_-$. Alternatively, multiple qubits can be integrated along the entire length of a mesoscopic system as done in \cite{PhysRevX.11.011015} to provide simultaneous readouts along the sample of interest. LDOS profiles are therefore extracted from the decoherence rate as a function of the site number. Additionally, the DOS can be obtained by simply summing the LDOS over all sites. It has been demonstrated in previous work that the decoherence probe coupled to the end of the SSH chain is sensitive to the presence or absence of edge states \cite{ZAIMI2021127035}. We now show that scanning the probe along the length of the SSH chain can map out the profile of edge states. Results for both SSH chains of even and odd length are presented.
\subsection*{Spatial characterization of topological edge states}
With the goal of first studying the topological edge states of an even-$N$ SSH chain, the qubit is initialized such that it has an eigenvalue $\lambda_-$ tuned in the gap at the edge state energy. Since $\mathrm{Im}\left\{G^{nn}_{\mathrm{SSH},\infty}(E)\right\}$ is symmetric, we need not worry about the sign of the edge state energy we input; chiral partner states of equal and opposite energy have identical amplitudes $| \psi_{\mathrm{edge}}|^2$ and will yield identical decoherence rate measurements. Recall that only the slowest decay rate is considered relevant in the treatment of (\ref{tauinv}); fine-tuning of the qubit parameters may be necessary to ensure that the desired $\lambda_-$ contribution remains minimal at all sites $n$ of the SSH chain. This allows one to precisely map the edge state over the entire length of the chain. Notably, qubit parameters should be chosen such that $\lambda_+$ is resonant with a bulk state. In this way, the bulk contribution $\lambda_+$ to the decay rate is maximal and the decoherence rate actually reflects the $\lambda_-$ contribution -- the coupling to an edge state as desired.
\begin{figure}[ht]
\begin{center}
{\includegraphics[width=1\linewidth]{NevenDecoProbe.png}}
\end{center}
\caption{Plot of the decoherence rate (dashed black line) and its two $\lambda_\pm$ contributions for a scan of a topological SSH chain of length $N=40$ with $t_1=0.6$ and $t_2=0.8$. Qubit parameters are tuned to ($\epsilon_1$, $\epsilon_2$, $\tau$) = (0.4017, 0.0034, 0.03) such that $\lambda_+=0.4039$ is tuned to a densely populated sector of the bulk bands while the $\lambda_-=1.11\times10^{-3}$ contribution, tuned to the edge state energy (red odd sites, blue even sites), is minimal all along the SSH chain. Coupling parameters are $t_c = 0.003$ and $t_L= t_R = 0.02$. Edge states have the largest penetration into the bulk for $r$ less than but close to $r_C$. Note the beyond-exponential edge state suppression expected in near-zero energy states of finite even-length SSH chains, accentuated here from choosing $r = 0.75$, relatively near $r_C=0.95$.}\label{NevenProbe}
\end{figure}
A plot of the decoherence rate $(\tau^{nn}_\phi)^{-1}$ as a function of $n$ for a topological SSH chain with $N=40$ and $r=0.75$ ($r<r_C$) is shown in Fig.~\ref{NevenProbe}. Different $\lambda_\pm$ contributions to the decoherence rate are shown, with $(\tau^{nn}_\phi)^{-1}$ being the minimum of the two. The decoherence rate is further decomposed into even and odd sites for visual clarity; see the figure caption for details. We note a stark contrast in the behaviour of both decay contributions. This is the result of $\lambda_+=0.4039$ residing within the bulk bands, which span the absolute energy range $[0.1909,\,2.009]$, while $\lambda_-=1.11\times10^{-3}$ is tuned in the gap to the energy of the topological edge state of the finite SSH chain. Thus, on the one hand, the $\lambda_+$ terms are associated with an oscillatory bulk LDOS. On the other hand, the $\lambda_-$ terms, corresponding here to $(\tau^{nn}_\phi)^{-1}$, show quasi-exponential behaviour as a function of $n$. Near the left (right) edge, decoherence is rapid on odd (even) sites and gets suppressed as one moves into the bulk, reflecting the localization of the probed excitations. Additionally, even- (odd-) site decoherence is heavily suppressed in the vicinity of the left (right) edge, a result of edge states having near-zero amplitudes on these sites. Strong oscillations between the $\lambda_-$ contributions on even and odd sites when scanning along the chain are observed near zero energy. These behaviours are in agreement with open SSH chain topological edge states, which give an impression of sublattice confinement near the boundaries.
The decoherence probe is now used to study odd-$N$ SSH chains and their respective edge states. In such chains, there always exists a zero-energy mode with localization on the left (right) edge for $r<1$ ($r>1$) as was shown in Fig.~\ref{sshedgegraph}b. As a result of being pinned to the midgap, these states are confined to the odd-site sublattice with a pure exponential fall-off in the bulk. The qubit is tuned to have $\lambda_-=0$ to probe the edge state. A decoherence plot for the $r<1$ phase for a chain of length $N=41$ is shown in Fig.~\ref{Noddprobe}. The case for the $r>1$ configuration is symmetric to Fig.~\ref{Noddprobe} under spatial inversion. The $\lambda_+$ contribution to the decoherence rate is again oscillatory from coupling to bulk states. Looking at the $\lambda_-=0$ terms (odd sites highlighted in red), one observes large decoherence rates on the left edge and exponentially suppressed decoherence rates as one travels through the bulk on odd sites, indicative of a zero-energy state localized at the left edge. Strong $(\tau^{nn}_\phi)^{-1}$ oscillations arise on odd and even sites, with even-site decoherence rates being many orders of magnitude smaller than on odd sites. In principle, these edge states have no support on even sites, and the non-zero decay rates on these sites can be attributed to hybridization with lead states at the midgap, which is kept negligible by choosing $t_{L,R}\ll 1$. In the absence of fine tuning, which guarantees that the decay rate only corresponds to $\lambda_-$ terms, the probe can be insensitive to the complete edge state profiles and may instead sample bulk-oscillatory behaviour. The site $n=5$ in Fig.~\ref{Noddprobe} is an example of the bulk LDOS competing strongly with the edge state LDOS. Edge state signatures could nonetheless be probed by seeking out exponential-like suppression of the decay rate deeper in the chain, where the lower amplitude of the edge states yields smaller decoherence rates.
\begin{figure}[ht]
\begin{center}
{\includegraphics[width=1\linewidth]{NoddDecoProbe.png}}
\end{center}
\caption{Both $\lambda_\pm$ contributions to the qubit decoherence rate as a function of position along an SSH chain of length $N=41$ are shown for $r=0.75$. The decoherence rates on odd sites are highlighted in red. The qubit is initialized with parameters ($\epsilon_1$, $\epsilon_2$, $\tau$) = (0.2843, 0.0031, 0.03), yielding $\lambda_-=0$ and the bulk-resonant frequency $\lambda_+ = 0.2875$ such that the zero energy contribution to the decay is always minimal. Coupling parameters are $t_c = 0.003$ and $t_L= t_R = 0.02$. The decoherence rate is indicative of an SSH chain with a left edge state: as expected of the zero-energy states, the decay rate shows an exponentially decaying profile entering the bulk.}\label{Noddprobe}
\end{figure}
\begin{figure}[ht]
\begin{center}
{\includegraphics[width=1\linewidth]{ProfileES3.png}}
\end{center}
\caption{Comparison of the normalized LDOS profile extracted from the decoherence rate to the edge state profile for an open SSH chain of length $N=40$ (blue) and $N=41$ (red), both with $r = 0.75$. The LDOS is normalized to match the edge state amplitude. }\label{profileES}
\end{figure}
The above analysis demonstrates the sensitivity of the scanning decoherence probe to various edge states that arise in the SSH chain within the composite system. This includes the ability to differentiate edge states from even-$N$ and odd-$N$ SSH chains as well as the ability to know which boundary the state is localized on in the case of odd-$N$ zero-modes. Through its decay rate proportionality to the LDOS, the probe can infer the presence of topological edge states and provide a spatial description of the state, revealing both the localization boundary and the penetration depth into the bulk of the edge state. The LDOS retrieved from the scanning probe is scaled and compared to the amplitude profiles of corresponding edge states in an open-boundary isolated SSH chain in Fig.~\ref{profileES}. As expected from our interpretation of $(\tau^{nn}_\phi)^{-1}$, the decoherence rate has the same decay constant as a function of $n$ as the edge states' density profiles. These combined capabilities establish the proposed decoherence probe as a powerful tool to spatially characterize and classify localized states.
\section{Discussion and Conclusion}\label{sec:conclusions}
We have demonstrated that the decoherence of a qubit coupled to an open SSH chain environment can be used to probe states within the SSH chain subspace. This ability stems from the proportionality between the decoherence rate of the qubit with the LDOS within the SSH chain of the composite system. In particular, sweeping over the topological material of interest with a qubit probe of appropriately tuned energy, we demonstrate the ability to spatially characterize topological edge states of the SSH model through decoherence dynamics. The decoherence probe is sensitive to the presence (or absence) of edge states and determines the localization boundary of edge states as well as their spatial profile. It should be highlighted that these ideas are applicable to a wide array of systems and the results are not exclusive to the composite system studied here: the decoherence expression of the qubit can be adapted to various environments by considering the appropriate self-energy, and will have the same form as (\ref{tauinv}) so long as a single environment-qubit coupling is considered. As such, other one-dimensional systems featuring localized states, such as the Kitaev chain hosting Majorana zero-modes \cite{Kitaev_2001}, could be spatially characterized by means of the decoherence probe. Additionally, one could envision a generalization of the probe discussed here to two dimensions. Two-dimensional topological materials, which feature edge modes along their perimeter, could be scanned over their area to study the penetration depth of edge modes as well as their location along the boundary of a sample. Corner modes, higher-order topological excitations in 2-d materials confined not to the full edges but only the corners \cite{PhysRevB.100.205406,PhysRevLett.122.076801}, should also lead to distinct signatures from the decoherence probe. It is hoped that this novel quantum sensor can become a practical platform for studying real-space properties of localized states in low-dimensional materials and metamaterials.
\section*{Acknowledgements} \label{sec:acknowledgements}
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada and
by the Fonds de Recherche Nature et Technologies du
Québec via the INTRIQ strategic cluster grant.
|
{
"arxiv_id": "2302.13616",
"language": "en",
"timestamp": "2023-02-28T02:25:25",
"url": "https://arxiv.org/abs/2302.13616",
"yymm": "2302"
} | \section{Introduction}
Inverse problems reconstruct an unknown object from measurements based on a mathematical model that describes the data acquisition. It inverts the measured signal into information about an unknown object. A well known inverse problem is computed tomography (CT), where an unknown
object is reconstructed from a projection sinogram. In algebraic reconstruction the problem is formulated as a least squares problem,
$\min_{x\in \mathbb{R}^n} \|Ax-b\|^2_2$, that minimizes the mismatch between the data and model. Here, the matrix $A \in \mathbb{R}^{m \times n}$ models the
propagation of the X-rays through the object \cite{kak2001principles}, $x \in \mathbb{R}^n$
describes the unknown pixel values and $b \in \mathbb{R}^m$ is the vector with measurements.
However, since $A$ is ill-conditioned (e.g.\ oscillating $+1$, $-1$ pixel patterns lie in the near null space), the straightforward least-squares solution $x =(A^TA)^{-1}A^Tb$
will be contaminated by the noise in the measurements \cite{hansen2006deblurring}.
To avoid overfitting to the noise, regularization techniques are used. For example, in Tikhonov regularization \cite{golub1999tikhonov} the reconstruction problem
is replaced by {$\min_{x \in \mathbb{R}^n} \|Ax-b\|_2^2 + \lambda \|Lx\|_2^2$}
where a regularization term with a regularization operator $L$ is added. The regularization parameter $\lambda >0$ is tuned such that the Morozov discrepancy principle is satisfied \cite{morozov2012methods}.
In LASSO \cite{tibshirani1996regression} the problem becomes {$\min_{x \in \mathbb{R}^n} \|Ax-b\|_2^2 + \lambda \|Lx\|_1$}.
The regularization term now uses a $\ell_1$-norm that promotes sparsity in the solution.
Elastic net \cite{zou2005regularization} combines the $\ell_1$-norm and the $\ell_2$-norm in the regularization.
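For concreteness, a Tikhonov-regularized solution with $L=I$ can be computed as an ordinary least-squares problem with a stacked matrix, since $\|Ax-b\|_2^2 + \lambda \|x\|_2^2 = \left\| \begin{bmatrix} A \\ \sqrt{\lambda} I\end{bmatrix} x - \begin{bmatrix} b \\ 0 \end{bmatrix} \right\|_2^2$. The following NumPy sketch illustrates this; it is only an illustration of the regularization techniques above, not the method proposed in this paper.
\begin{verbatim}
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||_2^2 + lam ||x||_2^2 by stacking
    sqrt(lam) * I under A and solving an ordinary LS problem."""
    m, n = A.shape
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]
\end{verbatim}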
The focus of this paper is on an alternative regularization technique where bounds on the variables limit the sensitivity of the solution to measurement noise. We solve
\begin{equation}\label{eqn:LS}
\min_{l \le x \le u} \, \frac{1}{2}\|Ax-b\|_2^2,
\end{equation}
where the lower and upper bounds are vectors $l, u \in \mathbb{R}^n$. This results in a constrained quadratic programming (QP) problem, known as \textit{bounded-variable least squares} (BVLS), which can, for example, be solved by
the Stark-Parker algorithm
\cite{stark1995bounded}. When only lower bounds are used, it is a \textit{non-negative least squares} (NNLS) problem \cite{lawson1995solving}. Note that the LASSO problem can also be formulated as a non-negative QP problem.
We denote the objective by $f(x)\coloneqq \frac{1}{2}\|Ax-b\|_2^2$.
In tomography, for example, these bounds limit the pixel values of the reconstructed object.
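As a point of reference, SciPy ships a dense BVLS solver; the snippet below (with random data purely for illustration) shows how \eqref{eqn:LS} can be solved with it. The \texttt{method='bvls'} option selects an active-set algorithm in the spirit of Stark-Parker.
\begin{verbatim}
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 600))
b = rng.standard_normal(1000)
l, u = -np.ones(600), np.ones(600)

# solve min ||A x - b||_2^2  subject to  l <= x <= u
res = lsq_linear(A, b, bounds=(l, u), method='bvls')
x = res.x
\end{verbatim}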
The Karush-Kuhn-Tucker (KKT) optimality conditions \cite{nocedal1999numerical} that accompany \cref{eqn:LS} are
\begin{subequations}\label{eqn:kkt}
\begin{align}
A^T(Ax-b) - \lambda + \mu & =0 \label{eqn:kkt:stationarity} \\
\lambda_i(x_i - \ell_i) & = 0 & & i\in\{1,\ldots,n\}\label{eqn:kkt:lambda} \\
\mu_i(u_i - x_i) & = 0 & & i\in\{1,\ldots,n\} \label{eqn:kkt:mu} \\
\ell_i \leq x_i & \leq u_i & & i\in\{1,\ldots,n\} \\
(\lambda, \mu) & \geq 0
\end{align}
\end{subequations}
where $\lambda \in \mathbb{R}^n$ and $\mu\in \mathbb{R}^n$ are vectors with the Lagrange multipliers associated with the lower and upper bounds, respectively.
The state-of-the-art methods to solve a general large-scale QP or linear programming (LP) problem are interior-point methods \cite{nocedal1999numerical,gondzio2012interior},
where in each iteration a weighted sparse matrix needs to be factorized by a Cholesky factorization.
For large-scale LP problems, \textit{column generation} \cite{lubbecke2005selected} is a technique that exploits the structure of the matrix by writing
the solution as a convex combination of solutions of subproblems. This results in an iterative method that alternates between solving a master problem, which finds
the coefficients of the convex combination, and a new subproblem that expands the set of vectors for the convex combination.
In this paper we use a linear combination of residuals as an approximation to the solution. In each iteration we solve a projected problem, similar to the master problem in
column generation. The use of residuals as a basis for the expansion is common in Krylov methods.
The use of Krylov methods is widespread in scientific computing. They are attractive for applications with sparse matrices: since
a sparse matrix-vector product, $Av$, is easy to parallelize, it is cheap to generate a Krylov subspace.
The convergence of Krylov methods is well understood, and various preconditioning techniques such as multigrid or incomplete factorisation
can accelerate the convergence. For a review of Krylov methods and various preconditioning techniques we refer to \cite{liesenstrakos,saad2003iterative}.
For inverse problems, which typically involve a non-square matrix, Golub-Kahan bi-diagonalization is used \cite{golub1965calculating}. It searches for a solution in the subspace
formed by the matrix powers,
\begin{equation}
\mathcal{K}_k(A^TA, A^Tb) = \text{span} \left\{ A^Tb, (A^TA)A^Tb, \ldots, (A^TA)^{k-1}A^Tb \right\}.
\end{equation}
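In code, generating this subspace only requires repeated (sparse) matrix-vector products; a naive sketch without reorthogonalization (for illustration only, since the columns quickly become numerically dependent):
\begin{verbatim}
import numpy as np

def krylov_basis(A, b, k):
    """Columns spanning K_k(A^T A, A^T b), built by repeated
    matrix-vector products; no reorthogonalization."""
    v = A.T @ b
    V = [v / np.linalg.norm(v)]
    for _ in range(k - 1):
        v = A.T @ (A @ V[-1])
        V.append(v / np.linalg.norm(v))
    return np.column_stack(V)
\end{verbatim}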
As far as we know, Krylov methods have not been extended to include lower and upper bounds on the variables.
There is, however, literature on enforcing non-negativity constraints; see \cite{nagy2000enforcing}.
In this paper we make the following contributions. We propose a subspace method that uses the residuals of the KKT conditions \eqref{eqn:kkt}. In each iteration we solve a small projected QP problem with dense matrices
using an active-set QP algorithm. When only a few bounds are active in the solution, we observe superlinear convergence behavior.
We explain this superlinear behavior by making the link to Krylov convergence theory. From a certain point on, the residuals can be written as polynomials
of the normal matrix and the convergence is determined by the Ritz values of the projected matrix.
In addition, we contribute an efficient implementation that uses warm starting as the basis is expanded and that updates the factorizations
of the matrices that need to be solved in each iteration.
The analysis of the propagation of the rounding errors and the final attainable accuracy is not part of this study and will be the subject of a future paper.
The outline of the paper is as follows. In \cref{sec:basis} we derive the algorithm and prove some properties of the basis. \Cref{sec:implementation} discusses several ways to speed up the algorithm.
\section{The Residual QPAS Subspace method and its convergence}\label{sec:basis}
In this section we introduce the methodology and analyse the convergence.
\subsection{Residual QPAS Subspace Method}
We propose to solve \cref{eqn:LS} by iteratively solving a projected version of the problem. We write the solution as $x_k=V_k y_k$, starting from a zero initial guess.
\begin{definition} \label{def:resqpas}
The \textit{residual QPAS subspace} iteration for $A \in \mathbb{R}^{m \times n}$,
$b \in \mathbb{R}^m$ and lower and upper bounds $l,u \in \mathbb{R}^n$ such that $l \le 0 \le u$ generates a series of guesses $\{x_k\}_{k \in \mathbb{N}}$ that are solutions of
\begin{equation}
\begin{aligned}
x_k = \operatorname{argmin}_{x \in \text{span}\{r_0,\ldots, r_{k-1}\}} & \|Ax-b\|_2^2 \\
\text{s.t.} & \, l \le x \le u
\end{aligned}
\end{equation}
where
\begin{equation}\label{eqn:rk}
r_k := A^T (Ax_k-b) - \lambda_k +\mu_k
\end{equation}
and $\lambda_k, \mu_k \in \mathbb{R}^n$ are the Lagrange multipliers associated with the lower and upper bound.
The feasible initial guess is $x_0=0$ and $r_0 :=-A^Tb$.
\end{definition}
\begin{remark}
The condition $l\leq0\leq u$, which ensures feasibility of $x_0$, does not impose any restriction on the problems that can be solved:
a problem with arbitrary bounds $l<u$ can always be shifted such that $l\leq 0 \leq u$ holds.
\end{remark}
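Concretely, for any feasible point $c$ with $l \le c \le u$, the substitution $x = z + c$ yields the equivalent problem $\min_{l-c \le z \le u-c} \frac{1}{2}\|Az-(b-Ac)\|_2^2$, whose shifted bounds contain zero. A minimal sketch of this preprocessing step:
\begin{verbatim}
import numpy as np

def shift_bounds(A, b, l, u):
    """Shift x = z + c so that the bounds on z contain 0;
    c is the feasible point closest to the origin."""
    c = np.clip(np.zeros_like(l), l, u)
    return b - A @ c, l - c, u - c, c
\end{verbatim}
After solving the shifted problem for $z$, the solution of the original problem is recovered as $x = z + c$.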
A high-level implementation of this algorithm is given in algorithm~\ref{alg:resBasis} and a more detailed implementation in algorithm~\ref{alg:outer}.
If $V_k \in \mathbb{R}^{n \times k} $ is a basis for $\text{span}\{r_0,\ldots, r_{k-1}\}$, the optimization problem at iteration $k$ is
\begin{equation}\label{eqn:LSproj}
\begin{aligned}
\min_{y_k\in\mathbb{R}^k} & \quad\frac{1}{2}\|AV_ky_k-b\|_2^2 \\
\text{s.t.} & \quad \ell_i \leq [V_ky_k]_i \leq u_i \quad i\in\{1,\ldots,n\}.
\end{aligned}
\end{equation}
The coefficients $y_k \in \mathbb{R}^k$ are the projections of the unknown solution on $V_k$ and $k$ is the dimension of the subspace.
The corresponding KKT conditions for \eqref{eqn:LSproj} are now:
\begin{subequations} \label{eqn:kktP}
\begin{align}
V_k^TA^T(AV_ky_k-b) - V_k^T \lambda_k + V_k^T \mu_k & =0 \label{eqn:kktP:stationarity} \\
(\lambda_k)_i([V_ky_k]_i - \ell_i) & = 0 & & i\in\{1,\ldots,n\}\label{eqn:kktP:lambda} \\
(\mu_k)_i(u_i - [V_ky_k]_i) & = 0 & & i\in\{1,\ldots,n\} \label{eqn:kktP:mu} \\
\ell_i \leq [V_ky_k]_i & \leq u_i & & i\in\{1,\ldots,n\} \label{eqn:kktP:bound} \\
(\lambda_k, \mu_k) & \geq 0.
\end{align}
\end{subequations}
We call the \textit{active set} the set $\mathcal{A}_k \subset \{1,\ldots,n\}$ of indices where the bound constraints \eqref{eqn:kktP:bound} become equalities.
As is done in Krylov-type methods (see \cite{liesenstrakos}) we will use residuals $r_k$ to expand our basis.
For every iteration $k$, the basis of the subspace becomes $V_k=\begin{bmatrix}r_0/\|r_0\| & \cdots & r_{k-1}/\|r_{k-1}\| \end{bmatrix}$.
This choice of residual basis is natural for Krylov methods such as CG. We will prove the pairwise orthogonality of the residuals in \cref{thm:ortho},
show how this generalizes the Krylov subspace in \cref{thm:generalize} and show how asymptotic Krylov convergence appears in \cref{thm:ritzvalue}.
\begin{lemma}[Orthogonality of $r_k$]\label{thm:ortho}
If $y_k \in \mathbb{R}^k$ is an optimal solution of \cref{eqn:LSproj}
then the $\{r_k\}_{k \in \{1,2 \ldots\}} $ as defined in \cref{eqn:rk} are pairwise orthogonal.
\end{lemma}
\begin{proof}
If $y_k$ is optimal, the KKT conditions \eqref{eqn:kktP} hold.
The stationarity conditions \cref{eqn:kktP:stationarity} can be rewritten as follows:
\begin{equation}
V_k^T\left(A^T(AV_ky_k-b) - \lambda + \mu\right)= V_k^Tr_k = 0.
\end{equation}
Since $V_k$ is a basis for $\text{span}\{r_0, r_1, \ldots, r_{k-1}\}$, this proves the orthogonality of $r_k$ to $V_k$.
\end{proof}
There exists a natural link between the residuals in \cref{eqn:rk} and the Krylov-subspace.
\begin{lemma}[Generalization of Krylov-subspace]\label{thm:generalize}
In the unconstrained case, where none of the bounds of \cref{eqn:LSproj} are ever active, we get:
\begin{equation}\label{eqn:equivKrylov}
\mathcal{V}_{k+1}\coloneqq\vct\left\{r_0,r_1,\ldots,r_k\right\} \cong \mathcal{K}_{k+1}(A^TA,r_0)\coloneqq \vct\left\{r_0,A^TAr_0,\ldots, \left(A^TA\right)^kr_0\right\}.
\end{equation}
With $r_k$ defined as in \cref{eqn:rk}.
\end{lemma}
\begin{proof}
If the problem is unconstrained, there are no Lagrange multipliers $\lambda$ and $\mu$.
If none of the constraints are ever active, the complementarity conditions \cref{eqn:kktP:lambda,eqn:kktP:mu} ensure that $\lambda=0=\mu$.
Thus our residuals $r_k$ simplify to
\begin{equation}
r_k = A^T(AV_ky_k - b).
\end{equation}
For $k=1$, the statement is trivial.
Assume \cref{eqn:equivKrylov} holds for $k-1$. Then
\begin{align}
r_k & = A^TA\begin{bmatrix} r_0 & r_1 & \cdots & r_{k-1} \end{bmatrix}\begin{bmatrix}y_0\\ \vdots\\ y_{k-1}\end{bmatrix} - A^Tb \\
& = A^TA\left( y_0r_0 + \cdots + y_{k-1}r_{k-1}\right) + r_0 \\
& = r_0 + y_0A^TAr_0 + \cdots + y_{k-1}A^TAr_{k-1}.
\end{align}
\end{proof}
\begin{example}
If we now compare our method applied to the unconstrained problem with the implementation of CG in MATLAB (\verb+pcg+) applied to $A^TAx=A^Tb$, we notice in \cref{fig:cg} that they converge similarly. This suggests the possibility of preconditioning.
\end{example}
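This equivalence is easy to reproduce: without active bounds, the projected problem of \cref{def:resqpas} is a small unconstrained least-squares problem, and the iterates coincide, in exact arithmetic, with CG applied to the normal equations. A self-contained NumPy sketch of the unconstrained residual subspace iteration:
\begin{verbatim}
import numpy as np

def res_subspace_unconstrained(A, b, k_max, tol=1e-10):
    """Residual subspace iteration without bounds; equivalent
    to CG on A^T A x = A^T b (cf. the lemma above)."""
    r = -A.T @ b
    V = (r / np.linalg.norm(r))[:, None]
    x = np.zeros(A.shape[1])
    for _ in range(k_max):
        # projected least squares: min_y ||A V y - b||_2
        y = np.linalg.lstsq(A @ V, b, rcond=None)[0]
        x = V @ y
        r = A.T @ (A @ x - b)
        if np.linalg.norm(r) < tol:
            break
        V = np.hstack([V, (r / np.linalg.norm(r))[:, None]])
    return x
\end{verbatim}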
\begin{figure}
\centering
\includegraphics{figures/cg.pdf}
\caption{Comparison of conjugate gradients and ResQPASS for an unconstrained least squares problem. Here CG is applied to the normal equations $A^TAx=A^Tb$.}
\label{fig:cg}
\end{figure}
\begin{example}
Let us look at an example with a limited number of active constraints, where we have control over the maximal number of active constraints.
Consider the least-squares problem $\min\|Ax-b\|_2^2$ with $A\in\mathbb{R}^{1000\times 600}$ where 4\% of the entries are 1 and the rest are 0. Let half of the entries of the exact solution $x^*$ be 0 and the others be $\pm 1$. Finally, let $b=Ax^*$.
Let $m_{\max}$ be the maximal number of active constraints, then we add the following constraints to the problem:
\begin{equation}
-\frac{1}{2}|x_i^*|-0.01 \leq x_i \leq \frac{1}{2}|x_i^*|+0.01 \qquad i\in\{1,\ldots, m_{\max}\}
\end{equation}
The $0.01$ offset ensures that the lower and upper bounds are never equal. The experiment is performed for $m_{\max}\in\{0,1,2,4,8,16,32,64,128\}$. From the results in \cref{fig:maxConstr} we conjecture that the method has two phases: discovery of the active set and Krylov convergence. Notice that the discovery phase takes a number of iterations roughly equal to $m_{\max}$, the number of active constraints. The number of iterations for the Krylov convergence is more or less the same as for the unconstrained problem (where there is no discovery phase).
\begin{figure}
\includegraphics[width=\textwidth]{figures/limitWS.pdf}
\caption{This figure illustrates the convergence behavior for different numbers of active constraints.
The residual and objective behave similarly to the unbounded case ($m_{\max}=0$, pure Krylov convergence), with a delay that is roughly equal to $m_{\max}$, the number of active constraints in the problem.}
\label{fig:maxConstr}
\end{figure}
\end{example}
\subsection{Convergence theory}\label{sec:convergence}
\begin{algorithm}
\caption{Residual QPAS subspace (ResQPASS)}
\label{alg:resBasis}
\begin{algorithmic}[1]
\Require{$A \in \mathbb{R}^{m \times n},b \in \mathbb{R}^m$, $l,u \in \mathbb{R}^n$}
\State $r_0 = -A^Tb$
\State $V_1 =[r_0/\|r_0\|]$
\State $y_1 = 0$
\State $\mathcal{W}_1 = \emptyset$
\For{$k=1,2,\ldots,m$}
\State $y_k^*, \lambda_k, \mu_k, \mathcal{W}_k^* \gets$ Solve \cref{eqn:LSproj} using \textproc{qpas}, with initial guess $y_k$ and initial working set $\mathcal{W}_{k}$
\State $r_k = A^T\left(AV_ky_k^* - b\right)-\lambda_k+\mu_k$
\If{ $\|r_k\|_2 \le tol$}
\State $x = V_k y_k$ break;
\EndIf
\State $V_{k+1} \gets \begin{bmatrix} V_k & r_k/\|r_k\| \end{bmatrix}$
\State $y_{k+1} \gets \begin{bmatrix} (y_k^*)^T & 0 \end{bmatrix}^T$
\State $\mathcal{W}_{k+1} \gets \mathcal{W}_k^*$
\EndFor
\end{algorithmic}
\end{algorithm}
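To make the outer loop concrete, the following NumPy/SciPy sketch mirrors algorithm~\ref{alg:resBasis}. It substitutes SciPy's generic \texttt{trust-constr} solver for the warm-started QPAS inner solver, and recovers $-\lambda_k+\mu_k$ from the stationarity condition restricted to the (approximately) active bounds; the tolerances are illustrative and the sketch omits the warm starting and factorization updates of \cref{sec:implementation}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def resqpass_sketch(A, b, l, u, tol=1e-8, max_outer=50):
    n = A.shape[1]
    r = -A.T @ b                      # r_0 (x_0 = 0 is feasible)
    V = (r / np.linalg.norm(r))[:, None]
    x = np.zeros(n)
    for k in range(1, max_outer + 1):
        # projected QP: min 0.5||A V y - b||^2  s.t. l <= V y <= u
        obj = lambda y: 0.5 * np.sum((A @ (V @ y) - b) ** 2)
        jac = lambda y: V.T @ (A.T @ (A @ (V @ y) - b))
        sol = minimize(obj, np.zeros(k), jac=jac,
                       method='trust-constr',
                       constraints=LinearConstraint(V, l, u))
        x = V @ sol.x
        g = A.T @ (A @ x - b)
        # recover s = -lambda + mu on the active bounds from the
        # stationarity condition V^T (g + s) = 0
        act = np.where((x - l < 1e-8) | (u - x < 1e-8))[0]
        s = np.zeros(n)
        if act.size > 0:
            s[act] = np.linalg.lstsq(V[act, :].T, -(V.T @ g),
                                     rcond=None)[0]
        r = g + s                     # residual r_k
        if np.linalg.norm(r) < tol:
            break
        V = np.hstack([V, (r / np.linalg.norm(r))[:, None]])
    return x
\end{verbatim}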
Based on these observations we develop a convergence theory.
\begin{lemma}\label{thm:conjecture}
Let $\lambda_k, \mu_k \in \mathbb{R}^n$ be the solution for the Lagrange multipliers of the projected KKT conditions, \eqref{eqn:kktP}, for iteration $k$
and let $V_k \in \mathbb{R}^{n \times k}$ be the subspace generated by the residuals $r_k$, as in
\cref{def:resqpas}. Then there exists an
iteration $0< k_0 \le n$ such that for all iterations $k \ge k_0$ it holds that $-\lambda_k+\mu_k \in \mathcal{R}([V_{k_0}, A^TAV_{k_0-1}])$.
\end{lemma}
\begin{proof}
$V_k$ can maximally grow to $n$ independent vectors and ${\text{span}\{V_n\} = \mathbb{R}^n}$.
In that case, the $-\lambda + \mu \in \mathbb{R}^n$, solutions of \eqref{eqn:kkt}, are in $\text{span}(V_n)$.
The number of active constraints is determined by the number of non-zeros
in $-\lambda + \mu$, the solution of \eqref{eqn:kkt}, and it is usually much smaller
than $n$.
Because we solve \eqref{eqn:kktP} with a basis of dimension $k$, at most $k$ of the bounds in $l \le V_ky_k \le u$ can hold with equality, in the absence of degeneracies.
So the number of non-zeros in $-\lambda_k+\mu_k$, at iteration $k$, is limited to $k$.
We can construct the following subspace of $\mathbb{R}^n$
\begin{equation}\label{eqn:spanlagrange}
\text{span}\left\{-\lambda_1 + \mu_1,-\lambda_2 + \mu_2, \ldots, -\lambda_k + \mu_k \right\},
\end{equation}
where the first vector has at most one non-zero, the second at most two, and so on. The vectors are linearly independent. At some point, this space spans the exact active set of the problem
and then $-\lambda+\mu$, the solution of the full problem \eqref{eqn:kkt}, can be written as a linear combination of the vectors in this subspace.
Once these vectors span the space of the active set, this basis does not expand anymore and linear dependence appears.
By construction, the vectors in \eqref{eqn:spanlagrange} are equivalent to the following vectors
\begin{equation}
\begin{aligned}
\text{span}\left\{r_1-A^T(AV_1y_1-b),r_2-A^T(AV_2y_2-b), \ldots, r_k-A^T(AV_ky_k-b) \right\}.
\end{aligned}
\end{equation}
As soon as linear dependence appears in \eqref{eqn:spanlagrange}, the last vector, $-\lambda_k+\mu_k$, can be written as a linear combination of the previous vectors. This means
that there are coefficients $a_l \in \mathbb{R}$ such that
\begin{equation}
-\lambda_k + \mu_k = r_k-A^T(AV_ky_k-b) = \sum_{l=1}^{k-1} \left(r_l- A^T(AV_ly_l-b)\right) a_l \in \mathcal{R}[V_{k}, A^TAV_{k-1}],
\end{equation}
where we use that $-A^Tb = V_{k_0}\|A^Tb\| e_1$, since we start with $x_0 =0$.
At the first iteration $k$ where this linear dependence appears, we set $k_0:=k-1$.
\end{proof}
\Cref{thm:conjecture} means that, for iterations $k \ge k_0$, there are $\alpha^{(k)}\in \mathbb{R}^{k_0}$ and $\beta^{(k)} \in \mathbb{R}^{k_0-1}$ such that we can write the residual as
\begin{equation} \label{eq:assumption}
\begin{aligned}
r_{k} & = A^T(AV_k y_k -b) - \lambda_k +\mu_k \\
& = A^T A V_{k} y_k + V_{k_0}\alpha^{(k)} + A^TA V_{k_0-1}\beta^{(k)} \\
& =A^T A V_{k}(y_k + (\beta^{(k)}, 0)^T) + V_{k_0} \alpha^{(k)} \quad \forall k \ge k_0.
\end{aligned}
\end{equation}
Here we use again that $-A^Tb = V_{k_0}\|A^T b\|e_1$, since we start with $x_0 =0$. Since $\lambda_k$ and $\mu_k$
can still change with the iteration, the coefficients $\alpha^{(k)}$ and $\beta^{(k)}$ depend on $k$.
\begin{lemma}\label{lemma:polynomials}
If $y_k \in \mathbb{R}^k$ and $\lambda_k, \mu_k \in \mathbb{R}^n$ are the solution of \eqref{eqn:kktP} and the iteration
$k \ge k_0$ is such that \cref{thm:conjecture} holds,
then there are $\gamma_m^{(k)} \in \mathbb{R}^{k_0}$ with $m \in \{0, \ldots,k-k_0+1 \}$
such that the residual is
\begin{equation}
r_k = A^T(AV_k y_k -b) - \lambda_k +\mu_k = \sum_{m=0}^{k-k_0+1} (A^TA)^{m} V_{k_0} \gamma^{(k)}_{m} \quad \forall k \ge k_0.
\end{equation}
\end{lemma}
\begin{proof}
Since \cref{thm:conjecture} holds, the residual
${r_k = A^T(AV_k y_k -b) - \lambda_k +\mu_k}$ can be rewritten for iteration $k_0$, as
\begin{equation}
\begin{aligned}
r_{k_0} & = A^TAV_{k_0}y_{k_0} + V_{k_0}\alpha^{(k_0)} + A^TA V_{k_0-1}\beta^{(k_0)} \\
& = A^TA V_{k_0} \left(y_{k_0} + (\beta^{(k_0)}, 0)^T \right) + V_{k_0}\alpha^{(k_0)} = \sum_{m=0}^{1} (A^T A)^m V_{k_0}\gamma^{(k_0)}_m,
\end{aligned}
\end{equation}
where we can take $\gamma^{(k_0)}_1 = y_{k_0} + (\beta^{(k_0)}, 0)^T$ and $\gamma^{(k_0)}_0 = \alpha^{(k_0)}$. So the lemma holds for $k=k_0$.
The residual $r_{k_0}/\|r_{k_0}\|$ will be added to the basis, which becomes $V_{k_0+1} = [V_{k_0} \,\, r_{k_0}/\|r_{k_0}\|]$.
The solution of the next iteration, $k_0+1$, is {$V_{k_0+1} y_{k_0+1} = V_{k_0}(y_{k_0+1})_{1:k_0} + r_{k_0}/\|r_{k_0}\| (y_{k_0+1})_{k_0+1}$} and can be rewritten
as a linear combination of the vectors $A^TA V_{k_0}$ and $V_{k_0}$.
As a result, the residual for iteration $k_0+1$ is now
\begin{equation}
r_{(k_0+1)} =
(A^TA)^2 V_{k_0} \gamma^{(k_0+1)}_2 + (A^TA) V_{k_0}\gamma^{(k_0+1)}_1 + V_{k_0}\gamma^{(k_0+1)}_0.
\end{equation}
For each subsequent iteration $k > k_0$ this becomes
\begin{equation}\label{eq:power}
\begin{aligned}
r_k & = (A^TA)^{k-k_0+1} V_{k_0} \gamma^{(k)}_{k-k_0+1} + (A^TA)^{k-k_0} V_{k_0} \gamma^{(k)}_{k-k_0}
+ \ldots + V_{k_0} \gamma^{(k)}_0 \\
& = \sum_{m=0}^{k-k_0+1} (A^TA)^{m} V_{k_0} \gamma^{(k)}_{m}.
\end{aligned}
\end{equation}
\end{proof}
We now define the subspace $W_{k_0,l}$ as the space spanned by the $l$ basis vectors generated since the linear dependence appeared in the Lagrange multipliers
for iterations $k_0 \le k$, starting with the vector $v_{k_0}$:
\begin{equation}
W_{k_0,l}:=\text{span}\left\{v_{k_0},v_{k_0+1}, \ldots, v_{k}\right\} = \text{span}\left\{r_{k_0-1}, \ldots, r_{k_0 + l -2}\right\},
\end{equation}
where $k = k_0 + l-1$.
We introduce the operator $B_{k_0,l}$ on this subspace $W_{k_0,l}$: the action of $B_{k_0,l}$ on $W_{k_0,l}$, restricted to $W_{k_0,l}$,
corresponds to the action of $A^TA$ on $W_{k_0,l}$.
The operator $B_{k_0,l}$ is fully determined by its action on the basis vectors of $W_{k_0,l}$. Instead of the basis vectors, however,
we use the vectors $(A^TA)^m V_{k_0}\eta$ with $m \in \{1,\ldots, l\}$, for an arbitrary choice of $\eta \in \mathbb{R}^{k_0}$.
We then have the following equalities, which should hold for any choice of $\eta \in \mathbb{R}^{k_0}$:
\begin{equation}\label{eqn:operator_B}
\begin{aligned}
B_{k_0,l} V_{k_0} \eta & = (A^TA) V_{k_0}\eta, \\
B^2_{k_0,l} V_{k_0}\eta & = (A^TA)^2 V_{k_0}\eta, \\
& \vdots \\
B^l_{k_0,l} V_{k_0} \eta & = W_{k_0,l} W_{k_0,l}^T (A^TA)^l V_{k_0}\eta, \\
\end{aligned}
\end{equation}
where, in the last equality, we project again on the subspace $W_{k_0,l}$.
Since these equalities must hold for any $\eta$, we can replace them with matrix equalities.
This is similar to the Vorobyev moment problem, see \cite{liesenstrakos}.
\begin{lemma}
Let $B_{k_0,l} = W_{k_0,l} \,T_{ll}\,W_{k_0,l}^T$ be the operator defined by the equations \eqref{eqn:operator_B}
and let $\{r_0,r_1, \ldots, r_k \}$ be the series of residual vectors of system \eqref{eqn:kktP}.
Then there is a polynomial $P_l(t)$ of order $l$ with $P_l(0)=1$ such that the projection of $r_k$ on $W_{k_0,l}$ satisfies
\begin{equation}
W_{k_0,l}^T r_k = P_l(T_{ll})e_1=0.
\end{equation}
\end{lemma}
\begin{proof}
Because $r_k \perp V_k$, it also holds that $r_k \perp W_{k_0,l}$, since $W_{k_0,l} \subset V_k$. We can then write, for $k \ge k_0$,
\begin{equation}
\begin{aligned}
0 = W_{k_0,l}^T r_{k} & = W^T_{k_0,l} \sum_{m=0}^{l} (A^TA)^m V_{k_0} \gamma^{(k)}_m = \sum_{m=0}^{l} W^T_{k_0,l} (A^TA)^m V_{k_0} \gamma^{(k)}_m \\
& = \sum_{m=0}^{l} W^T_{k_0,l} B_{k_0,l}^m V_{k_0} \gamma^{(k)}_m = \sum_{m=0}^{l} W_{k_0,l}^TW_{k_0,l} T_{ll}^m W_{k_0,l}^T V_{k_0} \gamma^{(k)}_m \\
& = \sum_{m=0}^{l} T_{ll}^m (\gamma^{(k)}_m)_{k_0} e_1 = P_l(T_{ll})e_1,
\end{aligned}
\end{equation}
where we use the properties of \eqref{eqn:operator_B}.
\end{proof}
\begin{lemma}
Let $k > k_0$ such that \cref{thm:conjecture} holds. Let $W_{k_0,l} = \text{span}[v_{k_0},\ldots, v_k]$ then
\begin{equation}
W_{k_0,l}^T A^T A W_{k_0,l} = T_{ll}
\end{equation} is tridiagonal.
\end{lemma}
\begin{proof}
The subspace $W_{k_0,l}$ is spanned by the residuals $\{ r_{k_0-1}, \ldots, r_{k_0+l-2}\}$, which is equivalent to $\text{span}\{v_{k_0},\ldots, v_{k_0 +l-1}\}$. Based on \cref{lemma:polynomials}, each of these can be written
as
\begin{equation}
v_{k_0+t+1}\|r_{k_0+t}\|= r_{k_0+t} = \sum^{t+1}_{m=0} (A^T A)^m V_{k_0} \gamma^{(k_0+t)}_m,
\end{equation}
for $t \in \{0,\ldots, l-2\}$.
When we apply $A^T A$ to these residuals, we have
\begin{equation}
A^T A r_{k_0+t} = \sum^{t+1}_{m=0} (A^T A)^{m+1} V_{k_0} \gamma^{(k_0+t)}_m \in \text{span} \left[ r_{k_0-1}, \ldots,r_{k_0+t+1} \right].
\end{equation}
Since $\text{span}\left\{r_{k_0-1}, \ldots,r_{k_0+t} \right\} = \text{span}\left\{v_{k_0},\ldots,v_{k_0+t+1}\right\}$ we can write
\begin{equation}
A^TA W_{k_0,l} = W_{k_0,l+1} H_{l+1,l},
\end{equation}
where $H_{l+1,l} \in \mathbb{R}^{(l+1)\times l}$ has a Hessenberg structure, i.e.\ all its elements below the first sub-diagonal are zero.
Now, if we project back on $W_{k_0,l}$ we get
\begin{equation}
W_{k_0,l}^T A^T A W_{k_0,l} = H_{l,l}.
\end{equation}
Since $A^T A$ is symmetric, the left-hand side is symmetric, hence the matrix $H_{l,l}$ is also symmetric and thus tridiagonal. We denote this matrix by $T_{l,l}$.
\end{proof}
\begin{lemma}
\label{thm:ritzvalue}
The polynomial $P_l(t)$ has its zeros at the Ritz values $\theta_j^{(l)}$ of $T_{ll}$.
\end{lemma}
\begin{proof}
The proof follows section 5.3 of \cite{van2003iterative} or theorem 3.4.3 of \cite{liesenstrakos}. The eigenvectors $y_j^{(l)}$ of the tridiagonal matrix $T_{ll}$ span $\mathbb{R}^l$.
We can then write $e_1 = \sum_j \gamma_j y_j^{(l)}$.
Assume that $\gamma_j = (y_j^{(l)},e_1) =0$ for some $j$. Then
\begin{equation}
T_{l,l}y_j^{(l)} = \theta^{(l)}_j y_j^{(l)}.
\end{equation}
Since $(T_{l,l}y^{(l)}_j,e_1) = (y^{(l)}_j,T_{ll}e_1) = 0$, this implies that also $(y_j^{(l)},e_2)=0$.
Repeating this argument shows that $y_j^{(l)} \perp \{e_1,e_2,\ldots, e_l\}$, which is a contradiction.
Hence, the linear system
\begin{equation}
0 = P_l(T_{ll})e_1 = \sum_{j=1}^l \gamma_j P_l(T_{ll}) y_j^{(l)} = \sum_{j=1}^l \gamma_j P_l(\theta_j^{(l)}) y_j^{(l)}.
\end{equation}
Since all $\gamma_j$ are non-zero, this linear system determines the coefficients of $P_l(t)$ such that the Ritz values $\theta_j^{(l)}$ are the zeros of $P_l(t)$.
\end{proof}
\begin{remark}
Note that the objective of the minimization problem $f(V_ky_k^*)$ decreases monotonically. The objective of our initial guess $f(V_{k+1}[(y_k^*)^T,0]^T)$ is equal to the optimal objective of the previous iteration $f(V_ky_k^*)$. At worst, QPAS will not make a step towards an improved solution. This does not, however, imply that we remain stuck in this point: bounds can be removed from the working set and the basis is expanded.
\end{remark}
\subsection{Algorithm}
Until now we have used $y_k \in \mathbb{R}^k$ to denote the solution of the projected problem \eqref{eqn:LSproj}.
We will now introduce an inner iteration to solve for the optimal coefficients. We will use
$y_k^*$ to denote the optimal solution in the subspace $V_k$ and $y_{k}$ to denote a guess for this solution in the inner iteration,
which is not necessarily optimal.
In a similar way we will use $\mathcal{A}_k:=\mathcal{W}_k^*$ to denote the active set, i.e.\ the optimal working set.
In each outer iteration $k$, the projected optimization problem is solved with the QPAS algorithm \cite{nocedal1999numerical}.
The rationale behind the choice of QPAS is that it has warm-start capabilities that drastically improve the runtime of the algorithm.
This is discussed in detail in \cref{sec:warmStart}.
We follow the same strategy as in \cite{vanroose2021krylov}, where a simplex method solves the minimization of $\|Ax-b\|_1$ and $\|Ax-b\|_\infty$
by creating a sequence of small projected problems.
\Cref{alg:resBasis} describes the proposed method. The initial basis is the normalized initial residual.
We then project the solution onto the subspace and solve the constrained optimization problem with QPAS.
The initial guess is $y_1=0$ and the working set $\mathcal{W}_1$ is empty.
After we have solved the projected problem, we calculate the residual and expand the basis. Because of \cref{thm:ortho}, the new basis vector is orthogonal to the previous ones.
\subsection{Stopping criteria}
It is useful to have a criterion to decide when to stop the method and return the solution.
One possibility is to look at $\|r_k\|_2$ and stop when it is small. Intuitively, this makes sense because $r_k$ is a measure of the
distance to the solution that accounts for the bounds using the Lagrangian multipliers. Alternatively, but closely related, we can use
the loss of positive definiteness in $V^T_kA^TAV_k$. This occurs when a basis vector is added that is linearly dependent on the previous basis.
The detection of this loss is relatively straightforward, because a Cholesky factorisation is used and updated (see \cref{sec:chol}).
Positive definiteness is necessary for Cholesky factorisation, and thus our update will fail.
In experiments, both methods seem to work fine, as long as the threshold is not taken too small
(because of rounding errors), as can be seen in the residuals in \cref{fig:maxConstr}.
\section{Implementation}\label{sec:implementation}
\subsection{Warm start}\label{sec:warmStart}
By using the QPAS algorithm for the inner iterations, we can employ warm-starting.
Let $\mathcal{W}^*_k$ be the active set, i.e.\ the optimal working set, corresponding to the solution $y_k^*$ obtained from QPAS (see \cref{alg:qpas}).
In iteration $k+1$ we start QPAS with initial guess $\begin{bmatrix} (y_k^*)^T & 0 \end{bmatrix}^T$.
Because of this choice, the previous working set $\mathcal{W}_k$ is a valid initial working set for outer iteration $k+1$.
This approach yields a significant runtime improvement. In \cref{fig:warmStart} we compare the number of QPAS inner iterations for each outer iteration $k$.
It shows that once the active set is more or less discovered and superlinear Krylov convergence sets in (see \cref{sec:convergence}),
warm-starting brings a significant benefit.
A final advantage of warm starting is that when the number of QPAS iterations is limited (see also \cref{sec:limittingInner})
and a cold start is used, the maximal size of the active set is equal to this limit. Warm starting starts from the previous working set and can then change,
add or remove a number of constraints equal to this limit.
\begin{figure}
\includegraphics[width = \textwidth]{figures/warmVScold.pdf}
\caption{The figure shows the number of inner QPAS iterations required to reach optimality for a given subspace size.
When we use warm-starting, where the working set of the previous optimal subspace is used as the initial guess, the number of inner iterations is reduced.}
\label{fig:warmStart}
\end{figure}
\subsection{Factorisation update}\label{sec:factorisation}
\subsubsection{Solving the linear system in QPAS with QR}
In the QPAS algorithm (see \cref{alg:qpas}), a system of equations is solved in each iteration to obtain the Lagrange multipliers $\lambda_k$.
The linear system is
\begin{equation}
\begin{pmatrix}
G & A^T_{\mathcal{W}_k} \\
A_{\mathcal{W}_k} & 0
\end{pmatrix}
\begin{pmatrix}
x \\
\lambda
\end{pmatrix}
= \begin{pmatrix}
f \\
0
\end{pmatrix}
\end{equation}
Block elimination leads to a linear system with the matrix
\begin{equation}
A_{\mathcal{W}_k}G^{-1}A^T_{\mathcal{W}_k},
\end{equation}
which needs to be solved in each iteration.
In every iteration, the working set $\mathcal{W}_k$ is expanded by one index, reduced by one index or remains unchanged. This means that the QR factorization of
$A_{\mathcal{W}_k}G^{-1}A^T_{\mathcal{W}_k}$ can be cheaply expanded or reduced. There exist routines to efficiently add or remove rows and columns from a QR factorization
\cite{hammarling2008updating}.
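SciPy, for instance, exposes such updating routines; a minimal sketch of inserting and removing a column of a generic factorized matrix $M$ (the data is illustrative):
\begin{verbatim}
import numpy as np
from scipy.linalg import qr, qr_insert, qr_delete

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q, R = qr(M)

u = rng.standard_normal(5)
Q1, R1 = qr_insert(Q, R, u, 2, which='col')  # insert before col 2
Q2, R2 = qr_delete(Q1, R1, 2, which='col')   # remove col 2 again
\end{verbatim}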
\subsubsection{Cholesky}\label{sec:chol}
The remaining linear systems that we need to solve involve the Hermitian matrix $G = V_k^TA^TAV_k \in \mathbb{R}^{k\times k}$. Because of its positive definiteness, a natural choice for a factorization is the (lower) Cholesky decomposition. The lower Cholesky decomposition of a Hermitian, positive definite matrix $H$ is $LL^T$ with $L$ a lower triangular matrix. If $H$ is expanded as follows
\begin{equation}
\tilde{H} = \begin{bmatrix} H & c\\ c^T & d \end{bmatrix},
\end{equation}
with $c \in \mathbb{R}^k$ a column vector and $d \in \mathbb{R}$ a scalar, then the Cholesky factorisation can be updated as follows:
\begin{equation}
\tilde{L} = \begin{bmatrix} L & 0\\ L_{21} & L_{22} \end{bmatrix}
\end{equation}
with
\begin{subequations}
\begin{align}
L_{21} & = (L^{-1}c)^T \\
L_{22} & = \sqrt{d-L_{21}L_{21}^T}
\end{align}
\end{subequations}
\begin{algorithm}
\caption{Update of lower Cholesky factorization}
\label{alg:cholesky}
\begin{algorithmic}[1]
\Procedure{cholesky\_append}{$L$, $c$, $d$}
\State $U_{12} \gets L^{-1}c$ \Comment Forward substitution
\State $L \gets \begin{bmatrix}L & 0\\ U_{12}^T & \sqrt{d-U_{12}^TU_{12}} \end{bmatrix}$
\State\Return $L$
\EndProcedure
\end{algorithmic}
\end{algorithm}
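In NumPy/SciPy terms, the update reads as follows; a \texttt{nan} produced by the square root signals exactly the loss of positive definiteness that is used as a stopping criterion in \cref{alg:outer}.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_triangular

def cholesky_append(L, c, d):
    """Extend the lower Cholesky factor L of H to the factor
    of the bordered matrix [[H, c], [c.T, d]]."""
    l21 = solve_triangular(L, c, lower=True)  # forward substitution
    l22 = np.sqrt(d - l21 @ l21)              # nan if pos. def. lost
    k = L.shape[0]
    return np.block([[L, np.zeros((k, 1))],
                     [l21[None, :], np.array([[l22]])]])
\end{verbatim}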
\begin{remark}
The forward substitution in \cref{alg:line:forwardBackward} can be started before the Cholesky update has completed.
\end{remark}
\subsection{Limiting the inner iterations}\label{sec:limittingInner}
In the early iterations it is often better to expand the basis first and then find the optimal coefficients. Indeed, a small basis cannot fully represent the solution
and finding the optimal solution within this subspace is premature.
An alternative is to limit the number of inner iterations and calculate the residual when this maximum is reached. In principle, we only require optimality in the final subspace.
A drawback of limiting the number of inner iterations, however, is that the orthogonality between the residuals is lost.
As can be seen in \cref{fig:limitInner}, there is a sweet spot for the maximal number of inner iterations.
If the number of inner iterations is high, we find the optimal solution in each intermediate subspace and converge in the fewest number of outer iterations.
However, these iterations take longer because of the large number of inner iterations.
Especially around the halfway point, it appears not to be useful to solve the projected problem exactly; an approximate solution is sufficient up to a point.
If the number of inner iterations is limited harshly, additional basis vectors are needed in the outer iterations, leading to a larger subspace.
This then has the effect of taking longer to converge. The sweet spot in this example seems to be 50, where we almost get a 25\% speed-up.
The tradeoff in this case is a loss of orthogonality: \cref{thm:ortho} only holds if we solve each projected problem to optimality.
The fewer inner iterations are performed, the worse the loss of orthogonality, as can be seen in \cref{fig:lossOrtho}.
\begin{figure}
\includegraphics[width = \textwidth]{figures/limitInner.pdf}
\caption{Limiting the number of inner iterations has little effect on the convergence but a big effect on the time to solution.
On the left we show the number of inner iterations for various limits; on the right we show the effect on the convergence.
The time to solution is shown for each choice of the inner iteration limit.
\label{fig:limitInner}}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width = .8\linewidth]{figures/lossOrtho5.pdf}
\caption{5}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width = .8\linewidth]{figures/lossOrtho10.pdf}
\caption{10}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width = .8\linewidth]{figures/lossOrtho25.pdf}
\caption{25}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width = .8\linewidth]{figures/lossOrtho50.pdf}
\caption{50}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width = .8\linewidth]{figures/lossOrtho100.pdf}
\caption{100}
\end{subfigure}
\caption{This figure shows the sparsity pattern of $V_k^T V_k$ for various choices of the maximal number of inner iterations. When we limit the number of inner iterations, the residuals
are not necessarily orthogonal for some of the iterations. When the limit is large, e.g.\ 100, most of the vectors are orthogonal.}
\label{fig:lossOrtho}
\end{figure}
The final algorithm, with all improvements, is given in \cref{alg:outer,alg:qpas}.
\begin{algorithm}
\caption{Outer loop}
\label{alg:outer}
\begin{algorithmic}[1]
\Procedure{ResQPASS}{$A$, $b$, $l$, $u$}
\State $r_0 = -A^Tb$
\State $V_1 = \begin{bmatrix} r_0/\|r_0\| \end{bmatrix}$
\State $L =$ \Call{lower\_cholesky}{$V_1^TA^TAV_1$}
\State $\mathcal{W}_0 = \emptyset$
\State $y_0 = []$
\For{$i = 1,2,\ldots$}
\State $\left[y_i,\mathcal{W}_i,[\lambda_i;\mu_i]\right] \gets$ \\ \hspace{2cm}\Call{qpas}{$L, -b^TAV_i, [-V^T, V^T]^T, [l^T, u^T]^T, [y_{i-1}^T, 0]^T, \mathcal{W}_{i-1}$}
\State $r_i = A^T(AV_iy_i-b) - \lambda_i + \mu_i$
\State $v_{i+1} = r_i/\|r_i\|$
\State $V_{i+1} = \begin{bmatrix} V_i & v_{i+1}\end{bmatrix}$
\If{$\|r_i\|<\epsilon_1$}
\State\Return $V_iy_i$
\EndIf
\State $c \gets V_i^TA^TAv_{i+1}$
\State $d \gets v_{i+1}^TA^TAv_{i+1}$
\State $L\gets$ \Call{cholesky\_append}{$L,c,d$} \label{alg:outer:cholAppend} using algorithm \ref{alg:cholesky}
\If{$|L|<\epsilon_2$}\Comment{$V_i^TA^TAV_i$ no longer pos. def.}
\State\Return $V_iy_i$
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{QPAS with Cholesky factor and QR updates}
\label{alg:qpas}
\begin{algorithmic}[1]
\Procedure{qpas}{$L,c,A,b,x_0,\mathcal{W}_0$}
\State $G^{-1}c \gets$ Solve: $LL^Tx=c$\label{alg:line:forwardBackward} \Comment{Forward- and backward substitution}
\For{$k=0,1,2\ldots$}
\State {Compute $p_k,\lambda_k$ and helper variables:}
\If{$|\mathcal{W}_k|=0$}
\State $p_k \gets -(x_k + G^{-1}c)$
\State $\lambda, G^{-1}A^T_{\mathcal{W}_k}, q \gets \emptyset$
\Else
\If{$k=1$} \Comment{Warm-start}
\State $G^{-1}A^T_{\mathcal{W}_k} \gets$ Solve: $LL^Tx=A^T_{\mathcal{W}_k}$
\State $[Q,R] \gets \textsc{qr}\left(A_{\mathcal{W}_k} \left(G^{-1}A^T_{\mathcal{W}_k}\right) \right)$
\State $q \gets -A_{\mathcal{W}_k}\left( x_k + G^{-1}c \right)$
\EndIf
\State $\lambda_k \gets$ Solve: $Rx = Q^Tq$
\State $p_k \gets - \left( x_k + G^{-1}c + G^{-1}A^T_{\mathcal{W}_k}\lambda_k \right)$
\EndIf
\State {Compute $x_{k+1}$ and update the working set $\mathcal{W}_{k+1}$:}
\If{$p_k=0$} \Comment{Optimum reached or unnecessary bound}
\If{$\forall i \in \mathcal{W}_k\cap\mathcal{I}: (\lambda_k)_i \geq 0$}
\State \Return $[x_k,\mathcal{W}_k,\lambda_k]$
\Else
\State $j\gets \argmin_{j\in\mathcal{W}_k\cap\mathcal{I}}\lambda_j$
\State $\mathcal{W}_{k+1}\gets\mathcal{W}_k\setminus\{j\}$
\State $G^{-1}A^T_{\mathcal{W}_{k+1}} \gets$ remove column $j$ in $G^{-1}A^T_{\mathcal{W}_k}$
\State $[Q,R] \gets$ Update QR factorisation, \\ \hspace{3.5cm} with column, $j$ and row $j$ removed
\State $q\gets$ Remove $q_j$ from $q$
\State $x_{k+1}\gets x_k$
\EndIf
\Else \Comment{$\exists$ blocking constraint}
\State $\alpha_k \gets \min\left(1,\min_{i\not\in\mathcal{W}_k,\ a_i^Tp_k<0}\frac{b_i-a_i^Tx_k}{a_i^Tp_k}
\right)$
\State $x_{k+1} \gets x_k + \alpha_kp_k$
\If{$\exists j \in \mathcal{I}$ blocking constraint}
\State $\mathcal{W}_{k+1} \gets \mathcal{W}_k \cup \{j\}$
\State $G^{-1}A^T_{\mathcal{W}_{k+1}} \gets \begin{pmatrix} G^{-1}A^T_{\mathcal{W}_k} & x \end{pmatrix}$ with $x:LL^Tx=A^T_j$
\State $[Q,R]\gets$ Update QR factorisation, \\ \hspace{3.5cm} with column $A_{\mathcal{W}_k}x$ and row $A_j\left(G^{-1}A^T_{\mathcal{W}_{k+1}} \right)$ added
\Else
\State $\mathcal{W}_{k+1}\gets\mathcal{W}_k$
\EndIf
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\section{Discussion and conclusion}
In this paper we present the Residual QPAS Subspace method (ResQPASS). The method solves box-constrained least-squares problems with a sparse matrix, where the variables
can take values between a lower and an upper bound. We have proposed the method, analysed its convergence and provided an efficient
implementation that uses warm-starting and reuses and updates the factorisations.
The ResQPASS method works fast if the Lagrange multipliers are sparse, i.e.\ when only a few of the box constraints are active and most Lagrange multipliers are zero.
As soon as the active set is discovered, Krylov convergence sets in and we converge quickly to the solution. As in classical Krylov methods,
it is hard to benefit from the superlinear convergence after a restart.
Note that it is possible to further accelerate the Krylov convergence with the help of a preconditioner. We then use residuals that are the solution of $Mr_k = A^T(Ax_k-b)-\lambda_k + \mu_k$ for some non-singular matrix
$M$ that is cheap to invert. The residuals are then $M$-orthogonal.
However, there are many problems where many unknowns hit the bounds, for example, deblurring a satellite picture against a black background.
In that case the Lagrange multipliers are not sparse. However, the deviation of $x$ from the bounds is sparse, hence a dual ResQPASS algorithm might be applicable.
Initial experiments confirm this intuition and the dual algorithm exhibits similar superlinear convergence. The analysis of the dual algorithm is the subject of a future paper.
Other future work is a thorough analysis of the propagation of rounding errors and of the loss of orthogonality when the number of inner iterations is limited.
\section*{Acknowledgments}
We thank Jeffrey Cornelis for fruitful discussions during the initial phase of the research.
Bas Symoens acknowledges financial support from the Department of Mathematics, U. Antwerpen.
\bibliographystyle{siamplain}
|
{
"arxiv_id": "2302.13655",
"language": "en",
"timestamp": "2023-02-28T02:26:38",
"url": "https://arxiv.org/abs/2302.13655",
"yymm": "2302"
} |
\section{Introduction} \label{deimos_sec:introduction}
Immersive environments, such as virtual and augmented reality (VR/AR), offer people a platform for human-computer interaction that utilises a variety of human senses and a range of physical human interactions.
Compared to traditional desktop interaction, immersive environments offer users a more natural and \textit{embodied} experience of interaction \cite{dourishWhereActionFoundations2001}.
Affordances for interaction can be embedded directly within virtual objects, allowing people to use their bodies to physically act upon those objects in a manner that leverages proprioception \cite{mineMovingObjectsSpace1997}.
In the same way real-world objects can morph and change shape in response to physical actions, so should embodied representations of data in Immersive Analytics \cite{marriottImmersiveAnalytics2018}.
Interaction is crucial in data visualisation to handle complexity and allow changes to views \cite{munznerVisualizationAnalysisDesign2014}.
When an embodied visualisation is acted on by a user, it may undergo a transition in its visual state reflecting a change in encoding of data to representation. Animation is a very common technique to help users naturally keep track of such visual changes in statistical graphics \cite{heerAnimatedTransitionsStatistical2007,robertsonAnimatedVisualizationMultiple2002}.
Animation that preserves congruency between changes in data and its visual representation \cite{tverskyAnimationCanIt2002} has been demonstrated to confer benefits in myriad situations. It can aid decision-making in certain tasks \cite{gonzalezDoesAnimationUser1996}, increase viewer engagement in data-driven stories \cite{heerAnimatedTransitionsStatistical2007, aminiHookedDataVideos2018}, and promote literacy of unfamiliar and/or complex visualisation designs \cite{ruchikachornLearningVisualizationsAnalogy2015,wangNarvisAuthoringNarrative2019}. However, these past explorations of animation in visualisation do not consider deep integration of animation and user interaction \cite{zongAnimatedVegaLiteUnifying2022}.
Since embodied interaction relies on gestural congruency between the interaction and resulting visual changes, interaction and animation both clearly go hand in hand for embodied Immersive Analytics applications.
However, compared to the decades of research and development of desktop-based data visualisation packages for animation (e.g.\ \cite{kimGeminiGeneratingKeyframeOriented2021,zongAnimatedVegaLiteUnifying2022,geCanisHighLevel2020,geCASTAuthoringDataDriven2021,thompsonDataAnimatorAuthoring2021}) and interaction (e.g.\ \cite{satyanarayanVegaLiteGrammarInteractive2017,bostockDataDrivenDocuments2011,zongAnimatedVegaLiteUnifying2022}), equivalent tools for Immersive Analytics lag far behind. While some Immersive Analytics research has investigated the combination of animation and interaction \cite{yangTiltMapInteractive2020,leeDesignSpaceData2022}, no work has yet presented a unified language and grammar for the definition of such immersive interactive animations. Moreover, despite the numerous toolkits supporting the authoring of immersive visualisations (e.g.\ \cite{sicatDXRToolkitBuilding2019,cordeilIATKImmersiveAnalytics2019,reipschlagerPersonalAugmentedReality2021,butcherVRIAWebBasedFramework2021}), none allow for the rapid design and prototyping of embodied interactions: a glaring gap in the literature given the prevalence of embodiment in Immersive Analytics \cite{buschelInteractionImmersiveAnalytics2018}.
Therefore in this paper we introduce Deimos: a declarative grammar for authoring \textbf{d}ynamic \textbf{e}mbodied \textbf{i}mmersive \textbf{mo}rph\textbf{s} for immersive visualisations. We use the term morph to signify an embodied visualisation's ability to change shape when actions are performed on it by a user.
In contrast to traditional animated transitions, morphs are \textit{adaptive} and can be applied to any data visualisation in the environment that matches the partial visualisation specification of one of the morph's \textit{states}. \textit{Transitions} connect these states through animation that can be controlled by \textit{signals}: data streams which stem from \textit{embodied} user interaction. These are specified using the Deimos grammar and are written in JSON. The adaptivity of morphs allows them to be used in both analysis and presentation, depending on the degree of specificity of the morph.
We begin by detailing a set of design goals that allow morphs to leverage the strengths of immersive environments not present on desktops (Section \ref{deimos_sec:design-goals}). We then introduce the Deimos grammar itself, detailing its components, primitives, and specification (Section \ref{deimos_sec:grammar}). Next, we describe a prototype implementation of the Deimos grammar (Section \ref{deimos_sec:prototype}), developed in Unity as an extension to the DXR toolkit by Sicat et al.\ \cite{sicatDXRToolkitBuilding2019}. To demonstrate the expressivity of Deimos, we present an example gallery of morphs created in Deimos which highlights key characteristics of the grammar (Section \ref{deimos_sec:example-gallery}). We also conducted a user study in which six Immersive Analytics researchers used Deimos to create their own morphs. Through semi-structured interviews with these participants, we gauge the usability of Deimos (Section~\ref{deimos_sec:user-study}) and elicit discussion topics and future research directions for morphs (Section~\ref{deimos_sec:discussion}).
Our contributions include both engineering efforts and theoretical knowledge, and are summarised as follows:
\begin{enumerate}
\item A grammar for the declaration of dynamic, embodied, interactive animated morphs in immersive environments called Deimos, and an implementation of the grammar in Unity.
\item An example gallery of interactive morphs, and a user study and semi-structured interview with six Immersive Analytics researchers that validates the design, implementation, and usability of the Deimos grammar.
\item An open-source toolkit that enables rapid design and prototyping of embodied interactions for Immersive Analytics which can accelerate future research in this area.
\item A conceptualisation of how morphs can be defined as keyframe animations but be later applied as presets \& templates during analysis and/or presentation in VR/AR.
\item A shift towards animation that is designed around and driven by (embodied) interaction, as opposed to existing methods that are mostly driven by the data.
\end{enumerate}
\section{Conclusion} \label{deimos_sec:conclusion}
This paper presented Deimos, a grammar and toolkit for prototyping morphs in immersive environments. Morphs are a collection of animated transitions that occur between different defined states, which are triggered and modified by the use of signals. These morphs are dynamically applied to visualisations during runtime, and are capable of leveraging embodied interaction to enable interactive animated transitions. We view Deimos as an initial foray into what a grammar to create embodied animated transitions in immersive environments would look like. While our example gallery and user study demonstrated Deimos' ability to create a wide range of morphs, future work would seek to understand how these morphs are used by actual data analysts and/or audiences of immersive data stories in VR/AR. We also hope that this work fuels greater interest in the use of dynamically morphing embodied visualisations in Immersive Analytics.
\section{Related Work} \label{deimos_sec:related-work}
\subsection{Interactive Animated Transitions on 2D Screens}
When a visualisation changes between visual states, animation is commonly used to help viewers maintain awareness of how data marks have changed throughout the transition \cite{heerAnimatedTransitionsStatistical2007,robertsonAnimatedVisualizationMultiple2002}, thus minimising change blindness \cite{munznerVisualizationAnalysisDesign2014}.
Various grammars and toolkits have been developed to aid designers in creating animated 2D statistical graphics for use in data-driven storytelling, such as Gemini~\cite{kimGeminiGrammarRecommender2021} and Gemini{\textsuperscript{2}}~\cite{kimGeminiGeneratingKeyframeOriented2021}, Canis~\cite{geCanisHighLevel2020} and CAST~\cite{geCASTAuthoringDataDriven2021}, and DataAnimator~\cite{thompsonDataAnimatorAuthoring2021}. These all fundamentally use keyframe animation, which has been shown to be the preferred paradigm of animation designers \cite{thompsonUnderstandingDesignSpace2020}.
Earlier work by Tversky et al.\ \cite{tverskyAnimationCanIt2002} however could not find strong evidence of animated graphics being superior to static ones, especially as animations were often too complex or fast to be accurately perceived. They instead suggested that interactivity may be one way to capitalise on the strengths of animation by allowing users to directly control its playback (start, stop, rewind, etc.). Indeed, later research found that combining interactivity with animations can improve outcomes for certain data analysis tasks (e.g.\ \cite{robertsonEffectivenessAnimationTrend2008,abukhodairDoesInteractiveAnimation2013}).
More recent work by Zong and Pollock et al.\ \cite{zongAnimatedVegaLiteUnifying2022} formalised interactive animation in the form of Animated Vega-Lite, an extension to Vega-Lite~\cite{satyanarayanVegaLiteGrammarInteractive2017} which adds a time encoding channel and event streams to enable interactive animations for use in data analysis. Such interactive animations (e.g.\ \cite{robertsonEffectivenessAnimationTrend2008,abukhodairDoesInteractiveAnimation2013,zongAnimatedVegaLiteUnifying2022,roslingBestStatsYou2007}) oftentimes expose their animation controls via a time slider and toggleable start/stop button.
A good example of more direct interaction with conventional 2D animations is that of DimpVis by Kondo and Collins \cite{kondoDimpVisExploringTimevarying2014}. Through direct manipulation, users can touch a mark to select it, revealing a ``hint path'' that they can drag their finger along. This causes the visualisation to temporally navigate forwards or backwards using animation, with the selected mark following the hint path. The subsequent work on Glidgets by Kondo et al.\ \cite{kondoGlidgetsInteractiveGlyphs2014} followed a similar premise but for dynamic graphs.
Of course, our work is differentiated from that of previous works by its immersive nature. We introduce new concepts and ideas to accommodate the shift to immersive environments, as we later detail in Section~\ref{deimos_sec:design-goals}.
\subsection{Embodied Interaction and Metaphors for Immersive Animations} \label{deimos_ssc:embodied-interaction-and-metaphors}
Immersive Analytics is characterised by the use of interactive, engaging, and embodied analysis tools \cite{marriottImmersiveAnalytics2018}. As such, there is a desire to move away from WIMP-based controls in favour of more direct, embodied styles of interaction \cite{cordeilImAxesImmersiveAxes2017,buschelInteractionImmersiveAnalytics2018}.
In embodied interaction \cite{dourishWhereActionFoundations2001}, affordances are embedded within the artefact (in our case the data visualisation) itself, re-framing computational processes and operations as direct interactions of one's body with the physical world \cite{williamsInteractionParticipationConfiguring2005,fishkinEmbodiedUserInterfaces2000}. This approach, as Dourish \cite{dourishWhereActionFoundations2001} notes, moves the user interface into the background where it is no longer the centre of attention.
Embodied interaction is capable of leveraging metaphors \cite{lakoffMetaphorsWeLive2008}, which can make it easier to remember interaction techniques and help users develop their mental model of the target domain \cite{carrollChapterInterfaceMetaphors1988}. Such metaphors have been extensively used in embodied Immersive Analytics research as a result, typically involving mid-air input. ImAxes by Cordeil et al.\ \cite{cordeilImAxesImmersiveAxes2017} used several interaction metaphors, such as direct manipulation to compose visualisations based on the proximity and relative orientation of embodied axes (a similar metaphor was also employed using the MADE-Axis by Smiley et al.\ \cite{smileyMADEAxisModularActuated2021}), and a ``throw away'' metaphor to delete these visualisations. FIESTA by Lee et al.\ \cite{leeSharedSurfacesSpaces2021} used a similar throwing metaphor but for pinning visualisations onto surfaces in the environment. FiberClay by Hurter et al.\ \cite{hurterFiberClaySculptingThree2019} used a ``grab'' metaphor for translating, rotating, and scaling a 3D trajectory visualisation.
Embodied interaction has also been used to directly control immersive animated transitions. Tilt Map by Yang et al.\ \cite{yangTiltMapInteractive2020} is a visualisation that transforms between three states: a choropleth map, prism map, and barchart. As the visualisation is tilted using a VR controller, the visualisation is interpolated between the three states based on the tilt angle.
More interesting is the recent work by Lee et al.\ \cite{leeDesignSpaceData2022} which demonstrated the use of the visualisation's spatial context as part of the metaphor. They described techniques for transforming visualisations between 2D and 3D, such as ``extruding'' a 2D visualisation into 3D using a ``pinch and pull'' gesture. For the technique to be valid however, the 2D visualisation must also be placed against a physical 2D surface. Through this, the metaphor is not only of the visualisation being extruded, but also of it being taken from a surface and ``brought out into'' space. Both of these works \cite{yangTiltMapInteractive2020,leeDesignSpaceData2022} also demonstrate a high level of gestural congruency between the interaction and the visualisation that is manipulated, which is vital in embodied interaction \cite{johnson-glenbergEmbodiedScienceMixed2017,johnson-glenbergImmersiveVREducation2018}. For example, the aforementioned extrusion technique described by Lee et al.\ \cite{leeDesignSpaceData2022} causes the visualisation to expand at the same rate as the hand is being pulled, directly mapping the extent of the extrusion to the user's hand position.
While other works do use animations in prototype implementations (e.g.\ \cite{hayatpurDataHopSpatialData2020,flowimmersiveinc.DataStorytellingImmersive2022,cordeilIATKImmersiveAnalytics2019}), animation has largely been used to maintain awareness during transitions and has not been the focal point of the research (unlike that of Yang et al.\ \cite{yangTiltMapInteractive2020} and Lee et al.\ \cite{leeDesignSpaceData2022}). Therefore in this work we further explore the use of embodied interaction to control visualisation animations in immersive environments.
\subsection{Toolkits and Grammars for Immersive Analytics}
In recent years, many toolkits and frameworks have emerged to support research and development in Immersive Analytics. Some specialised toolkits have been developed which focus on specific application cases. MIRIA~\cite{buschelMIRIAMixedReality2021} allows user experiment data such as head and hand movements to be replayed in an AR environment for in-situ analytics.
RagRug~\cite{fleckRagRugToolkitSituated2022} is a situated analytics toolkit that updates immersive visualisations in either VR or AR through the use of a distributed data flow from the Internet of Things and NODE-Red.
Toolkits have also been developed to facilitate more generic visualisation authoring in immersive environments. While certainly not as mature as desktop-based packages such as ggplot2 \cite{wickhamLayeredGrammarGraphics2010} and D3 \cite{bostockDataDrivenDocuments2011}, they typically provide a strong foundation that can be, and has been, extended in subsequent works. These toolkits can largely be distinguished by how visualisations are created by the user. IATK~\cite{cordeilIATKImmersiveAnalytics2019} and u2vis~\cite{reipschlagerDesignARImmersive3DModeling2019} primarily expose their authoring tools through a GUI---typically through the Inspector window of the Unity game engine's editor. In contrast, DXR~\cite{sicatDXRToolkitBuilding2019} and VRIA~\cite{butcherVRIAWebBasedFramework2021} facilitate visualisation authoring using human-readable JSON files. A grammar defines the syntactical rules of this JSON file, which is then interpreted by the system to produce the visualisation. In the case of both DXR and VRIA, the grammar is based on Vega-Lite's grammar \cite{satyanarayanVegaLiteGrammarInteractive2017}. Declarative grammars such as these have proven to be popular in data visualisation (e.g.\ \cite{satyanarayanVegaLiteGrammarInteractive2017,zongAnimatedVegaLiteUnifying2022,kimGeminiGrammarRecommender2021,geCanisHighLevel2020}) as they separate how a visualisation is defined from how it is created by the system. These declarative grammars can also make it easier to author data visualisations, thus leading to more rapid prototyping of ideas.
A common limitation of Immersive Analytics toolkits, however, is their support for interactivity. While toolkits like IATK~\cite{cordeilIATKImmersiveAnalytics2019} and DXR~\cite{sicatDXRToolkitBuilding2019} provide built-in methods for interacting with the visualisation, such as brushing and range filtering, they do not expose user-friendly means to create new interactions, instead requiring the source code itself to be extended. In contrast, our work aims to devise a grammar that can enable \textit{interactive} animated transitions in immersive environments. As a result, our work contributes a grammar that supports both the authoring of immersive animated transitions and the design of new (embodied) interaction techniques.
\section{Deimos Design Goals} \label{deimos_sec:design-goals}
The shift from 2D to 3D is more than just a third spatial encoding.
Early in the development of Deimos, we identified several key differences between animated transitions in immersive and non-immersive environments that give rise to new research challenges.
These challenges were rephrased and synthesised into three design goals (DG) which influenced the creation of the Deimos grammar, allowing us to focus on the novel characteristics of immersive headsets and environments, in turn opening up further design opportunities. Section~\ref{deimos_sec:grammar} will explain the grammar itself and highlight how it addresses these design goals.
\subsection{DG1: Morphs should be adaptable and flexible} \label{deimos_ssc:design-goal-1}
Most animated transition grammars allow for rapid prototyping between the specification and the resulting animation. A low-viscosity authoring process is particularly important when creating interactive animations for data analysis~\cite{zongAnimatedVegaLiteUnifying2022}, allowing for fast and easy changes to the specification. This ability to rapidly prototype is facilitated by the constant access to keyboards for text input and pointing devices (i.e.\ mice) in desktop environments. In contrast, a challenge of immersive environments is that they often lack a convenient and comfortable form of text input required to write textual specifications, especially in VR or in highly mobile AR contexts. While a GUI can help facilitate this authoring process in VR/AR, designing a GUI is premature if there is no underlying grammar to support it, especially in such a novel environment.
To resolve this conflict, we take an approach inspired by Lee et al.'s recent work~\cite{leeDesignSpaceData2022}. Many animated transition grammars treat transitions as a bespoke set of changes applied to visualisations predefined by the animation designer. Instead, we treat animated transitions as discrete operations that analysts can use to apply changes to their visualisations during their analysis. For example, the analyst might apply an animated transition that adds another spatial encoding to their visualisation, or converts a 3D barchart into a faceted 2D barchart. This turns animated transitions into a catalogue of adaptive and flexible operations that can be applied to immersive visualisations by analysts depending on the situation and goals.
In this way, there exist two types of users of Deimos: immersive analytics system designers, who use the grammar to create a catalogue of animated transitions in a desktop environment (e.g.\ the Unity editor), and data analysts in VR/AR, who use said animated transitions in their workflows and either do not have access to or are unfamiliar with the grammar.
This necessitates a functional shift in grammar design, moving from highly tailored transitions with known data fields and encodings to generic transitions that operate on baseline idioms. As a result, any given transition specification can be reused across multiple visualisations, so long as they meet the baseline criteria specified by the author (e.g.\ be a barchart, have no \textit{z} encoding).
\subsection{DG2: Morphs should support embodied interaction} \label{deimos_ssc:design-goal-2}
Animated transition grammars (e.g.\ \cite{kimGeminiGrammarRecommender2021, geCanisHighLevel2020, thompsonDataAnimatorAuthoring2021}) have paid little attention to how transitions are triggered and controlled. In the cases where they do (e.g.\ \cite{zongAnimatedVegaLiteUnifying2022}), control is limited to WIMP-style interaction, with practitioners using similar input methods for their narrative visualisations (e.g.\ play button~\cite{roslingBestStatsYou2007}, linear slider/scroll~\cite{yeeVisualIntroductionMachine2015}).
In contrast, immersive environments rely on a completely different interaction paradigm which goes beyond the desktop and is both embodied (e.g.\ \cite{hurterFiberClaySculptingThree2019, cordeilIATKImmersiveAnalytics2019}) and spatial in nature (e.g.\ \cite{hubenschmidSTREAMExploringCombination2021, buschelInvestigatingUseSpatial2017}).
Novel language primitives are needed to support embodied interaction, as those of existing (especially desktop-based) grammars (i.e.\ streams in Animated Vega-Lite \cite{zongAnimatedVegaLiteUnifying2022}) do not adequately express relationships between entities. One such relationship is that between the user and the visualisation itself: which part of the user is performing the interaction (e.g.\ hand, head), and which part of the visualisation contains the affordance to be interacted with (e.g.\ mark, axis).
Spatial relationships and interaction also play a significant role in immersive environments \cite{buschelInvestigatingUseSpatial2017,langnerMARVISCombiningMobile2021,hubenschmidSTREAMExploringCombination2021}---which is not generally the case in non-immersive environments. For example, an immersive transition may be controlled based on the position of a handheld device relative to a table~\cite{buschelInvestigatingUseSpatial2017}. By supporting this, immersive transitions become spatially aware. There can also be a relationship between the visualisation and its immediate environment, allowing immersive transitions to become context-aware \cite{svanaesContextAwareTechnologyPhenomenological2001,deyConceptualFrameworkToolkit2001}. An example of this is the aforementioned ``extrusion'' technique by Lee et al.\ \cite{leeDesignSpaceData2022}, which requires the 2D visualisation to be on a surface to be usable.
By expanding the Deimos grammar to support this paradigm, we enable a richer design space of visualisation transitions not otherwise possible on desktop environments, as they allow users to ``reach through'' and interact with their data in a more embodied and engaging manner~\cite{dourishWhereActionFoundations2001}.
It should be noted, however, that the actual design of such embodied interactions is left up to the end-users of Deimos. We deliberately do not enforce best practices in the grammar, such as the use of easy-to-understand metaphors \cite{lakoffMetaphorsWeLive2008,carrollChapterInterfaceMetaphors1988} and proper gestural congruency \cite{johnson-glenbergEmbodiedScienceMixed2017,johnson-glenbergImmersiveVREducation2018}. Instead, we ensure Deimos is designed to allow said best practices to be followed---much in the same way that conventional programming languages do not enforce best practices.
\subsection{DG3: Morphs should still support conventional approaches}
While the two previous design goals are intentionally forward-thinking, we still want Deimos to be rooted in the same foundational elements as existing grammars. This is to both ensure that Deimos follows tried and true concepts and theories, and also to preserve a sense of familiarity for users of the grammar---especially for those new to immersive analytics. This includes the use of keyframe animation as the chief animation paradigm~\cite{thompsonDataAnimatorAuthoring2021}, the ability to specify timing and staging rules to customise the animation, and supporting WIMP-based interaction in hybrid immersive analytics setups or via immersive UX elements (e.g.\ \cite{microsoftMixedRealityUX2021}).
Moreover, while DG1 advocates for generalised transitions that can be applied to a wide range of visualisations, Deimos should still allow for highly customised transitions that affect predefined visualisations created by designers. This is to allow animated transitions in Deimos to still be useful in controlled situations such as immersive data-driven storytelling.
Therefore, our grammar should support both ends of two orthogonal spectrums: support both WIMP and embodied interaction to control and interact with animated transitions; and support animated transitions that are either highly generalised and can apply to any visualisation, or highly specific and apply only to a particular visualisation in a controlled context.
\section{The Deimos Grammar} \label{deimos_sec:grammar}
Deimos is a declarative grammar used to specify \textit{transitions} between \textit{states} (keyframes), as well as the \textit{signals} (interactions) used to control them. The grammar is largely based on the design goals listed in Section~\ref{deimos_sec:design-goals} and prior work by Lee et al.\ \cite{leeDesignSpaceData2022} on visualisation transformations.
The Deimos grammar was developed in conjunction with its toolkit implementation (Section~\ref{deimos_sec:prototype}) through an iterative process. At each iteration, a working version of the grammar was defined and the toolkit was updated to support it. We created new example morphs at each iteration to test the new features added to the grammar, and maintained prior examples to validate any adjustments to the grammar (similar to unit testing). Many of these examples can be seen in Section~\ref{deimos_sec:example-gallery}. We continued this process until we felt that the grammar sufficiently met our design goals.
The grammar's target audience is developers and designers of immersive analytics systems; the morphs they create are then used by analysts in VR/AR.
A Deimos specification can formally be described as a three-tuple (elements suffixed with ``?'' are optional):
\begin{center}
\textit{Morph := (states, signals?, transitions)}
\end{center}
These components constitute what we call a \textit{Morph}, the term signifying an embodied visualisation's ability to dynamically change shape and morph from one state to another via transitions upon matching certain conditions.
A morph can be modelled as a state machine (Figure~\ref{deimos_fig:deimos-state-machine-1}). A visualisation in the immersive environment only enters a morph's state machine when it matches one of its \textit{states}. The state node that was matched determines the possible \textit{transition} nodes that can be accessed. These transition nodes are where changes are actually made to the visualisation, and are only entered when specified criteria are met. These criteria take the form of \textit{signals}, which are streams of data typically generated by user interaction. Signals can also be used to control the behaviour of transitions themselves.
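To make this concrete, below is a minimal sketch of a morph specification, reconstructed from the example shown in Figure~\ref{deimos_fig:deimos-json}: a morph that colours the marks of an uncoloured visualisation red while the left hand performs a pinch gesture. Here and in later sketches, the \texttt{//} comments are explanatory annotations rather than part of the JSON, exact property values are illustrative, and we assume a Boolean-emitting signal can be named directly as a transition's trigger.

\begin{verbatim}
{
  "states": [
    { "name": "unhighlighted",
      "encoding": { "color": null } },      // must have no colour encoding
    { "name": "highlighted",
      "restrict": true,                     // cannot be an entry point
      "encoding": { "color": { "value": "red" } } }
  ],
  "signals": [
    { "name": "leftPinch",                  // left hand's pinch gesture state
      "source": "hand",
      "handedness": "left",
      "value": "select" }
  ],
  "transitions": [
    { "name": "highlighting",
      "states": ["unhighlighted", "highlighted"],
      "trigger": "leftPinch",               // assumed: signal used as trigger
      "bidirectional": true }
  ]
}
\end{verbatim}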
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Chapters/Deimos/images/deimos-state-machine.pdf}
\caption{Baseline state machine for Deimos morphs showing a single unidirectional transition. More states and transitions can be added to the state machine with their own signal criteria, with support for bidirectional transitions.}
\Description{A state machine for a baseline morph. A node labelled ``State 1'' is connected with a directed edge to a node labelled ``Transition'', and is labelled with ``Signal criteria met''. The latter node is connected with another directed edge to another node labelled ``State 2'', and is labelled with ``Transition completed''. Two entry nodes connect to the two State nodes with labels reading ``State matched''. The two State nodes are also connected to an exit node reading ``Vis spec manually changed''.}
\label{deimos_fig:deimos-state-machine-1}
\end{figure}
Morphs are an extension to any immersive visualisation authoring system already in place. That is, visualisations can still be manipulated in their usual way, but can have morphs applied to them should the relevant conditions be met. In this way, morphs serve purely to augment existing authoring techniques rather than supplanting them outright. When a visualisation is modified by the user in a manner external to the morph, it exits the morph state machine; it may then immediately re-enter following the same rules as before. A visualisation can have multiple morphs (and therefore state machines) active simultaneously, and multiple morphs can be applied to it concurrently so long as the properties and encodings they affect do not overlap. The same morph specification can also be active across multiple eligible visualisations. This ability for the state machine to adapt to different visualisation configurations through a set of rules and conditions is what helps it satisfy DG1.
Morph specifications are written and stored as standalone JSON files. The use of JSON is very common amongst related grammars and allows for the separation between grammar and implementation (i.e.\ portability). A JSON schema provides auto-completion and tooltips for writing morph specifications with supported text editors. Figure~\ref{deimos_fig:deimos-json} shows a basic example of a morph specification, and how it translates to the immersive environment and the state machine. The three main components of morphs are annotated with coloured boxes: states in yellow, signals in blue, and transitions in green. The same colour coding is used across all other figures.
The rest of this section will explain in general terms what these components do.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Chapters/Deimos/images/deimos-json.pdf}
\caption{A basic example of a morph that changes the mark colour of uncoloured visualisations to red whenever the left hand performs a pinch gesture. Colour-coded boxes denote the same component in different representations. Left: The morph specification. Top right: Still images of this morph being applied to a 2D scatterplot in an immersive environment. Bottom right: The state machine for this morph. The \texttt{``restrict'': true} (shown in the left-hand box labelled with ``2'') prevents the morph from starting at the \textit{highlighted} state, and \texttt{``bidirectional'': true} (shown at the end of the morph specification) allows the transition to function in both directions.}
\Description{An image composed of three parts. The first part shows a morph specification written in JSON. The second part shows a series of still images in which a hand does a pinch gesture, and causes the marks on a 2D scatterplot to turn red. The second part shows the state machine of this morph. The following three nodes are connected in a linear order: a ``unhighlighted'' state node, a ``highlighting'' transition node, and a ``highlighted'' state node.}
\label{deimos_fig:deimos-json}
\end{figure}
\subsection{States} \label{deimos_ssc:states}
A morph comprises at least two state specifications. A state can be defined by the following tuple:
\begin{center}
\textit{state := (name, restrict?, partial visualisation specification)}
\end{center}
The \textit{name} property is a unique case-sensitive string used to reference this state specification in a transition (Section~\ref{deimos_ssc:transitions}).
The \textit{restrict} property is a Boolean that, if set to \texttt{true}, will remove the entry point associated with the state's node on the state machine (see Figure~\ref{deimos_fig:deimos-json} for an example). This prevents a morph from starting at that state, making it only accessible via interconnecting transition(s). This is useful if it is illogical for a morph to start at that state, such as in unidirectional transitions.
\textit{Partial visualisation specification} is an arbitrary number of properties and components in the state object that all follow the same declarative notation as an actual visualisation. In other words, its syntax is the same as the visualisation package used in the system. For our implementation of Deimos, this is the DXR grammar~\cite{sicatDXRToolkitBuilding2019} which in turn is based on the Vega-Lite grammar~\cite{satyanarayanVegaLiteGrammarInteractive2017}. In the context of the DXR grammar, a partial specification can consist of any number of view-level properties (e.g.\ \textit{mark}, \textit{depth}) and/or encoding-level properties declared inside of an \textit{encoding} component (e.g.\ \textit{x}, \textit{color}).
The partial specification serves two purposes: (i) to determine if a visualisation matches (and therefore enters) this state; and (ii) to generate the keyframe used in the transition.
\subsubsection{State matching process} \label{deimos_sss:state-matching-process}
Any visualisation properties specified as part of the \textit{partial visualisation specification} in a state are used in the matching process against active visualisations. It is important to differentiate between the two types of specifications being used in this process: the visualisation specification created by the end-user, and the state specification (i.e.\ the \textit{partial visualisation specification}) that exists as a part of the state component in a morph.
Generally speaking, for a state specification to be matched against a visualisation specification, all properties defined in the former should also be defined in the latter, including their associated values. For example, if the state has \texttt{``color'': \string{``type'': ``quantitative''\string}}, then the visualisation must also have a \texttt{color} encoding with the same \texttt{type} for it to match.
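As an illustrative sketch with hypothetical field names, the following state specification matches any visualisation that uses sphere marks and a quantitative colour encoding; the visualisation's additional properties (e.g.\ its \textit{x} and \textit{y} encodings) do not affect the match:

\begin{verbatim}
// State specification (name plus partial visualisation specification)
{ "name": "coloured",
  "mark": "sphere",
  "encoding": { "color": { "type": "quantitative" } } }

// A visualisation specification that would match this state
{ "mark": "sphere",
  "data": { "url": "cars.json" },
  "encoding": {
    "x":     { "field": "Horsepower", "type": "quantitative" },
    "y":     { "field": "MPG",        "type": "quantitative" },
    "color": { "field": "Weight",     "type": "quantitative" } } }
\end{verbatim}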
As a rule of thumb, the fewer properties defined in the state specification, the more likely a visualisation can match successfully and have morphs applied to it. The opposite is also true, with more properties in the state specification making it less likely for any visualisation to match successfully. This effectively forms a spectrum. Morphs can be highly generic and can apply to many visualisations, allowing for adaptive morphs as per DG1. They can also only apply to specific datasets and field names, allowing for highly tailored morphs that are used in controlled environments as per DG3.
Deimos provides several primitives which affect the matching process that can be used in place of any JSON value in the state specification. They allow for more nuanced control over which visualisations can and cannot match, and are useful to prevent morphs from being accidentally applied to incompatible visualisations.
Note that this is not an exhaustive set of primitives. While they were adequate for the purposes of this work, the grammar can easily be extended to include more if need be. A sketch of a state specification that uses these primitives follows the list.
\begin{itemize}
\item \textbf{``*'' (wildcard)}: The property should be in the visualisation but its value can be anything.
\item \textbf{An inequality expression}: The property should be in the visualisation and its value should satisfy the inequality. Only applicable to numeric properties. e.g.\ \texttt{``value'': ``>= 100''}.
\item \texttt{null}: The property should not be included in the visualisation regardless of its value.
\end{itemize}
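As a sketch, the following state matches any 2D barchart regardless of which fields are encoded; the choice of constrained properties is illustrative:

\begin{verbatim}
{ "name": "flat-barchart",
  "mark": "cube",                                // bars use cube marks
  "encoding": {
    "x":    { "field": "*", "type": "nominal" }, // any field, so long as
                                                 //   it is nominal
    "y":    { "type": "quantitative" },
    "size": { "value": ">= 10" },                // numeric value constrained
                                                 //   by an inequality
    "z":    null } }                             // must have no z encoding
\end{verbatim}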
\subsubsection{Keyframe creation process} \label{deimos_sss:keyframe-creation-process}
When a visualisation matches a state and one of its connecting transitions is activated, keyframes are generated for both initial and final states. These keyframes are used for actual animation during the transition.
The initial keyframe is always the active visualisation's specification prior to the transition. No changes need to be made to it as it already matches the properties of the state itself.
The final keyframe is created by modifying the initial keyframe using the following set of rules: (i) visualisation properties that are defined in the initial state but not in the final state are removed; (ii) properties that are not defined in the initial state but are defined in the final state are added; and (iii) properties defined in both states are set to the final state's value.
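A small worked sketch of these rules, with illustrative encodings:

\begin{verbatim}
// Initial state's partial specification
{ "encoding": { "color": { "value": "red" },
                "size":  { "value": 5 } } }

// Final state's partial specification
{ "encoding": { "size":    { "value": 10 },
                "opacity": { "value": 0.5 } } }

// The final keyframe is the initial keyframe with:
//   (i)   "color" removed  (defined only in the initial state)
//   (ii)  "opacity" added  (defined only in the final state)
//   (iii) "size" set to 10 (defined in both; final state's value is used)
\end{verbatim}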
As with the state matching process (Section~\ref{deimos_sss:state-matching-process}), Deimos provides primitives that can be used in place of any JSON value to refine the keyframe creation process. These primitives functionally act as placeholders which are later substituted with real values calculated at runtime, akin to the notion of variables. This allows morphs to adapt to a wider range of situations without the need to hard-code field names, data types, etc.\ in morph specifications.
For the purposes of the state matching process, all of these primitives are treated as wildcards. Their values are resolved after the keyframes have been created but before the transition is applied. Once again, this is not an exhaustive list of primitives and can easily be extended if need be; a sketch using these primitives follows the list.
\begin{itemize}
\item \textbf{JSON path accessor}: The value residing at the JSON path will be substituted into the property's value. It is prefixed with either ``this.'' to access a property from this keyframe, or ``other.'' to access a property from the other keyframe which is being transitioned to/from. e.g.\ \texttt{``x'': ``this.\linebreak[0]encoding.\linebreak[0]y''}, \texttt{``field'': ``this.\linebreak[0]encoding.\linebreak[0]size.\linebreak[0]field''}.
\item \textbf{A signal name}: The value emitted by the specified signal (Section~\ref{deimos_ssc:signals}) will be substituted into the property's value.
\item \textbf{An expression}: The evaluated result of the expression will be substituted into the property's value. JSON path accessors and signal names can be used as variables. Only applicable to numeric properties. e.g.\ \texttt{``value'': ``other.\linebreak[0]encoding.\linebreak[0]size.\linebreak[0]value * 10''}.
\end{itemize}
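For instance, the final state sketched below swaps the two spatial encodings via JSON path accessors, fills a field name from a hypothetical signal named \texttt{menuSelection}, and scales a size value with an expression, all resolved at runtime:

\begin{verbatim}
{ "name": "swapped-and-extruded",
  "encoding": {
    "x": "this.encoding.y",                   // JSON path accessor
    "y": "this.encoding.x",
    "z": { "field": "menuSelection",          // value emitted by a
           "type": "quantitative" },          //   (hypothetical) signal
    "size": {
      "value": "other.encoding.size.value * 10" } } }  // expression
\end{verbatim}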
All keyframes are stored throughout the entire lifespan of a morph. When the morph exits the state machine---the result of the associated visualisation having its specification manually changed by the user (Figure~\ref{deimos_fig:deimos-state-machine-1})---all stored keyframes are deleted.
During keyframe creation, any added or changed properties take their values from the state's stored keyframe if one already exists.
The main purpose of this is to handle situations where a property is removed by a transition in one direction, but needs to be added back by a transition in the reverse direction. Without stored keyframes, the removed property's value would no longer be known and therefore could not be restored.
\subsection{Signals} \label{deimos_ssc:signals}
In Deimos, a signal is the resulting value from a stream of data captured from input events, drawing inspiration from Vega's signals~\cite{satyanarayanDeclarativeInteractionDesign2014} and event-driven functional reactive programming principles~\cite{wanEventDrivenFRP2002}. Signals can be used in Deimos to: (i) be substituted as values in keyframes (Section~\ref{deimos_sss:keyframe-creation-process}); (ii) act as conditional triggers that control when a transition actually begins (Section~\ref{deimos_ssc:transitions}); and (iii) act as a tweening variable to control the progression of a transition (Section~\ref{deimos_ssc:transitions}). No type safety is enforced in Deimos. A morph may contain zero or more signal specifications.
Deimos has two main types of signals: signals that stem from some given source, and signals that evaluate a mathematical expression.
\begin{center}
\textit{signal := sourceBasedSignal | expressionSignal}
\end{center}
\subsubsection{Source-based Signals} \label{deimos_sss:source-based-signals}
Source-based signals, as the name suggests, emit values from some input source. This is primarily from user interactions but could be extended to passively updating values from sensors, etc. We define two classes of source-based signals: deictic and non-deictic signals. Deictic signals express relationships between a source and target entity.
While they mainly serve to model direct manipulation which is commonly associated with embodied interaction (DG2), they can also model situations where there is no actual direct contact. Non-deictic signals capture everything else, although these are mainly input sources that do not require some target/context to make sense (e.g.\ mid-air hand gestures, input source states, sensor data). Their production rules are:
\begin{center}
\begin{tabular}{l}
\textit{sourceBasedSignal := nonDeicticSignal | deicticSignal} \\
\textit{nonDeicticSignal := (name, source, handedness?, value)} \\
\textit{deicticSignal := (name, source, handedness?, target, criteria?, value)}
\end{tabular}
\end{center}
Both signal classes share the same three attributes.
The \textit{name} property references this signal in either a state (Section~\ref{deimos_sss:keyframe-creation-process}), an expression signal (Section~\ref{deimos_sss:expression-signals}), or a transition (Section~\ref{deimos_ssc:transitions}).
The \textit{source} property denotes the type of source that values are to be retrieved from (e.g.\ \texttt{hand}, \texttt{head}, \texttt{vis}, \texttt{ui}).
Certain sources can also specify the source's \textit{handedness} to distinguish between \texttt{left}, \texttt{right}, or defaulting to \texttt{any}.
For non-deictic signals, the \textit{value} property denotes what type of value to derive from the source, which is then emitted by the signal. This can either be the state of the user interaction (e.g.\ whether the hand is performing a \texttt{select} gesture) or the geometric properties of the source as an object in the immersive environment (e.g.\ \texttt{position} of the user's head). As previously mentioned, these are useful when some value of the input source is to be retrieved without it needing to be in the context of some other target or object. Figure~\ref{deimos_fig:deimos-json} shows an example of a non-deictic signal: it does not matter what the hand is touching so long as it is performing the pinch gesture.
Deictic signals model relationships between entities, and are based on the interaction section of the design space by Lee et al.\ \cite{leeDesignSpaceData2022}.
The \textit{target} property denotes the type of object that the source is attempting to target. This can either be a part of the visualisation (e.g.\ \texttt{mark}, \texttt{axis}), a separate object in the environment (e.g.\ \texttt{surface}), or part of the user themselves (e.g.\ \texttt{head}).
For the first two, a \textit{criteria} property needs to be included to determine the logic used in selecting the target (e.g.\ \texttt{select}, \texttt{touch}, \texttt{nearest}). This logic is needed when there are multiple potential target objects that could be selected.
Lastly, the \textit{value} property can be used to derive three types of values. First, it can derive values from the \textit{target} much in the same way as non-deictic signals do. For example, a \texttt{hand} source might target the \texttt{mark} that it is \texttt{select}ing, and the \texttt{position} of that mark is used as the value. Second, it can derive values from a comparison between the source and target. For example, a \texttt{vis} source might target the \texttt{surface} that it is \texttt{touch}ing, and the point of \texttt{intersection} between the vis and surface is used as the value. Third, a \texttt{boolean} value simply emits true if a target has been selected successfully, and false if no targets are selected.
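The sketch below shows one signal of each class, based on the examples above; the signal names are hypothetical:

\begin{verbatim}
// Non-deictic: emits the position of the user's head
{ "name": "headPosition",
  "source": "head",
  "value": "position" }

// Deictic: emits the position of the mark selected by either hand
{ "name": "selectedMarkPosition",
  "source": "hand",
  "handedness": "any",
  "target": "mark",
  "criteria": "select",
  "value": "position" }
\end{verbatim}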
Deictic signals in particular address the challenges in DG2 as they express relationships between entities, allowing morphs to react to direct interactions by the user (e.g.\ user's hand selects a mark). Of course, whether or not these interactions are truly embodied (i.e.\ it follows best practices) is dependent on how the morph designer uses deictic signals in conjunction with the grammar's other components.
Deictic signals also allow morphs to be spatially-aware \cite{buschelInvestigatingUseSpatial2017,langnerMARVISCombiningMobile2021,hubenschmidSTREAMExploringCombination2021}, as they can emit values that are based on spatial relationships between objects which can then be used to control the morph's behaviour (e.g.\ distance between user's head and the visualisation, orientation of two standalone tracked objects).
Lastly, deictic signals allow morphs to become context-aware \cite{svanaesContextAwareTechnologyPhenomenological2001,deyConceptualFrameworkToolkit2001}, as they can emit values derived from a visualisation's relationship with its environment (e.g.\ is the visualisation touching a surface, is the visualisation close to a particular object). This may then act as conditionals to allow/disallow the morph from triggering (Section~\ref{deimos_ssc:transitions}).
While not as critical to this work, the ability to facilitate WIMP-style interaction using these signals also helps fulfil DG3.
\subsubsection{Expression Signals} \label{deimos_sss:expression-signals}
\begin{center}
\textit{expressionSignal := (name, expression)}
\end{center}
Expression signals allow for the arbitrary composition of signals using mathematical expressions. Their primary purpose is to modify and refine values emitted by source-based signals.
We choose to use expressions as they allow arbitrary calculations to be performed in a familiar manner, instead of designing a completely new and potentially confusing domain-specific language.
The \textit{name} property references this signal in the same way as source-based signals. The \textit{expression} property is a mathematical expression as a string. Basic mathematical operators can be used alongside select primitive functions (e.g.\ \texttt{normalise}, \texttt{distance}, \texttt{angle}).
As with all other primitives, the list of supported functions can easily be extended. Any type of signal can be used as a variable by referencing its name. As previously mentioned, no type safety is enforced, meaning the user has to be aware of the data types present in the expression.
Expression signals are similar to deictic signals in that they help further address the challenges in DG2, but are more powerful in comparison. For example, while deictic signals only allow for a single entity to be targeted, expression signals can combine two (or more) deictic signals together to calculate a new relationship between the targeted entities (e.g.\ distance between two marks selected by the user's hands).
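For example, assuming two hypothetical deictic signals \texttt{leftMarkPosition} and \texttt{rightMarkPosition} that emit the positions of the marks selected by each hand, an expression signal can compute the distance between them using the \texttt{distance} function:

\begin{verbatim}
{ "name": "selectedMarksDistance",
  "expression": "distance(leftMarkPosition, rightMarkPosition)" }
\end{verbatim}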
\subsection{Transitions} \label{deimos_ssc:transitions}
A morph comprises at least one transition specification. Transitions functionally connect two state specifications together in the state machine (Figure~\ref{deimos_fig:deimos-state-machine-1}). A transition can be defined by the following seven-tuple:
\begin{center}
\textit{transition := (name, states, trigger?, control?, bidirectional?, disablegrab?, priority?)}
\end{center}
The \textit{name} property serves to identify this transition especially when multiple transitions are involved.
The \textit{states} property is an array of two strings, corresponding to the names of the initial and final states in the transition respectively. Referencing states via their name in this manner helps with encapsulation, keeping all state related syntax separated from the transitions.
The \textit{trigger} property is a Boolean expression that activates the transition when it evaluates to true, but only when the visualisation matches the initial state in the \textit{states} property. The expression follows similar rules as expression signals (Section~\ref{deimos_sss:expression-signals}) but must return a Boolean value. Triggers are mainly used to let the user control when the transition is actually applied, usually as the result of some input action or condition caused by the user. Not setting a trigger causes the transition to be applied immediately when the visualisation enters the initial state.
The \textit{control} component is optionally used to further customise the behaviour of the transition. It is formally described by the following five-tuple:
\begin{center}
\textit{control := (timing?, easing?, interrupted?, completed?, staging?)}
\end{center}
The \textit{timing} property controls the duration of the transition. If a number is used, the transition will interpolate between the two state keyframes over the given duration in seconds. Alternatively, the name of a signal can be used, in which case the signal acts as the tweening variable \textit{t}. This allows the duration and direction of the interpolation to be controlled by the signal (and subsequently the user). In this situation, the transition will only begin when the signal emits a value between 0 and 1, in addition to any other conditions being met. This defaults to 0 if not specified, which results in jump cuts.
The \textit{easing} property applies an easing function to the transition, defaulting to a linear function if none is specified. Easing functions are commonly used in animations and help make animations look more natural. Functions that slow down the animation at the start and end can also make it easier to keep track of visual changes by making movement more predictable \cite{dragicevicTemporalDistortionAnimated2011}.
The \textit{interrupted} property determines what happens when the \textit{trigger} returns false whilst the transition is in progress. \texttt{initial} and \texttt{final} will cause the visualisation to immediately jump to the specified state. \texttt{ignore} will instead allow the transition to keep progressing until it naturally terminates. The \texttt{ignore} condition is particularly useful in cases where the \textit{trigger} may inadvertently return false mid-transition but the transition should still continue, acting as a sort of fail-safe. This defaults to \texttt{final}.
Similarly, the \textit{completed} property determines what happens when the transition naturally terminates, either remaining at the \texttt{final} state or resetting back to the \texttt{initial} state instantaneously. Using the \texttt{initial} condition may be useful if the transition should not cause any long-term changes to the visualisation, particularly if the animation alone is sufficient to serve its purpose~\cite{leeDesignSpaceData2022}. This also defaults to \texttt{final}.
The \textit{staging} property allows for specific visualisation properties to be staged. Name-value pairs can be specified where the name is the property to be staged, and the value is an array of two numbers between 0 and 1 that correspond to start and end percentages. The property will only be animated when the transition period is within the given range. Any property not specified will not be staged. Staging is a common feature of animated transition grammars~\cite{heerAnimatedTransitionsStatistical2007} and ours is no different. Note that the grammar does not support staggering.
The \textit{bidirectional} property of the transition, if set to true (default false), allows the transition to start and end in the reverse direction. All transition settings remain the same, except that the \textit{trigger}, if specified, needs to return false in order for the reverse transition to activate. This serves mainly as a convenience that avoids writing two transition specifications whenever a single bidirectional transition is desired; two separate specifications are still necessary, however, if distinct settings are wanted for each direction.
The \textit{disablegrab} property, if set to true (default false), will automatically disable the standard VR/AR grab action performed on the visualisation when the transition starts. This helps prevent visualisations from being inadvertently moved by the user when a transition's \textit{trigger} uses a similar grab gesture.
Lastly, the \textit{priority} property can be used to handle edge cases where multiple transitions activate on the same frame due to similar \textit{trigger} conditions, but conflict in the visualisation properties they modify. In this situation, the transition with the highest priority value activates first, and all other conflicting transitions are blocked. If priorities are equal, the order in which they activate is random. The priority property defaults to 0.
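Putting these properties together, the following sketch shows a transition with a customised control block. The state and signal names are hypothetical, and the easing function name is an assumption:

\begin{verbatim}
{ "name": "stacking",
  "states": ["side-by-side", "stacked"],  // initial and final state names
  "trigger": "touchingSurface",           // hypothetical Boolean signal
  "bidirectional": true,
  "disablegrab": true,
  "priority": 1,
  "control": {
    "timing": 1.5,                        // interpolate over 1.5 seconds
    "easing": "easeInOutCubic",           // assumed easing function name
    "interrupted": "ignore",
    "completed": "final",
    "staging": { "y":     [0, 0.5],       // y animates in the first half
                 "color": [0.5, 1] } } }  // colour in the second half
\end{verbatim}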
\subsection{Satisfaction of Design Goals}
We now reiterate how our grammar satisfies the design goals listed in Section~\ref{deimos_sec:design-goals}.
DG1 is satisfied through the use of partial visualisation specifications (Section~\ref{deimos_sss:state-matching-process}) and the keyframe creation process (Section~\ref{deimos_sss:keyframe-creation-process}). As the Deimos grammar is defined solely through JSON text, a library of generic morphs can be created in a development environment that has access to ergonomic text input (i.e.\ keyboards). When deployed in a production environment, the end-user in the immersive environment then has access to these (embodied) interactive morphs without needing to write the specifications themselves---a process which is notoriously difficult in VR and/or mobile AR environments. We provide a direct example of one such generic morph in Section~\ref{deimos_ssc:example-gallery-generic-specific}. Establishing this JSON-based grammar also sets the foundation for designing a GUI that is intended for use in VR/AR, much in the same way that CAST \cite{geCASTAuthoringDataDriven2021} is the GUI implementation of the Canis grammar \cite{geCanisHighLevel2020}. Through this, a morph author can rapidly prototype entirely in VR/AR.
For DG2, certain components such as deictic (Section~\ref{deimos_sss:source-based-signals}) and expression signals (Section~\ref{deimos_sss:expression-signals}) directly support embodied interaction, as these signals listen to user input and/or changes in the entities in the environment and thus the relationships between them. As previously stated in Section~\ref{deimos_ssc:design-goal-2}, the grammar intentionally does not enforce any best practices, including embodied interaction and animated transition principles. However, adherence to these guidelines is not isolated to any one component of a morph but instead across the entire specification. For example, even if direct manipulation is emulated through a deictic signal between the user's hands and the visualisation's marks, there would be little to no gestural congruency if the morph instead changed the visualisation's geometric size. Therefore, the ability of the grammar to express embodied interactions is dependent on the morph designer. We describe how a morph can use embodied interaction in a practical example in Section~\ref{deimos_ssc:example-gallery-embodied}. We also describe how morphs can be rapidly iterated on in order to test new (embodied) interaction ideas in additional examples in Section~\ref{deimos_ssc:example-gallery-prototyping}.
For DG3, certain source signals (Section~\ref{deimos_sss:source-based-signals}) allow for WIMP UI elements to be used to control morphs. This of course stands at odds with the embodied interactions of DG2, but our goal with Deimos is to support both ends of this theoretical spectrum. Section~\ref{deimos_sec:example-gallery} as a whole contains multiple examples of these more conventional types of morphs.
\section{Deimos Implementation and Toolkit} \label{deimos_sec:prototype}
We created a prototype implementation of the Deimos grammar using the Unity game engine in order to demonstrate its concepts and use. Deimos is open source, with its source code and documentation available on a public GitHub repository\footnote{\url{https://github.com/benjaminchlee/Deimos}}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\linewidth]{Chapters/Deimos/images/deimos-pipeline-2.pdf}
\caption{High-level overview of the Deimos pipeline and how it interacts with our updated version of DXR \cite{sicatDXRToolkitBuilding2019}. Red bubbles represent stages that wait for event(s) to fire before execution continues, also indicated by the preceding hatched arrow.}
\Description{A linear flowchart of the Deimos pipeline. The nodes in linear order are Load morph specifications, DXR vis updated (in red), Find matching states, Create observables from signals, Trigger returns true (in red), Create keyframes, Start transition, Transition finished (in red), Stop transition, and Dispose observables. The last node is connected back to the Find matching states node.}
\label{deimos_fig:deimos-pipeline}
\end{figure}
\subsection{Data Visualisations} \label{deimos_ssc:prototype-data-visualisations}
As Deimos is primarily an animated transition grammar, we need data visualisations to apply transitions to.
We decided to use DXR by Sicat et al.\ \cite{sicatDXRToolkitBuilding2019} as the basis of our work. It is a toolkit for the Unity game engine designed for rapid prototyping of immersive data visualisations. The original DXR implementation provided support for an assortment of visualisation types, including scatterplots, barcharts, radial barcharts, and streamlines. These visualisations are specified in JSON files using an extended version of the Vega-Lite grammar \cite{satyanarayanVegaLiteGrammarInteractive2017} that adds support for the $z$ and \textit{depth} encodings. We use DXR instead of other toolkits like IATK \cite{cordeilIATKImmersiveAnalytics2019} as we found it easier to extend for our purposes: it already supports the Vega-Lite declarative grammar, which is very popular in the visualisation community, and it uses an individual GameObject for each mark, simplifying mesh generation and management.
This comes at the cost of rendering performance, however, especially when thousands of marks are displayed on the screen. To this end, we improved how DXR instantiates and updates its marks and axes by introducing object pooling, which is particularly important since marks and axes may be modified multiple times over the course of a morph.
We also added several new visualisation types: choropleth and prism maps, stacked and side-by-side barcharts, and faceted charts (Section~\ref{deimos_sec:example-gallery}). However, as the original DXR implementation does not support data transformations in the way Vega-Lite does, neither does Deimos. This also means that animated transitions involving a time dimension (e.g.\ time-varying scatterplots, barchart races) are not supported in Deimos.
\subsection{Code Structure and Pipeline}
Figure~\ref{deimos_fig:deimos-overview} provides an overview of Deimos' structure and how it interacts with our updated version of DXR.
Morph specifications are contained in JSON files that are read by Deimos at initialisation. They can also be refreshed during runtime if the specifications are edited. Deimos interacts with DXR in two main ways. Deimos receives events from DXR whenever a visualisation has been updated, which includes the visualisation specification as an argument. Deimos also sends start and stop function calls to DXR which executes the animated transitions.
Figure~\ref{deimos_fig:deimos-pipeline} provides a high-level overview of the Deimos pipeline in relation to DXR. While it is presented as a linear set of stages, the pipeline can reset or be exited in certain conditions.
First, all morph specifications are read and loaded into Deimos.
Next, whenever a DXR visualisation updates, Deimos is notified via event with the visualisation's specification.
This specification is used to check against all state specifications in the loaded morphs using the rules in Section~\ref{deimos_sss:state-matching-process}.
For any state that has matched, observable streams are created for each signal that is part of the state's transitions, including trigger signals. Observables are created using the UniRx package \cite{kawaiUniRxReactiveExtensions2022} and are composed together where necessary.
When a transition's trigger signal returns true (or if no trigger was specified in the first place), initial and final keyframes are created using the rules in Section~\ref{deimos_sss:keyframe-creation-process}.
These two keyframes, along with other transition parameters such as tweening and staging variables, are sent to the relevant DXR visualisation to start the transition.
When the transition has finished, Deimos stops the transition on the DXR visualisation. This step also updates the visualisation specification to reflect the new changes made by the transition.
Deimos then disposes of all observables related to the transition.
This process then starts anew again, with Deimos finding matching states to see if this newly updated visualisation is eligible for any morphs once more.
While Deimos is designed such that it exists separately from the visualisation framework used, they are still intrinsically linked to each other. Deimos is dependent on the visualisation framework to implement the actual animation and transition. It is also dependent on the grammar and syntax of the visualisations themselves. Therefore, translating Deimos to other visualisation toolkits requires adaptation to support the new declarative grammar, and the toolkit itself needs to support animation between keyframes via interpolation. While it is technically possible to create a middleware to translate visualisation specifications and thus increase modularity, we did not explore this option in this work.
\subsection{XR Interactions}
We use the Mixed Reality Toolkit (MRTK) \cite{microsoftMixedRealityToolkitUnity2022} to enable XR interactions in Deimos. As a result, Deimos can be deployed on a range of platforms including Windows Mixed Reality, Oculus Quest, and HoloLens. However, due to the aforementioned performance limitations when working with large amounts of data, it is recommended to only use Deimos in tethered VR or remote rendering AR setups. Both controller and articulated hand tracking are supported in Deimos in the form of source-based signals (Section~\ref{deimos_sss:source-based-signals}). While Deimos does not support eye gaze or voice input, these could be added in future work.
\section{Example Gallery} \label{deimos_sec:example-gallery}
We present several examples of morphs created with the Deimos grammar. We categorise and describe the examples in three ways, with the first two aligning with the design goals in Section \ref{deimos_sec:design-goals}.
First, we highlight how morphs can be designed to adapt to different visualisation configurations using generic states (DG1), but also allow for bespoke morphs by using specific states in controlled contexts (DG3).
Second, we demonstrate how morphs can be controlled using both embodied (DG2) and non-embodied (DG3) interaction methods.
And third, we provide two scenarios in which Deimos can facilitate the prototyping of different interaction methods.
All examples and their specifications are included in the Deimos GitHub repository; as such, we do not reproduce or detail each example's full specification here. The project files also contain additional example morphs not described in this paper.
\subsection{Generic vs specific morph examples} \label{deimos_ssc:example-gallery-generic-specific}
In DG1 and DG3, we described a spectrum in which morphs can vary between generic, adapting to a range of visualisation configurations, and specific, allowing them to be used in controlled settings.
On the generic end, we present the \textit{3D Barchart Partitioning and Stacking} morph (shown in Figure \ref{deimos_fig:teaser}). It takes a 3D barchart and either partitions it into a 2D faceted barchart, or stacks it into a 2D stacked barchart whenever it touches a surface in the immersive environment. During the transition, it also aligns the visualisation to be parallel against the surface that it had touched. This is an example of a morph involving three states and two transitions in a branch-like structure. The triggers are set up so that the applied transition is based on the angle of contact between the barchart and surface: orthogonal for the faceted barchart, and parallel for the stacked barchart.
Its states are defined such that they only check that the encodings' types are correct (i.e.\ nominal \textit{x} and/or \textit{z}, quantitative \textit{y}) and that cube marks are used. Through this, so long as a visualisation is a 3D barchart, it can undergo this morph, greatly expanding the range of scenarios it can be used in. JSON path accessors are also used to substitute in the proper field names during runtime (i.e.\ \textit{facetwrap}, \textit{yoffset}).
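As a sketch, the two states behind the partitioning transition might look as follows; the exact shape of the \textit{facetwrap} encoding is an assumption for illustration. Note how only encoding types, not field names, are checked, and how a JSON path accessor recovers the facet field from the other keyframe:

\begin{verbatim}
// Initial state: matches any 3D barchart
{ "name": "3d-barchart",
  "mark": "cube",
  "encoding": {
    "x": { "type": "nominal" },
    "z": { "type": "nominal" },
    "y": { "type": "quantitative" } } }

// Final state: a 2D faceted barchart whose facet field is whichever
// field was previously on the z axis, via a JSON path accessor
{ "name": "faceted-barchart",
  "mark": "cube",
  "encoding": {
    "x": { "type": "nominal" },
    "y": { "type": "quantitative" },
    "z": null,
    "facetwrap": { "field": "other.encoding.z.field" } } }
\end{verbatim}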
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Chapters/Deimos/images/deimos-examples-slider.pdf}
\caption{Still images of the \textit{Geographic Scatterplot with Slider} morph, using Unity GameObjects as a slider to control the transition.}
\Description{A set of still images showing a hand moving a slider from left to right, which causes the points on a 2D scatterplot to move, and then expand into a choropleth map.}
\label{deimos_fig:deimos-examples-slider}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{Chapters/Deimos/images/deimos-examples-embodied.pdf}
\caption{Examples of embodied and non-embodied morphs. Left: Still images of the \textit{Barchart Unstacking} morph, using a ``pinch and pull'' gesture to unstack a 2D barchart into 3D. Right: The result of the \textit{Menu-based Extrusion} morph showing the radial menu and toggle button.}
\Description{A two-part image. The first part shows a 2D barchart on a surface. A hand grabs onto this barchart and pulls away from it, causing the 2D barchart to extrude out into 3D, and level itself out into a 3D side-by-side barchart. The second part shows a 3D scatterplot. Next to it is a radial menu with the options ``Cylinders'', ``Displacement'', and ``Horsepower'', with Displacement being selected. Below it is a toggle button that is enabled labelled ``Extrude Visualisation''. The 3D scatterplot has a z dimension with the field Displacement.}
\label{deimos_fig:deimos-examples-embodied}
\end{figure*}
On the other end of the spectrum, the \textit{Geographic Scatterplot with Slider} morph (shown in Figure \ref{deimos_fig:deimos-examples-slider}) demonstrates the use of two predefined states: a scatterplot and a choropleth map. Both of these are explicitly defined using exact encodings and field names (e.g.\ ``Population'', ``LifeExpectancy''). Because of this, only a visualisation with these exact encodings and fields can undergo this morph. A transition connects the two states together, which is controlled using a linear slider represented by a Unity GameObject. A signal accesses the \textit{x} position of this GameObject and uses it as the timing property of the transition.
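The key part of this morph is that the transition's \textit{timing} property names a signal rather than a fixed duration; \texttt{sliderValue} is a hypothetical signal derived from the slider GameObject's \textit{x} position and assumed to be normalised to $[0, 1]$:

\begin{verbatim}
{ "name": "scatterplot-to-choropleth",
  "states": ["scatterplot", "choropleth"],
  "control": { "timing": "sliderValue" } }  // signal as tweening variable
\end{verbatim}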
A morph like this is useful for controlled settings like data-driven storytelling, as the visualisation(s) are all predefined by the author.
\subsection{Embodied vs non-embodied morph examples} \label{deimos_ssc:example-gallery-embodied}
In DG2 and DG3, we described a spectrum in which morphs vary based on the use of embodied vs non-embodied (or WIMP-based) interactions.
On the embodied end, the \textit{Barchart Unstacking} morph uses a ``pinch and pull'' metaphor as the gesture to unstack the bars of a 2D barchart into a side-by-side 3D barchart (shown in Figure \ref{deimos_fig:deimos-examples-embodied} left). To strengthen the metaphor of bars being extruded out into 3D, a condition is added whereby the 2D barchart needs to be positioned against a surface for the morph to be allowed---introducing a contextual requirement to the morph.
To initiate the transition, the user also needs to perform a pinch gesture on the visualisation itself, which is represented by a deictic signal. Other signals calculate the distance between the user's hand and the surface the visualisation is resting against.
The transition uses this distance as its timing property, causing the bars to extrude at the same rate at which the user's hand pulls away from the surface. In this fashion, the user perceives themselves as actually stretching the barchart into 3D, resulting in a high level of gestural congruency \cite{johnson-glenbergEmbodiedScienceMixed2017,johnson-glenbergImmersiveVREducation2018}. Of course, this is but one way in which embodied interaction can be achieved, but the approach can be replicated across other morphs to achieve similar styles of extrusion effects.
On the non-embodied end, the \textit{Menu-based Extrusion} morph adds a third spatial dimension to a 2D scatterplot, but does so via an MRTK toggle button \cite{microsoftMixedRealityToolkitUnity2022} (shown in Figure \ref{deimos_fig:deimos-examples-embodied} right). A signal retrieves the state of this toggle button and triggers the transition when the button is toggled on. This example also demonstrates the use of a radial menu to select the field name of the newly added dimension: a signal retrieves the selected value and substitutes it into the 3D scatterplot state at keyframe creation.
In comparison to the \textit{Barchart Unstacking} morph, this example presents a much simpler and more familiar type of animated transition, albeit in an immersive environment.
\subsection{Prototyping morph interactions} \label{deimos_ssc:example-gallery-prototyping}
Lastly, we demonstrate how the grammar allows for signals to be easily swapped and modified to allow rapid prototyping of different interactions. In terms of the Cognitive Dimensions of Notations \cite{greenCognitiveDimensionsNotations1989}, this corresponds to a low level of \textit{viscosity}.
In this example, we recreate \textit{Tilt Map} by Yang et al.\ \cite{yangTiltMapInteractive2020} using Deimos (shown in Figure \ref{deimos_fig:deimos-examples-tiltmap} top). Three states are defined: choropleth map, prism map, and barchart. Two transitions are defined to connect these states linearly. A signal is then created to retrieve the tilt angle of the visualisation relative to the horizontal plane. This tilt angle is then subdivided into two ranges at specific angles using expression signals, which are then used as tweening variables for the two transitions (choropleth to prism, prism to barchart). With this, a visualisation will morph between the different states depending on its tilt.
However, we can easily change the manner in which the morph is controlled just by replacing the tilt angle with another source. A straightforward example is to replace it with the height of the visualisation relative to the floor (shown in Figure \ref{deimos_fig:deimos-examples-tiltmap} bottom). The two expression signals which subdivide the range also need to be updated to the new value ranges. In doing so, we turn \textit{Tilt Map} into a so-called ``Height Map'' just by changing a few lines in the morph specification.
\begin{figure*}[htb]
\includegraphics[width=\linewidth]{Chapters/Deimos/images/deimos-examples-tiltmap.pdf}
\caption{Top: Still images of the \textit{Tilt Map} morph based on Yang et al.\ \cite{yangTiltMapInteractive2020}. Red and green angle brackets are shown to provide rotation cues. Bottom: A modified version of \textit{Tilt Map} in which tilt is replaced with height, showing the changes to the signal specification and the resulting morph as still images. Red and green bars are shown to provide height cues.}
\Description{A two part image. The first part shows a series of still images of a 2D choropleth map being tilted. As it rotates, it morphs into a prism map. As it rotates even further, it morphs into a 2D barchart. The second part shows a modified version of the first part. Two signal specifications written in JSON are shown which shows the modifications made. Next to it are still images of a choropleth map turning into a prism map, then into a 2D barchart as it is lowered towards the ground.}
\label{deimos_fig:deimos-examples-tiltmap}
\end{figure*}
Inspired by work on small multiple layouts in immersive environments \cite{liuDesignEvaluationInteractive2020}, we created the \textit{Proxemic-based Facet Curvature} morph (shown in Figure \ref{deimos_fig:deimos-examples-faceted} top). It morphs a faceted chart between three different layouts: flat, curved, and spherical. These three layouts correspond to three states in the morph, with two transitions connecting them linearly. A signal retrieves the distance between the user's head and the visualisation, with two more signals subdividing the distance into tweening variables (similar to the \textit{Tilt Map} morph). As the user approaches the faceted chart, it begins to wrap around them into a curved layout, and when they are close enough it morphs into an egocentric spherical layout. This effectively makes the chart spatially aware of the user's position.
To demonstrate another method of controlling this morph, we can replace the distance signal with the value of a rotary dial (shown in Figure \ref{deimos_fig:deimos-examples-faceted} bottom). As the user rotates the dial, the small multiples curve inwards or outwards. To do so, we create a separate cylinder GameObject in Unity which functions as this dial. We then replace the distance signal with a signal which retrieves the rotation value of the cylinder, and we also update the ranges of the two subdividing signals. This functionally turns the proxemics-based interaction into one involving the manipulation of an external object. The object is currently only virtual, but the concept can be applied to physical objects using either tangible input or motion tracking.
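The change amounts to swapping one source-based signal for another, roughly as follows (property names assumed; comments expository only):

\begin{verbatim}
// Before: the user's distance to the chart drives the curvature
{ "name": "controlValue", "source": "head",
  "target": "vis", "value": "distance" }

// After: the rotation of a separate dial GameObject drives it;
// the two subdividing expression signals keep their names and
// only their value ranges change
{ "name": "controlValue", "source": "gameobject",
  "gameobject": "Dial", "value": "rotation.y" }
\end{verbatim}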
\begin{figure*}[htb]
\centering
\includegraphics[width=0.85\linewidth]{Chapters/Deimos/images/deimos-examples-facet.pdf}
\caption{Top: Still images of the \textit{Proxemic-based Facet Curvature} morph, which curves around the user based on the distance between them and the chart. Bottom: A modified version which replaces distance with the rotation of a separate dial object. The changes to the signal specification are shown alongside still images of the resulting morph.}
\Description{A two part image. The first part shows a series of still images of a faceted chart turning from a flat layout to a curved layout, then into a spherical layout, as a VR avatar approaches it. The second part shows a modified version of the first part. Two signal specifications written in JSON are shown which shows the modifications made. Next to it are still images of the same faceted chart going from flat to curved to spherical, but this time it is controlled using a virtual dial object.}
\label{deimos_fig:deimos-examples-faceted}
\end{figure*}
\section{Expert Evaluation} \label{deimos_sec:user-study}
We evaluated Deimos in order to:
(i) determine the ease of use and expressiveness of the grammar;
(ii) get impressions on the concepts introduced in the grammar; and
(iii) generate discussion topics and research directions on the use of animated transitions in immersive environments.
\subsection{Study Design}
We use an approach similar to Zong and Pollock et al.\ \cite{zongAnimatedVegaLiteUnifying2022} by recruiting three developers of immersive analytics grammars and toolkits: Peter Butcher of VRIA \cite{butcherVRIAWebBasedFramework2021}, Philipp Fleck of RagRug \cite{fleckRagRugToolkitSituated2022}, and Ronell Sicat of DXR \cite{sicatDXRToolkitBuilding2019}.
To diversify our participant pool, we also recruited three researchers from adjacent areas: Zeinab Ghaemi, who works on immersive geovisualisation \cite{ghaemiProxemicMapsImmersive2022}; Tica Lin, who works on embedded sports visualisation \cite{linQuestOmniocularsEmbedded2022}; and Jorge Wagner, who developed the VirtualDesk exploration metaphor \cite{wagnerfilhoVirtualDeskComfortableEfficient2018}.
We hoped to learn how Deimos could be positioned within each researcher's respective works.
To minimise learning requirements, we only invited researchers who have experience working with Unity.
The user study was conducted remotely in three parts, repeated for each participant.
First, we conducted a 30-minute introductory session where we explained the goals of the study, demonstrated the examples in Section \ref{deimos_sec:example-gallery}, and went through high-level concepts of the grammar.
Second, we tasked participants to use Deimos unsupervised for at least 2.5 hours. They were given walkthroughs and documentation to learn the grammar, and were encouraged to create their own morphs, with some suggestions given to them. This documentation can be found in the Deimos GitHub repository.
Third, we held a one-hour semi-structured interview based on the aforementioned evaluation goals.
We asked participants to show us their created morphs, whether they found the overall process easy or difficult, and what parts of the grammar they liked or disliked. For the three participants with toolkit development experience, we also asked how they would retroactively implement animated transitions in their respective toolkits, and if there would be any significant differences compared to Deimos and why. For the other three participants without toolkit development experience, we instead asked how Deimos could be used to support any part of their own research---if at all. However, we allowed the interview to diverge and continue organically, drilling down on any interesting comments participants may have made along the way.
Throughout the study period, we modified the documentation based on participant feedback. While we made bug fixes to Deimos where necessary, we did not add or change any features. Each participant was offered an AU\$150 gift card as compensation for their time.
The interviews were recorded and transcribed. The first author independently performed thematic analysis \cite{braunUsingThematicAnalysis2006} on all six transcriptions, with two other authors doing the same on three transcriptions each. These three authors then discussed and synthesised the main themes together, which form the structure of this section and the following Discussion section.
\subsection{Usability feedback}
We compile participant feedback based on a selection of the most relevant Cognitive Dimensions of Notations \cite{greenCognitiveDimensionsNotations1989}. Rather than using the dimensions as heuristics---a common approach in related works (e.g.\ \cite{satyanarayanDeclarativeInteractionDesign2014, satyanarayanCriticalReflectionsVisualization2019})---we use them from a usability perspective to evaluate the Deimos grammar. We also provide a self-evaluation for certain dimensions where relevant.
\textbf{Error proneness (likelihood of making errors).}
All participants spent the required 2.5 hours using the toolkit; however, four of the six spent 7--8 hours using it. The initial reasoning given by most participants was that they enjoyed their time with Deimos and learning how it worked. On further inspection, however, it was clear that this was in part due to the steep learning curve of the grammar, with Fleck commenting ``I don't feel that three hours are enough.'' We identified several potential causes of this, largely due to the grammar's \textit{error proneness}.
First, many participants (Fleck, Ghaemi, Lin, and Wagner) were unfamiliar with the DXR grammar, with even Sicat not having used DXR for three years. As a result, two grammars needed to be learnt, naturally increasing learning time. As the Deimos grammar is intrinsically linked to its visualisation grammar (Section \ref{deimos_ssc:prototype-data-visualisations}), it is apparent that the choice of visualisation package brings not only technical but also notational difficulties.
Second, our documentation assumed full knowledge of Unity and its functions which not all participants had.
Third, the error messages provided by the Deimos prototype were not useful for participants. While the JSON schema validates whether the morph specification is syntactically correct before it is parsed, no check exists for semantic correctness (e.g.\ making sure \textit{name} properties are unique). This has since been corrected in the prototype.
Some participants suggested ways of easing the learning curve. Sicat suggested video tutorials to better explain the grammar, whereas Butcher suggested providing the DXR documentation as pre-reading before the study is even conducted. Interestingly, no participant suggested changes to the grammar itself beyond simple name changes (the terms \textit{signals} and \textit{restrict}). Whether this is due to participants not having had enough time to be exposed to Deimos' advanced features is unclear.
\textbf{Closeness of mapping (closeness to problem domain).}
The lack of grammar changes suggested by participants could be at least partially explained by its \textit{closeness of mapping}. All participants, when asked, had little to no issues understanding how the grammar models the state machine (Figure \ref{deimos_fig:deimos-state-machine-1}). The only participant who raised potential challenges was Fleck, citing the differences between declarative and imperative languages. As Unity primarily uses imperative programming, the shift to a declarative style in Deimos could confuse certain users, particularly when constructing an interaction using signals. We do not believe this to be a major issue however, especially if the immersive visualisations also use a declarative language (e.g.\ DXR \cite{sicatDXRToolkitBuilding2019}, VRIA \cite{butcherVRIAWebBasedFramework2021}).
\textbf{Viscosity (resistance to change).}
After following the walkthroughs, all participants used the same strategy of combining parts of existing examples together to create new morphs to facilitate their learning. For example, Wagner combined the states and transitions of the \textit{Tilt Map} example and the signals of the \textit{Proxemic-based Facet Curvature} example to create a rudimentary ``Proxemic Map''. There are only a few examples of participants extending existing examples with completely new components: Sicat remapped the proxemic interaction of the \textit{Proxemic-based Facet Curvature} example to a virtual rotary dial (the same as in Section \ref{deimos_ssc:example-gallery-prototyping}), and Butcher created a stacked barchart to side-by-side barchart morph triggered whenever the mouse is clicked. These all demonstrate a low level of \textit{viscosity} within the grammar, as participants were generally able to achieve their goals without issue (minus the aforementioned issues regarding error proneness). The same concept was also described in Section \ref{deimos_ssc:example-gallery-prototyping}.
However, poor error messages introduced viscosity for a few participants. For instance, Lin had tried to create a reduced version of the \textit{3D Barchart Partitioning} example by removing all surface related signals, but the toolkit did not warn her to remove the references to these signals in the states, resulting in errors. This need to keep track of changes in multiple parts of the specification contributes to higher viscosity.
\textbf{Visibility (ability to view components easily).}
Several participants (Fleck, Sicat, and Ghaemi) noted issues relating to the \textit{visibility} of signals in the grammar, primarily due to the large number of possible keywords involved. It was not obvious what options and/or combinations of signals were available without resorting to the documentation, although the JSON schema aided this process. The same participants acknowledged, however, that this reliance on documentation is fairly normal for toolkits, especially with only a few hours of experience.
From a technical perspective, the Deimos prototype improves visibility by exposing the names of any active morphs and/or transitions on each visualisation, and provides a toggle to print the emitted values of signals to the console for debugging purposes. Further debug messages can also be enabled which show the visualisation specifications of generated keyframes in JSON format. While these features were not explained in the documentation, they were highly useful during the development of Deimos and the creation of our example gallery.
\section{Discussion} \label{deimos_sec:discussion}
This section continues from Section~\ref{deimos_sec:user-study} by summarising the main themes and discussion topics of the semi-structured interviews with our expert participants. We also include several adjacent topics to round out the discussion of immersive morphs---especially in the context of other animated transition grammars.
\textbf{Adaptive morphs.}
While some participants liked the concept of adaptive morphs, others found that it got in the way of their authoring process.
Butcher saw value in adaptive morphs, saying ``I could see why that would be useful, especially if you had a large array of different charts... having it modular just makes sense.''
Wagner thought that ``the premise works well'', but clarified that he would prefer to have ``a [morph] specification for each type of graph'' instead of one hyper-generic morph that applies to all visualisation idioms.
Ghaemi was caught off-guard by this function when her new morph was unintentionally applied to other visualisations (a result of overly generic states), but was able to resolve this by modifying the states to make them more specific.
Fleck and Sicat faced a similar issue, but instead suggested the ability to use an ID to directly target a specific visualisation, skipping the state matching process altogether. This was particularly relevant to Fleck, as in \textit{RagRug}~\cite{fleckRagRugToolkitSituated2022} ``the user does not create a visualisation [themselves], but the system creates the existing visualisations.''
Overall, participants were able to grasp the concept of adaptive morphs, but it is apparent that their experiences come from the perspective of the morph author. A quantitative evaluation involving data analysis utilising pre-made morphs for practical tasks would be needed to fully evaluate the concept.
\textbf{The purpose of morphs.}
All participants found the examples exciting and interesting, but some had thoughts on their actual purpose. Ghaemi said that morphs are mainly useful when they add or change the data shown, rather than simply remapping encodings (e.g.\ \textit{Stacked Barchart Extrusion} example).
Lin similarly said that she would only use morphs when working with large amounts of data, such as combining proxemics with Shneiderman's mantra~\cite{shneidermanEyesHaveIt1996}, or when working with multiple views, but ``if it's only one smaller data set, and one chart, I probably wouldn't use it to morph between different columns.''
Butcher said that while our example morphs were ``neat and novel'', their animations did not strictly reveal new information in the way that a time-varying scatterplot does.
Therefore, future work should investigate specific use cases for morphs and how morphs may potentially vary between them.
\textbf{Embodied interaction and discoverability.}
The reception to the use of embodied interactions in Deimos (DG2) was positive, but two participants raised discussion topics around their long-term effects. Many of our example morphs use interaction metaphors for embodied interaction (e.g.\ collide with surface, pinch and pull). Sicat expressed concern over the use of these metaphors, saying ``...maybe in my application, pinning to the wall means or does something, and then someone else develops a morph where stick to the wall does something else... that might confuse people... there's no universal rule that says, pinning to the wall should do this.'' When asked if Deimos could play a role in shaping these metaphors, Sicat responded ``I would keep it open for now and just let [researchers] explore'', noting that the field is still not mature yet. He then suggested the use of tooltips to guide users in discovering morphs, especially when conflicting metaphors are used, but stated this is of low priority.

In a similar vein, Lin suggested two ways of improving embodied morphs and their discoverability, especially as she had difficulties performing the rotation required for the \textit{3D Barchart Partitioning and Stacking} example. The first was to have the system predict what action the user is about to do, and display the morphs associated with that action in a ``gestural menu'' that the user can select to trigger the morph. The second was to show a preview of the morph while performing the interaction. When asked about the importance of these features, she said that they ``probably [do not] affect the current grammar, because it's more like an assistant towards the completion of certain interactions'', and that they are more like external scripts loaded after the core grammar. Overall, while there are broader implications of the use of embodied interaction in immersive analytics, we see the power in Deimos being used to explore this design space in the long term, rather than immediately prescribing such metaphors in this work.
\textbf{GUIs and morph templates.}
Fleck, Sicat, and Ghaemi brought up ideas on how GUIs could be incorporated into Deimos. Fleck suggested the use of data flows in Node-RED to author morph specifications in JSON, similar to how visualisation specifications are created in \textit{RagRug}~\cite{fleckRagRugToolkitSituated2022}. Sicat recalled his own experiences developing DXR's GUI~\cite{sicatDXRToolkitBuilding2019}, noting that a GUI can be useful for non-experts and even end-users to create their own morphs. In a similar vein, Ghaemi said that a GUI would have greatly assisted her learning process with Deimos, citing her lack of experience in both DXR and toolkits in general. However, both Sicat and Ghaemi clarified that the GUI should only cover basic functions, and that advanced features should only be accessible in JSON format.
Sicat went on to suggest that the GUI could expose templates for different parts of the grammar that allows users to mix and match and create new morphs, which would be exposed through dropdowns and menus. He compared this idea to how he used the grammar himself, saying ``I went through your examples, copied the morphs and then pasted it into my morphs and then just modified them a bit. So it's kind of [the] same idea, right? Just a different interface. So for non-experts [it] would be super easy.''
Lin suggested something similar, except from an interaction perspective, especially as in our included examples ``the interaction you perform is very standardised.'' In other words, a set of template interaction techniques could be provided to accelerate the morph authoring process. This feedback opens many future design possibilities for how a GUI for toolkits like Deimos might look, especially if it can allow end-users in VR or AR to create and/or modify their own morphs to suit their own needs without needing to write JSON.
\textbf{Inspiration drawn from the toolkit.}
All participants drew interesting comparisons between Deimos and their respective works. Wagner, Ghaemi, and Lin all showed great interest in morphs that transition between 2D and 3D. Wagner, speaking in the context of his work on VirtualDesk~\cite{wagnerfilhoVirtualDeskComfortableEfficient2018}, said ``it would be very interesting to be able to just snap [3D visualisations] to the desk, and then they project to 2D, which is something that many experts are very comfortable with, but then I could show to them that they can extract [the visualisation] from the desk or from the wall, and try to grab it and look around...'' For Ghaemi, whose field is immersive geovisualisation~\cite{ghaemiProxemicMapsImmersive2022}, it was to have the morph directly tied to adding layers to a virtual map: ``[when the] 3D chart collides with the map, the bars could be scattered through the buildings, so I can see the charts on top of the building.'' Lin raised ideas in the context of embedded sports visualisation~\cite{linQuestOmniocularsEmbedded2022}, whereby ``you [can] drag the 2D charts onto a specific player, or maybe drag it onto the court, like the flat ground floor, and then it just suddenly morphs into this heatmap.'' In this sense, rather than a visualisation just morphing between 2D and 3D, it could also morph between being embedded and non-embedded~\cite{willettEmbeddedDataRepresentations2017}.
We then asked whether they could see themselves using Deimos to aid in their research.
Wagner thought that as a proof of concept it would work ``super well'', but cited the poor scalability of the toolkit as a reason against using it.
Ghaemi was receptive, hypothesising that ``the [toolkit] that you have it's, at least, for some of [my ideas], I'm pretty sure that I can implement what I want.'' She also noted that there are no other immersive analytics toolkits that currently enable animated transitions in the manner she desired. Lin said ``there's a high chance that I could use this library to help me prototype some scene to show [sports analysts and coaches].'' After this proof of concept stage however, she would instead develop her own research prototype from the ground up to support specific features such as ``instant data updating''.
Lastly, Butcher said that ``seeing the change in data and understanding what you know, getting something out of it, it's important... certainly not enough attention has been paid to it in the past I don't think, especially in the immersive space.'' He followed this up by saying ``it's definitely something we're going to look at in future for sure, the effect is fantastic.''
While it is expected that not every researcher can make use of the Deimos grammar and the toolkit, our user study clearly demonstrates the significance of this work in generating further research ideas and promoting the study of animated transitions in immersive analytics.
\textbf{Animation authoring paradigms.}
Deimos was originally designed around keyframe animation as its main authoring paradigm. Interestingly, Deimos can technically be seen as combining both the keyframe and the presets \& templates paradigms. This is arguably a good thing, as Thompson et al.\ \cite{thompsonUnderstandingDesignSpace2020} recommend authoring tools combine multiple paradigms to accommodate differences in designers' preferences. Our use of the two paradigms depends on who is using the morph. In Section~\ref{deimos_ssc:design-goal-1} we described two types of users of Deimos: the person who creates the morph in a development environment (i.e.\ the ``morph author''), and the person who actually uses the morph in an immersive environment (i.e.\ the ``end-user''). The morph author creates the morph with a keyframe mindset, and the end-user uses the morphs as though they were presets \& templates. Of course, when used for data exploration the VR/AR analyst does not necessarily need to interpret morphs as presets. Much like how Data Clips \cite{aminiAuthoringDataDrivenVideos2017} allows data videos to be created from preset clips, however, it is theoretically possible to re-frame Deimos in a similar manner: morph authors create preset morphs that apply to generic states, and end-users then combine these preset morphs together to create linear narratives or non-linear experiences. While this is merely speculative, we believe that future research can consider and further investigate this unique combination of authoring paradigms for animated transitions.
\textbf{Data-driven vs interaction-driven animation.}
Deimos stands apart from other works in the manner in which animations are initiated and viewed by end-users once they are defined. Animations in Animated Vega-Lite \cite{zongAnimatedVegaLiteUnifying2022}, Canis \cite{geCanisHighLevel2020}, Data Animator \cite{thompsonDataAnimatorAuthoring2021} and so on are more data-driven. Specifications are tailored around the intricacies of the loaded dataset, with grammars like Gemini \cite{kimGeminiGrammarRecommender2021} and Gemini{\textsuperscript{2}}~\cite{kimGeminiGeneratingKeyframeOriented2021} even providing recommendation systems to further improve the animations created. Completed animations are then passively viewed by the end-user, with little to no input required to initiate and/or control its playback.
In contrast, Deimos is a more \textit{interaction}-driven grammar. Morph specifications consider not only the change in visual encodings, but also how the user interacts with the system to trigger the morph itself. Completed morphs are then \textit{actively} viewed by the end-user, who potentially has a high degree of control over the morph's playback and function. This difference is intentional, as immersive environments are inherently more interactive and embodied \cite{marriottImmersiveAnalytics2018} than desktop environments, encouraging users to ``reach out'' and directly manipulate their data. We expect and encourage future research on animations in Immersive Analytics to maintain this interaction-driven mindset---even for presentation and storytelling, to better engage and immerse users through interactivity \cite{isenbergImmersiveVisualData2018,leeWatchesAugmentedReality2018}.
\section{Limitations}
Our work naturally has several limitations with regard to the grammar, the technical implementation, and the user study. First, our grammar is built upon several key concepts such as dynamic morphs and embodied interaction. While we aimed to justify these ideas in Section~\ref{deimos_sec:design-goals}, we did not formally evaluate them with actual end-users performing data analysis tasks in VR/AR. Therefore, we cannot confidently say that our approach is quantifiably beneficial for immersive analytics.
Second, our participants were not exposed to all of the functionalities of Deimos. It is certainly possible that there are pain points when using Deimos' advanced functionalities which were not identified due to the limited amount of time participants spent using it. This could include the inability to perform certain embodied gestures with the grammar, or difficulties managing morphs that contain more than 2 or 3 states and/or transitions.
Third, as the grammar is dependent on the visualisation package that it is built upon, many of its limitations stem from DXR \cite{sicatDXRToolkitBuilding2019}. These include the inability to transition between different mark types, the lack of runtime data transformations, and overall poor scalability compared to other toolkits like IATK \cite{cordeilIATKImmersiveAnalytics2019}, especially when rendering large amounts of data. The inability to transform data (e.g.\ aggregation and filtering) is especially troublesome, as it meant that time-varying animations (e.g.\ Gapminder \cite{roslingBestStatsYou2007}) were not considered while designing the grammar, and using certain visualisations in morphs, such as barcharts, required pre-processing. While we attempted to add data transformations into DXR ourselves, the challenges in using .NET as a scripting language made it difficult to achieve a syntax remotely equivalent to that of Vega-Lite \cite{satyanarayanVegaLiteGrammarInteractive2017}. We see this as obvious future work, especially as it can allow visualisations to morph not only between encodings, but also between different levels of aggregation, filters, or even different datasets.
\section{Introduction} \label{sec: introduction}
Within distributed cognition~\cite{liu2008dcog}, the theory of Embodied Cognition suggests that human beings make sense of the world through our interaction with it \cite{gibbs2005}. These interactions between our bodies and tools and the objects around us embody our dynamic reasoning process in a continuous interplay that is far richer than a static representation, such as a traditional chart or other data visualisation. Immersive environments such as virtual and mixed reality (VR/MR) give human beings a platform for human-computer interaction with far greater potential for utilising a variety of human senses and the full range of physical human interaction capability. Furthermore, as explored recently by Lee et al.\ \cite{leeDesignSpaceData2022}, immersive environments give us the freedom to move representations of information freely between two- and three-dimensional forms as necessary, to either optimally project or spread information without occlusion across surfaces, or to faithfully represent 3D spatial data in the space around us.
However, compared to the decades that have been invested in developing screen-based representation and interaction, tools for immersive data representation and interaction lag far behind. Furthermore, prior to this paper, the existing frameworks and tools for data visualisation in immersive environments have focused on static representation of the data in the environment.
These frameworks and tools have not given attention to transformations of data representations in a way that seems essential to support embodied cognition. In sum, tools for authoring embodied interaction with data lag behind tools for authoring static representations in immersive environments.
Therefore, in this paper we introduce Deimos: a declarative grammar for authoring \textbf{d}ynamic \textbf{e}mbodied \textbf{i}mmersive \textbf{mo}rph\textbf{s} for immersive visualisations. In contrast to traditional animated transitions, morphs are \textit{adaptive} and can be applied to any data visualisation in the environment that matches the partial visualisation specification of one of the morph's \textit{states}. \textit{Transitions} connect these states together to enable the actual animation, and can be controlled by \textit{signals}: data streams which stem from \textit{embodied} user interaction. These are specified using the Deimos grammar and are written in JSON.
We begin by detailing a set of design goals which allow immersive transitions to leverage the strengths of immersive environments that are not present on desktops~(Section \ref{sec: design-goals}). We then introduce the Deimos grammar itself, detailing its components, primitives, and specification (Section~\ref{sec: grammar}). Next, we describe a prototype implementation of the Deimos grammar (Section~\ref{sec: prototype}), developed in Unity as an extension to the DXR~\cite{sicatDXRToolkitBuilding2019} toolkit by Sicat et al. To demonstrate the expressivity of Deimos, we present an example gallery of morphs created in Deimos which highlight key characteristics of the grammar (Section~\ref{sec: example-gallery}). We also conducted a user study in which six immersive analytics researchers used Deimos to create their own morphs. We held a semi-structured interview with these participants in order to gauge the usability of Deimos (Section~\ref{sec: user-study}), as well as to elicit discussion topics and future research directions of immersive animated transitions (Section~\ref{sec: discussion}).
\section{Related Work} \label{sec: related-work}
\subsection{2D Animated Transitions}
When a visualisation changes between visual states, it is common to use animation to maintain transition awareness~\cite{heerAnimatedTransitionsStatistical2007}. Various animated transition grammars have been developed to allow designers to easily design and create 2D animated statistical graphics for use in data storytelling, such as Gemini~\cite{kimGeminiGrammarRecommender2021} and Gemini{\textsuperscript{2}}~\cite{kimGeminiGeneratingKeyframeOriented2021}, Canis~\cite{geCanisHighLevel2020} and CAST~\cite{geCASTAuthoringDataDriven2021}, and Data Animator~\cite{thompsonDataAnimatorAuthoring2021}. While they each operate in different ways, they all functionally use keyframe animation, which has been shown to be the paradigm most preferred by animation designers~\cite{thompsonUnderstandingDesignSpace2020}. However, earlier work by Tversky et al.~\cite{tverskyAnimationCanIt2002} suggested that interactivity may be needed to capitalise on the strengths of animation --- an area not properly explored in the aforementioned grammars. Indeed, research has found that interactive animation can improve outcomes for certain data analysis tasks (e.g., \cite{robertsonEffectivenessAnimationTrend2008, abukhodairDoesInteractivityImprove2013}). Recent work by Zong and Pollock et al.~\cite{zongAnimatedVegaLite2022} has sought to formalise this in the form of Animated Vega-Lite, an extension to Vega-Lite~\cite{satyanarayanVegaLiteGrammarInteractive2017} which adds a time encoding channel and event streams to enable interactive animations for use in data analysis. While our work also aims to combine interactivity and animation, it is clearly distinguished from Zong and Pollock et al.~\cite{zongAnimatedVegaLite2022} by its immersive nature. As immersive environments present new challenges and opportunities for how such interactive animations can behave, this work takes a fundamentally different approach to that of prior work in 2D animated transitions.
\subsection{Embodied, Immersive Analytics Tools and Toolkits}
Embodiment and animated transitions have been introduced in recent immersive visualisation research. In both ImAxes~\cite{cordeil2017imaxes} and its tangible version using the MADE-Axis~\cite{smileyMadeAxisModular2021}, embodied interaction is used to build visualisations, but this interaction is limited to authoring and does not support animated transitions into other visualisations. DataHop~\cite{hayatpur2020datahop} makes judicious use of animation to seamlessly transition between different configurations of data visualisations during exploration, but the use of animation was not a focus of their work. Similarly, tools like Flow Immersive~\cite{flowImmersive} also use animations to support transitions between visualisations, but are primarily used in data storytelling contexts.
In recent years, many toolkits and frameworks have emerged to support research and development in Immersive Analytics. Some specialised frameworks have been developed which focus on specific application cases. MIRIA~\cite{buschel2021miria} allows user experiment data such as head and hand movements to be replayed in a MR environment for in-situ analytics. However, the animations in the toolkit are limited to playing back user traces.
RagRug~\cite{fleckRagRugToolkitSituated2022} is a situated analytics toolkit that updates immersive visualisations in MR through the use of a distributed data flow from the Internet of Things. However, their work does not focus on embodied interaction to interact with situated visualisations, let alone animated transitions.
More broadly, toolkits like IATK~\cite{cordeilIATKImmersiveAnalytics2019}, DXR~\cite{sicatDXRToolkitBuilding2019}, VRIA~\cite{butcherVRIAWebBasedFramework2021}, and u2vis~\cite{reipschlager2020personal} have been developed to target immersive abstract visualisation scenarios. Those tools mainly provide authoring controls via a GUI or human-readable configuration files like JSON. In the case of DXR and VRIA, users control the design of visualisation objects using an implementation of the Vega-Lite grammar \cite{satyanarayanVegaLiteGrammarInteractive2017}.
Some toolkits also have built-in animations whenever data dimensions or visual attributes are changed, such as IATK~\cite{cordeilIATKImmersiveAnalytics2019}, but these are not designed to support more advanced animated transitions.
Recent work by Lee et al.~\cite{leeDesignSpaceData2022} remains one of the few in Immersive Analytics to focus primarily on animated transitions. They present a design space for how immersive data visualisation transformations can be created, along with several examples and use cases. However, their work is conceptual in nature, and does not formalise in concrete terms how immersive animated transitions can be constructed. In this work, our aim is to create a grammar which allows users to construct and customise the behaviour of immersive transitions.
\section{Deimos Design Goals} \label{sec: design-goals}
The shift from 2D to 3D is more than just a third spatial encoding. Early on in the development of Deimos, we identified several key differences between animated transitions in immersive and non-immersive environments. We synthesised these into design goals (DG) which then influenced the creation of the Deimos grammar, allowing us to focus on the novel characteristics of immersive headsets and environments, in turn opening up further design opportunities.
\subsection{DG1: Immersive transitions should be adaptable and flexible}
Most animated transition grammars allow for rapid prototyping between the specification and the resulting animation. A low-viscosity authoring process is particularly important when creating interactive animations for data analysis~\cite{zongAnimatedVegaLite2022}, allowing for fast and easy changes to the specification. The ability to rapidly prototype is facilitated by the constant access to keyboards for text input and pointing devices (i.e., mice) in desktop environments. In contrast, immersive environments often lack a convenient and comfortable form of text input that is required to write textual specifications, especially in VR or in highly mobile and/or remote MR contexts. While a GUI can help facilitate this authoring process in VR/MR, designing a GUI is premature if there is no underlying grammar to support it, especially in such a novel environment.
To resolve this conflict, we take an approach inspired by Lee et al.'s recent work~\cite{leeDesignSpaceData2022}. Many animated transition grammars treat transitions as a bespoke set of changes applied to visualisations predefined by the animation designer. Instead, we treat animated transitions as discrete operations that analysts can use to apply changes to their visualisations during their analysis. For example, the analyst might apply an animated transition which adds another spatial encoding to their visualisation, or converts a 3D barchart into a faceted 2D barchart. This turns animated transitions into a catalogue of adaptive and flexible operations that can be applied to immersive visualisations by analysts depending on the situation and goals.
In this way, there exist two types of users of Deimos: immersive analytics system designers who use the grammar to create a catalogue of animated transitions in a desktop environment (e.g., the Unity editor), and data analysts in VR/MR who use said animated transitions in their workflows and either do not have access to or are unfamiliar with the grammar.
This necessitates a functional shift in grammar design, moving from highly tailored transitions with known data fields and encodings to generic transitions that operate on baseline idioms. As a result, any given transition specification can be reused across multiple visualisations, so long as they meet the baseline criteria specified by the author (e.g., be a barchart, have no \textit{z} encoding).
\subsection{DG2: Immersive transitions should leverage embodied interaction}
Animated transition grammars (e.g., \cite{kimGeminiGrammarRecommender2021, geCanisHighLevel2020, thompsonDataAnimatorAuthoring2021}) have paid little attention to how transitions are triggered and controlled. In cases where these grammars do (e.g., \cite{zongAnimatedVegaLite2022}) it is limited to WIMP-style controls, with practitioners using similar input methods for their narrative visualisations (e.g., play button~\cite{roslingGapminder2007}, linear slider/scroll~\cite{yeeR2d3}).
In contrast, immersive environments rely on a completely different interaction paradigm which goes beyond the desktop and is both embodied (e.g.,~\cite{hurterFiberClaySculptingThree2019, cordeilIATKImmersiveAnalytics2019}) and spatial in nature (e.g.,~\cite{hubenschmidSTREAMExploringCombination2021, buschelInvestigatingUseSpatial2017}).
Novel language primitives are needed for this, as existing ones (i.e., streams in \cite{zongAnimatedVegaLite2022}) do not adequately express relationships between entities, such as the direct manipulation of user actions onto target objects (e.g., controller inputs on visualisations~\cite{hurterFiberClaySculptingThree2019}) or the spatial relationship between multiple objects in the immersive environment (e.g., the position and orientation of a handheld device in relation to a table~\cite{buschelInvestigatingUseSpatial2017}).
By expanding our grammar to support this paradigm, we enable a richer design space of visualisation transitions not otherwise possible on desktop environments, as they allow users to ``reach through'' and interact with their data in a more embodied and engaging manner~\cite{dourishWhereActionFoundations2001}.
\textit{Tilt Map} by Yang et al.~\cite{yangTiltMapInteractive2020} remains a prime example of this concept. As the user tilts the visualisation object, it transitions between a choropleth map, prism map, and barchart at set angular intervals. These intervals are chosen such that the visualisation object is at an optimal viewing angle for each of the three states. Lee et al.~\cite{leeDesignSpaceData2022} also demonstrated numerous other examples of embodied interaction used to enable animated transitions, with the input action (e.g., pinch and drag gesture on visualisation) directly matching the resulting visual changes (e.g., additional dimension being extruded). We want our grammar to support these scenarios and more.
\subsection{DG3: Immersive transitions should still support conventional approaches}
While the two previous design goals were intentionally forward thinking, we still want Deimos to be rooted in the same foundational elements as existing grammars. This both ensures that Deimos follows tried-and-true concepts and theories, and preserves a sense of familiarity for users of the grammar --- especially for those new to immersive analytics. This includes the use of keyframe animation as the chief animation paradigm~\cite{thompsonDataAnimatorAuthoring2021}, the ability to specify timing and staging rules to customise the animation, and support for WIMP-based interaction in hybrid immersive analytics setups or via immersive UX elements (e.g.,~\cite{microsoftMixedRealityUX}).
Moreover, while DG1 advocates for generalised transitions that can be applied to a wide range of visualisations, Deimos should still allow for highly customised transitions that affect predefined visualisations created by designers. This is to allow animated transitions in Deimos to still be useful in controlled situations such as immersive data-driven storytelling.
Therefore, our grammar should support both ends of two orthogonal spectrums: support both WIMP and embodied interaction to control and interact with animated transitions; and support animated transitions that are either highly generalised and can apply to any visualisation, or highly specific and apply only to a particular visualisation in a controlled context.
\section{The Deimos Grammar} \label{sec: grammar}
Deimos is a declarative grammar used to specify \textit{transitions} between \textit{states} (keyframes), as well as the \textit{signals} (interactions) used to control them. The grammar is largely based on the aforementioned design goals (Section~\ref{sec: design-goals}) and prior work by Lee et al.~\cite{leeDesignSpaceData2022}. The target audience of the grammar is developers and designers of immersive analytics systems; the animated transitions they create are then used by analysts in VR/MR. A Deimos specification can formally be described as a three-tuple (elements suffixed with ``?'' are optional):
\begin{center}
\textit{Morph := (states, signals?, transitions)}
\end{center}
These components constitute what we call a \textit{Morph}, the term signifying a visualisation's ability to dynamically change shape and morph from one state to another via transitions upon matching certain conditions.
A morph can be modelled as a state machine (see Figure~\ref{fig:deimos-state-machine-1}). A visualisation in the immersive environment only enters a morph's state machine when it matches one of its \textit{states}. The state node that was matched determines the possible \textit{transition} nodes that it can access. These transition nodes are where changes are actually made to the visualisation, and are only entered when specified criteria are met. These criteria take the form of \textit{signals}: streams of data typically generated by user interaction. Signals can also be used to control the behaviour of transitions themselves.
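Concretely, a morph specification is a JSON object whose top level mirrors the three-tuple above. A skeletal sketch follows; the comments and ellipses are expository only (neither is valid JSON).

\begin{verbatim}
{
  "name": "exampleMorph",
  // At least two states, each a partial visualisation
  // specification used for matching and keyframe creation
  "states": [ ... ],
  // Optional: named data streams from user interaction
  "signals": [ ... ],
  // Transitions connect states by name and reference signals
  // as triggers and/or timing controls
  "transitions": [ ... ]
}
\end{verbatim}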
\begin{figure}[tb]
\centering
\includegraphics[width=7.5cm]{images/deimos-state-machine.pdf}
\caption{Baseline state machine for Deimos morphs showing a single unidirectional transition. More states and transitions can be added to the state machine with their own signal criteria, with support for bidirectional transitions.}
\Description{A state machine for a baseline morph. A node labelled ``State 1'' is connected with a directed edge to a node labelled ``Transition'', and is labelled with ``Signal criteria met''. The latter node is connected with another directed edge to another node labelled ``State 2'', and is labelled with ``Transition completed''. Two entry nodes connect to the two State nodes with labels reading ``State matched''. The two State nodes are also connected to an exit node reading ``Vis spec manually changed''.}
\label{fig:deimos-state-machine-1}
\end{figure}
Morphs are an extension to any immersive visualisation authoring system already in place. That is, visualisations can still be manipulated in their usual way, but can have morphs applied to them should the relevant conditions be met. In this way, morphs serve purely to augment existing authoring techniques rather than supplanting them outright. When a visualisation is modified by the user in a manner external to the morph, it exits the morph state machine. It may then immediately re-enter following the same rules as before. A visualisation can have multiple morphs (and therefore state machines) active simultaneously, so long as the properties and encodings they affect do not overlap. The same morph specification can also be active across multiple eligible visualisations. This ability for the state machine to adapt to different visualisation configurations through a set of rules and conditions is what helps it satisfy DG1.
Morph specifications are written and stored as standalone JSON files. The use of JSON is very common amongst related grammars and allows for the separation between grammar and implementation (i.e., portability). A JSON schema provides auto-completion and tooltips for writing morph specifications with supported text editors. Figure~\ref{fig:deimos-json} shows a basic example of a morph specification, and how it translates to the immersive environment and the state machine. The three main components of morphs are annotated with coloured boxes: states in yellow, signals in blue, and transitions in green. The same colour-coding is used across all other figures.
The rest of this section will explain in general terms what these components do.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{images/deimos-json.pdf}
\caption{A basic example of a morph that changes the mark colour of uncoloured visualisations to red whenever the left hand performs a pinch gesture. Colour-coded boxes denote the same component in different representations. Left: The morph specification. Top right: Still images of this morph being applied to a 2D scatterplot in an immersive environment. Bottom right: The state machine for this morph. The \texttt{``restrict'': true} (shown in the left-hand box labelled with ``2'') prevents the morph from starting at the \textit{highlighted} state, and \texttt{``bidirectional'': true} (shown at the end of the morph specification) allows the transition to function in both directions.}
\Description{An image composed of three parts. The first part shows a morph specification written in JSON. The second part shows a series of still images in which a hand does a pinch gesture, and causes the marks on a 2D scatterplot to turn red. The second part shows the state machine of this morph. The following three nodes are connected in a linear order: a ``unhighlighted'' state node, a ``highlighting'' transition node, and a ``highlighted'' state node.}
\label{fig:deimos-json}
\end{figure}
\subsection{States} \label{ssc: states}
A morph comprises at least two state specifications. A state can be defined by the following tuple:
\begin{center}
\textit{state := (name, restrict?, partial visualisation specification)}
\end{center}
The \textit{name} property is a unique case-sensitive string used to reference this state specification in a transition (see Section~\ref{ssc: transitions}).
The \textit{restrict} property is a Boolean that if set to \texttt{true} will remove the entry point associated with the state's node on the state machine (see Figure~\ref{fig:deimos-json} for an example). This prevents a morph from starting at that state, making it only accessible via interconnecting transition(s). This is useful if it is illogical for a morph to start at that state, such as in unidirectional transitions.
The \textit{partial visualisation specification} consists of an arbitrary number of properties and components in the state object that all follow the same declarative notation as an actual visualisation. In other words, its syntax is the same as that of the visualisation package used in the system. For our implementation of Deimos, this is the DXR grammar~\cite{sicatDXRToolkitBuilding2019}, which in turn is based on the Vega-Lite grammar~\cite{satyanarayanVegaLiteGrammarInteractive2017}. In the context of the DXR grammar, a partial specification can consist of any number of view-level properties (e.g., \textit{mark}, \textit{depth}) and/or encoding-level properties declared inside of an \textit{encoding} component (e.g., \textit{x}, \textit{color}).
The partial specification serves two purposes: i) to determine if a visualisation matches (and therefore enters) this state; and ii) to generate the keyframe used in the transition.
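Putting these together, a hypothetical state specification might read as follows (DXR-style notation; exact syntax may differ, and the comments are expository only):

\begin{verbatim}
{
  "name": "scatterplot2D",
  // Optional: if true, the morph cannot start at this state
  "restrict": false,
  // Partial visualisation specification: matches any point
  // visualisation with quantitative x and y encodings
  "mark": "point",
  "encoding": {
    "x": { "type": "quantitative" },
    "y": { "type": "quantitative" }
  }
}
\end{verbatim}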
\subsubsection{State matching process} \label{sss: state-matching-process}
Any visualisation properties specified as part of the \textit{partial visualisation specification} in a state are used in the matching process against active visualisations. It is important to differentiate between the two types of specifications being used in this process: the visualisation specification created by the end-user, and the state specification (i.e., the \textit{partial visualisation specification}) that exists as a part of the state component in a morph.
Generally speaking, for a state specification to be matched against a visualisation specification, all properties defined in the former should also be defined in the latter, including their associated values. For example, if the state has \texttt{``color'': \{ ``type'': ``quantitative''\}}, then the visualisation must also have a \texttt{color} encoding with the same \texttt{type} for it to match.
As a rule of thumb, the fewer properties defined in the state specification, the more likely a visualisation can match successfully and have morphs applied to it. The opposite is also true, with more properties in the state specification making it less likely for any visualisation to match successfully. This effectively forms a spectrum. Morphs can be highly generic and apply to many visualisations, allowing for adaptive morphs as per DG1. They can also apply only to specific datasets and field names, allowing for highly tailored morphs that are used in controlled environments as per DG3.
Deimos provides several primitives which affect the matching process and can be used in place of any JSON value in the state specification. They allow for more nuanced control over which visualisations can and cannot match, and are useful to prevent morphs from being accidentally applied to incompatible visualisations; a combined example is given after the list below.
Note that this is not an exhaustive set of primitives. While they were adequate for the purposes of this work, the grammar can easily be extended to include more if need be.
\begin{itemize}
\item \textbf{``*'' (wildcard)}: The property should be in the visualisation but its value can be anything.
\item \textbf{An inequality expression}: The property should be in the visualisation and its value should satisfy the inequality. Only applicable to numeric properties. e.g., \texttt{``value'': ``>= 100''}.
\item \textbf{\texttt{null}}: The property should not be included in the visualisation regardless of its value.
\end{itemize}
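A state combining all three primitives might look as follows. This is a sketch: the particular encodings used are illustrative, and the comments are expository only.

\begin{verbatim}
{
  "name": "genericBarchart",
  "mark": "bar",
  "encoding": {
    // Wildcard: an x encoding must exist, with any value
    "x": "*",
    // null: the visualisation must not have a z encoding
    "z": null,
    // Inequality: this numeric property must satisfy the bound
    "size": { "value": ">= 100" }
  }
}
\end{verbatim}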
\subsubsection{Keyframe creation process} \label{sss: keyframe-creation-process}
When a visualisation matches a state and one of its connecting transitions is activated, keyframes are generated for both initial and final states. These keyframes are used for actual animation during the transition.
The initial keyframe is always the active visualisation's specification prior to the transition. No changes need to be made to it as it already matches the properties of the state itself.
The final keyframe is created by modifying the initial keyframe using the following set of rules: (i) visualisation properties that are defined in the initial state but not in the final state are removed; (ii) properties that are not defined in the initial state but are defined in the final state are added; and (iii) properties defined in both states are set to the final state's value.
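As a worked example of these rules, consider the following hypothetical initial and final states and the resulting final keyframe (field names are illustrative; comments are expository only):

\begin{verbatim}
// Initial state: defines x and color
{ "encoding": { "x": { "field": "Population" },
                "color": { "value": "blue" } } }

// Final state: defines y and color
{ "encoding": { "y": { "field": "LifeExpectancy" },
                "color": { "value": "red" } } }

// Resulting final keyframe:
//  (i)   x is defined only in the initial state, so it is removed
//  (ii)  y is defined only in the final state, so it is added
//  (iii) color is defined in both, so the final state's value wins
{ "encoding": { "y": { "field": "LifeExpectancy" },
                "color": { "value": "red" } } }
\end{verbatim}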
As with the state matching process (Section~\ref{sss: state-matching-process}), Deimos provides primitives that can be used in place of any JSON value to refine the keyframe creation process. These primitives functionally act as placeholders which are later substituted with real values calculated at runtime, akin to the notion of variables. This allows morphs to adapt to a wider range of situations without the need to hard-code field names, data types, etc.\ in morph specifications.
For the purposes of the state matching process, all of these primitives are treated as wildcards. Their values are resolved after the keyframes have been created but before the transition is applied. Once again, this is not an exhaustive list of primitives and can easily be extended if need be; an example using these primitives follows the list.
\begin{itemize}
\item \textbf{JSON path accessor}: The value residing at the JSON path will be substituted into the property's value. It is prefixed with either ``this.'' to access a property from this keyframe, or ``other.'' to access a property from the other keyframe which is being transitioned to/from, e.g., \texttt{``x'': ``this.\linebreak[0]encoding.\linebreak[0]y''}, \texttt{``field'': ``this.\linebreak[0]encoding.\linebreak[0]size.\linebreak[0]field''}.
\item \textbf{A signal name}: The value emitted by the specified signal (see Section~\ref{ssc: signals}) will be substituted into the property's value.
\item \textbf{An expression}: The evaluated result of the expression will be substituted into the property's value. JSON path accessors and signal names can be used as variables. Only applicable to numeric properties. e.g., \texttt{``value'': ``other.\linebreak[0]encoding.\linebreak[0]size.\linebreak[0]value * 10''}.
\end{itemize}
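For example, a final state that extrudes a scatterplot into 3D might be sketched as follows, with a JSON path accessor copying a field name from the keyframe itself and an expression scaling a size value taken from the other keyframe (all names are illustrative):
\begin{verbatim}
{
  "name": "3d-scatterplot",
  "encoding": {
    "z": { "field": "this.encoding.x.field", "type": "quantitative" },
    "size": { "value": "other.encoding.size.value * 10" }
  }
}
\end{verbatim}
During state matching, both placeholder values are treated as wildcards; they only resolve to concrete values once the keyframes have been created.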
All keyframes are stored throughout the entire lifespan of a morph. When the morph exits the state machine --- the result of the associated visualisation having its specification manually changed by the user (see Figure~\ref{fig:deimos-state-machine-1}) --- all stored keyframes are deleted.
Any properties added or changed during keyframe creation will take their values from the state's stored keyframe if one already exists.
The main purpose of this is to handle situations where a property is removed by a transition in one direction but needs to be added back in by a transition in the reverse direction. Without stored keyframes, the removed property would no longer be known and therefore could not be added back in.
\subsection{Signals} \label{ssc: signals}
In Deimos, a signal is the resulting value from a stream of data captured from input events, drawing inspiration from Vega's signals~\cite{satyanarayanDeclarativeInteractionDesign2014} and event-driven functional reactive programming principles~\cite{wanEventDrivenFRP2002}. Signals can be used in Deimos to: (i) be substituted as values in keyframes (see Section~\ref{sss: keyframe-creation-process}); (ii) act as conditional triggers that control when a transition actually begins (see Section~\ref{ssc: transitions}); and (iii) act as a tweening variable to control the progression of a transition (see Section~\ref{ssc: transitions}). No type safety is enforced in Deimos. A morph may contain zero or more signal specifications.
Deimos has two main types of signals: signals that stem from some given source, and signals that evaluate a mathematical expression.
\begin{center}
\textit{signal := sourceBasedSignal | expressionSignal}
\end{center}
\subsubsection{Source-based Signals} \label{sss: source-based-signals}
Source-based signals, as the name suggests, emit values from some input source. These values primarily come from user interactions, but the approach could be extended to passively updated values from sensors, etc. We define two classes of source-based signals: deictic and non-deictic signals. Deictic signals express relationships between a source and target entity.
While they mainly serve to model direct manipulation which is commonly associated with embodied interaction (DG2), they can also model situations where there is no actual direct contact. Non-deictic signals capture everything else, although these are mainly input sources which do not require some target/context to make sense (e.g., mid-air hand gestures, input source states, sensor data). Their production rules are:
\begin{center}
\begin{tabular}{l}
\textit{sourceBasedSignal := nonDeicticSignal | deicticSignal} \\
\textit{nonDeicticSignal := (name, source, handedness?, value)} \\
\textit{deicticSignal := (name, source, handedness?, target, criteria?, value)}
\end{tabular}
\end{center}
Both signal classes share the same three attributes.
The \textit{name} property references this signal in either a state (see Section~\ref{sss: keyframe-creation-process}), an expression signal (see Section~\ref{sss: expression-signals}), or a transition (see Section~\ref{ssc: transitions}).
The \textit{source} property denotes the type of source that values are to be retrieved from (e.g., \texttt{hand}, \texttt{head}, \texttt{vis}, \texttt{ui}).
Certain sources can also specify the source's \textit{handedness} to distinguish between \texttt{left}, \texttt{right}, or defaulting to \texttt{any}.
For non-deictic signals, the \textit{value} property denotes what type of value to derive from the source, which is then emitted by the signal. This can either be the state of the user interaction (e.g., whether the hand is performing a \texttt{select} gesture) or the geometric properties of the source as an object in the immersive environment (e.g., \texttt{position} of the user's head). As previously mentioned, these are useful when some value of the input source is to be retrieved without it needing to be in the context of some other target or object. Figure~\ref{fig:deimos-json} shows an example of a non-deictic signal: it does not matter what the hand is touching so long as it is performing the pinch gesture.
Deictic signals model relationships between entities, and are modelled closely after the interaction section of the design space by Lee et al.~\cite{leeDesignSpaceData2022}.
The \textit{target} property denotes the type of object that the source is attempting to target. This can either be a part of the visualisation (e.g., \texttt{mark}, \texttt{axis}), a separate object in the environment (e.g., \texttt{surface}), or part of the user themselves (e.g., \texttt{head}).
For the first two, a \textit{criteria} property needs to be included to determine the logic used in selecting the target (e.g., \texttt{select}, \texttt{touch}, \texttt{nearest}). This logic is needed when there are multiple potential target objects that could be selected.
Lastly, the \textit{value} property can be used to derive three types of values. First, it can derive values from the \textit{target} much in the same way as non-deictic signals do. For example, a \texttt{hand} source might target the \texttt{mark} that it is \texttt{select}ing, and the \texttt{position} of that mark is used as the value. Second, it can derive values from a comparison between the source and target. For example, a \texttt{vis} source might target the \texttt{surface} that it is \texttt{touch}ing, and the point of \texttt{intersection} between the vis and surface is used as the value. Third, a \texttt{boolean} value simply emits true if a target has been selected successfully, and false if no targets are selected.
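Following the production rule above, the first of these examples might be written as the sketch below; the exact key spellings are assumptions based on the grammar's JSON style.
\begin{verbatim}
{
  "name": "selectedMarkPosition",
  "source": "hand",
  "handedness": "right",
  "target": "mark",
  "criteria": "select",
  "value": "position"
}
\end{verbatim}
This signal emits the position of whichever mark is currently selected by the user's right hand.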
Deictic signals in particular fulfil DG2 as they allow morphs to be controlled based on user input and the embodied objects in the environment which they interact with. Moreover, the ability to facilitate WIMP-style interaction using these signals also helps fulfil DG3.
\subsubsection{Expression Signals} \label{sss: expression-signals}
\begin{center}
\textit{expressionSignal := (name, expression)}
\end{center}
Expression signals allow for the arbitrary composition of signals using mathematical expressions. Their primary purpose is to modify and refine values emitted by source-based signals.
We choose to use expressions as they allow arbitrary calculations to be performed in a familiar manner, instead of designing a completely new and potentially confusing domain-specific language.
The \textit{name} property references this signal in the same way as source-based signals. The \textit{expression} property is a mathematical expression as a string. Basic mathematical operators can be used alongside select primitive functions (e.g., \texttt{normalise}, \texttt{distance}, \texttt{angle}).
As with all other primitives, the list of supported functions can easily be extended. Any type of signal can be used as a variable by referencing their name. As previously mentioned, no type safety is enforced, meaning the user has to be aware of the data types present in the expression.
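For instance, assuming two source-based signals named \texttt{handPos} and \texttt{surfacePos} that emit positions, and an assumed maximum interaction distance of 0.5 metres, an expression signal producing a 0--1 value might be sketched as:
\begin{verbatim}
{
  "name": "handSurfaceT",
  "expression": "distance(handPos, surfacePos) / 0.5"
}
\end{verbatim}
A signal such as this could then serve as a tweening variable for a transition (see Section~\ref{ssc: transitions}).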
\subsection{Transitions} \label{ssc: transitions}
A morph comprises at least one transition specification. Transitions functionally connect two state specifications together in the state machine (see Figure~\ref{fig:deimos-state-machine-1}). A transition can be defined by the following seven-tuple:
\begin{center}
\textit{transition := (name, states, trigger?, control?, bidirectional?, disablegrab?, priority?)}
\end{center}
The \textit{name} property serves to identify this transition, especially when multiple transitions are involved.
The \textit{states} property is an array of two strings, corresponding to the names of the initial and final states in the transition respectively. Referencing states via their name in this manner helps with encapsulation, keeping all state related syntax separated from the transitions.
The \textit{trigger} property is an equality expression that activates the transition when it evaluates as true, but only when the visualisation matches the initial state in the \textit{states} property. The expression follows similar rules as expression signals (see Section~\ref{sss: expression-signals}) but must return a Boolean value. Triggers are mainly used to let the user control when the transition is actually applied, usually as the result of some sort of input action or condition caused by the user. Not setting a trigger will cause the transition to be applied immediately whenever the visualisation matches the initial state.
The \textit{control} component is optionally used to further customise the behaviour of the transition. It is formally described by the following five-tuple:
\begin{center}
\textit{control := (timing?, easing?, interrupted?, completed?, staging?)}
\end{center}
The \textit{timing} property controls the duration of the transition. If a number is used, the transition will interpolate between the two state keyframes over the given duration in seconds. Alternatively, the name of a signal can be used, in which case the signal will be used as the tweening variable \textit{t}. This allows the duration and direction of the interpolation to be controlled by the signal (and subsequently the user). In this situation, the transition will only begin when the signal's value is between 0 and 1, in addition to any other conditions. The timing property defaults to 0 if not specified, which results in jump cuts.
The \textit{easing} property applies an easing function to the transition, defaulting to a linear function if none is specified. Easing functions are commonly used in animations and help to make animations look more natural.
The \textit{interrupted} property determines what happens when the \textit{trigger} returns false whilst the transition is in progress. \texttt{initial} and \texttt{final} will cause the visualisation to immediately jump to the specified state. \texttt{ignore} will instead allow the transition to keep progressing until it naturally terminates. The \texttt{ignore} condition is particularly useful in cases where the \textit{trigger} may inadvertently return false mid-transition but the transition should still continue, acting as a sort of fail-safe. This defaults to \texttt{final}.
Similarly, the \textit{completed} property determines what happens when the transition naturally terminates, with the visualisation either remaining at the \texttt{final} state or resetting back to the \texttt{initial} state. Using the \texttt{initial} condition may be useful if the transition should not cause any long term changes to the visualisation, particularly if the animation alone is sufficient to serve its purpose~\cite{leeDesignSpaceData2022}. This also defaults to \texttt{final}.
The \textit{staging} property allows for specific visualisation properties to be staged. Name-value pairs can be specified where the name is the property to be staged, and value is an array of two numbers between 0 and 1 that correspond to start and end percentages. The property will only be animated when the transition period is within the given range. Any property not specified will not be staged. Staging is a common feature of animated transition grammars~\cite{heerAnimatedTransitionsStatistical2007} and ours is no different. Note that the grammar does not support staggering.
The \textit{bidirectional} property of the transition, if set to true (default false), allows the transition to start and end in the reverse direction. All transition settings remain the same, except that the \textit{trigger}, if specified, needs to return false in order for the reverse transition to activate. This serves mainly as a convenience that prevents the need to write two transition specifications whenever a single bidirectional transition is desired; writing two separate specifications is still necessary, however, when distinct settings are required for each direction.
The \textit{disablegrab} property, if set to true (default false), will automatically disable the standard VR/MR grab action performed on the visualisation when the transition starts. This helps prevent visualisations from being inadvertently moved by the user when a transition's \textit{trigger} uses a similar grab gesture.
Lastly, the \textit{priority} property can be used to handle edge cases where multiple transitions with similar \textit{trigger} conditions activate on the same frame but conflict in the visualisation properties they modify. In this situation, the transition with the highest numbered priority activates first, and all other conflicting transitions are blocked. If priorities are equal, the order in which they activate is random. The priority property defaults to 0.
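Putting these pieces together, a complete transition specification might be sketched as follows. The trigger's signal name and the easing and staging values are illustrative assumptions rather than values prescribed by the grammar.
\begin{verbatim}
{
  "name": "extrudeScatterplot",
  "states": ["2d-scatterplot", "3d-scatterplot"],
  "trigger": "extrudeToggle == true",
  "bidirectional": true,
  "disablegrab": true,
  "priority": 1,
  "control": {
    "timing": 0.8,
    "easing": "quadratic",
    "interrupted": "final",
    "completed": "final",
    "staging": { "z": [0, 0.6], "size": [0.4, 1] }
  }
}
\end{verbatim}
Because \textit{bidirectional} is set, the same specification also animates from the final state back to the initial state whenever the trigger returns false.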
\section{Deimos Implementation and Toolkit} \label{sec: prototype}
We created a prototype implementation of the Deimos grammar using the Unity game engine in order to demonstrate its concepts and use. It is open source on a GitHub repository\footnote{Github link provided on acceptance. For review, the project files are instead provided in supplementary material.}, with documentation also available publicly\footnote{Link will be provided upon acceptance. For review, the documentation is instead provided in supplementary material.}.
\subsection{Data Visualisations} \label{ssc: prototype-data-visualisations}
As Deimos is primarily an animated transition grammar, we need data visualisations to apply transitions to. We decided to use DXR~\cite{sicatDXRToolkitBuilding2019} as the basis of our work because it natively supports authoring of visualisations using a declarative grammar similar to Vega-Lite~\cite{satyanarayanVegaLiteGrammarInteractive2017}. It is also easy to extend due to its use of individual GameObjects for each mark, rather than a single mesh for all marks as in IATK~\cite{cordeilIATKImmersiveAnalytics2019}. However, this comes at the cost of rendering performance, especially when thousands of marks are displayed on screen. To this end, we made performance improvements to how DXR instantiates and updates its marks and axes, especially since this process occurs multiple times in a morph.
We also added several new visualisation types: choropleth and prism maps, stacked and side-by-side barcharts, and faceted charts (see Section~\ref{sec: example-gallery}). However, as the original DXR implementation does not have support for data transformations like in Vega-Lite, neither does Deimos. This also means that animated transitions involving a time dimension (e.g., time varying scatterplots, barchart races) are not supported in Deimos.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{images/deimos-overview.pdf}
\caption{Overview of Deimos and how it interacts with DXR. Recreated version of the original DXR overview~\cite{sicatDXRToolkitBuilding2019} is shown in the lower half.}
\Description{A diagrammatic image Deimos. Deimos morph specifications connect to the Deimos package. User inputs also connect to the Deimos Package. Below it, a DXR vis specification connects to Data and the DXR package. The Deimos and DXR packages are connected to each other, with arrows indicating ``DXR vis updated'' and ``Start/stop transition''. The DXR package then connects to the Morphing DXR vis.}
\label{fig:deimos-overview}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{images/deimos-pipeline.pdf}
\caption{High-level overview of the Deimos pipeline. Stages in red signify events that cause Deimos to continue down the pipeline.}
\Description{A linear flowchart of the Deimos pipeline. The nodes in linear order are Load morph specifications, DXR vis updated (in red), Find matching states, Create observables from signals, Trigger returns true (in red), Create keyframes, Start transition, Transition finished (in red), Stop transition, and Dispose observables. The last node is connected back to the Find matching states node.}
\label{fig:deimos-pipeline}
\end{figure}
\subsection{Code Structure and Pipeline}
Figure~\ref{fig:deimos-overview} provides an overview of Deimos' structure and how it interacts with DXR.
Morph specifications are contained in JSON files that are read by Deimos at initialisation. They can also be refreshed during runtime if the specifications are edited. Deimos interacts with DXR in two main ways. Deimos receives an event from DXR whenever a visualisation has been updated, with the visualisation specification included as an argument. Deimos also sends start and stop function calls to DXR, which executes the animated transitions.
Figure~\ref{fig:deimos-pipeline} provides a high-level overview of the Deimos pipeline. While it is presented as a linear set of stages, the pipeline can reset or be exited in certain conditions.
First, all morph specifications are read and loaded into Deimos.
Next, whenever a DXR visualisation updates, Deimos is notified via an event containing the visualisation's specification.
This specification is used to check against all state specifications in the loaded morphs using the rules in Section~\ref{sss: state-matching-process}.
For any state that has matched, observable streams are created for each signal that is part of the state's transitions, including trigger signals. Observables are created using the UniRx package~\cite{neueccUniRx}, and are composed together where necessary.
When a transition's trigger signal returns true (or if no trigger was specified in the first place), initial and final keyframes are created using the rules in Section~\ref{sss: keyframe-creation-process}.
These two keyframes, along with other transition parameters such as tweening and staging variables, are sent to the relevant DXR visualisation to start the transition.
When the transition has finished, Deimos stops the transition on the DXR visualisation. This step also updates the visualisation specification to reflect the new changes made by the transition.
Deimos then disposes of all observables related to the transition.
This process then starts anew, with Deimos finding matching states to see if this newly updated visualisation is eligible for any morphs once more.
While Deimos is designed such that it exists separately from the visualisation framework used, they are still intrinsically linked to each other. Deimos is dependent on the visualisation framework to implement the actual animation and transition. It is also dependent on the grammar and syntax of the visualisations themselves. Therefore, translating Deimos to other visualisation toolkits requires adaptation to support the new declarative grammar, and the toolkit itself needs to support animation between keyframes via interpolation. While it is technically possible to create a middleware to translate visualisation specifications and thus increase modularity, we did not explore this option in this work.
\subsection{XR Interactions}
We use the Mixed Reality Toolkit (MRTK)~\cite{microsoftMRTK} to enable XR interactions in Deimos. As a result, Deimos can be deployed on a range of platforms including Windows Mixed Reality, Oculus Quest, and HoloLens. However, due to the aforementioned performance limitations when working with large amounts of data, it is recommended to only use Deimos in tethered VR or remote MR setups. Both controller and articulated hand tracking are supported in Deimos in the form of source-based signals (see Section~\ref{sss: source-based-signals}). While Deimos does not support eye gaze or voice input, these could be added in future work.
\section{Example Gallery} \label{sec: example-gallery}
We present several examples of morphs created with the Deimos grammar. We categorise and describe the examples in three ways, with the first two aligning with the design goals in Section~\ref{sec: design-goals}.
First, we highlight how morphs can be designed to adapt to different visualisation configurations using generic states (DG1), but also allow for bespoke morphs by using specific states in controlled contexts (DG3).
Second, we demonstrate how morphs can be controlled using both embodied (DG2) and non-embodied (DG3) interaction methods.
And third, we provide two scenarios in which Deimos can facilitate prototyping of different interaction methods in order to help evaluate the design of the grammar itself.
All examples and their specifications can be found in the Unity project files\footnote{Github link provided on acceptance. For review, the project files are instead provided in supplementary material.}. As such, we neither reproduce nor discuss each example's specification in detail. The project files also contain additional example morphs not described in this paper.
\subsection{Generic vs specific morph examples}
In DG1 and DG3, we described a spectrum along which morphs can vary between generic, adapting to a range of visualisation configurations, and specific, allowing them to be used in controlled settings.
On the generic end, we present the \textit{3D Barchart Partitioning and Stacking} morph (shown in Figure~\ref{fig:teaser}). It takes a 3D barchart and either partitions it into a 2D faceted barchart, or stacks it into a 2D stacked barchart whenever it touches a surface in the immersive environment. During the transition, it also aligns the visualisation to be parallel against the surface that it had touched. This is an example of a morph involving three states and two transitions in a branch-like structure. The triggers are set up so that the applied transition is based on the angle of contact between the barchart and surface: orthogonal for the faceted barchart, and parallel for the stacked barchart.
Its states are defined such that they only check that the encodings' types are correct (i.e., nominal \textit{x} and/or \textit{z}, quantitative \textit{y}) and that it uses cube marks. Through this, so long as a visualisation is a 3D barchart it can undergo this morph, greatly expanding the range of scenarios in which it can be used. JSON path accessors are also used to substitute in the proper field names during runtime (i.e., \textit{facetwrap}, \textit{yoffset}).
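A sketch of what such a generic state might look like is given below; the exact syntax is illustrative, but note that no dataset or field names appear anywhere in it.
\begin{verbatim}
{
  "name": "3d-barchart",
  "mark": "cube",
  "encoding": {
    "x": { "type": "nominal" },
    "y": { "type": "quantitative" },
    "z": { "type": "nominal" }
  }
}
\end{verbatim}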
On the other end of the spectrum, the \textit{Geographic Scatterplot with Slider} morph (shown in Figure~\ref{fig:deimos-examples-slider}) demonstrates the use of two predefined states: a scatterplot and a choropleth map. Both of these are explicitly defined using exact encodings and field names (e.g., ``Population'', ``LifeExpectancy''). Because of this, only a visualisation with these exact encodings and fields can undergo this morph. A transition connects the two states together, which is controlled using a linear slider represented by a Unity GameObject. A signal accesses the \textit{x} position of this GameObject and uses it as the timing property of the transition.
A morph like this is useful for controlled settings like data-driven storytelling, as the visualisation(s) are all predefined by the author.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{images/deimos-examples-slider.pdf}
\caption{Still images of the \textit{Geographic Scatterplot with Slider} morph, using Unity GameObjects as a slider to control the transition.}
\Description{A set of still images showing a hand moving a slider from left to right, which causes the points on a 2D scatterplot to move, and then expand into a choropleth map.}
\label{fig:deimos-examples-slider}
\end{figure}
\subsection{Embodied vs non-embodied morph examples}
In DG1 and DG3, we described a spectrum in which morphs vary between the use of embodied and non-embodied (or WIMP-based) interactions.
On the embodied end, the \textit{Barchart Unstacking} morph uses a ``pinch and pull'' gesture to unstack the bars of a 2D barchart into a side-by-side 3D barchart (shown in Figure~\ref{fig:deimos-examples-embodied} left). To strengthen the metaphor of bars being extruded out into 3D, a condition is added whereby the 2D barchart needs to be positioned against a surface for the morph to be allowed.
To initiate the transition, the user also needs to perform a pinch gesture on the visualisation itself, which is represented by a deictic signal. Other signals calculate the distance between the user's hand and the surface the visualisation is resting against.
The transition uses this distance as its timing property, causing the bars to extrude at the same rate which the user pulls away from them. In this fashion, the user perceives themselves as actually stretching the barchart into 3D. Of course, this is but one way in which embodied interaction can be achieved, but this approach can be replicated across other morphs to achieve similar styles of extrusion effects.
On the non-embodied end, the \textit{Menu-based Extrusion} morph adds a third spatial dimension to a 2D scatterplot, but does so via an MRTK toggle button~\cite{microsoftMRTK} (shown in Figure~\ref{fig:deimos-examples-embodied} right). A signal retrieves the state of this toggle button and triggers the transition when the button is toggled on. This example also demonstrates the use of a radial menu to select the field name of the newly added dimension. A signal retrieves the selected value and substitutes it into the 3D scatterplot state at keyframe creation.
In comparison to the \textit{Barchart Unstacking} morph, this example presents a much simpler and more familiar type of animated transition, albeit in an immersive environment.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{images/deimos-examples-embodied.pdf}
\caption{Examples of embodied and non-embodied morphs. Left: Still images of the \textit{Barchart Unstacking} morph, using a ``pinch and pull'' gesture to unstack a 2D barchart into 3D. Right: The result of the \textit{Menu-based Extrusion} morph showing the radial menu and toggle button.}
\Description{A two-part image. The first part shows a 2D barchart on a surface. A hand grabs onto this barchart and pulls away from it, causing the 2D barchart to extrude out into 3D, and level itself out into a 3D side-by-side barchart. The second part shows a 3D scatterplot. Next to it is a radial menu with the options ``Cylinders'', ``Displacement'', and ``Horsepower'', with Displacement being selected. Below it is a toggle button that is enabled labelled ``Extrude Visualisation''. The 3D scatterplot has a z dimension with the field Displacement.}
\label{fig:deimos-examples-embodied}
\end{figure}
\subsection{Prototyping morph interactions} \label{ssc: example-gallery-prototyping}
Lastly, we demonstrate how the grammar allows for signals to be easily swapped and modified to allow rapid prototyping of different interactions. In terms of the Cognitive Dimensions of Notations~\cite{greenCognitiveDimensionsNotations1989}, this corresponds to a low level of \textit{viscosity}.
In this example, we recreate \textit{Tilt Map} by Yang et al.~\cite{yangTiltMapInteractive2020} using Deimos (shown in Figure~\ref{fig:deimos-examples-tiltmap} top). Three states are defined: choropleth map, prism map, and barchart. Two transitions are defined to connect these states in a linear fashion. A signal is then created to retrieve the tilt angle of the visualisation relative to the horizontal plane. This tilt angle is then subdivided into two ranges at specific angles using expression signals, which are then used as tweening variables for the two transitions (choropleth to prism, prism to barchart). With this setup, a visualisation will morph between the different states depending on its tilt.
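A minimal sketch of this subdivision, assuming a source-based signal named \texttt{tilt} that emits the tilt angle in degrees and illustrative breakpoints at 30 and 60 degrees, might be:
\begin{verbatim}
{ "name": "choroplethToPrism", "expression": "tilt / 30" },
{ "name": "prismToBarchart", "expression": "(tilt - 30) / 30" }
\end{verbatim}
Each expression signal serves as the timing property of its respective transition. Since a transition only runs while its tweening signal lies between 0 and 1 (see Section~\ref{ssc: transitions}), the two transitions naturally hand over from one to the other as the tilt angle increases.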
However, we can easily change the manner in which the morph is controlled just by replacing the tilt angle with some other source. A straightforward example is to replace it with the height of the visualisation relative to the floor (shown in Figure~\ref{fig:deimos-examples-tiltmap} bottom). The two expression signals which subdivide the range also need to be updated to the new value ranges. In doing so we turn \textit{Tilt Map} into a so-called ``Height Map'', just by changing a few lines in the morph specification. The result is shown in Figure~\ref{fig:deimos-examples-tiltmap}.
\begin{figure*}[htb]
\centering
\includegraphics[width=\linewidth]{images/deimos-examples-tiltmap.pdf}
\caption{Top: Still images of the \textit{Tilt Map} morph based on Yang et al.~\cite{yangTiltMapInteractive2020}. A red and green angle bracket is shown to provide rotation cues. Bottom: A modified version of \textit{Tilt Map} showing changes to the signal specification and the resulting morph shown as still images. This example shows tilt being replaced with height. A red and green bar is shown to provide height cues. }
\Description{A two part image. The first part shows a series of still images of a 2D choropleth map being tilted. As it rotates, it morphs into a prism map. As it rotates even further, it morphs into a 2D barchart. The second part shows a modified version of the first part. Two signal specifications written in JSON are shown which shows the modifications made. Next to it are still images of a choropleth map turning into a prism map, then into a 2D barchart as it is lowered towards the ground.}
\label{fig:deimos-examples-tiltmap}
\end{figure*}
Inspired by work on small multiple layouts in immersive environments~\cite{liuDesignEvaluationInteractive2020}, we created the \textit{Proxemic-based Facet Curvature} morph (shown in Figure~\ref{fig:deimos-examples-faceted} top). It morphs a faceted chart between three different layouts: flat, curved, and spherical. These three layouts correspond to three states in the morph, with two transitions connecting them linearly. A signal retrieves the distance between the user's head and the visualisation, with two more signals subdividing the distance into tweening variables (similar to the \textit{Tilt Map} morph). As the user approaches the faceted chart, it begins to wrap around them into a curved layout, and when they are close enough it morphs into an egocentric spherical layout.
To demonstrate another method of controlling this morph, we can replace the distance signal with the value of a rotary dial (shown in Figure~\ref{fig:deimos-examples-faceted} bottom). As the user rotates the dial the small multiples curve inwards or outwards. To do so, we create a separate cylinder GameObject in Unity which functions as this dial. We then replace the distance signal with a signal which retrieves the rotation value of the cylinder, and we also update the ranges of the two subdividing signals. This functionally turns the proxemics-based interaction into one involving the manipulation of an external object. This object is currently only virtual, but the concept can be applied to physical objects using either tangible input or motion tracking.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.8\linewidth]{images/deimos-examples-facet.pdf}
\caption{Top: Still images of the \textit{Proxemic-based Facet Curvature} morph, which curves around the user based on the distance between them and the chart. Bottom: A modified version which replaces distance with the rotation of a separate dial object. The changes to the signal specification are shown with the resulting morph shown as still images.}
\Description{A two part image. The first part shows a series of still images of a faceted chart turning from a flat layout to a curved layout, then into a spherical layout, as a VR avatar approaches it. The second part shows a modified version of the first part. Two signal specifications written in JSON are shown which shows the modifications made. Next to it are still images of the same faceted chart going from flat to curved to spherical, but this time it is controlled using a virtual dial object.}
\label{fig:deimos-examples-faceted}
\end{figure*}
\section{Expert Evaluation} \label{sec: user-study}
We conducted an evaluation of Deimos in order to:
(i) determine the ease of use and expressiveness of the grammar;
(ii) get impressions on the concepts introduced in the grammar; and
(iii) generate discussion topics and research directions of the use of animated transitions in immersive environments.
\subsection{Study Design}
We use an approach similar to Zong and Pollock et al.~\cite{zongAnimatedVegaLite2022} by recruiting three developers of immersive analytics grammars and toolkits: Peter Butcher of VRIA~\cite{butcherVRIAWebBasedFramework2021}, Philipp Fleck of RagRug~\cite{fleckRagRugToolkitSituated2022}, and Ronell Sicat of DXR~\cite{sicatDXRToolkitBuilding2019}.
To diversify our participant pool, we also recruited Zeinab Ghaemi of immersive geovisualisation~\cite{ghaemiProxemicMapsImmersive2022a}, Tica Lin of embedded sports visualisation~\cite{linTheQuestOmnioculars2022}, and Jorge Wagner of the VirtualDesk exploration metaphor~\cite{wagnerfilhoVirtualDeskComfortableEfficient2018}.
We hoped to learn how Deimos could be positioned within each researcher's respective works.
To minimise learning requirements, we only invited researchers who have experience working with Unity.
The user study was conducted remotely in three sections, repeated for each participant.
First, we conducted a 30 minute introductory session where we explained the goals of the study, demonstrated the examples in Section~\ref{sec: example-gallery}, and went through high level concepts of the grammar.
Second, we tasked participants to use Deimos unsupervised for at least 2.5 hours. They were given walkthroughs and documentation\footnote{Walkthroughs and documentation are provided in supplementary material.} to learn the grammar, and were encouraged to create their own morphs with some suggestions given to them.
Third, we held a one-hour semi-structured interview based around the aforementioned evaluation goals. We asked participants to show us their created morphs, and asked questions relating to each participant's own field of research.
Throughout the study period, we modified the documentation based on participant feedback. While we made bug fixes to Deimos where necessary, we did not add or change any features. Each participant was offered an AU\$150 gift card as compensation for their time.
The interviews were recorded and transcribed. The first author independently performed thematic analysis on all six transcriptions, with two other authors doing the same on three transcriptions each. The three authors then discussed and synthesised the main themes together, which form the structure of this section and the following Discussion section.
\subsection{Usability feedback}
We compile participant feedback based on a selection of the most relevant Cognitive Dimensions of Notations~\cite{greenCognitiveDimensionsNotations1989}. Rather than using the dimensions as heuristics --- a common approach in related works (e.g.,~\cite{satyanarayanDeclarativeInteractionDesign2014, satyanarayanCriticalReflectionsVisualization2019}) --- we use them from a usability perspective to evaluate the Deimos grammar. However, we provide self-evaluation for certain dimensions where relevant.
\textbf{Error proneness (likelihood of making errors).}
All participants spent the required 2.5 hours using the toolkit; however, four of the six spent 7--8 hours using it. The initial reasoning given by most participants was that they enjoyed their time with Deimos and learning how it worked. On further inspection, however, it was clear that this was in part due to the steep learning curve of the grammar, with Fleck commenting ``I don't feel that three hours are enough.'' We identified several potential causes of this, largely due to the grammar's \textit{error proneness}.
First, many participants (Fleck, Ghaemi, Lin, and Wagner) were unfamiliar with the DXR grammar, with even Sicat not having used DXR for three years. As a result, two grammars needed to be learnt, naturally increasing learning time. As the Deimos grammar is intrinsically linked to its visualisation grammar (see Section~\ref{ssc: prototype-data-visualisations}), it is apparent that the choice of visualisation package brings not only technical but also notational difficulties.
Second, our documentation assumed full knowledge of Unity and its functions which not all participants had.
Third, the error messages provided by the Deimos prototype were not useful for participants. While the JSON schema validates whether the morph specification is syntactically correct before it is parsed, no check exists for semantic correctness (e.g., making sure \textit{name} properties are unique). This has since been corrected in the prototype.
Some participants suggested ways of easing the learning curve. Sicat suggested video tutorials to better explain the grammar, whereas Butcher suggested providing the DXR documentation as pre-reading before the study is even conducted. Interestingly, no participant suggested changes to the grammar itself beyond simple name changes (the terms \textit{signals} and \textit{restrict}). Whether this is due to participants not having had enough time to be exposed to Deimos' advanced features is unclear.
\textbf{Closeness of mapping (closeness to problem domain).}
The lack of grammar changes suggested by participants could be at least partially explained by its \textit{closeness of mapping}. All participants, when asked, had little to no issues understanding how the grammar models the state machine (see Figure~\ref{fig:deimos-state-machine-1}). The only participant who raised potential challenges was Fleck, citing the differences between declarative and imperative languages. As Unity primarily uses imperative programming, the shift to a declarative style in Deimos could cause confusion to certain users, particularly when constructing an interaction using signals. We don't believe this to be a major issue however, especially if the immersive visualisations also use a declarative language (e.g., DXR~\cite{sicatDXRToolkitBuilding2019}, VRIA~\cite{butcherVRIAWebBasedFramework2021}).
\textbf{Viscosity (resistance to change).}
After following the walkthroughs, all participants used the same strategy of combining parts of existing examples together to create new morphs to facilitate their learning. For example, Wagner combined the states and transitions of the \textit{Tilt Map} example and the signals of the \textit{Proxemic-based Facet Curvature} example to create a rudimentary ``Proxemic Map''. There are only a few examples of participants extending existing examples with completely new components: Sicat remapped the proxemic interaction of the \textit{Proxemic-based Facet Curvature} example with a virtual rotary dial (the same as in Section~\ref{ssc: example-gallery-prototyping}), and Butcher created a stacked barchart to side-by-side barchart morph based on whenever the mouse is clicked. These all demonstrate a low level of \textit{viscosity} within the grammar, as participants were generally able to achieve their goals without issue (minus the aforementioned issues regarding error proneness). The same concept was also described in Section~\ref{ssc: example-gallery-prototyping}.
However, poor error messages introduced viscosity for a few participants. For instance, Lin had tried to create a reduced version of the \textit{3D Barchart Partitioning} example by removing all surface related signals, but the toolkit did not warn her to remove the references to these signals in the states, resulting in errors. This need to keep track of changes in multiple parts of the specification contributes to higher viscosity.
\textbf{Visibility (ability to view components easily).}
Several participants (Fleck, Sicat, and Ghaemi) noted issues relating to the \textit{visibility} of signals in the grammar, primarily due to the large number of possible keywords involved. It was not obvious what options and/or combinations of signals are available without resorting to the documentation, although the JSON schema aided this process. The same participants acknowledged however that this reliance on documentation is fairly normal for toolkits, especially with only a few hours of experience.
From a technical perspective, the Deimos prototype improves visibility by exposing the names of any active morphs and/or transitions on each visualisation, and provides a toggle to print the emitted values of signals to the console for debug purposes. Further debug messages can also be enabled which shows the visualisation specifications of generated keyframes in JSON format. While these features were not explained in the documentation, they were highly useful during the development of Deimos and the creation of our example gallery.
\section{Discussion} \label{sec: discussion}
This section continues from Section~\ref{sec: user-study} by summarising the main themes and discussion topics of the semi-structured interviews with our expert participants.
\textbf{Adaptive morphs.}
While some participants liked the concept of adaptive morphs, others found that it got in the way of their authoring process.
Butcher saw value in adaptive morphs, saying ``I could see why that would be useful, especially if you had a large array of different charts... having it modular just makes sense.''
Wagner thought that ``the premise works well'', but clarified that he would prefer to have ``a [morph] specification for each type of graph'' instead of one hyper-generic morph that applies to all visualisation idioms.
Ghaemi was caught off-guard by this function when her new morph was unintentionally applied to other visualisations (a result of overly generic states), but she was able to resolve this by modifying the states to ensure that they were more specific.
Fleck and Sicat faced a similar issue, but instead suggested the ability to use an ID to directly target a specific visualisation, skipping the state matching process altogether. This was particularly of relevance to Fleck, where in \textit{RagRug}~\cite{fleckRagRugToolkitSituated2022} ``the user does not create a visualisation [themselves], but the system creates the existing visualisations.''
Overall, participants were able to grasp the concept of adaptive morphs, but it is apparent that their experiences come from the perspective of the morph author. A quantitative evaluation involving data analysis utilising pre-made morphs for practical tasks would be needed to fully evaluate the concept.
\textbf{The purpose of morphs.}
All participants found the examples exciting and interesting, but some had thoughts on their actual purpose. Ghaemi said that morphs are mainly useful when they add or change the data shown, rather than simply remapping encodings (e.g., \textit{Stacked Barchart Extrusion} example).
Lin similarly said that she would only use morphs when working with large amounts of data, such as combining proxemics with Shneiderman's mantra~\cite{shneidermanEyesHaveIt1996}, or when working with multiple views, but ``if it's only one smaller data set, and one chart, I probably wouldn't use it to morph between different columns.''
Butcher said that while our example morphs were ``neat and novel'', their animations did not strictly reveal new information in the way that a time varying scatterplot does.
Therefore, future work should investigate specific use cases for morphs and how morphs may potentially vary between them.
\textbf{Embodied interaction and discoverability.}
The reception to the use of embodied interactions in Deimos (DG2) was positive, but two participants raised discussion topics around their long term effects. Many of our example morphs use interaction metaphors for embodied interaction (e.g., collide with surface, pinch and pull). Sicat expressed concern over the use of these metaphors, saying ``...maybe in my application, pinning to the wall means or does something, and then someone else develops a morph where stick to the wall does something else... that might confuse people... there's no universal rule that says, pinning to the wall should do this.'' When asked if Deimos could play a role in shaping these metaphors, Sicat responded ``I would keep it open for now and just let [researchers] explore'', noting that the field is still not mature yet. He then suggested the use of tooltips to guide users in discovering morphs, especially when conflicting metaphors are used, but stated this is of low priority. In a similar vein, Lin suggested two ways of improving embodied morphs and their discoverability, especially as she had difficulties performing the rotation required for the \textit{3D Barchart Partitioning and Stacking} example. The first was to have the system predict what action the user is about to do, and display the morphs associated with that action in a ``gestural menu'' that the user can select to trigger the morph. The second was to show a preview of the morph while performing the interaction. When asked about the importance of these features, she said that they ``probably [don't] affect the current grammar, because it's more like an assistant towards the completion of certain interactions'', and that they are more like external scripts loaded after the core grammar. Overall, while there are broader implications of the use of embodied interaction in immersive analytics, we see power in Deimos being used to explore this design space in the long term, rather than immediately prescribing them in this work.
\textbf{GUIs and morph templates.}
Fleck, Sicat, and Ghaemi brought up ideas on how GUIs can be incorporated into Deimos. Fleck suggested the use of data flows in Node-RED to author morph specifications in JSON, similar to how visualisation specifications are created in \textit{RagRug}~\cite{fleckRagRugToolkitSituated2022}. Sicat recalled his own experiences developing DXR's GUI~\cite{sicatDXRToolkitBuilding2019}, noting that a GUI can be useful for non-experts and even end-users to create their own morphs. In a similar vein, Ghaemi said that a GUI would have greatly assisted her learning process with Deimos, citing her lack of experience in both DXR and toolkits in general. However, these participants clarified that the GUI should only cover basic functions, and advanced features should only be accessed in JSON format.
Sicat went on to suggest that the GUI could expose templates for different parts of the grammar that allows users to mix and match and create new morphs with, which would be exposed through dropdowns and menus. He compared this idea to how he used the grammar himself, saying ``I went through your examples, copied the morphs and then pasted it into my morphs and then just modified them a bit. So it's kind of same idea, right? Just a different interface. So for non experts [it] would be super easy.''
Lin suggested something similar except from an interaction perspective, especially as in our included examples ``the interaction you perform is very standardised.'' This feedback opens many future design possibilities for how a GUI for toolkits like Deimos might look, especially if it can allow end-users in VR or MR to create and/or modify their own morphs to suit their own needs without needing to write JSON.
\textbf{Inspiration drawn from the toolkit.}
All participants drew interesting comparisons between Deimos and their respective works. Wagner, Ghaemi, and Lin all showed great interest in morphs that transition between 2D and 3D. Wagner, in the context of his work on VirtualDesk~\cite{wagnerfilhoVirtualDeskComfortableEfficient2018}, said ``it would be very interesting to be able to just snap [3D visualisations] to the desk, and then they project to 2D, which is something that many experts are very comfortable with, but then I could show to them that they can extract [the visualisation] from the desk or from the wall, and try to grab it and look around...'' For Ghaemi, whose field is immersive geovisualisation~\cite{ghaemiProxemicMapsImmersive2022a}, it was to have the morph directly tied to adding layers to a virtual map: ``[when the] 3D chart collides with the map, the bars could be scattered through the buildings, so I can see the charts on top of the building.'' Lin raised ideas in the context of embedded sports visualisation~\cite{linTheQuestOmnioculars2022}, whereby ``you [can] drag the 2D charts onto a specific player, or maybe drag it onto the court, like the flat ground floor, and then it just suddenly morphs into this heatmap.'' In this sense, rather than a visualisation just morphing between 2D and 3D, it could also morph between being embedded and non-embedded~\cite{willettEmbeddedDataRepresentations2017}.
We then asked whether they could see themselves using Deimos to aid in their research.
Wagner thought that as a proof of concept it would work ``super well'', but cited poor scalability of the toolkit as a reason against using it.
Ghaemi was receptive, hypothesising that ``the [toolkit] that you have it's, at least, for some of [my ideas], I'm pretty sure that I can implement what I want''. She also noted that there are no other immersive analytics toolkits that currently enable animated transitions in the manner she desired. Lin said that ``there's a high chance that I could use this library to help me prototype some scene to show [sports analysts and coaches]''. After this proof of concept stage however, she would instead develop her own research prototype from the ground up to support specific features such as ``instant data updating''.
Lastly, Butcher said that ``seeing the change in data and understanding what you know, getting something out of it, it's important... certainly not enough attention has been paid to it in the past I don't think, especially in the immersive space.'' He followed this up by saying ``it's definitely something we're going to look at in future for sure, the effect is fantastic.''
While it is expected that not every researcher can make use of the Deimos grammar and the toolkit, our user study clearly demonstrates the significance of this work in generating further research ideas and promoting the study of animated transitions in immersive analytics.
\section{Limitations}
Our work naturally has several limitations in regards to the grammar, the technical implementation, and the user study. First, our grammar is built upon several key concepts such as dynamic morphs and embodied interaction. While we aimed to justify these ideas in Section~\ref{sec: design-goals}, we did not properly evaluate them with actual end-users in VR/MR performing data analysis tasks. Therefore, we cannot confidently say that our approach is quantifiably beneficial for immersive analytics.
Second, our participants were not exposed to all of the functionalities of Deimos. It is certainly possible that there are pain points when using Deimos' advanced functionalities which were not identified due to the limited amount of time participants spent using it. This could include the inability to perform certain embodied gestures with the grammar, or difficulties managing morphs that contain more than 2 or 3 states and/or transitions.
Third, as the grammar is dependent on the visualisation package that it is built upon, many of its limitations are born from DXR~\cite{sicatDXRToolkitBuilding2019}. Limitations include the inability to transition between different mark types, lack of runtime data transformations, and overall poor scalability compared to other toolkits like IATK~\cite{cordeilIATKImmersiveAnalytics2019} especially when rendering large amounts of data. The inability to transform data (e.g., aggregation and filtering) is especially troublesome as it meant that time varying animations (e.g., Gapminder~\cite{roslingGapminder2007}) were not considered while designing the grammar, and using certain visualisations in morphs such as barcharts required pre-processing. While we had attempted to add data transformations into DXR ourselves, the challenges in using .NET as a scripting language made it difficult to achieve a syntax remotely equivalent to that of Vega-Lite~\cite{satyanarayanVegaLiteGrammarInteractive2017}. We see this as obvious future work, especially as it can allow visualisations to morph not just between encodings, but also between different levels of aggregation, filters, or even different datasets.
\section{Conclusion} \label{sec: conclusion}
This paper presented Deimos, a grammar and toolkit for prototyping morphs in immersive environments. Morphs are a collection of animated transitions that occur between different defined states, which are triggered and modified by the use of signals. These morphs are dynamically applied to visualisations during runtime, and are designed to leverage embodied interaction to enable interactive animated transitions. We view Deimos as an initial foray into what a functional grammar to create embodied animated transitions in immersive environments would look like. While our example gallery and user study demonstrate Deimos' ability to create a wide range of morphs, future work would seek to understand how these morphs are used by actual data analysts and/or audiences of immersive data stories in VR/MR. We also hope that this work fuels greater interest in the use of dynamically morphing visualisations in immersive analytics.
|
{
"arxiv_id": "2302.13625",
"language": "en",
"timestamp": "2023-02-28T02:25:40",
"url": "https://arxiv.org/abs/2302.13625",
"yymm": "2302"
} | \section{Introduction}
When an expert lexicographer constructs a (monolingual) dictionary, one of the most challenging and time-consuming tasks is to create concise and comprehensive word meaning explanations, also referred to as (dictionary) definitions~\cite{gilliver2016making,gray1986creating,mckeown1993creating}. The standard approaches concentrate on selecting the shared vocabulary of terms used to describe the word and organize them in the order of the main word category followed by listing the characteristics which are specific to the word~\cite{svensen2009handbook}.
In this paper, we describe a new attempt to develop dictionary word explanations for Czech and English automatically, using statistical information aggregated from large text corpora. We work with the hypothesis that the meaning of a word can be deduced from its context~\cite{church2011pendulum}. Therefore, it is possible to abstract common collocations of a word and use them to explain the word meaning. Such an explanation helps the reader to understand the meaning of a word that was unfamiliar beforehand.
In the following section, we discuss the related work and the uniqueness of the presented approach. In sections three and four, we discuss the method and evaluate the results. Section five concludes the text.
\section{Related Work}
The attempts to actually \emph{create} explanations automatically have been rather scarce. Labropoulou et al.~\cite{CLgen} generated dictionary definitions from a computational lexicon, i.e.\ a lexicon of formalized and explicitly encoded semantic information about words. The results were comprehensible and the generated definitions were consistent, the downside being the need for the ontological background in the form of the computational lexicon. That is why the authors were focusing only on selected concrete entities; our aim is to cover a broader part of the vocabulary.
The automated definition construction process has been mostly solved
by text mining approaches. There were attempts to \emph{find},
\emph{mine}, or \emph{extract} definitions. Early approaches, such
as~\cite{zdipl3}, used rule-based or pattern-based approaches to
identify text passages containing the sought term and its explanation.
Such an approach is usually limited to a selected domain and sources of
texts to allow for acceptable precision and recall. The pattern-based
approach was later adjusted for mining from very large
corpora~\cite{zdipl4} which offered improved precision of
\mbox{73--74\%} with Wikipedia corpora and 31--57\% with large web
corpora. Borg et al.~\cite{zdipl1} employed genetic programming
techniques to generate the best definition templates and to learn to
rank these templates by importance. The templates were then used to
identify definitions in non-technical texts with high precision (up to
100\%) but with about 50\% recall. Later works~\cite{zdipl5,zdipl2} solved the definition text search by
annotating a corpus of definitions (from Wikipedia or from scientific
papers) and then training a sequence labeling classifier to mark words
as \emph{term}, \emph{definition} or \emph{other}. This technique
improved the F-score with the Wikipedia benchmark corpus to 85\%.
All these techniques concentrate on extracting (parts of) the definitions from existing human-made texts. Such an approach is useful for summarization of technical terminology and educational texts, but not for explanations of general notions. Another possible problem with the extractive approaches lies in copyright (authorship) law and its possible breach.
In the following text, we concentrate on extending our previous work
published by Stará and Kovář~\cite{staraslanen,staraslancs,svojtou}.
The method and the involved tools are described in detail here and
a quantitative evaluation of explanations is offered.
\section{Method}
The presented explanation creation method has been evaluated with nouns, adjectives and verbs in the Czech and English languages using Word Sketches~\cite{ske10,ske04,skegram} compiled with specific sketch grammars and the \emph{csTenTen12}\footnote{\url{https://www.sketchengine.eu/cstenten-czech-corpus/}} and \emph{enTenTen13}\footnote{\url{https://www.sketchengine.eu/ententen-english-corpus/}} corpora provided by the Sketch Engine corpus management system.$\!$\footnote{Sketch Engine is a tool analysing text corpora to identify instantly what is typical in language and what is rare, unusual or emerging usage. See \url{https://www.sketchengine.eu/} for details.}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.8\textwidth]{ws_skegrammar}
\end{center}
\caption{An example of sketch grammar relations of
\emph{pronominal subjects} and \emph{modifiers of a word}}
\label{fig_gram}
\end{figure}
A sketch grammar is a set of syntactic queries written in the corpus query language (CQL~\cite{cql}) to identify inter-word relations based on their position, distance, part-of-speech tag and word form (see Figure~\ref{fig_gram} for an example). The grammar, i.e.\ the set of word relation rules, can be applied to a large text corpus to create the word sketches. The word sketches show statistically meaningful collocations of a given word organized by the relation rules; an example is shown in Figure~\ref{fig1}.
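To give the flavour of the formalism, below is a simplified, hypothetical rule in the spirit of Sketch Engine sketch grammars (the grammars actually used are considerably more elaborate; Figure~\ref{fig_gram} shows a real excerpt):
\begin{verbatim}
*DUAL
=modifier/modifies
2:[tag="JJ.*"] 1:[tag="NN.*"]
\end{verbatim}
The \texttt{*DUAL} directive declares a pair of mutually inverse relations named \texttt{modifier} and \texttt{modifies}, and the CQL pattern matches an adjective (position \texttt{2:}, the collocate) immediately preceding a noun (position \texttt{1:}, the headword).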
\begin{figure}[t]
\includegraphics[width=1\textwidth]{ws_deadline}
\caption{Word sketches for the word \emph{deadline}} \label{fig1}
\end{figure}
The definition grammars are partially based on existing grammars for the above-mentioned corpora. Specific modifications are focused mainly on identifying hypernyms/hyponyms and meronyms/holonyms for nouns, opposites and specific noun collocations for adjectives, and prepositional phrases for verbs.
Apart from word sketches, we also employ the thesaurus tool~\cite{ske10,ske04} to obtain synonyms.
These tools provide the grounding of the information that should be contained in the explanation. Following the explanation schemata of renowned dictionary-creation guidelines~\cite{ogpl,landau,ml,plr}, we have compiled a list of definition/ex\-pla\-na\-tion types. In the overview below, we show the links between these types and the sketch grammar relations.
The standard way to explain a word meaning consists of two parts: the \emph{genus proximum} and the \emph{differentia specifica}. In other words, it uses a hypernym as the core determination and a set of features that distinguish the word from other related words. As long as the headword is a (concrete) noun, there is quite a clear path to a hypernym. Identification of the distinguishing features is, however, not so straightforward. They can have the form of a \emph{verb} (``dog: an animal that barks''), a \emph{noun} describing what the headword has (``snake: an animal with forked tongue'') or what the headword lacks (``snake: an animal with no legs''). In this regard, meronyms and holonyms are a specific case (``cutlery: spoon, fork, knife'').
Another approach to explaining lies in using the \emph{ostensive definition} or explaining by pointing. Pointing is quite helpful when explaining adjectives (``blue as the sky''). For adjective definitions, opposites can also bring clarification (``dead: not alive'').
Using synonyms for explaining is not much encouraged; however, we find synonymy (or, in the case of verbs, troponymy) generally useful. Here, we should remark that we interpret synonymy quite loosely, taking into account any words that have a similar meaning.
To describe the meaning of \emph{verbs}, we mostly make use of valency, focusing especially on objects.
We use valency with other parts-of-speech as well, mainly to find nouns and adjective modifiers.
\begin{figure}[t]
\noindent
\begin{minipage}{\textwidth}
\begin{quote}
\emph{\textbf{bone}}:
\begin{enumerate}
\item similar meaning as a/an \textbf{bone} can have (a/an) \emph{tooth}, \emph{joint}, \emph{muscle}, \emph{tissue}, \emph{fracture}, \emph{calcium}, \emph{osteoporosis}, \emph{skull}, \emph{spine}, \emph{injury}, \emph{remain}
\item \textbf{bone} can be \emph{bare}, \emph{pubic}, \emph{brittle}
\item for example (a/an) \emph{femur}, \emph{vertebra}
\item \textbf{bone} can have/contain (a/an) \emph{marrow}, \emph{skull}, \emph{joint}, \emph{tooth}
\item (a/an) \emph{tissue}, \emph{osteoporosis} can have/contain (a/an) \textbf{bone}
\item \textbf{bone} can \emph{fragment}, \emph{heal}, \emph{fracture}
\item sth/sb can \emph{break}, \emph{strengthen}, \emph{fracture} a/an \textbf{bone}
\item \textbf{bone} of (a/an) \emph{contention}, \emph{skull}, \emph{spine}
\item \textbf{bone} with (a/an) \emph{flesh}, \emph{marrow},
\emph{meat}
\end{enumerate}
\end{quote}
\end{minipage}
\caption{An example automatic explanation of the word \emph{bone}}
\label{fig_bone}
\end{figure}
To create the explanation as such, we use an automated script to combine all results together, the main source being the word sketches, with the thesaurus providing extra synonyms. The word sketches are sorted by their frequency score and limited to the first three results. Some of the sketch relations are merged into one line of the explanation while removing duplicates. An example explanation for the word \emph{bone} is presented in Figure~\ref{fig_bone}. The lines are enumerated to make referencing easier.
The first line joins results from several relations~--~\emph{cooperation}, \emph{hypernymy}, and results from the \emph{thesaurus}. The second line introduces significant \emph{adjective modifiers}. The third line shows \emph{examples} of the headword. The fourth and fifth lines present the \emph{meronyms} and \emph{holonyms}, respectively.
The sixth and seventh lines list \emph{verbs} that typically have the headword as a \emph{subject} and as an \emph{object}, respectively. The eighth and ninth lines list \emph{nouns} connected with the headword by the \emph{genitive} and \emph{instrumental} cases.
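To illustrate the combination step, the following is a minimal Python sketch of how such a script can merge relations into explanation lines; the relation names and the structure of the \texttt{sketches} input are hypothetical simplifications, not the actual Sketch Engine output format.
\begin{verbatim}
def build_explanation(headword, sketches, thesaurus, top=3):
    """sketches: dict mapping a relation name to a list of
    (collocate, score) pairs; thesaurus: a list of synonyms."""
    def pick(*relations):
        # merge several relations, sort by score, drop duplicates,
        # and keep only the `top` strongest collocates overall
        merged = sorted((pair for rel in relations
                         for pair in sketches.get(rel, [])),
                        key=lambda pair: -pair[1])
        seen, out = set(), []
        for word, _ in merged:
            if word not in seen:
                seen.add(word)
                out.append(word)
        return out[:top]

    lines = [
        "similar meaning: " + ", ".join(pick("hypernym", "coordination")
                                        + thesaurus[:top]),
        headword + " can be: " + ", ".join(pick("adj_modifier")),
        "for example: " + ", ".join(pick("hyponym")),
        headword + " can have/contain: " + ", ".join(pick("meronym")),
    ]
    return [f"{i}. {line}" for i, line in enumerate(lines, 1)]
\end{verbatim}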
\section{Evaluation}
For both Czech and English, we manually evaluated a test set consisting of 71~nouns, 33~adjectives and 40~verbs.$\!$\footnote{42 for Czech due to aspect variants.}
\begin{table}[t]
\centering
\caption{Indicators in Czech and English Explanations}
\label{cs_en_markers}
\tiny
\begin{minipage}{.47\textwidth}
\centering
Indicators in Czech Explanations
\begin{tabular}{lrrr}
\hline
& N~~~ & J~~~ & V~~~ \\ \hline
synonym & 92.96\% & 78.79\% & 90.48\% \\
J modifier & 95.77\% & - & - \\
subject & 97.18\% & - & 90.48\% \\
object & 94.37\% & - & 100.00\% \\
hypernym & 67.61\% & - & - \\
hyponym & 29.58\% & - & - \\
meronym & 54.93\% & - & - \\
holonym & 46.48\% & - & - \\
A modifier & - & - & 90.48\% \\
(such) as & - & 36.36\% & - \\
troponym & - & - & 45.24\% \\
opposite & - & 63.64\% & - \\
PP & - & - & 97.62\% \\
infrequent & 54.93\% & 57.58\% & 45.24\% \\
data issues & 81.69\% & 30.30\% & 95.86\% \\\hline
\end{tabular}
\end{minipage}\hspace{6mm}%
\begin{minipage}{.47\textwidth}
\centering
Indicators in English Explanations
\begin{tabular}{lrrr}
\hline
& N~~~ & J~~~ & V~~~ \\\hline
synonym & 92.96\% & 72.73\% & 85.00\% \\
J modifier & 87.32\% & - & - \\
subject & 69.01\% & - & 85.00\% \\
object & 80.28\% & - & 87.50\% \\
hypernym & 54.93\% & - & - \\
hyponym & 21.13\% & - & - \\
meronym & 56.34\% & - & - \\
holonym & 56.34\% & - & - \\
A modifier & - & 75.76\% & 9.00\% \\
as & - & 42.42\% & - \\
troponym & - & 0.00\% & 52.50\% \\
opposite & - & 45.45\% & - \\
PP & - & - & - \\
infrequent & 61.97\% & 18.18\% & 35.00\% \\
data issues & 12.68\% & 36.36\% & 32.50\% \\\hline
\end{tabular}
\end{minipage}
\medskip
N: noun, J: adjective, V: verb, A: adverb, PP: prepositional phrase
\end{table}
The evaluation proceeded in a quantitative way, measuring the numbers of occurrences of identified features. The resulting aggregated score should correspond to a decision about the usefulness of the explanation. The indicators of a (presumably) good explanation are the presence of (useful):
\begin{itemize}
\item synonyms: all parts-of-speech
\item adjective modifiers: nouns
\item adverbial modifiers: verbs, (adjectives)
\item noun collocation: adjectives (\textit{(such) as}), verbs (\textit{is subject/object of})
\item verb collocation: nouns (\textit{subject/object})
\item opposites: adjectives
\item hypernyms/hyponyms: nouns
\item meronyms/holonyms (part of): nouns
\item troponyms: verbs
\item prepositional phrases: verbs
\end{itemize}
On the other hand, certain features may also serve as negative indicators.
Markers of possible problems with the explanation are the presence of:
\begin{itemize}
\item infrequent expressions
\item errors caused by the corpus data (wrong lemma/tag;
interchanging objects and subjects, meronyms and holonyms,
etc.)
\end{itemize}
Table~\ref{cs_en_markers} lists the ratios of explanations that contain the given (positive or negative) indicator. The fact that an indicator is not present does not necessarily mean that the explanation is bad or insufficient: different words require different indicators, as discussed in Section~\ref{Neval}.
Table~\ref{overviev} shows the total number of explanations that are sufficient as-is, denoted as \emph{good} explanations. The explanations that contain \emph{some} of the important data but are either incomplete (lack some of the necessary information), contain too much junk data, or present the data in a misleading way (e.g.\ opposites presented as synonyms) are counted as \emph{post-edit}. The last group of explanations, \emph{bad}, consists of those that are completely insufficient or contain so many issues that they would require rewriting, not just post-editing.
As the results were evaluated manually, we necessarily brought a subjective view to the evaluation, based on our experience as language and dictionary users. Nevertheless, we tried to minimize the bias by following the above-mentioned indicators.
We plan to engage more evaluators in the future to offer a broad objective assessment of the quality and intelligibility of the explanations.
\begin{table}[t]
\caption{Overall Quality of Explanations}
\label{overviev}
\centering
\begin{tabular}{lrr@{\quad}r}
\hline
Czech & good & ~~post-edit & bad \\\hline
N & 39.44\% & 52.11\% & 8.45\% \\
J & 42.42\% & 27.27\% & 30.30\% \\
V & 16.67\% & 78.57\% & 4.76\% \\\hline
\end{tabular}\hspace{8mm}%
\begin{tabular}{lrr@{\quad}r}
\hline
English & good & ~~post-edit & bad \\\hline
N & 39.44\% & 49.30\% & 11.27\% \\
J & 21.21\% & 60.61\% & 18.18\% \\
V & 23.81\% & 69.05\% & 2.38\% \\\hline
\end{tabular}
\medskip
N: noun, J: adjective, V: verb
\end{table}
\subsection{Nouns}\label{Neval}
In this section, we present a more detailed evaluation of noun explanations, offering a comparison with existing dictionary definitions in the Macmillan Dic\-tio\-nary.$\!$\footnote{\url{https://www.macmillandictionary.com/}}
A universally acknowledged truth says that a noun explanation should contain its hypernym. Even though this is generally true, as e.g.\ in Example~\ref{ex1} below, there are counterexamples, such as Examples~\ref{ex2} and~\ref{ex3}, where the hypernym is too general or replaced by a synonym, respectively. When evaluating the testing dataset against the established dictionary, we see that a hypernym is present in 70.42\% of the noun definitions, while 4.23\% of the headwords are not defined in the dictionary.
\begin{quote}
\begin{example}\label{ex1}
\textbf{\emph{deer}}: a large brown \textbf{animal} with long thin legs. The adult male \textbf{deer} is called a stag and may have antlers growing from its head. The female \textbf{deer} is called a doe and a young \textbf{deer} is called a fawn.\footnote{\url{https://www.macmillandictionary.com/dictionary/british/deer}}
\end{example}
\begin{example}\label{ex2}
\emph{\textbf{teacher}}: someone whose job is to \textbf{teach}\footnote{\url{https://www.macmillandictionary.com/dictionary/british/teacher}}
\end{example}
\begin{example}\label{ex3}
\emph{\textbf{stream}}: a small narrow \textbf{river}\footnote{\url{https://www.macmillandictionary.com/dictionary/british/stream_1}}
\end{example}
\end{quote}
Examples~\ref{ex1}, \ref{ex2}, and~\ref{ex3} are all cases of a good definition, as they convey what the headword means. It is important to note that these explanations use different strategies as to which semantic relations to employ.
In the second example, a verb describing the prototypical activity of the headword is necessary, while in the third example, only a synonym with a few modifiers is sufficient. Such an approach is not always applicable, as can be seen in Example~\ref{ex4}. An explanation like this could just as well be used to describe a \emph{shrew}, \emph{rat}, or \emph{opossum}, or even a \emph{cat}. To avoid such confusion, we decided to prefer redundant data over data scarcity.
\begin{quote}
\begin{example}\label{ex4}
\emph{\textbf{mouse}}: a small furry \textbf{animal} with a long tail\footnote{\url{https://www.macmillandictionary.com/dictionary/british/mouse_1}}
\end{example}
\end{quote}
Example~\ref{ex5} shows an automatically created explanation that can be compared with the human-made one. The explanation contains hypernyms (\emph{water, waterway}; possibly also \emph{source, body}); synonyms (\emph{river, tributary}; possibly also \emph{lake, pond, channel}); and verb collocates for \emph{\textbf{stream}} as a subject (\emph{flow, meander}).
\begin{quote}
\begin{example}\label{ex5}\hfuzz=25pt
\emph{\textbf{stream}}:
\begin{itemize}
\item similar meaning as a/an \textbf{stream} can have (a/an) \emph{river}, \emph{lake}, \emph{pond}, \emph{flow}, \emph{channel}, \emph{tributary}, \emph{water}, \emph{source}, \emph{waterway}, \emph{body}
\item \textbf{stream} can be \emph{steady}, \emph{endless},
\emph{constant}
\item for example (a/an) \emph{river}, \emph{habitat}
\item \textbf{stream} can have/contain (a/an) \emph{trout}, \emph{flow}, \emph{waterfall}
\item (a/an) \emph{watershed}, \emph{valley} can have/contain (a/an) \textbf{stream}
\item \textbf{stream} can \emph{flow}, \emph{meander}, \emph{replenish}
\item sth/sb can \emph{cross}, \emph{never-end}, \emph{flow} a/an \textbf{stream}
\item \textbf{stream} of (a/an) \emph{income}, \emph{consciousness}, \emph{revenue}
\item \textbf{stream} with (a/an) \emph{waterfall}, \emph{trout}
\end{itemize}
\end{example}
\end{quote}
\noindent
The results for nouns are encouraging, as a significant number of the explanations help the reader understand the word meaning without the need for \emph{excessive} post-editing. The other parts of speech seem to require a slightly different approach, mainly if we compare the explanations to existing dictionary definitions. As the main function of adjectives is to modify, we need to change the paradigm and accept the fact that the results can be helpful even when they do not conform to the standard definitions. A similar problem occurs with verbs.
\section{Conclusions}
In this paper, we introduced a new tool for the automatic construction of word meaning explanations for Czech and English, using large corpora and especially the word sketches technique. We have devised a quantitative evaluation of the explanations, focusing mainly on explanations of nouns.
The presented approach gathers enough data to construct explanations for 91.25\% and 88.73\% of nouns for Czech and English, respectively. As a majority of the results need post-editing, the output in general is not yet in a state that could be presented to users as actual explanations. However, it can already serve as a basis for human-made explanations or definitions.
To further improve our work, the next tasks will be finding out which words need which specific approach, such as deciding which words do (not) require a hypernym, or for which words it is necessary to output a verb collocation. We believe such steps will further improve the results and reduce the need for post-editing.
\bibliographystyle{plain}
\section{Proof of \Cref{thm:upper}}
\section{Coreset Lower Bound for General \ProblemName{$k$-Median}\xspace in $\mathbb{R}$}
\label{sec:general1dkmedianlower}
We prove the general case of \Cref{thm:Lowerbound1dkmedian} here.
\begin{proof}[Proof of the general case of \Cref{thm:Lowerbound1dkmedian}]
We first construct the hard instance $P$. Let $P_1$ denote the hard instance we have constructed in the proof of \Cref{thm:Lowerbound1dkmedian}. We take a large enough constant $L>0$, take $P_i = (i-1)L + P_1$, and take $P = \cup_{i=1}^{\frac{k}{2}} P_i$. Here $(i-1)L + P_1$ means $\{(i-1)L+p \mid p \in P_1\}$.
The dataset $P$ is a union of $\frac{k}{2}$ copies of $P_1$. These copies are far from each other. Thus the \ProblemName{$k$-Median}\xspace problem on $P$ can be decomposed into a \ProblemName{$2$-Median}\xspace problem on each copy. We prove the \ProblemName{$k$-Median}\xspace lower bound by applying the argument for the \ProblemName{$2$-Median}\xspace lower bound to every single copy and combining the results.
We denote $P_1 = \cup_{j =1}^{\frac{1}{\epsilon}} I_{1,j}$, where $I_{1,j}$ is the $j$-th interval we constructed in the proof of the \ProblemName{$2$-Median}\xspace case of \Cref{thm:Lowerbound1dkmedian}. We denote $I_{i,j} = (i-1)L + I_{1,j}$, and we denote the left and right endpoints of $I_{i,j}$ by $l_{i,j}$ and $r_{i,j}$, respectively. We have $P_i = \cup_{j =1}^{\frac{1}{\epsilon}} I_{i,j}$.
Now, assume that $S$ is an $\frac{\epsilon}{300}$-coreset of $P$ such that $|S| < \frac{k}{4\epsilon}$; we derive a contradiction. Since $|S| < \frac{k}{4\epsilon}$, for at least half of the indices $i$ there is some $j_i$ with $(l_{i,j_i}, r_{i,j_i}) \cap S = \varnothing$. Without loss of generality, we assume that these indices are $1,2,\dots,\frac{k}{4}$. We define a parametrized query family as $Q(t) = \cup_{i=1}^{\frac{k}{2}} Q_i(t)$, where $t \in [\frac{1}{3},1]$ and
\[
Q_i(t) = \begin{cases}
\{l_{i,1}, l_{i,j_i} + t(r_{i,j_i} - l_{i,j_i}), r_{i,j_i} \} \quad \text{for } i \le \frac{k}{4},\\
\{l_{i,1}\} \quad \text{otherwise}.
\end{cases}
\]
Consider $\ensuremath{\mathrm{cost}}(P,Q(t))$, a function of $t$. Since $L$ is large enough, we have $\ensuremath{\mathrm{cost}}(P,Q(t)) = \sum_{i=1}^{\frac{k}{2}} \ensuremath{\mathrm{cost}}(P_i,Q_i(t))$. The computation we have done in the proof of the \ProblemName{$2$-Median}\xspace case of \Cref{thm:Lowerbound1dkmedian} implies that $\ensuremath{\mathrm{cost}}(P_i,Q_i(t)) \le \frac{2}{\epsilon}$ for each $i$ and
\[(1-\frac{1}{3})^2 \frac{\mathrm{d^2}}{\mathrm{dt^2}} \ensuremath{\mathrm{cost}}(P_i,Q_i(t)) =
\begin{cases}
\frac{4}{9} \quad \text{for } i \le \frac{k}{4}, \\
0 \quad \text{otherwise}.
\end{cases}
\]
Thus we have $\ensuremath{\mathrm{cost}}(P,Q(t)) \le \frac{k}{\epsilon}$ and $(1 - \frac{1}{3})^2 \frac{\mathrm{d^2}}{\mathrm{dt^2}} \ensuremath{\mathrm{cost}}(P,Q(t)) = \frac{k}{9}$.
It is easy to see that $\ensuremath{\mathrm{cost}}(S,Q(t))$ is affine linear in $t$ since $(l_{i,j_i}, r_{i,j_i}) \cap S = \varnothing$ for $i \le \frac{k}{4}$. Since $S$ is an $\frac{\epsilon}{300}$-coreset, we have $|\ensuremath{\mathrm{cost}}(S,Q(t)) - \ensuremath{\mathrm{cost}}(P,Q(t))| \le \frac{\epsilon}{300} \ensuremath{\mathrm{cost}}(P,Q(t))$. By \Cref{lem:quadratic_approximation}, applied with $\alpha = \frac{k}{\epsilon}$ and $\beta = \frac{k}{9}$, we must have $\frac{\epsilon}{300} \ge \frac{1}{32} \cdot \frac{\epsilon}{k} \cdot \frac{k}{9} = \frac{\epsilon}{288} > \frac{\epsilon}{300}$, which leads to a contradiction.
\end{proof}
\section{Proof of \Cref{thm:lowerbound} for General $z\geq 1$}
\label{sec:generalz}
Using similar ideas from \citep{cohenaddad2022towards}, our proof of the lower bound for $z=2$ can be extended to arbitrary $z$. First, we provide two lemmas analogous to \Cref{lem:cost-to-basis} and \Cref{lem:cost-to-smallset} for general $z\ge 1$. Their proofs can be found in Appendix A in \citep{cohenaddad2022towards}.
\begin{lemma}\label{lem:cost-to-basis-z}
For any even number $k\ge 2$, let $\{c_1,\cdots,c_k\}$ be arbitrary $k$ unit vectors in $\mathbb{R}^d$ such that for each $i$ there exists some $j$ satisfying $c_i=-c_j$. We have
\begin{align*}
\sum_{i=1}^{d/2} \min_{\ell=1}^k\|e_i-c_{\ell}\|^z \ge 2^{z/2-1}d - 2^{z/2}\max\{1, z/2\}\sqrt{\frac{kd}{2}}.
\end{align*}
\end{lemma}
\begin{lemma}\label{lem:cost-to-smallset-z}
Let $S$ be a set of points in $\mathbb{R}^d$ of size $t$ and $w: S\rightarrow \mathbb{R}^+$ be their weights. For arbitrary values $\Delta_p$, one for each $p\in S$, there exist $2$ unit vectors $v_1, v_2$ satisfying $v_1=-v_2$, such that
\begin{align*}
\sum_{p\in S} w(p)\min_{\ell=1,2} \left(\|p-v_{\ell}\|^2+\Delta_p^2 \right)^{z/2} \le &\sum_{p\in S}w(p)(\|p\|^2+1+\Delta_p^2)^{z/2}\\
&-\min\{1,z/2\}\cdot \frac{2\sum_{p\in S} w(p)(\|p\|^2+1+\Delta_p^2)^{z/2-1} \|p\|}{\sqrt{t}}.
\end{align*}
\end{lemma}
\noindent
In this proof, the original point set $P$ and three sets of $k$ centers, namely $C_1,C_2,C_3$, are the same as for the case $z=2$. The difference is that now $I=\{j:|S_j|\le \frac{d}{t^2}\}$ for a parameter $t$ fixed at the end of the proof, and when constructing $C_2$, we use \Cref{lem:cost-to-smallset-z} in place of \Cref{lem:cost-to-smallset}. Again, we compare the costs of $P$ and $S$ w.r.t.\ $C_1,C_2,C_3$ and get the following lemmas.
\begin{lemma}
For $C_1$ constructed above, we have $\ensuremath{\mathrm{cost}}(P,C_1) = \frac{kd}{4}\cdot 2^{z/2}$ and
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_1) =\sum_{j\in {I}}\sum_{p\in S_j} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1)^{z/2} +\sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\|p-c_j\|^{z}.
\end{align*}
\end{lemma}
\begin{proof}
Since $e_i$ is orthogonal to $c_j-jLe_0$ and $c_j-jLe_0$ has unit norm for all $i,j$, it follows that
\begin{align}
\ensuremath{\mathrm{cost}}(P,C_1) &=
\sum_{j=1}^{k/2}\sum_{i=1}^{d/2} \min_{c\in C_1}\|jLe_0+e_i-c\|^{2\cdot z/2} = \sum_{j=1}^{k/2} \sum_{i=1}^{d/2} \|jLe_0+e_i -c_j\|^{2\cdot z/2} \nonumber\\
&=\sum_{j=1}^{k/2}\sum_{i=1}^{d/2}(\|e_i\|^2+\|c_j-jLe_0\|^2-2\langle e_i, c_j-jLe_0 \rangle )^{z/2}\nonumber\\
&=\frac{kd}{4}\cdot 2^{z/2}.\label{eqn:cost-of-c-z}
\end{align}
On the other hand, the cost of $C_1$ w.r.t.\ $S_j$ is
\begin{align}
\sum_{p\in S_j} \min_{c\in C_1}w(p)\|p-c\|^{2\cdot z/2} &= \sum_{p\in S_j} w(p)\|p-c_j\|^{2\cdot z/2} = \sum_{p\in S_j} w(p)\|p-jLe_0+ jLe_0-c_j\|^{2\cdot z/2} \nonumber\\
&= \sum_{p\in S_j}w(p) \left( \|p-jLe_0\|^2 + 1 -2\langle p-jLe_0, jLe_0-c_j \rangle \right)^{z/2}.\label{eqn:cost-of-c-to-S-z}
\end{align}
For $j\in I$, the inner product is $0$, and thus the total cost w.r.t.\ $S$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_1) = \sum_{j\in {I}}\sum_{p\in S_j} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1)^{z/2} + \sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\|p-c_j\|^{z},
\end{align*}
which finishes the proof.
\end{proof}
\noindent
For notational convenience, we define $\kappa := \sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\|p-c_j\|^{z}$.
Since $S$ is an $\varepsilon$-coreset of $P$, we have
\begin{align}\label{eqn:weight-constraints-z}
\frac{kd}{4}\cdot 2^{z/2} - \frac{\varepsilon kd}{4}\cdot 2^{z/2} \le \sum_{j\in {I}}\sum_{p\in S_j} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1)^{z/2} + \kappa \le \frac{kd}{4}\cdot 2^{z/2} + \frac{\varepsilon kd}{4}2^{z/2}.
\end{align}
Next we consider a different set of $k$ centers denoted by $C_2$. By \Cref{lem:cost-to-smallset-z}, there exist unit vectors $v^j_1,v^j_2 \in \mathbb{R}^d$ satisfying $v^j_1=-v^j_2$ such that
\begin{align}
\sum_{p\in S_j} w(p)(\min_{\ell=1,2} \left(\|\Tilde{p}-v^j_{\ell}\|^2+\Delta^2_p\right)^{z/2}) \le&
\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} \nonumber\\
&- \min\{1,z/2\}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}. \label{eqn:small-coreset-cost-z}
\end{align}
We apply this to all $j\in I$ and obtain the corresponding $v^j_1,v^j_2$ for each $j\in I$. Let $C_2=\{u_1^1,u_2^1,\cdots, u_1^{k/2},u_2^{k/2}\}$ be a set of $k$ centers in $\mathbb{R}^{d+1}$ defined as follows: if $j\in I$, $u_{\ell}^j$ is $v_{\ell}^j$ with an additional $0$th coordinate with value $jL$, making them lie in $H_j$; for $j\in \Bar{I}$, we use the same centers as in $C_1$, i.e., $u_{1}^j=u_{2}^j =c_j$.
\begin{lemma}
For $C_2$ constructed above, we have
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_2) \ge 2^{z/2}\left(\frac{kd}{4} -\max\{1, z/2\}\sqrt{d}|I|\right), \text{ and }
\end{align*}
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_2) \le& \sum_{j\in I}\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} \\
&- \min\{1,z/2\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}+\kappa.
\end{align*}
\end{lemma}
\begin{proof}
By \eqref{eqn:small-coreset-cost-z},
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_2) &= \sum_{j=1}^{k/2} \sum_{p\in S_j}w(p)\min_{c\in C_2}\|p-c\|^{2\cdot z/2} =
\sum_{j\in I}\sum_{p\in S_j} w(p)\min_{\ell=1,2} (\|\Tilde{p}-v^j_{\ell}\|^2+\Delta^2_p)^{z/2} +\kappa\\
&\le
\sum_{j\in I}\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} \\
&- \min\{1,z/2\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}+\kappa.
\end{align*}
By \Cref{lem:cost-to-basis-z} (with $k=2$), we have
\begin{align*}
\sum_{i=1}^{d/2} \min_{\ell=1,2}\|e_i-v^j_{\ell}\|^z \ge 2^{z/2-1}d - 2^{z/2}\max\{1, z/2\}\sqrt{d}.
\end{align*}
It follows that
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_2)&=\sum_{j=1}^{k/2}\sum_{i=1}^{d/2} \min_{c\in C_2}\|jLe_0+e_i-c\|^{z}\\
&= \sum_{j\in I}\sum_{i=1}^{d/2} \min_{\ell=1,2} \|e_i-v^{j}_{\ell}\|^{2\cdot z/2} + \sum_{j\in \Bar{I}}\sum_{i=1}^{d/2} \|jLe_0+e_i-c_j\|^{2\cdot z/2} \\
& \ge \left( 2^{z/2-1}d - 2^{z/2}\max\{1, z/2\}\sqrt{d} \right)|I| + |\Bar{I}|\frac{d}{2}\cdot 2^{z/2}\\
&= \frac{kd}{4}2^{z/2} - 2^{z/2}\max\{1, z/2\}\sqrt{d}|I|,
\end{align*}
where in the inequality, we also used the orthogonality between $e_i$ and $c_j-jLe_0$.
\end{proof}
\noindent
Since $S$ is an $\varepsilon$-coreset of $P$, we have
\begin{align*}
&2^{z/2}\left(\frac{dk}{4}-\max\{1, z/2\}|I|\sqrt{d} - \frac{\varepsilon dk}{4}\right) \le 2^{z/2}\left(\frac{kd}{4} -\max\{1, z/2\}\sqrt{d}|I|\right)(1-\varepsilon) \\
&\le \sum_{j\in I}\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} - \min\{1,z/2\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}+\kappa,
\end{align*}
which implies
\begin{align}
&\min\{1,z/2\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}\nonumber\\
&\le \sum_{j\in I}\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} - 2^{z/2}\left(\frac{dk}{4}-\max\{1, z/2\}|I|\sqrt{d} - \frac{\varepsilon dk}{4}\right) + \kappa
\nonumber\\
&\le \frac{kd}{4}\cdot 2^{z/2} + \frac{\varepsilon kd}{4}2^{z/2} - 2^{z/2}\left(\frac{dk}{4}-\max\{1, z/2\}|I|\sqrt{d} - \frac{\varepsilon dk}{4}\right) \quad\textnormal{by \eqref{eqn:weight-constraints-z}} \nonumber\\
&= \max\{1, z/2\} |I|\sqrt{d}2^{z/2} +\frac{\varepsilon kd}{2}2^{z/2}. \nonumber
\end{align}
By the definition of $I$, we have $|S_j| \le d/t^2$ for each $j\in I$, so
\begin{align*}
& \min\{1,\frac{z}{2}\} \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{d/t^2}}\\
\le &\min\{1,\frac{z}{2}\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}},
\end{align*}
and it follows that
\begin{align}\label{eqn:size-constraint-z}
\min\{1,\frac{z}{2}\} \sum_{j\in I} \frac{\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{d}}
\le \frac{\max\{1, z/2\} |I|\sqrt{d} 2^{z/2} +\frac{\varepsilon kd}{2}2^{z/2}} {2t}.
\end{align}
Finally we consider a third set of $k$ centers $C_3$. Similarly, there are two centers per group. We set $m$ to be a power of $2$ in $[d/2,d]$. Let $h_1,\cdots,h_m$ be the $m$-dimensional Hadamard basis vectors, so all $h_{\ell}$'s are vectors with entries in $\{-\frac{1}{\sqrt{m}},\frac{1}{\sqrt{m}}\}$ and $h_1=(\frac{1}{\sqrt{m}},\cdots,\frac{1}{\sqrt{m}})$. We slightly abuse notation and treat each $h_{\ell}$ as a $d$-dimensional vector by appending zeros at the end. For each $h_{\ell}$, we construct a set of $k$ centers as follows. For each $j\in \Bar{I}$, we still use two copies of $c_j$. For $j\in I$, the $0$th coordinate of the two centers is $jL$; we then append $h_{\ell}$ and $-h_{\ell}$ to the first and the second center, respectively.
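As an aside, the Hadamard basis vectors used here can be generated by the standard Sylvester construction; a minimal Python sketch (assuming NumPy):
\begin{verbatim}
import numpy as np

def hadamard_basis(m):
    """Rows are the m Hadamard basis vectors h_1, ..., h_m: orthonormal,
    entries +-1/sqrt(m), and h_1 = (1/sqrt(m), ..., 1/sqrt(m))."""
    assert m > 0 and m & (m - 1) == 0, "m must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < m:
        H = np.block([[H, H], [H, -H]])   # Sylvester doubling step
    return H / np.sqrt(m)
\end{verbatim}
The rows of the returned matrix are orthonormal, so, e.g., \texttt{hadamard\_basis(4) @ hadamard\_basis(4).T} is the $4\times 4$ identity matrix.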
\begin{lemma}\label{lem:hadamard-cost-z}
Suppose $C_3$ is constructed based on $h_{\ell}$. Then for all $\ell\in [m]$, we have
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_3) \le 2^{z/2}\left( \frac{kd}{4} - \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} \right), \text{ and }
\end{align*}
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_3) &\ge \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}} \\
-& 2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1}+\kappa.
\end{align*}
\end{lemma}
\begin{proof}
For $j\in I$, the cost of the two centers w.r.t.\ $P_j$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(P_j,C_3) &= \sum_{i=1}^{d/2} \min_{s=-1,+1}\|e_i - s\cdot h_{\ell}\|^z = \sum_{i=1}^{d/2} (2-2\max_{s=-1,+1}\langle h_{\ell},e_i\rangle)^{z/2}= \frac{d}{2}(2-\frac{2}{\sqrt{m}})^{z/2} \\
&\le \frac{d}{2}\cdot 2^{z/2} \left(1-\frac{\min\{1,z/2\}}{\sqrt{m}}\right).
\end{align*}
For $j\in \Bar{I}$, the cost w.r.t.\ $P_j$ is $\frac{d}{2}\cdot 2^{z/2}$ by \eqref{eqn:cost-of-c-z}.
Thus, the total cost over all subspaces is
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_3) &\le \frac{d}{2}\cdot2^{z/2} \left(1-\frac{\min\{1,z/2\}}{\sqrt{m}}\right)|I| + \left(\frac{k}{2} -|I| \right)\frac{d}{2}\cdot 2^{z/2}\\
&= 2^{z/2}\left( \frac{kd}{4} - \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} \right).
\end{align*}
On the other hand, for $j\in I$, the cost w.r.t.\ $S_j$ is
\begin{align*}
&\sum_{p\in S_j} w(p)(\Delta_p^2+ \min_{s=\{-1,+1\}} \|\Tilde{p}-s\cdot h_{\ell}\|^2)^{z/2}\\
=&\sum_{p\in S_j} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2 - 2\max_{s=\{-1,+1\}}\langle \Tilde{p}, s\cdot h_{\ell} \rangle)^{z/2}\\
=& \sum_{p\in S_j} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2 - 2\langle \Tilde{p}, h^p_{\ell} \rangle)^{z/2}\\
\ge& \sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}} - 2\max\{1,\frac{z}{2}\} \sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1}.
\end{align*}
Here $h^p_{\ell} = s^p\cdot h_{\ell}$, where $s^p=\arg\max_{s=\{-1,+1\}}\langle \Tilde{p}, s\cdot h_{\ell} \rangle$.
For $j\in \Bar{I}$, the total cost w.r.t.\ $S_j$ is $\kappa$. Thus, the total cost w.r.t.\ $S$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_3) &\ge \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}}\\
-& 2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1}+\kappa .
\end{align*}
This finishes the proof.
\end{proof}
\begin{corollary}
Let $S$ be an $\varepsilon$-coreset of $P$, and $I =\{j: |S_j|\le d/t^2\}$ as above. Then
\begin{align*}
2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \|\Tilde{p}\| \ge 2^{z/2}\cdot \left( \frac{d|I|}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd \sqrt{d}}{2}\right).
\end{align*}
\end{corollary}
\begin{proof}
Since $S$ is an $\varepsilon$-coreset, we have by \Cref{lem:hadamard-cost-z}
\begin{align*}
&2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1}\\
& \ge \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}} + \kappa
-2^{z/2}\left( \frac{kd}{4} - \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} \right)(1+\varepsilon)\\
&\ge \frac{kd}{4}\cdot 2^{z/2} - \frac{\varepsilon kd}{4}\cdot 2^{z/2} - 2^{z/2}\left( \frac{kd}{4} - \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} +\frac{\varepsilon kd}{4} \right) \quad\textnormal{by \eqref{eqn:weight-constraints-z}}\\
&= 2^{z/2}\cdot \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} - \frac{\varepsilon kd}{2}\cdot 2^{z/2}.
\end{align*}
Note that the above inequality holds for all $\ell\in[m]$; summing over $\ell$ gives
$$ 2\max\{1,\frac{z}{2}\} \sum_{\ell=1}^m\sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \ge 2^{z/2}\cdot \left( \frac{d|I|\sqrt{m}}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd m}{2}\right).$$
By the Cauchy--Schwarz inequality and the fact that $\|\sum_{\ell=1}^m h^p_{\ell}\|=\sqrt{m}$ (the vectors $h^p_{\ell}$ are pairwise orthogonal unit vectors),
\begin{align*}
\sum_{\ell=1}^m\sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} &= \sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, \sum_{\ell=1}^m h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \\
&\le \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \|\Tilde{p}\|\cdot \|\sum_{\ell=1}^m h^p_{\ell}\| \\
&= \sqrt{m}\sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \|\Tilde{p}\|.
\end{align*}
Therefore, we have
\begin{align*}
2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \|\Tilde{p}\| &\ge 2^{z/2}\cdot \left( \frac{d|I|}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd \sqrt{m}}{2}\right) \\
&\ge 2^{z/2}\cdot \left( \frac{d|I|}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd \sqrt{d}}{2}\right).
\end{align*}
\end{proof}
Combining the above corollary with \eqref{eqn:size-constraint-z}, we have
\begin{align*}
\frac{\min\{1,z/2\}}{2\max\{1,z/2\}}2^{z/2}\cdot \left( \frac{\sqrt{d}|I|}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd }{2}\right) \le \frac{\left( \max\{1, z/2\} |I|\sqrt{d} +\frac{\varepsilon kd}{2}\right)2^{z/2}} {2t},
\end{align*}
which implies that
\begin{align*}
\left(\frac{\min\{1,(z/2)^2\}}{4\max\{1,(z/2)\}}-\frac{\max\{1,z/2\}}{2t}\right)|I| \le \frac{\min\{1,(z/2)\}\varepsilon kd}{4\max\{1,(z/2)\}}+ \frac{\varepsilon k\sqrt{d}}{4t}.
\end{align*}
So if we set $t = \frac{4\max\{1,(z/2)^2\}}{\min\{1,(z/2)^2\}}$, then
\begin{align*}
\frac{\min\{1,(z/2)^2\}}{8\max\{1,(z/2)\}}|I| \le \frac{\min\{1,(z/2)\}\varepsilon k\sqrt{d}}{2\max\{1,(z/2)\}} \implies |I| \le \frac{4\varepsilon k\sqrt{d}}{\min\{1,z/2\}}.
\end{align*}
By the assumption $d\le \frac{\min\{1,(z/2)^2\}}{100\varepsilon^2}$, it holds that $|I| \le \frac{2k}{5}$, i.e., $|\Bar{I}|\ge \frac{k}{2} -\frac{2k}{5}=\frac{k}{10}$. Moreover, since $|S_j|>\frac{d}{t^2}$ for each $j\in \Bar{I}$, we have $|S|>\frac{d}{t^2}\cdot \frac{k}{10} = \frac{kd \min\{1,(z/2)^4\}}{160\max\{1,(z/2)^4\}} = \Omega(kd)$ for constant $z$.
\section{Conclusion}
This work studies coresets for the \ProblemName{$k$-Median}\xspace problem in low-dimensional Euclidean spaces. We give tight size bounds for \ProblemName{$k$-Median}\xspace in $\mathbb{R}$ and show that the framework of \citep{HarPeled2005SmallerCF}, with a significant improvement, is optimal. For $d \ge 2$, we improve existing coreset upper bounds for \ProblemName{$1$-Median}\xspace and prove new lower bounds.
Our work leaves several interesting problems for future research. One is to close the gap between the upper and lower bounds for $d \ge 2$. Another is to generalize our results to \ProblemName{$(k, z)$-Clustering}\xspace for general $z$. Note that the generalization is non-trivial even for $d=1$, since the cost function is piecewise linear for \ProblemName{$k$-Median}\xspace but piecewise polynomial of order $z$ for general \ProblemName{$(k, z)$-Clustering}\xspace.
\newpage
\section{Tight Coreset Sizes for $1$-d \ProblemName{$k$-Median}\xspace}
\label{sec:tight}
\subsection{Near Optimal Coreset for $1$-d \ProblemName{$1$-Median}\xspace}
\label{sec:d1k1}
We have the following theorem.
\begin{theorem}[\bf Improved Coreset for one-dimensional \ProblemName{$1$-Median}\xspace]
\label{thm:1d1k}
There is a polynomial time algorithm, such that given an input data set $P\subset \mathbb{R}$, it outputs an $\varepsilon$-coreset of $P$ for $\ProblemName{$1$-Median}\xspace$ with size $\tilde{O}(\varepsilon^{-\frac{1}{2}})$.
\end{theorem}
\paragraph{Useful Notations and Facts.}
Throughout this section, we use $P=\{p_1,\cdots,p_n\}\subset \mathbb{R}$ with $p_1<p_2<\cdots<p_n$. Let $c^\star = p_{\lfloor \frac{n}{2}\rfloor}$; we have the following simple observations about $\ensuremath{\mathrm{cost}}(P,c)$.
\begin{observation}\label{obs:convexity}
$\ensuremath{\mathrm{cost}}(P,c)$ is a convex piecewise affine linear function of $c$ and $\ensuremath{\mathsf{OPT}}=\ensuremath{\mathrm{cost}}(P,c^\star)$ is the optimal \ProblemName{$1$-Median}\xspace cost on $P$.
\end{observation}
\noindent
The following notions, proposed by~\citep{harpeled2004on}, are useful for our coreset construction.
\begin{definition}[\bf Bucket]
A bucket $B$ is a contiguous subset $\{p_l,p_{l+1},\dots,p_r \}$ of $P$ for some $1\leq l\leq r\leq n$.
\end{definition}
\begin{definition}[\bf Mean and cumulative error
\citep{HarPeled2005SmallerCF}]\label{Def:MeanAndCumulativeError}
Given a bucket $B = \{p_l,\dots,p_r \}$ for some $1\leq l\leq r\leq n$, denote $N(B) := r-l+1$ to be the number of points within $B$ and $L(B) := p_r - p_l$ to be the length of $B$.
We define the \emph{mean} of $B$ to be
$
\mu(B) :=\frac{1}{N(B)} \sum_{p\in B} p,
$
and define the \emph{cumulative error} of $B$ to be
$
\delta(B) :=\sum_{p\in B} |p-\mu(B)|.
$
\end{definition}
\noindent
Note that $\mu(B)\in [p_l,p_r]$ always holds, which implies the following fact.
\begin{fact}\label{fac:del<=LN}
$\delta(B)\leq N(B) \cdot L(B)$.
\end{fact}
\noindent
The following lemma shows that for each bucket $B$, the coreset error on $B$ is no more than $\delta(B)$.
\begin{lemma}[\bf Cumulative error controls coreset error~\citep{HarPeled2005SmallerCF}] \label{lem:CumulativeErrorControlsCoresetError}
Let $B = \left\{p_l,\ldots, p_r\right\}\subseteq P$ for $1\leq l\leq r\leq n$ be a bucket and $c\in \mathbb{R}$ be a center.
We have
\begin{enumerate}
\item if $c \in (p_l, p_r)$, $|\ensuremath{\mathrm{cost}}(B,c)-N(B) d(\mu(B),c) |\le \delta(B)$;
\item if $c\notin (p_l, p_r)$, $|\ensuremath{\mathrm{cost}}(B,c)-N(B) d(\mu(B),c) | = 0$.
\end{enumerate}
\end{lemma}
\paragraph{Algorithm for \Cref{thm:1d1k}.}
Our algorithm is summarized in \Cref{alg:k1d1}.
We improve the framework in \citep{HarPeled2005SmallerCF}, which partitions $P$ into multiple buckets so that the cumulative errors in different buckets are the same and collects their means as a coreset.
Our main idea is to carefully select an adaptive cumulative error for different buckets.
In Lines 2-3, we take the leftmost $\varepsilon n$ points and the rightmost $\varepsilon n$ points, and add their weighted means to our coreset $S$.
In Line 4 (and Line 7), we divide the remaining points into disjoint blocks $B_i$ ($B'_i$) such that for every $p\in B_i$, $\ensuremath{\mathrm{cost}}(P,p)\approx 2^i \cdot \ensuremath{\mathsf{OPT}}$, and then greedily divide each $B_i$ into disjoint buckets $B_{i,j}$ with a cumulative error of roughly $\varepsilon \cdot 2^i \cdot \ensuremath{\mathsf{OPT}}$ in Line 5.
We remind the readers that the cumulative error in \citep{HarPeled2005SmallerCF} is always $\varepsilon \cdot \ensuremath{\mathsf{OPT}}$.
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithm}
\caption{$\mathrm{Coreset1d}(P,\varepsilon)$}
\label{alg:k1d1}
\begin{algorithmic}[1]
\REQUIRE Dataset $P=\{p_1,\cdots,p_n\}\subset \mathbb{R}$ with $p_1<\cdots<p_n$, and $\varepsilon\in (0,1)$.
\ENSURE An $\varepsilon$-coreset $S$ of $P$ for $1$-d \ProblemName{$1$-Median}\xspace
\STATE Set $S\leftarrow \emptyset$.
\STATE Set $L\leftarrow \lfloor\varepsilon n\rfloor$ and $R\leftarrow n-\lfloor\varepsilon n\rfloor$. Set $B_-\leftarrow \left\{p_1,\ldots, p_L\right\}$ and $B_+\leftarrow \left\{p_{R+1},\ldots, p_n\right\}$.
\STATE \ Add $\mu(B_-)$ with weight $N(B_-)$ and $\mu(B_+)$ with weight $N(B_+)$ into $S$.
\STATE Divide $\{p_{L+1}, \dots, p_{\lfloor \frac{n}{2} \rfloor} \}$ into disjoint blocks $\{ B_i \}_{i \ge 0}$ where $B_i := \big\{ p\in \{p_{L+1}, \dots p_{\lfloor \frac{n}{2} \rfloor} \} : 2^i\cdot \ensuremath{\mathsf{OPT}}\leq \ensuremath{\mathrm{cost}}(P,p) < 2^{i+1}\cdot \ensuremath{\mathsf{OPT}} \big\}$.
\STATE For each non-empty block $B_i$ ($i\geq 0$), consider the points within $B_i$ from left to right and group them into buckets $\{ B_{i,j}\}_{j\geq 0}$ in a greedy way: each bucket $B_{i,j}$ is a maximal set with $\delta(B_{i,j})\leq \varepsilon \cdot 2^i\cdot \ensuremath{\mathsf{OPT}}$.
\STATE For every bucket $B_{i,j}$, add $\mu(B_{i,j})$ with weight $N(B_{i,j})$ into $S$.
\STATE Symmetrically divide $\{ p_{\lfloor \frac{n}{2} \rfloor + 1},\dots, p_R\}$ into disjoint buckets $\{ B'_{i,j}\}_{i,j\geq 0}$ and add $\mu(B'_{i,j})$ with weight $N(B'_{i,j})$ into $S$ for every bucket $B'_{i,j}$.
\STATE Return $S$.
\end{algorithmic}
\end{algorithm}
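For concreteness, the following is a minimal Python sketch of \Cref{alg:k1d1} for an unweighted, sorted input (NumPy assumed). It favours readability over efficiency; in particular, $\ensuremath{\mathrm{cost}}(P,p)$ is recomputed from scratch for every point, whereas a real implementation would use prefix sums.
\begin{verbatim}
import numpy as np

def coreset_1d(P, eps):
    """Sketch of Coreset1d(P, eps) for a sorted 1-d array P of unit
    weights; returns a list of (point, weight) pairs."""
    n = len(P)
    cost = lambda c: float(np.sum(np.abs(P - c)))
    opt = cost(P[n // 2])                     # OPT = cost(P, c*)
    S = []

    L, R = int(eps * n), n - int(eps * n)
    for B in (P[:L], P[R:]):                  # outer buckets B_- and B_+
        if len(B) > 0:
            S.append((float(np.mean(B)), len(B)))

    def split_into_buckets(block, budget):
        # greedily grow maximal buckets with cumulative error <= budget
        start = 0
        for end in range(1, len(block) + 1):
            cand = block[start:end]
            if np.sum(np.abs(cand - np.mean(cand))) > budget:
                S.append((float(np.mean(block[start:end - 1])),
                          end - 1 - start))
                start = end - 1
        if start < len(block):
            S.append((float(np.mean(block[start:])), len(block) - start))

    for half in (P[L:n // 2], P[n // 2:R]):
        # block index i of p satisfies 2^i OPT <= cost(P, p) < 2^{i+1} OPT
        levels = np.floor(np.log2([max(cost(p) / opt, 1.0) for p in half]))
        for i in np.unique(levels):
            split_into_buckets(half[levels == i], eps * (2.0 ** i) * opt)
    return S
\end{verbatim}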
\noindent
We define the function $f_P: \mathbb{R}\rightarrow \mathbb{R}_{\geq 0}$ such that $f_P(c)=\ensuremath{\mathrm{cost}}(P,c)$ for every $c\in \mathbb{R}$ and the function $f_S: \mathbb{R}\rightarrow \mathbb{R}_{\geq 0}$ such that $f_S(c) = \ensuremath{\mathrm{cost}}(S,c)$ for every $c\in \mathbb{R}$. By Observation~\ref{obs:convexity}, $f_P(c)$ is decreasing on $(-\infty, c^\star]$ and increasing on $[c^\star,\infty)$. As a result, each $B_i$ ($B'_i$) consists of consecutive points in $P$.
The following lemma shows that the number of blocks $B_i$ ($B'_i$) is $O(\log \frac{1}{\varepsilon})$.
\begin{lemma}[\bf Number of blocks]
\label{lem:number_bucket}
There are at most $O(\log(\frac{1}{\varepsilon}))$ non-empty blocks $B_i$ or $B'_i$.
\end{lemma}
\begin{proof}
We prove that \Cref{alg:k1d1} divides $\{p_{L+1}, \dots, p_{\lfloor \frac{n}{2} \rfloor} \}$ into at most $O(\log(\frac{1}{\varepsilon}))$ non-empty blocks $B_i$. The argument for $\{ p_{\lfloor \frac{n}{2} \rfloor + 1},\dots, p_R\}$ is entirely symmetric.
If $B_i$ is non-empty for some $i \ge 0$, we must have $f_P(p) \ge 2^i \cdot \ensuremath{\mathsf{OPT}}$ for $p \in B_i$. We also have $p > p_L$ since $p \in B_i \subset \{p_{L+1}, \dots, p_{\lfloor \frac{n}{2} \rfloor} \}$. Since $f_P$ is convex, we have $ 2^i \cdot \ensuremath{\mathsf{OPT}} \le f_P(p) \le f_P(p_L)$. If we show that $f_P(p_L)\leq (1+\varepsilon^{-1})\cdot \ensuremath{\mathsf{OPT}} = (1+\varepsilon^{-1})\cdot f_P(c^\star)$, then we have $2^i \le 1+\varepsilon^{-1}$ and thus $i \le O(\log(\frac{1}{\varepsilon}))$.
To prove $f_P(p_L)\leq (1+\varepsilon^{-1})\cdot f_P(c^\star)$, we use triangle inequality to obtain that
\begin{eqnarray*}
f_P(p_L)&=&\sum_{i=1}^n |p_i-p_L|\\
&\leq& \sum_{i=1}^n (|p_i-c^\star|+|c^\star-p_L|)\\
&=&f_P(c^\star)+n\cdot |c^\star-p_L|.
\end{eqnarray*}
Moreover, we note that by the choice of $p_L$, $|c^\star-p_L|\leq \frac{1}{L}\cdot \sum_{i=1}^L |c^\star-p_i|\leq \frac{f_P(c^\star)}{\varepsilon n}$. Thus we have,
$$
f_P(p_L)\leq f_P(c^\star)+n\cdot \frac{f_P(c^\star)}{\varepsilon n}=(1+\varepsilon^{-1})\cdot f_P(c^\star).
$$
\end{proof}
\noindent
We next give a key lemma that we use to obtain an improved coreset size.
\begin{lemma}[\bf Number of buckets]
\label{lem:number_subbucket}
Each non-empty block $B_i$ or $B'_i$ is divided into $O(\varepsilon^{-1/2})$ buckets.
\end{lemma}
\begin{proof}
We prove that each block $B_i \subset \{p_{L+1}, \dots, p_{\lfloor \frac{n}{2} \rfloor} \}$ is divided into at most $O(\varepsilon^{-1/2})$ buckets $B_{i,j}$. The argument for $B'_i \subset \{ p_{\lfloor \frac{n}{2} \rfloor + 1},\dots, p_R\}$ is entirely symmetric.
Suppose $B_i = \{p_{l_i},\dots,p_{r_i}\}$ and that we divide $B_i$ into $t$ buckets $\{B_{i,j} \}_{j=0}^{t-1}$. Since each $B_{i,j}$ is a maximal bucket with $\delta(B_{i,j})\leq \varepsilon \cdot 2^i\cdot \ensuremath{\mathsf{OPT}}$, we have $\delta(B_{i,2j} \cup B_{i,2j+1}) > \varepsilon \cdot 2^i\cdot \ensuremath{\mathsf{OPT}}$ for $2j+1 < t$. Denoting $B_{i,2j} \cup B_{i,2j+1}$ by $C_j$ for $j \in \{1,\dots,\lfloor \frac{t-2}{2} \rfloor\}$, we have:
\begin{eqnarray}
4 \cdot 2^i \cdot \ensuremath{\mathsf{OPT}} &\geq& f_P(p_{l_i})+f_P(p_{r_i})\nonumber\\
&\geq&\sum_{p \in B_i} (|p-p_{l_i}|+|p-p_{r_i}|)\nonumber\\
&=&N(B_i)(p_{r_i}-p_{l_i})\nonumber\\
&\geq& (\sum_{j=1}^{\lfloor \frac{t-2}{2} \rfloor} N(C_j))\cdot (\sum_{j=1}^{\lfloor \frac{t-2}{2} \rfloor} L(C_j))\nonumber\\
&\geq& \big(\sum_{j=1}^{\lfloor \frac{t-2}{2} \rfloor} N(C_j)^{\frac{1}{2}} L(C_j)^{\frac{1}{2}}\big)^2 \label{eqn:cauchy}\\
&\geq & \big(\sum_{j=1}^{\lfloor \frac{t-2}{2} \rfloor} \delta(C_j)^{\frac{1}{2}}\big)^2 \quad \textnormal{by \Cref{fac:del<=LN}}\nonumber\\
&>& (\lfloor \frac{t-2}{2} \rfloor)^2 \cdot \varepsilon \cdot 2^i \cdot \ensuremath{\mathsf{OPT}}.\nonumber
\end{eqnarray}
Here \eqref{eqn:cauchy} is from Cauchy-Schwarz inequality.
So we have $(\lfloor \frac{t-2}{2} \rfloor)^2 \cdot \varepsilon \cdot 2^i \cdot \ensuremath{\mathsf{OPT}} < 4 \cdot 2^i \cdot \ensuremath{\mathsf{OPT}}$, which implies $t\leq O(\varepsilon^{-\frac{1}{2}})$.
\end{proof}
\noindent
Now we are ready to prove \Cref{thm:1d1k}.
\begin{proof}[Proof of \Cref{thm:1d1k}]
We first verify that the set $S$ is an $O(\varepsilon)$-coreset.
Our goal is to prove that for every $c\in \mathbb{R}$, $f_S(c)\in (1\pm O(\varepsilon))\cdot f_P(c)$.
We prove this for any $c\in (-\infty,c^\star]$. The argument for $c \in (c^\star,+\infty)$ is entirely symmetric.
For any $c \in (-\infty,c^\star] $, we have
$$f_P(c) - f_S(c) = \sum_{B} \ensuremath{\mathrm{cost}}(B,c) - N(B) \cdot d(\mu(B),c)$$ where $B$ takes over all buckets. We then separately analyze the $c \in (-\infty,p_L]$ case and the $c \in (p_L,c^*]$ case.
When $c \in (-\infty,p_L]$, we note that $f_P(p_L)=f_S(p_L)$ (Lemma~\ref{lem:CumulativeErrorControlsCoresetError}). By elementary calculus, both $\frac{df_P(c)}{dc}$ and $\frac{df_S(c)}{dc}$ are within $[-n,-(1-2\varepsilon) n]$; hence they differ by at most a multiplicative factor of $1+O(\varepsilon)$. Thus, $|f_P(c)-f_S(c)|\leq O(\varepsilon)\cdot f_P(c)$.
When $c \in (p_L,c^\star]$, there is at most one bucket $B = \{ p_{l},\dots,p_r\}$ such that $c \in (p_l, p_r)$, since these buckets are disjoint. If such a bucket $B$ does not exist, we have $f_P(c) = f_S(c)$. Now suppose such a bucket $B$ exists. Since $c > p_L$, we have $B \subset B_i$ for some block $B_i$. Thus, by Lemma~\ref{lem:CumulativeErrorControlsCoresetError} and the construction of the buckets:
\begin{eqnarray*}
|f_P(c) - f_S(c)|\leq \delta(B)
\leq \varepsilon \cdot 2^i \cdot \ensuremath{\mathsf{OPT}}.
\end{eqnarray*}
We have $f_P(p_l) \ge 2^i \cdot \ensuremath{\mathsf{OPT}} $ and $f_P(p_r) \ge 2^i \cdot \ensuremath{\mathsf{OPT}}$. Since $f_P$ is convex (thus decreasing on $(-\infty, c^*]$) and $c \in (p_l, p_r)$, we also have $f_P(c) \ge 2^i \cdot \ensuremath{\mathsf{OPT}}$. This implies $|f_P(c) - f_S(c)| \leq \varepsilon \cdot f_P(c)$.
It remains to show that the size of $S$, which is the total number of buckets, is $\Tilde{O}(\varepsilon^{-1/2})$. Indeed, by \Cref{lem:number_bucket}, there are $O(\log(1/\varepsilon))$ blocks, and by \Cref{lem:number_subbucket}, each block contains $O(\varepsilon^{-1/2})$ buckets. Thus, there are at most $\Tilde{O}(\varepsilon^{-1/2})$ buckets.
\end{proof}
\subsection{Tight Lower Bound on Coreset Size for $1$-d \ProblemName{$k$-Median}\xspace when $k\geq 2$}
\label{sec:d1k2}
In this subsection, we prove that the size lower bound for an $\epsilon$-coreset for the \ProblemName{$k$-Median}\xspace problem in $\mathbb{R}$ is $\Omega(\frac{k}{\epsilon})$. This lower bound matches the upper bound in~\citep{HarPeled2005SmallerCF}.
\begin{theorem}[\bf{Coreset lower bound for $1$-d \ProblemName{$k$-Median}\xspace when $k\geq 2$}]\label{thm:Lowerbound1dkmedian}
For a given integer $k\geq 2$ and $\epsilon\in (0,1)$, there exists a dataset $P\subset \mathbb{R}$ such that any $\epsilon$-coreset $S$ of $P$ must have size $|S|\geq \Omega(k \epsilon^{-1})$.
\end{theorem}
\noindent
For ease of exposition, we only prove the lower bound for \ProblemName{$2$-Median}\xspace here.
The generalization to \ProblemName{$k$-Median}\xspace is straightforward and can be found in \cref{sec:general1dkmedianlower}.
We first prove a technical lemma, which shows that a quadratic function cannot be approximated well by an affine linear function on a long enough interval. We note that similar technical lemmas appear in coreset lower bounds for other related clustering problems~\citep{Braverman2019CoresetsFO,baker2020coresets}. The lemma in~\citep{Braverman2019CoresetsFO} shows that the function $\sqrt{x}$ cannot be approximated well by an affine linear function, while our lemma is about approximating a quadratic function. The lemma in~\citep{baker2020coresets} shows that a quadratic function cannot be approximated well by an affine linear function on a bounded interval, a situation slightly different from ours.
\begin{lemma}[\bf{Quadratic function cannot be approximated well by affine linear functions}]\label{lem:quadratic_approximation}
Let $[a,b]$ be an interval, $f(c)$ be a quadratic function on $[a,b]$, $\alpha >0 $ and $ \beta > 0$ be two constants, and $\epsilon$ be a real number with $0 \le \epsilon < \frac{1}{32} \frac{\beta}{\alpha}$.
If $|f(c)| \le \alpha$ and $(b-a)^2 f''(c)\ge \beta$ for all $c \in [a,b]$, then there is no affine linear function $g$ such that $|g(c) - f(c)| \le \epsilon f(c)$ for all $c \in [a,b]$.
\end{lemma}
\begin{proof}
Assume there is an affine linear function $g(c)$ that satisfies $|g(c) - f(c)| \le \epsilon f(c)$ for all $c\in[a,b]$. We denote the error function by $r(c) = f(c) - g(c)$, which has two properties. First, its $l_{\infty}$ norm $\| r \|_{\infty} = \sup_{c \in [a,b]} |r(c)| \le \epsilon \alpha$. Second, it is quadratic and satisfies $r''(c) = f''(c)$, thus $(b-a)^2 r''(c) \ge \beta$ for all $c \in [a,b]$.
Define $L = b-a$. By the mean value theorem, there is a point $c_{1/4} \in [a,\frac{a+b}{2}]$ such that $|r'(c_{1/4})| = |\frac{1}{L/2} [r(\frac{a+b}{2}) - r(a)]| \le \frac{4}{L} \| r \|_{\infty}$. Similarly there is a point $c_{3/4} \in [\frac{a+b}{2},b]$ such that $|r'(c_{3/4})| \le \frac{4}{L} \| r \|_{\infty}$. Since $r$ is a quadratic function, its derivative is monotonic and $|r'(\frac{a+b}{2})| \le \max(|r'(c_{1/4})| ,|r'(c_{3/4})|) \le \frac{4}{L} \| r \|_{\infty}$. Thus we have
\begin{align*}
r(b) - r(\frac{a+b}{2}) &= \int_{\frac{a+b}{2}}^{b} r'(c) \mathrm{dc} \\
&=\int_{\frac{a+b}{2}}^b \Bigl( r'(\tfrac{a+b}{2}) + \int_{\frac{a+b}{2}}^c r''(t) \mathrm{dt} \Bigr) \mathrm{dc}\\
&= \frac{L}{2}\, r'(\tfrac{a+b}{2}) + \int_{\frac{a+b}{2}}^b \int_{\frac{a+b}{2}}^c r''(t) \mathrm{dt} \,\mathrm{dc} \\
&\ge - \frac{L}{2} \cdot \frac{4}{L} \| r \|_{\infty} + \frac{1}{8} (b-a)^2 r''(c) \\
&\ge -2 \epsilon \alpha + \frac{1}{8} \beta.
\end{align*}
On the other hand, $r(b) - r(\frac{a+b}{2}) \le 2 \| r \|_{\infty} \le 2 \epsilon \alpha$. Combining the two bounds, we get $2\epsilon \alpha \ge -2 \epsilon \alpha + \frac{1}{8} \beta$, and thus $\epsilon \ge \frac{1}{32} \frac{\beta}{\alpha}$, contradicting the assumption $\epsilon < \frac{1}{32}\frac{\beta}{\alpha}$.
\end{proof}
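As a quick numerical sanity check of the mechanism behind this proof (illustrative only, not part of the argument): second differences annihilate affine functions, so for any affine $g$ the second difference of $r = f - g$ over the points $a$, $\frac{a+b}{2}$, $b$ equals $f(a)-2f(\frac{a+b}{2})+f(b) = \frac{1}{4}(b-a)^2 f''$, which already forces $\|r\|_{\infty} \ge \frac{\beta}{16}$. A minimal Python check:
\begin{verbatim}
import random

# f(c) = c^2 + 1 on [0,1]: alpha = max|f| = 2, beta = (b-a)^2 f'' = 2
a, b = 0.0, 1.0
m = (a + b) / 2
f = lambda c: c ** 2 + 1.0
beta = 2.0

for _ in range(1000):
    s, t = random.uniform(-5, 5), random.uniform(-5, 5)  # random affine g
    r = lambda c: f(c) - (s * c + t)
    # second differences kill the affine part, leaving (b-a)^2 f'' / 4
    assert abs((r(a) - 2 * r(m) + r(b)) - beta / 4) < 1e-9
\end{verbatim}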
\noindent
For any dataset $P$, with a slight abuse of notation, we denote the cost function for \ProblemName{$2$-Median}\xspace with one center fixed at $0$ by $f_P(c) = \ensuremath{\mathrm{cost}}(P,\{ 0, c\})$. The following lemma shows that $f_P(c)$ is a piecewise affine linear function whose transition points all lie in $P\cup \{2p\mid p\in P\}$.
\begin{lemma}[\bf{The function $f_P(c)$ is piecewise affine linear}]\label{lem:1d2m-cost-is-affine}
Let $P \subset \mathbb{R}$ be a weighted dataset. The function $f_P(c)$ is a piecewise affine linear function, and all the transition points between two affine pieces lie in $P\cup \{2p\mid p\in P\}$.
\end{lemma}
\begin{proof}
We denote the weight of point $p$ by $w(p)$ and denote the midpoint between any point $c$ and $0$ by $\text{mid} = \frac{c}{2}$. Now assume $c \ge 0$ and that neither $c$ nor $\frac{c}{2}$ is in the dataset $P$. The clustering cost of a single point $p$ is
\[
\ensuremath{\mathrm{cost}}(p,\{0,c\}) = \begin{cases}
w(p)p \quad &\text{for } p \in [0,\text{mid}], \\
w(p)(c-p) \quad &\text{for } p \in [\text{mid},c],\\
w(p)(p-c) \quad &\text{for } p \in [c,+\infty).
\end{cases}
\]
If $c$ changes to $c + \mathrm{dc}$ we have
\begin{align*}
&\ensuremath{\mathrm{cost}}(p,\{0,c+\mathrm{dc}\}) - \ensuremath{\mathrm{cost}} (p,\{0,c\}) \\
= &\begin{cases}
0 \quad &\text{for } p \in [0,\text{mid}], \\
w(p)\mathrm{dc} \quad &\text{for } p \in [\text{mid}+\frac{1}{2} \mathrm{dc},c], \\
-w(p)\mathrm{dc} \quad &\text{for } p \in [c+ \mathrm{dc},+\infty).
\end{cases}
\end{align*}
Assume $|\mathrm{dc}|$ is small enough that there are no data points in $[\text{mid},\text{mid}+\frac{1}{2} \mathrm{dc}]$ or $[c,c+\mathrm{dc}]$. We have
\begin{align*}
& \quad f_P(c+\mathrm{dc}) - f_P(c) \\
= & \quad \sum_{p \in P \cap [\text{mid},c]} w(p) \mathrm{dc} - \sum_{p \in P \cap [c,+\infty)} w(p) \mathrm{dc},
\end{align*}
thus
\[f_P'(c) = \sum_{p \in P \cap [\text{mid},c]} w(p) - \sum_{p \in P \cap [c,+\infty)} w(p) . \]
As $c$ moves in $\mathbb{R}$ from left to right, the derivative $f_P'(c)$ changes only when $c$ or $\text{mid} = \frac{c}{2}$ passes a data point in $P$. The same conclusion also holds for $c<0$ by a symmetric argument. This is exactly what we want.
\end{proof}
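A quick numerical illustration of the lemma (illustrative only): sampling $f_P$ on a fine grid for a toy unit-weight dataset and inspecting discrete second differences reveals kinks exactly at $P\cup \{2p\mid p\in P\}$. A minimal sketch, assuming NumPy:
\begin{verbatim}
import numpy as np

P = np.array([1.0, 3.0, 7.0])     # toy dataset, unit weights
f = lambda c: np.sum(np.minimum(np.abs(P), np.abs(P - c)))  # cost(P,{0,c})

cs = np.arange(0.2, 15.0, 0.001)
vals = np.array([f(c) for c in cs])
second = np.abs(vals[:-2] - 2 * vals[1:-1] + vals[2:])
kinks = cs[1:-1][second > 1e-9]   # grid points where f_P is not locally affine
print(sorted(set(np.round(kinks, 2))))   # 1, 2, 3, 6, 7, 14 = P and 2P
\end{verbatim}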
\begin{proof}[Proof of the \ProblemName{$2$-Median}\xspace case of Theorem~\ref{thm:Lowerbound1dkmedian}]
We first construct the dataset $P$. The dataset $P$ is a union of $\frac{1}{\epsilon}$ disjoint intervals $\{ I_i\}_{i=1}^{\frac{1}{\epsilon}}$. Denote the left and right endpoints of $I_i$ by $l_i$ and $r_i$, respectively. We recursively define $l_i = r_{i-1}$ for $i\geq 2$, $r_i = l_i+4^{i-1}$ for $i\geq 1$, and $l_1 = 0$. Thus $r_i = l_{i+1} = \frac{1}{3} (4^i - 1)$. The weights of the points are specified by a measure $\lambda$ on $P$, which is absolutely continuous with respect to the Lebesgue measure $m$, with density $\frac{\mathrm{d\lambda}}{\mathrm{dm}} = (\frac{1}{16})^{i-1}$ on the $i$th interval.
We denote the density on the $i$th interval by $\mu_i$ and the density at point $p$ by $\mu(p)$.
Note that $P$ can be discretized in the following way.
We only need to take a large enough integer $n$, create a bucket $B_i$ of $(\frac{1}{4})^{i-1} n $ equally spaced points in each interval $I_i$, and assign weight $\frac{1}{n}$ to every point.
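For intuition, this discretization can be sketched as follows (a minimal Python sketch; $n$ is assumed to be large enough that every bucket holds at least two points):
\begin{verbatim}
def hard_instance(eps, n):
    """Discretized 1-d hard instance: 1/eps consecutive intervals,
    |I_i| = 4^(i-1), holding (1/4)^(i-1) * n points of weight 1/n."""
    points, weights, left = [], [], 0.0
    for i in range(1, int(1 / eps) + 1):
        length = 4.0 ** (i - 1)
        m = max(2, n // 4 ** (i - 1))     # points placed in I_i
        points += [left + length * t / (m - 1) for t in range(m)]
        weights += [1.0 / n] * m
        left += length                    # l_{i+1} = r_i
    return points, weights
\end{verbatim}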
The cost function $f_P(c)$ has the following two features:
\begin{enumerate}
\item \label{fea:bound} the function value $f_P(c) \in [0,\frac{2}{\epsilon}]$ for any $c \in \mathbb{R}$,
\sloppy
\item \label{fea:quadratic} the function is quadratic on the interval $[l_i+\frac{1}{3}(r_i - l_i), r_i]$ and satisfies $[\frac{2}{3}(r_i-l_i)]^2 f''_P(c) = \frac{2}{3}$ for each $i$.
\end{enumerate}
We show how to prove Theorem~\ref{thm:Lowerbound1dkmedian} from these features and defer their verification to the end of the proof. Note that Feature~\ref{fea:quadratic} does not contradict Lemma~\ref{lem:1d2m-cost-is-affine}, since the dataset contains infinitely many points.
Assume that $S$ is an $\frac{\epsilon}{300}$-coreset of $P$. We prove $|S| \ge \frac{1}{2\epsilon}$ by contradiction. If $|S| < \frac{1}{2\epsilon}$, then there is an interval $I_i = [l_i,r_i]$ such that $ (l_i,r_i) \cap S = \varnothing$ by the pigeonhole principle. Consider the function $f_S(c)$ on the interval $[l_i+\frac{1}{3}(r_i - l_i), r_i]$. When $c \in [l_i+\frac{1}{3}(r_i - l_i), r_i]$, we have $\frac{c}{2} \in [l_i,r_i]$. Thus neither $c$ nor $\frac{c}{2}$ passes a point in $S$ when $c$ moves from $l_i+\frac{1}{3}(r_i - l_i)$ to $r_i$. By Lemma~\ref{lem:1d2m-cost-is-affine}, the function $f_S(c)$ is affine linear on the interval $[l_i+\frac{1}{3}(r_i - l_i), r_i]$. Since $S$ is an $\frac{\epsilon}{300}$-coreset of $P$, we have $|f_S(c) - f_P(c) | \le \frac{\epsilon}{300} f_P(c)$ on this interval. However, by applying Lemma~\ref{lem:quadratic_approximation} to $f_P(c)$ and $f_S(c)$ on the interval $[l_i+\frac{1}{3}(r_i - l_i),r_i]$ with $\alpha = \frac{2}{\epsilon}$ and $\beta = \frac{2}{3}$, we obtain that $\frac{\epsilon}{300} \ge \frac{1}{32} \times \frac{2}{3} \times \frac{\epsilon}{2} = \frac{\epsilon}{96} > \frac{\epsilon}{300}$. This is a contradiction.
It remains to verify the two features of $f_P(c)$. We verify Feature~\ref{fea:bound} by direct computation. For any point $c$, the function satisfies
\begin{align*}
0 \le f_P(c) &\le \ensuremath{\mathrm{cost}}(P,\{0,0\}) = \int_P p \mu(p) \mathrm{dp} \\
&\le \sum_{i=1}^{\frac{1}{\epsilon}} \lambda(I_i) r_i \le \sum_{i=1}^{\frac{1}{\epsilon}} (\frac{1}{4})^{i-1} \times 2 \times 4^{i-1}\\
&=\frac{2}{\epsilon}.
\end{align*}
To verify feature~\ref{fea:quadratic}, we compute the first order derivative by computing the change of the function value $f_P(c+\mathrm{dc}) - f_P(c)$ up to the first order term when $c$ increases by an infinitesimal amount $\mathrm{dc}$. The unweighted clustering cost of a single point $p$ is
\[
\ensuremath{\mathrm{cost}}(p,\{0,c\}) = \begin{cases}
p \quad &\text{for } p \in [0,\text{mid}], \\
c-p \quad &\text{for } p \in [\text{mid},c],\\
p-c \quad &\text{for } p \in [c,+\infty).
\end{cases}
\]
As $c$ increases to $c + \mathrm{dc}$, the clustering cost of a single point changes by
\begin{align*}
& \quad \ensuremath{\mathrm{cost}}(p,\{0,c+\mathrm{dc}\}) - \ensuremath{\mathrm{cost}} (p,\{0,c\}) \\
= & \quad \begin{cases}
0 \quad &\text{for } p \in [0,\text{mid}], \\
O(\mathrm{dc}) \quad &\text{for } p \in [\text{mid},\text{mid}+\frac{1}{2} \mathrm{dc}], \\
\mathrm{dc} \quad &\text{for } p \in [\text{mid}+\frac{1}{2} \mathrm{dc},c], \\
O(\mathrm{dc}) \quad &\text{for } p \in [c,c + \mathrm{dc}], \\
-\mathrm{dc} \quad &\text{for } p \in [c+ \mathrm{dc},+\infty).
\end{cases}
\end{align*}
The cumulative clustering cost changes by
\begin{align*}
& f_P(c +\mathrm{dc}) - f_P(c) \\
= & \int_0^{+\infty} \ensuremath{\mathrm{cost}}(p,\{0,c+\mathrm{dc}\}) - \ensuremath{\mathrm{cost}} (p,\{0,c\}) \mathrm{d\lambda} \\
= & \int_0^{\text{mid}} 0\mathrm{d\lambda} + \int_{\text{mid}}^{\text{mid}+\frac{1}{2} \mathrm{dc}} O(\mathrm{dc}) \mathrm{d\lambda} + \int_{\text{mid}+\frac{1}{2} \mathrm{dc}}^c \mathrm{dc} \mathrm{d\lambda}\\
&+\int_c^{c+ \mathrm{dc}} O(\mathrm{dc}) \mathrm{d\lambda} + \int_{c+ \mathrm{dc}}^{+\infty} -\mathrm{dc} \mathrm{d\lambda}\\
= & \lambda([\text{mid},c])\mathrm{dc} - \lambda([c,+\infty))\mathrm{dc} + O(\mathrm{dc}^2).
\end{align*}
Thus the first order derivative $f_P'(c) = \lambda([\frac{c}{2},c]) - \lambda([c,+\infty))$ and the second order derivative
\begin{align*}
f_P''(c) &= \frac{\mathrm{d}}{\mathrm{dc}} \bigl( \lambda([\frac{c}{2},c]) - \lambda([c,+\infty)) \bigr),\\
&= 2\mu(c) - \frac{1}{2} \mu(\frac{c}{2}) .
\end{align*}
For $c \in [l_i+\frac{1}{3} (r_i - l_i), r_i]$, the two points $c$ and $\frac{c}{2}$ both lie in the
interval $[l_i, r_i]$. We have $\mu(c) = \mu(\frac{c}{2}) = \mu_i$ and $f_P''(c) = \frac{3}{2} \mu_i$. Thus the function $f_P(c)$ is quadratic on $[l_i+\frac{1}{3}(r_i - l_i), r_i]$ and $[\frac{2}{3}(r_i-l_i)]^2 f''_P(c) = \frac{2}{3}$.
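Plugging in $r_i - l_i = 4^{i-1}$ and $\mu_i = (\frac{1}{16})^{i-1}$ makes the last constant explicit:
\[
\Big[\frac{2}{3}(r_i-l_i)\Big]^2 f_P''(c) = \frac{4}{9}\cdot 16^{\,i-1}\cdot \frac{3}{2}\cdot \Big(\frac{1}{16}\Big)^{i-1} = \frac{2}{3}.
\]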
\end{proof}
\section{Improved Coreset Sizes when $2\leq d \leq \varepsilon^{-2}$}
\label{sec:improve}
In this section, we consider the case of constant $d$, $2\leq d\leq \varepsilon^{-2}$, and provide several improved coreset bounds for a generalization of Euclidean \ProblemName{$k$-Median}\xspace, called Euclidean \ProblemName{$(k, z)$-Clustering}\xspace.
The only difference from \ProblemName{$k$-Median}\xspace is that the goal is to find a $k$-center set $C \subset \mathbb{R}^d$ that minimizes the objective function
\begin{equation} \label{eq:DefCost_general}
\ensuremath{\mathrm{cost}}_z(P, C) := \sum_{p \in P}{d^z(p, C)} = \sum_{p\in P}{\min_{c\in C} d^z(p,c)},
\end{equation}
where $d^z$ represents the $z$-th power of the Euclidean distance.
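For concreteness, the objective can be evaluated as in the following sketch (our own NumPy illustration; the function name is ours and not from any library):
\begin{verbatim}
import numpy as np

def cost_z(P, C, z=1.0):
    # cost_z(P, C) = sum_p min_{c in C} ||p - c||^z,
    # for P of shape (n, d) and C of shape (k, d).
    D = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2)
    return np.sum(D.min(axis=1) ** z)
\end{verbatim}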
The coreset notion is as follows.
\begin{definition}[\bf $\epsilon$-Coreset for Euclidean \ProblemName{$(k, z)$-Clustering}\xspace~\citep{harpeled2004on}]
\label{def:coreset_general}
Given a dataset $P\subset \mathbb{R}^d$ of $n$ points, an integer $k\geq 1$, constant $z\geq 1$ and $\varepsilon\in (0,1)$, an $\varepsilon$-coreset for Euclidean \ProblemName{$(k, z)$-Clustering}\xspace is a subset $S \subseteq P$ with weight $w : S \to \mathbb{R}_{\geq 0}$, such that
\begin{equation*}
\forall C\in \mathcal{C},
\qquad
\sum_{p \in S}{w(p) \cdot d^z(p, C)}
\in (1 \pm \varepsilon) \cdot \ensuremath{\mathrm{cost}}_z(P, C).
\end{equation*}
\end{definition}
\noindent
We first study the case of $k=1$ and provide a coreset upper bound $\tilde{O}(\sqrt{d} \varepsilon^{-1})$ (\Cref{thm:upper}).
Then we study the general case $k\geq 1$ and provide a coreset lower bound $\Omega(kd)$ (\Cref{thm:lowerbound}).
\subsection{Improved Coreset Size in $\mathbb{R}^d$ when $k=1$}
\label{sec:d2k1}
We prove the following main theorem for $k=1$, where the center is a single point $c\in \mathbb{R}^d$.
\begin{theorem}[\bf{Coreset for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace}]
\label{thm:upper}
Let $d\geq 1$ be an integer, $z\geq 1$ a constant, and $\varepsilon\in (0,1)$.
%
There exists a randomized polynomial time algorithm that given a dataset $P\subset \mathbb{R}^d$, outputs an $\varepsilon$-coreset for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace of size at most $z^{O(z)} \sqrt{d}\varepsilon^{-1}\log \varepsilon^{-1}$.
\end{theorem}
\begin{proofsketch}
By~\citep{braverman2022power}, we first reduce the problem to constructing a mixed coreset $(S,w)$ for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace for a dataset $P\subset B(0,1)$ satisfying that $\forall c\in \mathbb{R}^d$,
\[
\sum_{p \in S}{w(p) \cdot d^z(p, c)} \in
\ensuremath{\mathrm{cost}}_z(P, c)\pm \varepsilon \max\left\{1, \|c\|_2\right\}^z \cdot |P|.
\]
%
The main idea to construct such $S$ is to prove that the class discrepancy of Euclidean \ProblemName{$(1, z)$-Clustering}\xspace for $P$ is at most $z^{O(z)}\max\left\{1,r\right\}^z \cdot \sqrt{d}/m$ for $c\in B(0,r)$ (\Cref{lm:discrepancy_median}), which implies the existence of a mixed coreset $S$ of size $z^{O(z)} \sqrt{d}\varepsilon^{-1}$ by Fact 6 of~\citep{karnin2019discrepancy}.
%
For the class discrepancy, we apply an analytic result of ~\citep{karnin2019discrepancy} (\Cref{thm:discrepancy_upper}).
%
The main difference is that \citep{karnin2019discrepancy} only considers an additive error that can handle $c\in B(0,1)$ instead of an arbitrary center $c\in \mathbb{R}^d$.
%
In our case, we allow a mixed error proportional to the scale of $\|c\|_2$ and extend the approach of \citep{karnin2019discrepancy} to handle arbitrary centers $c\in \mathbb{R}^d$ by increasing the discrepancy by a multiplicative factor $\|c\|_2^z$.
\end{proofsketch}
\noindent
The above theorem is powerful and leads to the following results for $z=O(1)$:
\begin{enumerate}
\item By dimension reduction as in~\citep{huang2020coresets,cohenaddad2021new,cohenaddad2022towards}, we can assume $d=O(\varepsilon^{-2}\log\varepsilon^{-1})$.
%
Consequently, our coreset size is upper bounded by $\tilde{O}(\varepsilon^{-2})$, which matches the nearly tight bound in~\citep{cohenaddad2022towards}.
\item For $d= O(1)$, our coreset size is $O(\varepsilon^{-1})$, which is the first known bound of this kind in low-dimensional spaces.
%
Specifically, the prior known coreset size in $\mathbb{R}^2$ is $\tilde{O}(\varepsilon^{-3/2})$~\citep{braverman2022power}, and our result improves it by a factor of $\varepsilon^{-1/2}$.
%
\end{enumerate}
\noindent
We conjecture that our coreset size is almost tight, i.e., there exists a coreset lower bound $\Omega(\sqrt{d} \varepsilon^{-1})$ for constant $2\leq d \leq \varepsilon^{-2}$, which we leave as an interesting open problem.
\subsubsection{Useful Notations and Facts}
\label{sec:notation_upper}
For preparation, we first propose a notion of mixed coreset (\Cref{def:mixed_coreset}), and then introduce some known discrepancy results.
\paragraph{Reduction to mixed coreset.}
Let $B(a,r)$ denote the $\ell_2$-ball in $\mathbb{R}^d$ that centers at $a\in \mathbb{R}^d$ with radius $r\geq 0$.
Specifically, $B(0,1)$ is the unit ball centered at the origin.
\begin{definition}[\bf Mixed coreset for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace]
\label{def:mixed_coreset}
Given a dataset $P\subset B(0,1)$ and $\varepsilon\in (0,1)$, an $\varepsilon$-mixed-coreset for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace is a subset $S \subseteq P$ with weight $w : S \to \mathbb{R}_{\geq 0}$, such that $\forall c\in \mathbb{R}^d$,
\begin{equation}
\label{eq:DefCoreset}
\sum_{p \in S}{w(p) \cdot d^z(p, c)} \in
\ensuremath{\mathrm{cost}}_z(P, c)\pm \varepsilon \max\left\{1, \|c\|_2\right\}^z \cdot |P|.
\end{equation}
\end{definition}
\noindent
In fact, prior work~\citep{cohenaddad2021new,cohenaddad2022towards,braverman2022power} usually considers the following form: $\forall c\in \mathbb{R}^d$,
\begin{equation}
\label{eq:DefCoreset_equivalent}
\sum_{p \in S}{w(p) \cdot d^z(p, c)}
\in (1 \pm \varepsilon) \cdot \ensuremath{\mathrm{cost}}_z(P, c)\pm \varepsilon |P|.
\end{equation}
Compared to \Cref{def:coreset}, the above inequality allows both a multiplicative error $\varepsilon\cdot \ensuremath{\mathrm{cost}}_z(P,c)$ and an additional additive error $\varepsilon |P|$.
Note that for a center with $\|c\|_2=O(1)$, the additive error $\varepsilon |P|$ dominates the total error; while for $\|c\|_2\gg 1$, the multiplicative error $\varepsilon\cdot \ensuremath{\mathrm{cost}}_z(P,c)\approx \varepsilon \|c\|_2^z\cdot |P|$ dominates the total error.
Hence, it is not hard to check that Inequality~\eqref{eq:DefCoreset_equivalent} is an equivalent form of Inequality~\eqref{eq:DefCoreset} (up to a $2^{O(z)}$ factor).
This is also the reason that we call \Cref{def:mixed_coreset} a mixed coreset.
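To see the equivalence concretely: if $\|c\|_2\le 2$ then $\max\left\{1,\|c\|_2\right\}^z\le 2^z$, so the error terms of the two forms differ by at most a $2^z$ factor. If $\|c\|_2> 2$, then every $p\in B(0,1)$ satisfies $d(p,c)\ge \|c\|_2-1\ge \frac{1}{2}\|c\|_2$, hence
\[
\ensuremath{\mathrm{cost}}_z(P,c)\ge 2^{-z}\|c\|_2^z\cdot |P|,
\]
so the additive term $\varepsilon\max\left\{1,\|c\|_2\right\}^z\cdot |P|$ in Inequality~\eqref{eq:DefCoreset} is absorbed into a $2^{O(z)}\varepsilon$ multiplicative error in Inequality~\eqref{eq:DefCoreset_equivalent}.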
We have the following useful reduction.
\begin{theorem}[\bf{Reduction from coreset to mixed coreset~\citep{braverman2022power}}]
\label{thm:reduction}
Let $\varepsilon\in (0,1)$.
%
Suppose there exists a polynomial time algorithm $A$ that constructs an $\varepsilon$-mixed coreset for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace of size $\Gamma$.
%
Then there exists a polynomial time algorithm $A'$ that constructs an $\varepsilon$-coreset for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace of size $O(\Gamma\log \varepsilon^{-1})$.
%
\end{theorem}
\noindent
Thus, it suffices to construct an $\varepsilon$-mixed coreset of size $z^{O(z)}\sqrt{d} \varepsilon^{-1}$, which implies \Cref{thm:upper}.
\paragraph{Class discrepancy.}
For preparation, we recall the notion of class discrepancy introduced by~\citep{karnin2019discrepancy}.
The idea of combining discrepancy and coreset construction has been studied in the literature, specifically for kernel density estimation~\citep{phillips2018improved,phillips2018near,karnin2019discrepancy,tai2022optimal}.
We use the following definition.
\begin{definition}[\bf{Class discrepancy~\citep{karnin2019discrepancy}}]
\label{def:discrepancy}
Let $m\geq 1$ be an integer.
%
Let $f: \mathcal{X}\times \mathcal{C}\rightarrow \mathbb{R}$ and $P\subseteq \mathcal{X}$ with $|P|=m$.
%
The class discrepancy of $P$ w.r.t.\ $(f,\mathcal{C})$ is
\begin{align*}
D^{(\mathcal{C})}_P(f) := & \min_{\sigma\in \left\{-1,1\right\}^P} D^{(\mathcal{C})}_P(f,\sigma) \\
= &\min_{\sigma\in \left\{-1,1\right\}^P} \max_{c\in \mathcal{C}} \frac{1}{m}\left|\sum_{p\in P} \sigma_p\cdot f(p,c)\right|.
\end{align*}
Moreover, we define $D^{(\mathcal{X}, \mathcal{C})}_m(f) := \max_{P\subseteq \mathcal{X}: |P|=m} D^{(\mathcal{C})}_P(f)$ to be the class discrepancy w.r.t. $(f,\mathcal{X},\mathcal{C})$.
%
\end{definition}
\noindent
Here, $\mathcal{X}$ is the instance space and $\mathcal{C}$ is the parameter space.
%
Specifically, for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace, we let $\mathcal{X}, \mathcal{C}\subseteq \mathbb{R}^d$ and $f$ be the Euclidean distance.
%
The class discrepancy $D^{(\mathcal{X}, \mathcal{C})}_m(f)$ measures the capacity of $\mathcal{C}$.
%
Intuitively, if the capacity of $\mathcal{C}$ is large and leads to a complicated geometric structure of the vectors $(f(p,c))_{p\in P}$ for $c\in \mathcal{C}$, then $D^{(\mathcal{X}, \mathcal{C})}_m(f)$ tends to be large.
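For intuition, $D^{(\mathcal{C})}_P(f)$ can be computed by brute force on tiny instances, as in the following sketch (an illustration of ours: the minimization over $\sigma$ is exponential in $|P|$, and the maximum over $\mathcal{C}$ is approximated by a finite sample of centers):
\begin{verbatim}
import itertools
import numpy as np

def class_discrepancy(P, centers, f):
    # min over sign vectors sigma of
    # max over sampled centers c of (1/m) |sum_p sigma_p * f(p, c)|.
    m = len(P)
    best = np.inf
    for sigma in itertools.product([-1, 1], repeat=m):
        worst = max(abs(sum(s * f(p, c) for s, p in zip(sigma, P))) / m
                    for c in centers)
        best = min(best, worst)
    return best

# For Euclidean (1, 1)-clustering, f is the Euclidean distance:
f_dist = lambda p, c: np.linalg.norm(p - c)
\end{verbatim}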
%
\paragraph{Useful discrepancy results.}
For a vector $p\in \mathbb{R}^d$ and integer $l\geq 1$, let $p^{\otimes l}$ denote the order-$l$ tensor obtained as the outer product of $p$ with itself $l$ times.
For an order-$l$ tensor $X$ with $d^l$ entries, we consider the measure $\|X\|_{T_l} := \max_{q\in \mathbb{R}^d: \|q\|_2 = 1} |\langle X, q^{\otimes l}\rangle |$.
Next, we provide some known results about the class discrepancy.
\begin{theorem}[\bf{An upper bound for class discrepancy (restatement of Theorem 18 of~\citep{karnin2019discrepancy})}]
\label{thm:discrepancy_upper}
\sloppy
Let $\mathcal{X} = B(0,1)$ in $\mathbb{R}^d$.
%
Let $f:\mathbb{R}\rightarrow \mathbb{R}$ be analytic satisfying that for any integer $l\geq 1$, $|\frac{d^l f}{d x^l}(x)| \leq \gamma_1 C^l l!$ for some constant $\gamma_1, C>0$.
%
Let $\mathcal{C} = B(0,\frac{1}{2C})$ and $m\geq 1$ be an integer.
%
The class discrepancy w.r.t. $(f = f(\langle p,c \rangle), \mathcal{X}, \mathcal{C})$ is at most $D^{(\mathcal{X}, \mathcal{C})}_m(f) \leq \gamma_2\gamma_1\sqrt{d}/m$ for some constant $\gamma_2 > 0$.
%
Moreover, for any dataset $P\subset \mathcal{X}$ of size $m$, there exists a randomized polynomial time algorithm that constructs $\sigma\in \left\{-1,1\right\}^P$ satisfying that for any integer $l\geq 1$, we have
\begin{align*}
\Big\|\sum_{p\in P}\sigma_p\cdot p^{\otimes l}\Big\|_{T_l} = O(\sqrt{d l \log^3 l}).
\end{align*}
%
This $\sigma$ satisfies $D^{(\mathcal{C})}_P(f,\sigma) \leq \gamma_2\gamma_1\sqrt{d}/m$.
\end{theorem}
\noindent
Note that the above theorem is a constructive version of the existential result in Theorem 18 of~\citep{karnin2019discrepancy}.
This is because Theorem 18 of~\citep{karnin2019discrepancy} applies the existential version of Banaszczyk's theorem~\citep{banaszczyk1998balancing}, which has recently been made constructive~\citep{bansal2019gram}.
Also, note that the construction of $\sigma$ only depends on $P$ and does not depend on the selection of $\mathcal{C}$.
This observation is important for the construction of mixed coresets via discrepancy.
\subsubsection{Proof of \Cref{thm:upper}}
\label{sec:proof_upper}
We are ready to prove \Cref{thm:upper}.
The main lemma is as follows.
\begin{lemma}[\bf Class discrepancy for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace]
\label{lm:discrepancy_median}
Let $m\geq 1$ be an integer.
%
Let $f = d^z$ and $\mathcal{X} = B(0,1)$.
%
For a given dataset $P\subset \mathcal{X}$ of size $m$, there exists a vector $\sigma\in \left\{-1,1\right\}^P$ such that for any $r>0$,
\[
D^{(B(0,r))}_P(f,\sigma) \leq z^{O(z)}\max\left\{1,r\right\}^z \cdot \sqrt{d}/m.
\]
%
\end{lemma}
\noindent
The above lemma indicates that the class discrepancy for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace linearly depends on the radius $r$ of the parameter space $\mathcal{C}$.
Note that the lemma finds a vector $\sigma$ that satisfies all levels of parameter spaces $\mathcal{C}= B(0,r)$ simultaneously.
This requirement is slightly different from \Cref{def:discrepancy} that considers a fixed parameter space.
Observe that the term $\max\left\{1,r\right\}$ is similar to $\max\left\{1,\|c\|_2\right\}$ in \Cref{def:mixed_coreset}, which is the key to the reduction from \Cref{lm:discrepancy_median} to \Cref{thm:upper}.
The proof idea is similar to that of Fact 6 of~\citep{karnin2019discrepancy}.
\begin{proof}[of \Cref{thm:upper}]
Let $P\subset B(0,1)$ be a dataset of size $n$ and $\Lambda = z^{O(z)} \sqrt{d}\varepsilon^{-1}$.
%
By the same argument as in Fact 6 of~\citep{karnin2019discrepancy}, we can iteratively apply \Cref{lm:discrepancy_median} to construct a subset $S\subseteq P$ of size $m = \Theta(\Lambda)$, together with weights $w(p) = \frac{n}{|S|}$ for $p\in S$ and a vector $\sigma\in \left\{-1,1\right\}^S$, such that for any $c\in \mathbb{R}^d$,
\begin{align*}
& \quad \left| \sum_{p\in S} w(p)\cdot d^z(p,c) - \ensuremath{\mathrm{cost}}_z(P,c) \right| \\
\leq & \quad n\cdot D^{(B(0,\|c\|_2))}_S(f,\sigma) \\
\leq & \quad \varepsilon \max\left\{1, \|c\|_2\right\}^z\cdot n.
\end{align*}
%
%
This implies that $S$ is an $O(\varepsilon)$-mixed coreset for Euclidean \ProblemName{$(1, z)$-Clustering}\xspace of size at most $\Lambda = z^{O(z)}\sqrt{d} \varepsilon^{-1}$; combined with \Cref{thm:reduction}, this completes the proof of \Cref{thm:upper}.
%
\end{proof}
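The construction in the proof (the halving argument of Fact 6 in~\citep{karnin2019discrepancy}) can be sketched as follows. Here \texttt{low\_discrepancy\_signs} stands in for the constructive signing of \Cref{thm:discrepancy_upper}; it is an assumed subroutine of ours, taken to return a near-balanced signing, so that the errors of the halving steps form a geometric series dominated by the last step:
\begin{verbatim}
import numpy as np

def halve(P, low_discrepancy_signs):
    # After doubling weights, either sign class approximates P up to the
    # discrepancy error; keep the smaller class to make progress.
    sigma = low_discrepancy_signs(P)      # array in {-1, +1}^|P|
    plus, minus = P[sigma > 0], P[sigma <= 0]
    return plus if 0 < len(plus) <= len(minus) else minus

def coreset_by_halving(P, target_size, low_discrepancy_signs):
    # Halve until the target size; survivors get uniform weight n / |S|.
    n = len(P)
    S = P
    while len(S) > target_size:
        S = halve(S, low_discrepancy_signs)
    return S, np.full(len(S), n / len(S))
\end{verbatim}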
\noindent
It remains to prove \Cref{lm:discrepancy_median}.
\begin{proof}[of \Cref{lm:discrepancy_median}]
Let $P\subset B(0,1)$ be a dataset of size $m$.
%
We first construct a vector $\sigma\in \left\{-1,1\right\}^P$ as follows:
\begin{enumerate}
\item For each $p\in P$, construct a point $\phi(p)=(\frac{1}{2}\|p\|_2^2, \frac{\sqrt{2}}{2} p, \frac{1}{2})\in \mathbb{R}^{d+2}$.
%
\item By \Cref{thm:discrepancy_upper}, construct $\sigma\in \left\{-1,1\right\}^P$ such that for any integer $l\geq 1$,
\[
\|\sum_{p\in P}\sigma_p\cdot \phi(p)^{\otimes l}\|_{T_l} = O(\sqrt{(d+2) l \log^3 l}).
\]
\end{enumerate}
%
Let $\phi(P)$ be the collection of all $\phi(p)$s.
%
Note that $\|\phi(p)\|_2\leq 1$ by construction, which implies that $\phi(P)\subset B(0,1)\subset \mathbb{R}^{d+2}$.
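For the reader's convenience, the norm bound follows from the direct computation
\[
\|\phi(p)\|_2^2 = \frac{1}{4}\|p\|_2^4 + \frac{1}{2}\|p\|_2^2 + \frac{1}{4} = \Big(\frac{\|p\|_2^2+1}{2}\Big)^2,
\]
so that $\|\phi(p)\|_2 = \frac{\|p\|_2^2+1}{2} \le 1$ whenever $\|p\|_2\le 1$.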
%
In the following, we show that $\sigma$ satisfies \Cref{lm:discrepancy_median}.
%
Fix $r\geq 1$ and let $\mathcal{C}=B(0,r)$; for $r<1$, the claim follows from the case $r=1$ since $B(0,r)\subseteq B(0,1)$.
%
We construct another dataset $P' = \left\{p'=\frac{p}{4r}: p\in P\right\}$.
%
For any $c\in \mathcal{C}=B(0,r)$, we let $c' = \frac{c}{4r}\in B(0,\frac{1}{4})$.
%
By definition, we have for any $c\in \mathcal{C}$,
\[
\frac{1}{m} \left|\sum_{p\in P}\sigma_p\cdot f(p,c)\right| = \frac{(4r)^z}{m} \left|\sum_{p'\in P'}\sigma_p\cdot f(p',c')\right|,
\]
which implies that
\[
D^{(\mathcal{C})}_P(f,\sigma) = (4r)^z\cdot D^{(B(0,\frac{1}{4}))}_{P'}(f,\sigma).
\]
%
Thus, it suffices to prove that
\begin{align}
\label{ineq1_proof_lm:discrepancy_median}
D^{(B(0,\frac{1}{4}))}_{P'}(f,\sigma) \leq z^{O(z)}\sqrt{d}/m,
\end{align}
which implies the lemma.
%
The proof idea of Inequality~\eqref{ineq1_proof_lm:discrepancy_median} is similar to that of Theorem 22 of~\citep{karnin2019discrepancy}.\footnote{Note that the proof of Theorem 22 of~\citep{karnin2019discrepancy} is actually incorrect. Applying Theorem 18 of~\citep{karnin2019discrepancy} may lead to an upper bound $\|\tilde{q}\|_2< 1$, which makes $R$ in Theorem 22 of~\citep{karnin2019discrepancy} not exist.}
%
For each $p'\in P'$ and $c'\in B(0,\frac{1}{4})$, let $\psi(c') = (\frac{1}{8r^2}, -\frac{\sqrt{2}}{2r} c', 2\|c'\|_2^2)\in \mathbb{R}^{d+2}$ and we can rewrite $f(p',c')$ as follows:
\[
f(p',c') = \|p'-c'\|_2^z = (\left\langle \phi(p), \psi(c') \right\rangle)^{z/2}.
\]
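For the reader's convenience, this identity can be verified by direct expansion (recall $p' = \frac{p}{4r}$):
\[
\left\langle \phi(p), \psi(c') \right\rangle = \frac{\|p\|_2^2}{16r^2} - \frac{\langle p, c'\rangle}{2r} + \|c'\|_2^2 = \Big\|\frac{p}{4r} - c'\Big\|_2^2 = \|p'-c'\|_2^2.
\]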
%
We note that $\phi(p)\in B(0,1)$ and $\psi(c')\in B(0,\frac{1}{3})$ since $c'\in B(0,\frac{1}{4})$.
%
Construct another function $g: \phi(P)\times B(0,\frac{1}{3})\rightarrow \mathbb{R}$ as follows: for each $q\in \phi(P)$ and $c\in B(0,\frac{1}{3})$,
\begin{enumerate}
\item If $\langle q',c\rangle \geq 0$ for all $q'\in \phi(P)$, let $g(q,c) = g(\langle q,c\rangle) = (\langle q,c\rangle)^{z/2}$;
\item Otherwise, let $g(q,c) = 0$.
\end{enumerate}
We have $|\frac{d^l g}{d x^l}(x)| \leq z^{O(z)} l!$ for any integer $l\geq 1$.
%
By the construction of $\sigma$ and \Cref{thm:discrepancy_upper}, we have that
\[
D^{(B(0,\frac{1}{3}))}_{\phi(P)}(g,\sigma) \leq z^{O(z)}\sqrt{d}/m,
\]
which implies Inequality~\eqref{ineq1_proof_lm:discrepancy_median} since $D^{(B(0,\frac{1}{4}))}_{P'}(f,\sigma)\leq D^{(B(0,\frac{1}{3}))}_{\phi(P)}(g,\sigma)$ due to the fact that $\psi(c')\in B(0,\frac{1}{3})$.
%
Overall, we complete the proof.
\end{proof}
\subsection{Improved Coreset Lower Bound in $\mathbb{R}^d$ when $k\geq 2$}\label{sec:lb_main}
We present a lower bound for the coreset size in small dimensional spaces.
\begin{theorem}[\bf{Coreset lower bound in small dimensional spaces}]
\label{thm:lowerbound}
Given an integer $k\geq 1$, constant $z\geq 1$ and a real number $\epsilon \in (0,1)$, for any integer $d \le \frac{1}{100\epsilon^2}$, there is a dataset $P \subset \mathbb{R}^{d+1}$ such that any $\epsilon$-coreset of $P$ for \ProblemName{$(k, z)$-Clustering}\xspace must contain at least $\frac{dk}{10 z^4}$ points.
\end{theorem}
\noindent
When $d = \Theta(\frac{1}{\epsilon^2})$, \Cref{thm:lowerbound} recovers the well-known lower bound $\Omega(\frac{k}{\epsilon^2})$. When $d \ll \frac{1}{\epsilon^2}$, the theorem is non-trivial.
In the following, we prove \Cref{thm:lowerbound} for $z = 2$ and show how to extend to general $z \ge 1$ in \Cref{sec:generalz}.
\subsubsection{Preparation}
\paragraph{Notations} Let $e_0, \cdots, e_d$ be the standard basis vectors of $\mathbb{R}^{d+1}$, and $H_1,\cdots,H_{k/2}$ be $k/2$ $d$-dimensional affine subspaces, where $H_j := jLe_0 + \text{span}\left\{e_1,\dots,e_d\right\}$ for a sufficiently large constant $L$. For any $p\in \mathbb{R}^{d+1}$, we use $\Tilde{p}$ to denote the $d$-dimensional vector $p_{1:d}$ (i.e., discard the $0$-th coordinate of $p$).
\paragraph{Hard instance}
We construct the hard instance as follows. Take $P_j = \{ jLe_0+e_1,\cdots, jLe_0+e_{d/2} \}$ for $j \in \{1, \dots, k/2\}$, and let the hard instance $P$ be the union of all $P_j$.
Note that $P_j \subset H_j$ for each $j$ and $|P|=kd/4$.
In our proof, we always put two centers in each $H_j$. Thus for large enough $L$, all $p\in P_j$ must be assigned to centers in $H_j$.
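The instance is easy to generate explicitly; the following sketch (an illustration of ours, with $L$ left as a free parameter) enumerates its $kd/4$ points:
\begin{verbatim}
import numpy as np

def hard_instance(k, d, L=1e6):
    # P = union of P_j with P_j = { jL*e_0 + e_i : 1 <= i <= d/2 },
    # one group per affine subspace H_j, for j = 1, ..., k/2.
    P = []
    for j in range(1, k // 2 + 1):
        for i in range(1, d // 2 + 1):
            p = np.zeros(d + 1)
            p[0] = j * L   # 0-th coordinate places the point on H_j
            p[i] = 1.0     # basis direction e_i within H_j
            P.append(p)
    return np.array(P)     # shape (k*d/4, d+1)
\end{verbatim}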
We will use the following two technical lemmas from \citep{cohenaddad2022towards}.
\begin{lemma}\label{lem:cost-to-basis}
For any $k\ge 1$ and arbitrary $k$ unit vectors $\{c_1,\cdots,c_k\}$ in $\mathbb{R}^d$, we have
\begin{align*}
\sum_{i=1}^{d/2} \min_{\ell=1}^k\|e_i-c_{\ell}\|^2 \ge d-\sqrt{dk/2}.
\end{align*}
\end{lemma}
\begin{lemma}\label{lem:cost-to-smallset}
Let $S$ be a set of points in $\mathbb{R}^d$ of size $t$ and $w: S\rightarrow \mathbb{R}^+$ be their weights. There exist $2$ unit vectors $v_1, v_2$, such that
\begin{align*}
\sum_{p\in S} w(p)\min_{\ell=1,2} \|p-v_{\ell}\|^2 \le \sum_{p\in S}w(p)(\|p\|^2+1)- \frac{2\sum_{p\in S} w(p)\|p\|}{\sqrt{t}}.
\end{align*}
\end{lemma}
\subsubsection{Proof of \Cref{thm:lowerbound} when $z=2$}
\noindent
Now we are ready to prove \Cref{thm:lowerbound} when $z=2$.
\begin{proof}
Let $S$ be an $\varepsilon$-coreset of $P$ with weights $w$. Note that the points of $S$ might not lie in any $H_j$. We first map each point $p\in S$ to an index $j_p\in [k/2]$ such that $H_{j_p}$ is the nearest subspace to $p$. The mapping is simple:
\begin{align*}
j_p = \arg\min_{j\in[k/2]} |p_0 - jL|,
\end{align*}
where $p_0$ is the $0$-th coordinate of $p$.
Let $\Delta_p = p_0 - j_pL$, so that $|\Delta_p|$ is the distance of $p$ to its closest affine subspace. Let $S_j:=\{p\in S: j_p=j\}$ be the set of points in $S$ whose closest affine subspace is $H_j$. Define $I:=\{j\in[k/2] : |S_j| \le d/4\}$. Consider any $k$-center set $C$ such that $H_j\bigcap C \neq \emptyset$ for every $j$ and every center is within distance $O(1)$ of the dataset, as is the case for all center sets used below. Then $\ensuremath{\mathrm{cost}}(P, C)\ll L$ for sufficiently large $L$. On the other hand, $\ensuremath{\mathrm{cost}}(S, C) \ge \sum_{p\in S} \Delta_p^2$. Since $S$ is a coreset, $\Delta_p^2 \ll L$ for all $p\in S$.\footnote{Here we do not allow offsets to simplify the proof, but our technique can be extended to handle offsets.} Therefore each $p\in S$ must be very close to its closest affine subspace; in particular, we can assume that $p$ must be assigned to some center in $H_{j_p}$ (if there exists one).
In the proof that follows, we consider three different sets of $k$ centers, $C_1,C_2$, and $C_3$, and compare the costs $\ensuremath{\mathrm{cost}}(P, C_i)$ and $\ensuremath{\mathrm{cost}}(S, C_i)$ for $i=1,2,3$. In each $C_i$, there are two centers in each $H_j$. As discussed above, for large enough $L$, the total cost for both $P$ and $S$ decomposes into the sum of costs over the affine subspaces.
For each $j\in \Bar{I}$, the corresponding centers in $H_j$ are the same across $C_1,C_2,C_3$. Let $c_j$ be any point in $H_j$ such that $c_j-jLe_0$ has unit norm and is orthogonal to $e_1,\cdots,e_{d/2}$; in other words, $\|\Tilde{c}_j\|=1$ and the first $d/2$ coordinates of $\Tilde{c}_j$ are all zero. Specifically, we set $c_j = jLe_0 + e_{d/2+1}$, and the two centers in $H_j$ are two copies of $c_j$ for $j\in \Bar{I}$.
We first consider the following $k$ centers denoted by $C_1$. As we have specified the centers for $j\in \Bar{I}$, we only describe the centers for each $j\in I$. Since by definition, $|S_j|\le d/4$, we can find a vector $c_j\in \mathbb{R}^{d+1}$ in $H_j$ such that $c_j-jLe_0$ has unit norm and is orthogonal to $e_1,\cdots,e_{d/2}$ and all vectors in $S_j$. Let $C_1$ be the set of $k$ points with each point in $\{c_1,\cdots,c_{k/2}\}$ copied twice. We evaluate the cost of $C_1$ with respect to $P$ and $S$.
\begin{lemma}
For $C_1$ constructed above, we have $\ensuremath{\mathrm{cost}}(P,C_1) = \frac{kd}{2}$ and
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_1) =\sum_{p\in S} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1) - 2\sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\langle p-jLe_0, jLe_0-c_j \rangle.
\end{align*}
\end{lemma}
\begin{proof}
Since $e_i$ is orthogonal to $c_j-jLe_0$ and $c_j-jLe_0$ has unit norm for all $i,j$, it follows that
\begin{align}
\ensuremath{\mathrm{cost}}(P,C_1) &=
\sum_{j=1}^{k/2}\sum_{i=1}^{d/2} \min_{c\in C_1}\|jLe_0+e_i-c\|^2 = \sum_{j=1}^{k/2} \sum_{i=1}^{d/2} \|jLe_0+e_i -c_j\|^2 \nonumber\\
&=\sum_{j=1}^{k/2}\sum_{i=1}^{d/2}(\|e_i\|^2+\|c_j-jLe_0\|^2-2\langle e_i, c_j-jLe_0 \rangle )\nonumber\\
&=\frac{kd}{2}.\label{eqn:cost-of-c}
\end{align}
On the other hand, the cost of $C_1$ w.r.t.\ $S_j$ is
\begin{align}
\sum_{p\in S_j} \min_{c\in C_1}w(p)\|p-c\|^2 &= \sum_{p\in S_j} w(p)\|p-c_j\|^2 = \sum_{p\in S_j} w(p)\|p-jLe_0+ jLe_0-c_j\|^2 \nonumber\\
&= \sum_{p\in S_j}w(p)\left(\|p-jLe_0\|^2 + 1 -2\langle p-jLe_0, jLe_0-c_j \rangle \right)\nonumber\\
&= \sum_{p\in S_j} w(p)\left(\Delta_p^2+\|\Tilde{p}\|^2 +1 -2\langle p-jLe_0, jLe_0-c_j \rangle\right) .\label{eqn:cost-of-c-to-S}
\end{align}
Recall $\Tilde{p}\in \mathbb{R}^d$ is $p_{1:d}$.
For $j\in I$, the inner product is $0$, and thus the total cost w.r.t.\ $S$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_1) = \sum_{p\in S} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1) - 2\sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\langle p-jLe_0, jLe_0-c_j \rangle,
\end{align*}
which finishes the proof.
\end{proof}
\noindent
For notational convenience, we define $\kappa := 2\sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\langle p-jLe_0, jLe_0-c_j \rangle$.
Since $S$ is an $\varepsilon$-coreset of $P$, we have
\begin{align}\label{eqn:weight-constraints}
dk/2- \varepsilon dk/2\le \sum_{p\in S} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1) - \kappa \le dk/2+\varepsilon dk/2.
\end{align}
Next we consider a different set of $k$ centers denoted by $C_2$. By \Cref{lem:cost-to-smallset}, there exist unit vectors $v^j_1,v^j_2 \in \mathbb{R}^d$ such that
\begin{align}
\sum_{p\in S_j} w(p)(\min_{\ell=1,2} \|\Tilde{p}-v^j_{\ell}\|^2+\Delta_p^2) \le \sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)- \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}}. \label{eqn:small-coreset-cost}
\end{align}
We apply this to all $j\in I$ and obtain the corresponding $v^j_1,v^j_2$. Let $C_2=\{u_1^1,u_2^1,\cdots, u_1^{k/2},u_2^{k/2}\}$ be a set of $k$ centers in $\mathbb{R}^{d+1}$ defined as follows: if $j\in I$, $u_{\ell}^j$ is $v_{\ell}^j$ with an additional $0$th coordinate with value $jL$, making them lie in $H_j$; for $j\in \Bar{I}$, we use the same centers as in $C_1$, i.e., $u_{1}^j=u_{2}^j =c_j$.
\begin{lemma}
For $C_2$ constructed above, we have
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_2) \ge \frac{kd}{2}-\sqrt{d}|I| \text{ and }
\end{align*}
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_2) \le \sum_{p\in S}w(p)(\|\Tilde{p}\|^2+1+\Delta_p^2)- \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}} -\kappa.
\end{align*}
\end{lemma}
\begin{proof}
By \eqref{eqn:small-coreset-cost},
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_2) &= \sum_{j=1}^{k/2} \sum_{p\in S_j}w(p)\min_{c\in C_2}\|p-c\|^2 \\
&= \sum_{j\in I}\sum_{p\in S_j} w(p)\min_{\ell=1,2} (\|\Tilde{p}-v^j_{\ell}\|^2+\Delta^2_p) +\sum_{j\in \Bar{I}} \sum_{p\in S_j}w(p)\|p-c_j\|^2\\
&\le \sum_{p\in S}w(p)(\|\Tilde{p}\|^2+1+\Delta_p^2)- \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}} -\kappa.
\end{align*}
By \Cref{lem:cost-to-basis} (with $k=2$), we have
\begin{align*}
\sum_{i=1}^{d/2} \min_{\ell=1,2}\|e_i-v^j_{\ell}\|^2 \ge {d}-\sqrt{d}.
\end{align*}
It follows that
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_2)&=\sum_{j=1}^{k/2}\sum_{i=1}^{d/2} \min_{c\in C_2}\|jLe_0+e_i-c\|^2 = \sum_{j\in I}\sum_{i=1}^{d/2} \min_{\ell=1,2} \|e_i-v^{j}_{\ell}\|^2 + \sum_{j\in \Bar{I}}\sum_{i=1}^{d/2} \|jLe_0+e_i-c_j\|^2 \\
&\ge \frac{kd}{2}-\sqrt{d}|I|,
\end{align*}
where in the inequality, we also used the orthogonality between $e_i$ and $c_j-jLe_0$.
\end{proof}
\noindent
Since $S$ is an $\varepsilon$-coreset of $P$, we have
\begin{align*}
\frac{dk}{2}-|I|\sqrt{d} - \frac{\varepsilon dk}{2} \le (\frac{dk}{2}-|I|\sqrt{d})(1-\varepsilon) \le \sum_{p\in S}w(p)(\|\Tilde{p}\|^2+1+\Delta_p^2)- \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}}-\kappa,
\end{align*}
which implies
\begin{align}
\sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}}
&\le \sum_{p\in S}w(p)(\|\Tilde{p}\|^2+1+\Delta_p^2) -\frac{dk-2|I|\sqrt{d}-\varepsilon kd}{2}-\kappa
\nonumber\\
&\le \frac{dk+\varepsilon dk}{2} - \frac{dk-2|I|\sqrt{d}-\varepsilon kd}{2} \quad\textnormal{by \eqref{eqn:weight-constraints}} \nonumber\\
&= |I|\sqrt{d} +\varepsilon kd. \nonumber
\end{align}
By the definition of $I$, $|S_j| \le d/4$ for each $j\in I$, so
\begin{align*}
\sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{d/4}}\le \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}},
\end{align*}
and it follows that
\begin{align}\label{eqn:size-constraint}
\frac{\sum_{j\in I} \sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{d}}
\le \frac{|I|\sqrt{d} +\varepsilon kd} {4}.
\end{align}
Finally we consider a third set of $k$ centers $C_3$. Similarly, there are two centers per group. Let $m$ be a power of $2$ in $[d/2,d]$, and let $h_1,\cdots,h_m$ be the $m$-dimensional Hadamard basis vectors; all $h_{\ell}$'s are $\{-\frac{1}{\sqrt{m}},\frac{1}{\sqrt{m}}\}$ vectors and $h_1=(\frac{1}{\sqrt{m}},\cdots,\frac{1}{\sqrt{m}})$. We slightly abuse notation and treat each $h_{\ell}$ as a $d$-dimensional vector by padding zeros at the end. For each $h_{\ell}$, we construct a set of $k$ centers as follows. For each $j\in \Bar{I}$, we still use two copies of $c_j$. For $j\in I$, the $0$th coordinate of both centers is $jL$; we then concatenate $h_{\ell}$ and $-h_{\ell}$ to the first and second centers, respectively.
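The Hadamard basis, and hence $C_3$, is straightforward to generate. The following sketch (an illustration of ours; \texttt{I} is the index set above and \texttt{c\_bar} maps each $j\in \Bar{I}$ to the center $c_j$ fixed earlier) uses Sylvester's construction:
\begin{verbatim}
import numpy as np

def hadamard_basis(m):
    # Sylvester construction: orthonormal rows with entries +-1/sqrt(m);
    # m must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < m:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(m)

def centers_C3(ell, I, k, d, m, L, c_bar):
    # For j in I: the two centers (jL, +h_ell) and (jL, -h_ell), where
    # h_ell is zero-padded to dimension d; otherwise two copies of c_j.
    h = np.zeros(d)
    h[:m] = hadamard_basis(m)[ell]
    C = []
    for j in range(1, k // 2 + 1):
        for s in (+1.0, -1.0):
            if j in I:
                C.append(np.concatenate(([j * L], s * h)))
            else:
                C.append(c_bar[j].copy())
    return np.array(C)   # k centers in R^(d+1)
\end{verbatim}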
\begin{lemma}\label{lem:hadamard-cost}
Suppose $C_3$ is constructed based on $h_{\ell}$. Then for all $\ell\in [m]$, we have
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_3) = \frac{kd}{2} -\frac{d|I|}{\sqrt{m}}\text{ and }
\end{align*}
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_3) = \sum_{p\in S} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2) - 2\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell} \rangle -\kappa.
\end{align*}
\end{lemma}
\begin{proof}
For $j\in I$, the cost of the two centers w.r.t.\ $P_j$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(P_j,C_3) = \sum_{i=1}^{d/2} \min_{s=-1,+1}\|e_i - s\cdot h_{\ell}\|^2 = \sum_{i=1}^{d/2} (2-2\max_{s=-1,+1}\langle h_{\ell},e_i\rangle)=\sum_{i=1}^{d/2} (2-\frac{2}{\sqrt{m}}) = d-\frac{d}{\sqrt{m}}.
\end{align*}
For $j\in \Bar{I}$, the cost w.r.t.\ $P_j$ is $d$ by \eqref{eqn:cost-of-c}.
Thus, the total cost over all subspaces is
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_3) = (d-\frac{d}{\sqrt{m}})|I| + \left(\frac{k}{2} -|I| \right)d = \frac{kd}{2} -\frac{d|I|}{\sqrt{m}}.
\end{align*}
On the other hand, for $j\in I$, the cost w.r.t.\ $S_j$ is
\begin{align*}
\sum_{p\in S_j} w(p)(\Delta_p^2+ \min_{s=\{-1,+1\}} \|\Tilde{p}-s\cdot h_{\ell}\|^2) &=\sum_{p\in S_j} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2 - 2\max_{s=\{-1,+1\}}\langle \Tilde{p}, s\cdot h_{\ell} \rangle)\\
& =\sum_{p\in S_j} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2 - 2\langle \Tilde{p}, h^p_{\ell} \rangle).
\end{align*}
Here $h^p_{\ell} = s^p\cdot h_{\ell}$, where $s^p=\arg\max_{s=\{-1,+1\}}\langle \Tilde{p}, s\cdot h_{\ell} \rangle$.
For $j\in \Bar{I}$, the cost w.r.t.\ $S_j$ is $\sum_{p\in S_j} w(p)\left(\Delta_p^2+\|\Tilde{p}\|^2 +1 -2\langle p-jLe_0, jLe_0-c_j \rangle \right)$ by \eqref{eqn:cost-of-c-to-S}. Thus, the total cost w.r.t.\ $S$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_3) = \sum_{p\in S} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2) - 2\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell} \rangle -\kappa .
\end{align*}
This finishes the proof.
\end{proof}
\begin{corollary}
Let $S$ be an $\varepsilon$-coreset of $P$, and $I =\{j: |S_j|\le d/4\}$. Then
\begin{align*}
\sum_{j\in I}\sum_{p\in S_j} w(p)\|\Tilde{p}\| \ge \frac{d|I|-\varepsilon kd\sqrt{d}}{2}.
\end{align*}
\end{corollary}
\begin{proof}
Since $S$ is an $\varepsilon$-coreset, we have by \Cref{lem:hadamard-cost}
\begin{align*}
2\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell} \rangle &\ge \sum_{p\in S} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2) -\kappa -(\frac{kd}{2} -\frac{d|I|}{\sqrt{m}})(1+\varepsilon)\\
&\ge \sum_{p\in S} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2) -\kappa -\frac{kd}{2} +\frac{d|I|}{\sqrt{m}}-\frac{\varepsilon kd}{2}\\
&\ge \frac{dk-\varepsilon dk}{2} - \frac{kd}{2} +\frac{d|I|}{\sqrt{m}}-\frac{\varepsilon kd}{2} \quad\textnormal{by \eqref{eqn:weight-constraints}}\\
&= \frac{d|I|}{\sqrt{m}} - \varepsilon kd.
\end{align*}
Since the above inequality holds for all $\ell\in[m]$, summing over $\ell$ gives
$$ 2\sum_{\ell=1}^m\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell}\rangle\ge d|I|\sqrt{m} - \varepsilon kdm.$$
By the Cauchy--Schwarz inequality,
\begin{align*}
\sum_{\ell=1}^m\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell} \rangle &= \sum_{j\in I}\sum_{p\in S_j}\langle w(p) \Tilde{p}, \sum_{\ell=1}^m h^p_{\ell} \rangle \\
&\le \sum_{j\in I}\sum_{p\in S_j} w(p)\|\Tilde{p}\| \|\sum_{\ell=1}^m h^p_{\ell} \| \\
&= \sqrt{m}\sum_{j\in I}\sum_{p\in S_j} w(p)\|\Tilde{p}\|.
\end{align*}
Therefore, we have
\begin{align*}
\sum_{j\in I}\sum_{p\in S_j} w(p)\|\Tilde{p}\| \ge \frac{d|I|-\varepsilon kd\sqrt{m}}{2} \ge \frac{d|I|-\varepsilon kd\sqrt{d}}{2}.
\end{align*}
\end{proof}
\noindent
Combining the above corollary with \eqref{eqn:size-constraint}, we have
\begin{align*}
\frac{\sqrt{d}|I| -\varepsilon kd }{2} \le \frac{|I|\sqrt{d} + \varepsilon kd}{4} \implies |I| \le 3\varepsilon k\sqrt{d}.
\end{align*}
By the assumption $d\le \frac{1}{100\varepsilon^2}$, it holds that $|I| \le 3\varepsilon k\sqrt{d} \le \frac{3k}{10}$, and hence $|\Bar{I}|\ge \frac{k}{2} -\frac{3k}{10}=\frac{k}{5}$. Moreover, since $|S_j|>\frac{d}{4}$ for each $j\in \Bar{I}$, we have $|S|>\frac{d}{4}\cdot \frac{k}{5} = \frac{kd}{20} \ge \frac{dk}{10z^4}$ for $z=2$, as desired.
\end{proof}
\section{Improved Coreset Lower Bound in $\mathbb{R}^d$ when $k\geq 2$}\label{sec:lb}
We present a lower bound for the coreset size in small dimensional spaces.
\begin{theorem}[\bf{Coreset lower bound in small dimensional spaces}]
\label{thm:lowerbound_appendix}
Given an integer $k$ and a real number $\epsilon \in (0,1)$, for any integer $d \le \frac{1}{45} \frac{1}{\epsilon^2}$, there is a dataset $P \subset \mathbb{R}^{d+1}$ such that any $\epsilon$-coreset of $P$ must contain at least $\frac{1}{10} dk$ points.
\end{theorem}
\noindent
When $d = \Theta(\frac{1}{\epsilon^2})$, Theorem~\ref{thm:lowerbound_appendix} recovers the well-known lower bound $\Omega(\frac{k}{\epsilon^2})$. When $d \ll \frac{1}{\epsilon^2}$, the theorem is non-trivial.
\begin{proof}
We work in $\mathbb{R}^{d+1}$ instead of $\mathbb{R}^d$ for technical reasons. We will construct $k$ $d$-dimensional affine subspaces in $\mathbb{R}^{d+1}$, each far away from the others. Then we consider the standard basis in each subspace and show that a coreset of the dataset must contain at least $\Omega(kd)$ points.
Denote the standard basis in $\mathbb{R}^{d+1}$ by $e_0,\dots,e_d$. For each $j \in [k]$, we consider the dataset $P_j \triangleq jLe_0 + \{ e_1, e_2, \dots, e_d\}$, where $L$ is a sufficiently large positive number. We take the full dataset $P$ as $P = \cup_{j \in [k]} P_j$. Each $P_j$ lies in a $d$-dimensional affine subspace $jLe_0 + \text{span}\left<e_1,\dots,e_d\right>$. These affine subspaces are far away from each other since $L$ is large enough, and this separation property ensures that we can analyze the affine subspaces independently.
Denote the coreset of $P$ by $C$. For each $j \in [k]$, denote $C_j \triangleq C \cap P_j$. If $jLe_0 + e_i \in C_j$, we write $i \in C_j$ for convenience, and we denote its coreset weight by $w_i$. Let $v_j \triangleq \sum_{i \in C_j} w_i e_i - \sum_{i=1}^d e_i$, and consider the query $Q = \{q_1,\dots,q_k\}$ where $q_j \triangleq \frac{v_j}{\| v_j\|_2} + jLe_0 $.
Note that each $p \in P_j$ has $q_j$ as its closest query point since $L$ is large enough; thus $\ensuremath{\mathrm{cost}}(P,Q) = \sum_{j=1}^k \ensuremath{\mathrm{cost}}(P_j,Q) = \sum_{j=1}^k \ensuremath{\mathrm{cost}}(P_j,q_j)$ and $\ensuremath{\mathrm{cost}}(C,Q) = \sum_{j=1}^k \ensuremath{\mathrm{cost}}(C_j,q_j)$. We compute $\ensuremath{\mathrm{cost}}(P,Q)$ and $\ensuremath{\mathrm{cost}}(C,Q)$ by computing $\ensuremath{\mathrm{cost}}(P_j,q_j)$ and $\ensuremath{\mathrm{cost}}(C_j,q_j)$.
We first compute $\ensuremath{\mathrm{cost}}(P_j,q_j)$:
\begin{align*}
\ensuremath{\mathrm{cost}}(P_j,q_j) &= \sum_{i=1}^d \|jLe_0 + e_i - q_j\|^2= \sum_{i=1}^d \Big\| e_i - \frac{v_j}{\|v_j\|}\Big\|^2= 2d - 2\sum_{i=1}^d \left< e_i,\frac{v_j}{\|v_j\|}\right>.
\end{align*}
Similarly, we have $\ensuremath{\mathrm{cost}}(C_j,q_j) = 2\sum_{i \in C_j} w_i - 2\sum_{i \in C_j} w_i \left< e_i,\frac{v_j}{\|v_j\|}\right>$.
Combining them all, we have
\begin{align*}
\quad \ensuremath{\mathrm{cost}}(P,Q) - \ensuremath{\mathrm{cost}}(C,Q)
= & \quad 2kd - 2\sum_{j=1}^k \sum_{i \in C_j} w_i + 2 \sum_{j=1}^k \left< \sum_{i \in C_j} w_i e_i - \sum_{i=1}^d e_i, \frac{v_j}{\|v_j\|}\right>\\
= & \quad 2kd - 2\sum_{j=1}^k \sum_{i \in C_j} w_i + 2 \sum_{j=1}^k \| v_j\|.
\end{align*}
The coreset property implies that $|\ensuremath{\mathrm{cost}}(P,Q) - \ensuremath{\mathrm{cost}}(C,Q)| \le \epsilon \ensuremath{\mathrm{cost}}(P,Q) \le 4\epsilon kd$, thus we have
\begin{equation}
\label{inequality:coreset_property}
2kd - 2\sum_{j=1}^k \sum_{i \in C_j} w_i + 2 \sum_{j=1}^k \| v_j\| \le 4\epsilon kd.
\end{equation}
Taking $\tilde{Q} = \{Le_0,2Le_0,3Le_0,\dots,kLe_0\}$, we have $\ensuremath{\mathrm{cost}}(P,\tilde{Q}) = kd$ and $\ensuremath{\mathrm{cost}}(C,\tilde{Q}) = \sum_{j=1}^k \sum_{i \in C_j} w_i$; the coreset property then gives $|kd - \sum_{j=1}^k \sum_{i \in C_j} w_i| \le \epsilon kd$. Substituting this into Inequality~\eqref{inequality:coreset_property}, we get
\begin{equation}
\label{inequality:sum_of_query_norm}
\sum_{j=1}^k \| v_j\| \le 3\epsilon kd.
\end{equation}
For each $j \in [k]$, we have $\|v_j\| = \| \sum_{i \in C_j} w_i e_i - \sum_{i=1}^d e_i\| \ge \| \sum_{i \not \in C_j} e_i\| = \sqrt{\#(P_j) - \#(C_j)}$. Substituting this into Inequality~\eqref{inequality:sum_of_query_norm}, we have
\begin{equation}
\label{inequality:coreset_size}
\sum_{j=1}^k \sqrt{\#( P_j) - \#(C_j)} \le 3\epsilon kd.
\end{equation}
Our goal is to show that $\#(C) \ge \frac{1}{10} kd = \frac{1}{10} \#(P)$, and we prove it by contradiction: we will show that if $\#(C) < \frac{1}{10} \#(P)$, then the dimension $d$ is larger than $\frac{1}{45} \frac{1}{\epsilon^2}$, which contradicts the assumption on $d$.
Assume that $\#(C) < \frac{1}{10} \#(P) = \frac{1}{10} kd$. Then for at least half of the $P_j$ we have $\#(C_j) \le \frac{1}{5} \#(P_j)$, and thus $\#( P_j) - \#(C_j) \ge \frac{4}{5} \#(P_j) = \frac{4}{5}d$. Summing over these $P_j$, we have $\sum_{j=1}^k \sqrt{\#( P_j) - \#(C_j)} \ge \frac{k}{2} \sqrt{\frac{4}{5}d} = k \sqrt{\frac{1}{5} d}$. By Inequality~\eqref{inequality:coreset_size} we have $k \sqrt{\frac{1}{5}d} \le 3 \epsilon k d$, thus $\frac{1}{45} \frac{1}{\epsilon^2} \le d$. This is a contradiction.
\end{proof}
\begin{remark}
The proof assumes that the coreset is a subset of the original dataset; it also holds for coresets with offsets.
\end{remark}
\section{Generalized Lower Bound for \ProblemName{$k$-Means}\xspace Clustering with General $S$}
The lower bound proved above relies on the assumption that the coreset $S$ is a subset of the original dataset. Next we generalize the result by allowing arbitrary $S$ in $\mathbb{R}^{d+1}$.
\subsection{Preparation}
\paragraph{Additional notation} Let $e_0, \cdots, e_d$ be the standard basis vectors of $\mathbb{R}^{d+1}$, and $H_1,\cdots,H_{k/2}$ be $k/2$ $d$-dimensional affine subspaces, where $H_j := jLe_0 + \text{span}\left\{e_1,\dots,e_d\right\}$ for a sufficiently large constant $L$. For any $p\in \mathbb{R}^{d+1}$, we use $\Tilde{p}$ to denote the $d$-dimensional vector $p_{1:d}$ (i.e., discard the $0$-th coordinate of $p$).
\paragraph{Hard instance} The hard instance is the same as in Section~\ref{sec:lb}, except that now there are $k/2$ affine subspaces and in each affine subspace $H_j$, we only put $d/2$ points, which are $jLe_0+e_1,\cdots, jLe_0+e_{d/2}$. Similarly, we use $P_j$ to denote the data points in $H_j$ ($j=1,\cdots,k/2$) and let $P$ be the union of all $P_j$. Thus, $|P|=kd/4$. In our proof, we always put two centers in each $H_j$; for large enough $L$, all $p\in P_j$ must be assigned to centers in $H_j$.
We will use the following two technical lemmas from \citep{cohenaddad2022towards}.
\begin{lemma}\label{lem:cost-to-basis}
For any $k\ge 1$ and arbitrary $k$ unit vectors $\{c_1,\cdots,c_k\}$ in $\mathbb{R}^d$, we have
\begin{align*}
\sum_{i=1}^{d/2} \min_{\ell=1}^k\|e_i-c_{\ell}\|^2 \ge d-\sqrt{dk/2}.
\end{align*}
\end{lemma}
\begin{lemma}\label{lem:cost-to-smallset}
Let $S$ be a set of points in $\mathbb{R}^d$ of size $t$ and $w: S\rightarrow \mathbb{R}^+$ be their weights. There exist $2$ unit vectors $v_1, v_2$, such that
\begin{align*}
\sum_{p\in S} w(p)\min_{\ell=1,2} \|p-v_{\ell}\|^2 \le \sum_{p\in S}w(p)(\|p\|^2+1)- \frac{2\sum_{p\in S} w(p)\|p\|}{\sqrt{t}}.
\end{align*}
\end{lemma}
\subsection{Proof of the Lower Bound}
\noindent
Next, we present the lower bound result and its proof.
\begin{theorem}[\bf{Same coreset lower bound when $S$ can be arbitrary}]
\label{thm:lowerbound-general}
Given an integer $k$, a real number $\epsilon \in (0,1)$, and an integer $d \le \frac{1}{100\epsilon^2}$, let $P \subset \mathbb{R}^{d+1}$ be the point set described above. For any $S\subset \mathbb{R}^{d+1}$, if $S$ is an $\varepsilon$-coreset of $P$, then we must have $|S|=\Omega(dk)$.
\end{theorem}
\begin{proof}
Let $S$ be an $\varepsilon$-coreset of $P$ with weights $w$. Note that the points of $S$ might not lie in any $H_j$. We first map each point $p\in S$ to an index $j_p\in [k/2]$ such that $H_{j_p}$ is the nearest subspace to $p$. The mapping is simple:
\begin{align*}
j_p = \arg\min_{j\in[k/2]} |p_0 - jL|,
\end{align*}
where $p_0$ is the $0$-th coordinate of $p$.
Let $\Delta_p = p_0 - j_pL$, so that $|\Delta_p|$ is the distance of $p$ to its closest affine subspace. Let $S_j:=\{p\in S: j_p=j\}$ be the set of points in $S$ whose closest affine subspace is $H_j$. Define $I:=\{j\in[k/2] : |S_j| \le d/4\}$. Consider any $k$-center set $C$ such that $H_j\bigcap C \neq \emptyset$ for every $j$ and every center is within distance $O(1)$ of the dataset, as is the case for all center sets used below. Then $\ensuremath{\mathrm{cost}}(P, C)\ll L$ for sufficiently large $L$. On the other hand, $\ensuremath{\mathrm{cost}}(S, C) \ge \sum_{p\in S} \Delta_p^2$. Since $S$ is a coreset, $\Delta_p^2 \ll L$ for all $p\in S$.\footnote{Here we do not allow offsets to simplify the proof, but our technique can be extended to handle offsets.} Therefore each $p\in S$ must be very close to its closest affine subspace; in particular, we can assume that $p$ must be assigned to some center in $H_{j_p}$ (if there exists one).
In the proof that follows, we consider three different sets of $k$ centers, $C_1,C_2$, and $C_3$, and compare the costs $\ensuremath{\mathrm{cost}}(P, C_i)$ and $\ensuremath{\mathrm{cost}}(S, C_i)$ for $i=1,2,3$. In each $C_i$, there are two centers in each $H_j$. As discussed above, for large enough $L$, the total cost for both $P$ and $S$ decomposes into the sum of costs over the affine subspaces.
For each $j\in \Bar{I}$, the corresponding centers in $H_j$ are the same across $C_1,C_2,C_3$. Let $c_j$ be any point in $H_j$ such that $c_j-jLe_0$ has unit norm and is orthogonal to $e_1,\cdots,e_{d/2}$; in other words, $\|\Tilde{c}_j\|=1$ and the first $d/2$ coordinates of $\Tilde{c}_j$ are all zero. Specifically, we set $c_j = jLe_0 + e_{d/2+1}$, and the two centers in $H_j$ are two copies of $c_j$ for $j\in \Bar{I}$.
We first consider the following $k$ centers denoted by $C_1$. As we have specified the centers for $j\in \Bar{I}$, we only describe the centers for each $j\in I$. Since by definition, $|S_j|\le d/4$, we can find a vector $c_j\in \mathbb{R}^{d+1}$ in $H_j$ such that $c_j-jLe_0$ has unit norm and is orthogonal to $e_1,\cdots,e_{d/2}$ and all vectors in $S_j$. Let $C_1$ be the set of $k$ points with each point in $\{c_1,\cdots,c_{k/2}\}$ copied twice. We evaluate the cost of $C_1$ with respect to $P$ and $S$.
\begin{lemma}
For $C_1$ constructed above, we have $\ensuremath{\mathrm{cost}}(P,C_1) = \frac{kd}{2}$ and
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_1) =\sum_{p\in S} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1) - 2\sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\langle p-jLe_0, jLe_0-c_j \rangle.
\end{align*}
\end{lemma}
\begin{proof}
Since $e_i$ is orthogonal to $c_j-jLe_0$ and $c_j-jLe_0$ has unit norm for all $i,j$, it follows that
\begin{align}
\ensuremath{\mathrm{cost}}(P,C_1) &=
\sum_{j=1}^{k/2}\sum_{i=1}^{d/2} \min_{c\in C_1}\|jLe_0+e_i-c\|^2 = \sum_{j=1}^{k/2} \sum_{i=1}^{d/2} \|jLe_0+e_i -c_j\|^2 \nonumber\\
&=\sum_{j=1}^{k/2}\sum_{i=1}^{d/2}(\|e_i\|^2+\|c_j-jLe_0\|^2-2\langle e_i, c_j-jLe_0 \rangle )\nonumber\\
&=\frac{kd}{2}.\label{eqn:cost-of-c}
\end{align}
On the other hand, the cost of $C_1$ w.r.t.\ $S_j$ is
\begin{align}
\sum_{p\in S_j} \min_{c\in C_1}w(p)\|p-c\|^2 &= \sum_{p\in S_j} w(p)\|p-c_j\|^2 = \sum_{p\in S_j} w(p)\|p-jLe_0+ jLe_0-c_j\|^2 \nonumber\\
&= \sum_{p\in S_j}w(p)\left(\|p-jLe_0\|^2 + 1 -2\langle p-jLe_0, jLe_0-c_j \rangle \right)\nonumber\\
&= \sum_{p\in S_j} w(p)\left(\Delta_p^2+\|\Tilde{p}\|^2 +1 -2\langle p-jLe_0, jLe_0-c_j \rangle\right) .\label{eqn:cost-of-c-to-S}
\end{align}
Recall $\Tilde{p}\in \mathbb{R}^d$ is $p_{1:d}$.
For $j\in I$, the inner product is $0$, and thus the total cost w.r.t.\ $S$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_1) = \sum_{p\in S} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1) - 2\sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\langle p-jLe_0, jLe_0-c_j \rangle,
\end{align*}
which finishes the proof.
\end{proof}
For notational convenience, we define $\kappa := 2\sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\langle p-jLe_0, jLe_0-c_j \rangle$.
Since $S$ is an $\varepsilon$-coreset of $P$, we have
\begin{align}\label{eqn:weight-constraints}
dk/2- \varepsilon dk/2\le \sum_{p\in S} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1) - \kappa \le dk/2+\varepsilon dk/2.
\end{align}
Next we consider a different set of $k$ centers denoted by $C_2$. By Lemma~\ref{lem:cost-to-smallset}, there exist unit vectors $v^j_1,v^j_2 \in \mathbb{R}^d$ such that
\begin{align}
\sum_{p\in S_j} w(p)(\min_{\ell=1,2} \|\Tilde{p}-v^j_{\ell}\|^2+\Delta_p^2) \le \sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)- \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}}. \label{eqn:small-coreset-cost}
\end{align}
We apply this to all $j\in I$ and obtain the corresponding $v^j_1,v^j_2$. Let $C_2=\{u_1^1,u_2^1,\cdots, u_1^{k/2},u_2^{k/2}\}$ be a set of $k$ centers in $\mathbb{R}^{d+1}$ defined as follows: if $j\in I$, $u_{\ell}^j$ is $v_{\ell}^j$ with an additional $0$th coordinate with value $jL$, making them lie in $H_j$; for $j\in \Bar{I}$, we use the same centers as in $C_1$, i.e., $u_{1}^j=u_{2}^j =c_j$.
\begin{lemma}
For $C_2$ constructed above, we have
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_2) \ge \frac{kd}{2}-\sqrt{d}|I| \text{ and }
\end{align*}
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_2) \le \sum_{p\in S}w(p)(\|\Tilde{p}\|^2+1+\Delta_p^2)- \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}} -\kappa.
\end{align*}
\end{lemma}
\begin{proof}
By \eqref{eqn:small-coreset-cost},
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_2) &= \sum_{j=1}^{k/2} \sum_{p\in S_j}w(p)\min_{c\in C_2}\|p-c\|^2 =
\sum_{j\in I}\sum_{p\in S_j} w(p)\min_{\ell=1,2} (\|\Tilde{p}-v^j_{\ell}\|^2+\Delta^2_p) +\sum_{j\in \Bar{I}} \sum_{p\in S_j}w(p)\|p-c_j\|^2\\
&\le \sum_{p\in S}w(p)(\|\Tilde{p}\|^2+1+\Delta_p^2)- \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}} -\kappa.
\end{align*}
By Lemma~\ref{lem:cost-to-basis} (with $k=2$), we have
\begin{align*}
\sum_{i=1}^{d/2} \min_{\ell=1,2}\|e_i-v^j_{\ell}\|^2 \ge {d}-\sqrt{d}.
\end{align*}
It follows that
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_2)&=\sum_{j=1}^{k/2}\sum_{i=1}^{d/2} \min_{c\in C_2}\|jLe_0+e_i-c\|^2 = \sum_{j\in I}\sum_{i=1}^{d/2} \min_{\ell=1,2} \|e_i-v^{j}_{\ell}\|^2 + \sum_{j\in \Bar{I}}\sum_{i=1}^{d/2} \|jLe_0+e_i-c_j\|^2 \\
&\ge \frac{kd}{2}-\sqrt{d}|I|,
\end{align*}
where in the inequality, we also used the orthogonality between $e_i$ and $c_j-jLe_0$.
\end{proof}
Since $S$ is an $\varepsilon$-coreset of $P$, we have
\begin{align*}
\frac{dk}{2}-|I|\sqrt{d} - \frac{\varepsilon dk}{2} \le (\frac{dk}{2}-|I|\sqrt{d})(1-\varepsilon) \le \sum_{p\in S}w(p)(\|\Tilde{p}\|^2+1+\Delta_p^2)- \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}}-\kappa,
\end{align*}
which implies
\begin{align}
\sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}}
&\le \sum_{p\in S}w(p)(\|\Tilde{p}\|^2+1+\Delta_p^2) -\frac{dk-2|I|\sqrt{d}-\varepsilon kd}{2}-\kappa
\nonumber\\
&\le \frac{dk+\varepsilon dk}{2} - \frac{dk-2|I|\sqrt{d}-\varepsilon kd}{2} \quad\textnormal{by \eqref{eqn:weight-constraints}} \nonumber\\
&= |I|\sqrt{d} +\varepsilon kd. \nonumber
\end{align}
By the definition of $I$, $|S_j| \le d/4$ for each $j\in I$, so
\begin{align*}
\sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{d/4}}\le \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{|S_j|}},
\end{align*}
and it follows that
\begin{align}\label{eqn:size-constraint}
\frac{\sum_{j\in I} \sum_{p\in S_j} w(p)\|\Tilde{p}\|}{\sqrt{d}}
\le \frac{|I|\sqrt{d} +\varepsilon kd} {4}.
\end{align}
Finally we consider a third set of $k$ centers $C_3$. Similarly, there are two centers per group. Let $m$ be a power of $2$ in $[d/2,d]$, and let $h_1,\cdots,h_m$ be the $m$-dimensional Hadamard basis vectors; all $h_{\ell}$'s are $\{-\frac{1}{\sqrt{m}},\frac{1}{\sqrt{m}}\}$ vectors and $h_1=(\frac{1}{\sqrt{m}},\cdots,\frac{1}{\sqrt{m}})$. We slightly abuse notation and treat each $h_{\ell}$ as a $d$-dimensional vector by padding zeros at the end. For each $h_{\ell}$, we construct a set of $k$ centers as follows. For each $j\in \Bar{I}$, we still use two copies of $c_j$. For $j\in I$, the $0$th coordinate of both centers is $jL$; we then concatenate $h_{\ell}$ and $-h_{\ell}$ to the first and second centers, respectively.
\begin{lemma}\label{lem:hadamard-cost}
Suppose $C_3$ is constructed based on $h_{\ell}$. Then for all $\ell\in [m]$, we have
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_3) = \frac{kd}{2} -\frac{d|I|}{\sqrt{m}}\text{ and }
\end{align*}
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_3) = \sum_{p\in S} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2) - 2\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell} \rangle -\kappa.
\end{align*}
\end{lemma}
\begin{proof}
For $j\in I$, the cost of the two centers w.r.t.\ $P_j$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(P_j,C_3) = \sum_{i=1}^{d/2} \min_{s=-1,+1}\|e_i - s\cdot h_{\ell}\|^2 = \sum_{i=1}^{d/2} (2-2\max_{s=-1,+1}\langle h_{\ell},e_i\rangle)=\sum_{i=1}^{d/2} (2-\frac{2}{\sqrt{m}}) = d-\frac{d}{\sqrt{m}}.
\end{align*}
For $j\in \Bar{I}$, the cost w.r.t.\ $P_j$ is $d$ by \eqref{eqn:cost-of-c}.
Thus, the total cost over all subspaces is
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_3) = (d-\frac{d}{\sqrt{m}})|I| + \left(\frac{k}{2} -|I| \right)d = \frac{kd}{2} -\frac{d|I|}{\sqrt{m}}.
\end{align*}
On the other hand, for $j\in I$, the cost w.r.t.\ $S_j$ is
\begin{align*}
\sum_{p\in S_j} w(p)(\Delta_p^2+ \min_{s=\{-1,+1\}} \|\Tilde{p}-s\cdot h_{\ell}\|^2) &=\sum_{p\in S_j} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2 - 2\max_{s=\{-1,+1\}}\langle \Tilde{p}, s\cdot h_{\ell} \rangle)\\
& =\sum_{p\in S_j} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2 - 2\langle \Tilde{p}, h^p_{\ell} \rangle).
\end{align*}
Here $h^p_{\ell} = s^p\cdot h_{\ell}$, where $s^p=\arg\max_{s=\{-1,+1\}}\langle \Tilde{p}, s\cdot h_{\ell} \rangle$.
For $j\in \Bar{I}$, the cost w.r.t.\ $S_j$ is $\sum_{p\in S_j} w(p)\left(\Delta_p^2+\|\Tilde{p}\|^2 +1 -2\langle p-jLe_0, jLe_0-c_j \rangle \right)$ by \eqref{eqn:cost-of-c-to-S}. Thus, the total cost w.r.t.\ $S$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_3) = \sum_{p\in S} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2) - 2\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell} \rangle -\kappa .
\end{align*}
This finishes the proof.
\end{proof}
\begin{corollary}
Let $S$ be an $\varepsilon$-coreset of $P$, and $I =\{j: |S_j|\le d/4\}$. Then
\begin{align*}
\sum_{j\in I}\sum_{p\in S_j} w(p)\|\Tilde{p}\| \ge \frac{d|I|-\varepsilon kd\sqrt{d}}{2}.
\end{align*}
\end{corollary}
\begin{proof}
Since $S$ is an $\varepsilon$-coreset, we have by Lemma~\ref{lem:hadamard-cost}
\begin{align*}
2\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell} \rangle &\ge \sum_{p\in S} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2) -\kappa -(\frac{kd}{2} -\frac{d|I|}{\sqrt{m}})(1+\varepsilon)\\
&\ge \sum_{p\in S} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2) -\kappa -\frac{kd}{2} +\frac{d|I|}{\sqrt{m}}-\frac{\varepsilon kd}{2}\\
&\ge \frac{dk-\varepsilon dk}{2} - \frac{kd}{2} +\frac{d|I|}{\sqrt{m}}-\frac{\varepsilon kd}{2} \quad\textnormal{by \eqref{eqn:weight-constraints}}\\
&= \frac{d|I|}{\sqrt{m}} - \varepsilon kd.
\end{align*}
Since the above inequality holds for all $\ell\in[m]$, summing over $\ell$ gives
$$ 2\sum_{\ell=1}^m\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell}\rangle\ge d|I|\sqrt{m} - \varepsilon kdm.$$
By the Cauchy--Schwarz inequality,
\begin{align*}
\sum_{\ell=1}^m\sum_{j\in I} \sum_{p\in S_j}\langle w(p) \Tilde{p}, h^p_{\ell} \rangle= \sum_{j\in I}\sum_{p\in S_j}\langle w(p) \Tilde{p}, \sum_{\ell=1}^m h^p_{\ell} \rangle \le \sum_{j\in I}\sum_{p\in S_j} w(p)\|\Tilde{p}\| \|\sum_{\ell=1}^m h^p_{\ell} \| = \sqrt{m}\sum_{j\in I}\sum_{p\in S_j} w(p)\|\Tilde{p}\|.
\end{align*}
Therefore, we have
\begin{align*}
\sum_{j\in I}\sum_{p\in S_j} w(p)\|\Tilde{p}\| \ge \frac{d|I|-\varepsilon kd\sqrt{m}}{2} \ge \frac{d|I|-\varepsilon kd\sqrt{d}}{2}.
\end{align*}
\end{proof}
Combining the above corollary with \eqref{eqn:size-constraint}, we have
\begin{align*}
\frac{\sqrt{d}|I| -\varepsilon kd }{2} \le \frac{|I|\sqrt{d} + \varepsilon kd}{4} \implies |I| \le 3\varepsilon k\sqrt{d}.
\end{align*}
By the assumption $d\le \frac{1}{100\varepsilon^2}$, it holds that $|I| \le 3\varepsilon k\sqrt{d} \le \frac{3k}{10}$, and hence $|\Bar{I}|\ge \frac{k}{2} -\frac{3k}{10}=\frac{k}{5}$. Moreover, since $|S_j|>\frac{d}{4}$ for each $j\in \Bar{I}$, we have $|S|>\frac{d}{4}\cdot \frac{k}{5} = \frac{kd}{20}$.
\end{proof}
\section{Lower Bound for Arbitrary $z$}\label{sec:generalz}
Using similar ideas from \citep{cohenaddad2022towards}, our proof of the lower bound for $z=2$ can be extended to arbitrary $z$. First, we provide two lemmas analogous to Lemmas~\ref{lem:cost-to-basis} and~\ref{lem:cost-to-smallset} for general $z\ge 1$. Their proofs can be found in Appendix A of \citep{cohenaddad2022towards}.
\begin{lemma}\label{lem:cost-to-basis-z}
For any even number $k\ge 2$ and arbitrary $k$ unit vectors $\{c_1,\cdots,c_k\}$ in $\mathbb{R}^d$ such that for each $i$ there exists some $j$ satisfying $c_i=-c_j$, we have
\begin{align*}
\sum_{i=1}^{d/2} \min_{\ell=1}^k\|e_i-c_{\ell}\|^z \ge 2^{z/2-1}d - 2^{z/2}\max\{1, z/2\}\sqrt{\frac{kd}{2}}.
\end{align*}
\end{lemma}
\begin{lemma}\label{lem:cost-to-smallset-z}
Let $S$ be a set of points in $\mathbb{R}^d$ of size $t$ and $w: S\rightarrow \mathbb{R}^+$ be their weights. For arbitrary values $\Delta_p$, one for each $p\in S$, there exist $2$ unit vectors $v_1, v_2$ satisfying $v_1=-v_2$, such that
\begin{align*}
\sum_{p\in S} w(p)\min_{\ell=1,2} \left(\|p-v_{\ell}\|^2+\Delta_p^2 \right)^{z/2} \le &\sum_{p\in S}w(p)(\|p\|^2+1+\Delta_p^2)^{z/2}\\
&-\min\{1,z/2\}\cdot \frac{2\sum_{p\in S} w(p)(\|p\|^2+1+\Delta_p^2)^{z/2-1} \|p\|}{\sqrt{t}}.
\end{align*}
\end{lemma}
In this proof, the original point set $P$ and the three sets of $k$ centers, namely $C_1,C_2,C_3$, are the same as for the case $z=2$. The difference is that now $I=\{j:|S_j|\le \frac{d}{2^z}\}$ and, when constructing $C_2$, we use Lemma~\ref{lem:cost-to-smallset-z} in place of Lemma~\ref{lem:cost-to-smallset}. Again, we compare the costs of $P$ and $S$ w.r.t.\ $C_1,C_2,C_3$ and obtain the following lemmas.
\begin{lemma}
For $C_1$ constructed above, we have $\ensuremath{\mathrm{cost}}(P,C_1) = \frac{kd}{4}\cdot 2^{z/2}$ and
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_1) =\sum_{j\in {I}}\sum_{p\in S_j} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1)^{z/2} +\sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\|p-c_j\|^{z}.
\end{align*}
\end{lemma}
\begin{proof}
Since $e_i$ is orthogonal to $c_j-jLe_0$ and $c_j-jLe_0$ has unit norm for all $i,j$, it follows that
\begin{align}
\ensuremath{\mathrm{cost}}(P,C_1) &=
\sum_{j=1}^{k/2}\sum_{i=1}^{d/2} \min_{c\in C_1}\|jLe_0+e_i-c\|^{2\cdot z/2} = \sum_{j=1}^{k/2} \sum_{i=1}^{d/2} \|jLe_0+e_i -c_j\|^{2\cdot z/2} \nonumber\\
&=\sum_{j=1}^{k/2}\sum_{i=1}^{d/2}(\|e_i\|^2+\|c_j-jLe_0\|^2-2\langle e_i, c_j-jLe_0 \rangle )^{z/2}\nonumber\\
&=\frac{kd}{4}\cdot 2^{z/2}.\label{eqn:cost-of-c-z}
\end{align}
On the other hand, the cost of $C_1$ w.r.t.\ $S_j$ is
\begin{align}
\sum_{p\in S_j} \min_{c\in C_1}w(p)\|p-c\|^{2\cdot z/2} &= \sum_{p\in S_j} w(p)\|p-c_j\|^{2\cdot z/2} = \sum_{p\in S_j} w(p)\|p-jLe_0+ jLe_0-c_j\|^{2\cdot z/2} \nonumber\\
&= \sum_{p\in S_j}w(p) \left( \|p-jLe_0\|^2 + 1 -2\langle p-jLe_0, jLe_0-c_j \rangle \right)^{z/2}.\label{eqn:cost-of-c-to-S-z}
\end{align}
For $j\in I$, the inner product is $0$, and thus the total cost w.r.t.\ $S$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_1) = \sum_{j\in {I}}\sum_{p\in S_j} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1)^{z/2} + \sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\|p-c_j\|^{z},
\end{align*}
which finishes the proof.
\end{proof}
For notational convenience, we define $\kappa := \sum_{j\in \Bar{I}}\sum_{p\in S_j}w(p)\|p-c_j\|^{z}$.
Since $S$ is an $\varepsilon$-coreset of $P$, we have
\begin{align}\label{eqn:weight-constraints-z}
\frac{kd}{4}\cdot 2^{z/2} - \frac{\varepsilon kd}{4}\cdot 2^{z/2} \le \sum_{j\in {I}}\sum_{p\in S_j} w(p)(\Delta_p^2+\|\Tilde{p}\|^2 +1)^{z/2} + \kappa \le \frac{kd}{4}\cdot 2^{z/2} + \frac{\varepsilon kd}{4}2^{z/2}.
\end{align}
Next we consider a different set of $k$ centers denoted by $C_2$. By Lemma~\ref{lem:cost-to-smallset-z}, there exist unit vectors $v^j_1,v^j_2 \in \mathbb{R}^d$ satisfying $v^j_1=-v^j_2$ such that
\begin{align}
\sum_{p\in S_j} w(p)(\min_{\ell=1,2} \left(\|\Tilde{p}-v^j_{\ell}\|^2+\Delta^2_p\right)^{z/2}) \le&
\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} \nonumber\\
&- \min\{1,z/2\}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}. \label{eqn:small-coreset-cost-z}
\end{align}
We apply this to all $j\in I$ and obtain corresponding $v^j_1,v^j_2$ for each $j\in I$. Let $C_2=\{u_1^1,u_2^1,\cdots, u_1^{k/2},u_2^{k/2}\}$ be a set of $k$ centers in $\mathbb{R}^{d+1}$ defined as follows: if $j\in I$, $u_{\ell}^j$ is $v_{\ell}^j$ with an additional $0$th coordinate with value $jL$, making them lie in $H_j$; for $j\in \Bar{I}$, we use the same centers as in $C_1$, i.e., $u_{1}^j=u_{2}^j =c_j$.
\begin{lemma}
For $C_2$ constructed above, we have
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_2) \ge 2^{z/2}\left(\frac{kd}{4} -\max\{1, z/2\}\sqrt{d}|I|\right), \text{ and }
\end{align*}
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_2) \le& \sum_{j\in I}\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} \\
&- \min\{1,z/2\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}+\kappa.
\end{align*}
\end{lemma}
\begin{proof}
By \eqref{eqn:small-coreset-cost-z},
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_2) &= \sum_{j=1}^{k/2} \sum_{p\in S_j}w(p)\min_{c\in C_2}\|p-c\|^{2\cdot z/2} =
\sum_{j\in I}\sum_{p\in S_j} w(p)\min_{\ell=1,2} (\|\Tilde{p}-v^j_{\ell}\|^2+\Delta^2_p)^{z/2} +\kappa\\
&\le
\sum_{j\in I}\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} \\
&- \min\{1,z/2\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}+\kappa.
\end{align*}
By Lemma~\ref{lem:cost-to-basis-z} (with $k=2$), we have
\begin{align*}
\sum_{i=1}^{d/2} \min_{\ell=1,2}\|e_i-v^j_{\ell}\|^z \ge 2^{z/2-1}d - 2^{z/2}\max\{1, z/2\}\sqrt{d}.
\end{align*}
It follows that
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_2)&=\sum_{j=1}^{k/2}\sum_{i=1}^{d/2} \min_{c\in C_2}\|jLe_0+e_i-c\|^{z} = \sum_{j\in I}\sum_{i=1}^{d/2} \min_{\ell=1,2} \|e_i-v^{j}_{\ell}\|^{2\cdot z/2} + \sum_{j\in \Bar{I}}\sum_{i=1}^{d/2} \|jLe_0+e_i-c_j\|^{2\cdot z/2} \\
& \ge \left( 2^{z/2-1}d - 2^{z/2}\max\{1, z/2\}\sqrt{d} \right)|I| + |\Bar{I}|\frac{d}{2}\cdot 2^{z/2}= \frac{kd}{4}2^{z/2} - 2^{z/2}\max\{1, z/2\}\sqrt{d}|I|,
\end{align*}
where in the inequality, we also used the orthogonality between $e_i$ and $c_j-jLe_0$.
\end{proof}
Since $S$ is an $\varepsilon$-coreset of $P$, we have
\begin{align*}
&2^{z/2}\left(\frac{dk}{4}-\max\{1, z/2\}|I|\sqrt{d} - \frac{\varepsilon dk}{4}\right) \le 2^{z/2}\left(\frac{kd}{4} -\max\{1, z/2\}\sqrt{d}|I|\right)(1-\varepsilon) \\
&\le \sum_{j\in I}\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} - \min\{1,z/2\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}+\kappa,
\end{align*}
which implies
\begin{align}
&\min\{1,z/2\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}}\nonumber\\
&\le \sum_{j\in I}\sum_{p\in S_j}w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p )^{z/2} - 2^{z/2}\left(\frac{dk}{4}-\max\{1, z/2\}|I|\sqrt{d} - \frac{\varepsilon dk}{4}\right) + \kappa
\nonumber\\
&\le \frac{kd}{4}\cdot 2^{z/2} + \frac{\varepsilon kd}{4}2^{z/2} - 2^{z/2}\left(\frac{dk}{4}-\max\{1, z/2\}|I|\sqrt{d} - \frac{\varepsilon dk}{4}\right) \quad\textnormal{by \eqref{eqn:weight-constraints-z}} \nonumber\\
&= \max\{1, z/2\} |I|\sqrt{d}2^{z/2} +\frac{\varepsilon kd}{2}2^{z/2}. \nonumber
\end{align}
By definition, $|S_j| \le d/t^2$, so
\begin{align*}
\min\{1,\frac{z}{2}\} \sum_{j\in I} \frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{d/t^2}}\le \min\{1,\frac{z}{2}\}\sum_{j\in I}\frac{2\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{|S_j|}},
\end{align*}
and it follows that
\begin{align}\label{eqn:size-constraint-z}
\min\{1,\frac{z}{2}\} \sum_{j\in I} \frac{\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 +\Delta^2_p)^{z/2-1} \|\Tilde{p}\|}{\sqrt{d}}
\le \frac{\max\{1, z/2\} |I|\sqrt{d} 2^{z/2} +\frac{\varepsilon kd}{2}2^{z/2}} {2t}.
\end{align}
Finally we consider a third set of $k$ centers $C_3$. Similarly, there are two centers per group. We set $m$ to be a power of $2$ in $[d/2,d]$. Let $h_1,\cdots,h_m$ be the $m$-dimensional Hadamard basis vectors, so all $h_{\ell}$'s are $\{-\frac{1}{\sqrt{m}},\frac{1}{\sqrt{m}}\}$ vectors and $h_1=(\frac{1}{\sqrt{m}},\cdots,\frac{1}{\sqrt{m}})$. We slightly abuse notation and treat each $h_{\ell}$ as a $d$-dimensional vector by appending zeros at the end. For each $h_{\ell}$, we construct a set of $k$ centers as follows. For each $j\in \Bar{I}$, we still use two copies of $c_j$. For $j\in I$, the $0$th coordinate of the two centers is $jL$; then we append $h_{\ell}$ and $-h_{\ell}$, respectively, to the first and the second center.
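For concreteness, these normalized Hadamard vectors can be generated by the standard Sylvester recursion; the following Python sketch is an illustration we add here, not part of the proof.
\begin{verbatim}
import numpy as np

def hadamard_basis(m):
    """Rows of the m x m Sylvester-Hadamard matrix, scaled by 1/sqrt(m).

    m must be a power of 2; the first row is (1/sqrt(m), ..., 1/sqrt(m)).
    """
    H = np.array([[1.0]])
    while H.shape[0] < m:
        H = np.block([[H, H], [H, -H]])  # Sylvester doubling step
    return H / np.sqrt(m)
\end{verbatim}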
\begin{lemma}\label{lem:hadamard-cost-z}
Suppose $C_3$ is constructed based on $h_{\ell}$. Then for all $\ell\in [m]$, we have
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_3) \le 2^{z/2}\left( \frac{kd}{4} - \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} \right), \text{ and }
\end{align*}
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_3) \ge \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}} - 2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1}+\kappa.
\end{align*}
\end{lemma}
\begin{proof}
For $j\in I$, the cost of the two centers w.r.t.\ $P_j$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(P_j,C_3) &= \sum_{i=1}^{d/2} \min_{s=-1,+1}\|e_i - s\cdot h_{\ell}\|^z = \sum_{i=1}^{d/2} (2-2\max_{s=-1,+1}\langle h_{\ell},e_i\rangle)^{z/2}= \frac{d}{2}(2-\frac{2}{\sqrt{m}})^{z/2} \\
&\le \frac{d}{2}\cdot 2^{z/2} \left(1-\frac{\min\{1,z/2\}}{\sqrt{m}}\right).
\end{align*}
For $j\in \Bar{I}$, the cost w.r.t.\ $P_j$ is $\frac{d}{2}\cdot 2^{z/2}$ by \eqref{eqn:cost-of-c-z}.
Thus, the total cost over all subspaces is
\begin{align*}
\ensuremath{\mathrm{cost}}(P,C_3) &\le \frac{d}{2}\cdot2^{z/2} \left(1-\frac{\min\{1,z/2\}}{\sqrt{m}}\right)|I| + \left(\frac{k}{2} -|I| \right)\frac{d}{2}\cdot 2^{z/2} = 2^{z/2}\left( \frac{kd}{4} - \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} \right).
\end{align*}
On the other hand, for $j\in I$, the cost w.r.t.\ $S_j$ is
\begin{align*}
&\sum_{p\in S_j} w(p)(\Delta_p^2+ \min_{s=\{-1,+1\}} \|\Tilde{p}-s\cdot h_{\ell}\|^2)^{z/2} =\sum_{p\in S_j} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2 - 2\max_{s=\{-1,+1\}}\langle \Tilde{p}, s\cdot h_{\ell} \rangle)^{z/2}\\
& =\sum_{p\in S_j} w(p) (\|\Tilde{p}\|^2+1 + \Delta_p^2 - 2\langle \Tilde{p}, h^p_{\ell} \rangle)^{z/2}\\
&\ge \sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}} - 2\max\{1,\frac{z}{2}\} \sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1}.
\end{align*}
Here $h^p_{\ell} = s^p\cdot h_{\ell}$, where $s^p=\arg\max_{s=\{-1,+1\}}\langle \Tilde{p}, s\cdot h_{\ell} \rangle$.
For $j\in \Bar{I}$, the total cost w.r.t.\ $S_j$ is $\kappa$. Thus, the total cost w.r.t.\ $S$ is
\begin{align*}
\ensuremath{\mathrm{cost}}(S,C_3) \ge \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}} - 2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1}+\kappa .
\end{align*}
This finishes the proof.
\end{proof}
\begin{corollary}
Let $S$ be an $\varepsilon$-coreset of $P$, and let $I =\{j: |S_j|\le d/t^2\}$ as above. Then
\begin{align*}
2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \|\Tilde{p}\| \ge 2^{z/2}\cdot \left( \frac{d|I|}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd \sqrt{d}}{2}\right). \\
\end{align*}
\end{corollary}
\begin{proof}
Since $S$ is an $\varepsilon$-coreset, we have by Lemma~\ref{lem:hadamard-cost-z}
\begin{align*}
&2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \\
&\ge \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}} + \kappa - 2^{z/2}\left( \frac{kd}{4} - \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} \right)(1+\varepsilon)\\
&\ge \frac{kd}{4}\cdot 2^{z/2} - \frac{\varepsilon kd}{4}\cdot 2^{z/2} - 2^{z/2}\left( \frac{kd}{4} - \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} +\frac{\varepsilon kd}{4} \right) \quad\textnormal{by \eqref{eqn:weight-constraints-z}}\\
&= 2^{z/2}\cdot \frac{d|I|}{2}\cdot \frac{\min\{1,z/2\}}{\sqrt{m}} - \frac{\varepsilon kd}{2}\cdot 2^{z/2}.
\end{align*}
Since the above inequality holds for all $\ell\in[m]$, summing over $\ell$ gives
$$ 2\max\{1,\frac{z}{2}\} \sum_{\ell=1}^m\sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \ge 2^{z/2}\cdot \left( \frac{d|I|\sqrt{m}}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd m}{2}\right).$$
By the Cauchy--Schwarz inequality,
\begin{align*}
\sum_{\ell=1}^m\sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} &= \sum_{j\in I}\sum_{p\in S_j} w(p)\langle \Tilde{p}, \sum_{\ell=1}^m h^p_{\ell} \rangle (\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \\
&\le \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \|\Tilde{p}\|\cdot \|\sum_{\ell=1}^m h^p_{\ell}\| \\
&= \sqrt{m}\sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \|\Tilde{p}\|.
\end{align*}
Therefore, we have
\begin{align*}
2\max\{1,\frac{z}{2}\} \sum_{j\in I}\sum_{p\in S_j} w(p)(\|\Tilde{p}\|^2+1 + \Delta_p^2)^{\frac{z}{2}-1} \|\Tilde{p}\| &\ge 2^{z/2}\cdot \left( \frac{d|I|}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd \sqrt{m}}{2}\right) \\
&\ge 2^{z/2}\cdot \left( \frac{d|I|}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd \sqrt{d}}{2}\right).
\end{align*}
\end{proof}
Combining the above corollary with \eqref{eqn:size-constraint-z}, we have
\begin{align*}
\frac{\min\{1,z/2\}}{2\max\{1,z/2\}}2^{z/2}\cdot \left( \frac{\sqrt{d}|I|}{2}\cdot\min\{1,z/2\} - \frac{\varepsilon kd }{2}\right) \le \frac{\left( \max\{1, z/2\} |I|\sqrt{d} +\frac{\varepsilon kd}{2}\right)2^{z/2}} {2t},
\end{align*}
which, after dividing both sides by $2^{z/2}\sqrt{d}$ and rearranging, implies that
\begin{align*}
\left(\frac{\min\{1,(z/2)^2\}}{4\max\{1,z/2\}}-\frac{\max\{1,z/2\}}{2t}\right)|I| \le \frac{\min\{1,z/2\}\varepsilon k\sqrt{d}}{4\max\{1,z/2\}}+ \frac{\varepsilon k\sqrt{d}}{4t}.
\end{align*}
So if we set $t = \frac{4\max\{1,(z/2)^2\}}{\min\{1,(z/2)^2\}}$, then
\begin{align*}
\frac{\min\{1,(z/2)^2\}}{8\max\{1,(z/2)\}}|I| \le \frac{\min\{1,(z/2)\}\varepsilon k\sqrt{d}}{2\max\{1,(z/2)\}} \implies |I| \le \frac{4\varepsilon k\sqrt{d}}{\min\{1,z/2\}}.
\end{align*}
By the assumption $d\le \frac{\min\{1,(z/2)^2\}}{100\varepsilon^2}$, it holds that $|I| \le \frac{2k}{5}$, and thus $|\Bar{I}|\ge \frac{k}{2} -\frac{2k}{5}=\frac{k}{10}$. Moreover, since $|S_j|>\frac{d}{t^2}$ for each $j\in \Bar{I}$, we have $|S|>\frac{d}{t^2}\cdot \frac{k}{10} = \frac{kd\,\min\{1,(z/2)^4\}}{160\max\{1,(z/2)^4\}}$.
\section{Introduction}
\label{sec:intro}
Processing huge datasets is always computationally challenging. In this paper, we consider the coreset paradigm, which is an effective data-reduction tool to alleviate the computation burden on big data. Roughly speaking, given a large dataset, the goal is to construct a much smaller dataset, called \emph{coreset}, so that vital properties of the original dataset are preserved.
Coresets for various problems have been extensively studied~\citep{harpeled2004on,feldman2011unified,feldman2013turning,cohenaddad2022towards,braverman2022power}.
In this paper, we investigate coreset construction for \ProblemName{$k$-Median}\xspace in Euclidean spaces.
\sloppy
Coreset construction for Euclidean \ProblemName{$k$-Median}\xspace has been studied for nearly two decades~\citep{harpeled2004on,feldman2011unified,huang2018epsilon,cohenaddad2021new,cohenaddad2022towards}. For this particular problem, an $\varepsilon$-coreset is a (weighted) point set in the same Euclidean space that satisfies: given any set of $k$ centers, the \ProblemName{$k$-Median}\xspace costs of the centers w.r.t.\ the original point set and the coreset are within a factor of $1+\varepsilon$. The most important task in theoretical research here is to characterize the minimum size of $\varepsilon$-coresets. Recently, there has been great progress in closing the gap between upper and lower bounds in high-dimensional spaces. However, research on the coreset size in small dimensional spaces is scarce. There are still large gaps between upper and lower bounds, even for $1$-d \ProblemName{$1$-Median}\xspace.
Clustering in small dimensional Euclidean spaces is of both theoretical and practical importance. In practice, many applications involve clustering points in small dimensional spaces. A typical example is clustering objects in $\mathbb{R}^2$ or $\mathbb{R}^3$ based on their spatial coordinates~\citep{ SpatialClusteringExample1, SpatialClusteringExample2}. Another example is spectral clustering for graph and social network analysis~\citep{von2007tutorial, SocialNetworkExample2010, SocialNetworkExampl2014, SocialNetworkExample2017}. In spectral clustering, nodes are first embedded into a small dimensional Euclidean space using spectral methods and then Euclidean clustering algorithms are applied in the embedding space. Even the simplest $1$-d \ProblemName{$k$-Median}\xspace has numerous practical applications ~\citep{ClusteringInRExample1, ClusteringInRExample2, ClusteringInRExample3}.
On the theory side, existing techniques for coresets in high dimensions may not be sufficient to obtain optimal coresets in small dimensions. For example, a much smaller size is achievable in $\mathbb{R}^1$ by using geometric methods, while the sampling methods for strong coresets in high dimensions~\citep{langberg2010universal, cohenaddad2021new, Wu2022OptimalUpper} do not seem viable for obtaining such bounds in low dimensions. This suggests that optimal coreset construction in small dimensions may require new techniques, which partially explains why $1$-d \ProblemName{$1$-Median}\xspace is still open after two decades of research. Hence, the coreset problem for clustering in small dimensional spaces is of great theoretical interest and practical value, yet it is largely unexplored in the literature. This paper aims to fill the gap and studies the following question:
\begin{question}
\label{que:MainQuestion}
What is the tight coreset size for Euclidean \ProblemName{$k$-Median}\xspace problem in $\mathbb{R}^d$ for small $d$?
\end{question}
\subsection{Problem Definitions and Previous Results}
\paragraph{Euclidean \ProblemName{$k$-Median}\xspace.}
In the Euclidean \ProblemName{$k$-Median}\xspace problem, we are given a dataset $P\subset \mathbb{R}^d$ ($d\geq 1$) of $n$ points and an integer $k\geq 1$; and the goal is to find a $k$-center set $C \subset \mathbb{R}^d$ that minimizes the objective function
\begin{equation} \label{eq:DefCost}
\ensuremath{\mathrm{cost}}(P, C) := \sum_{p \in P}{d(p, C)} = \sum_{p\in P}{\min_{c\in C} d(p,c)},
\end{equation}
where $d(p,c)$ represents the Euclidean distance between $p$ and $c$.
It has many application domains including approximation algorithms, unsupervised learning, and computational geometry~\citep{lloyd1982least,tan2006cluster,arthur2007k,coates2012learning}.
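For concreteness, the objective \eqref{eq:DefCost} is straightforward to evaluate; the following Python sketch (an illustration we add, not part of the paper's algorithms) computes $\ensuremath{\mathrm{cost}}(P,C)$ for point sets given as arrays.
\begin{verbatim}
import numpy as np

def kmedian_cost(P, C):
    """cost(P, C) = sum over p in P of min over c in C of ||p - c||.

    P: (n, d) array of data points; C: (k, d) array of centers.
    """
    dists = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2)  # (n, k)
    return dists.min(axis=1).sum()
\end{verbatim}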
\paragraph{Coresets.}
Let $\mathcal{C}$ denote the collection of all $k$-center sets, i.e., $\mathcal{C} := \{ C\subset \mathbb{R}^d~:~ |C|=k\}$.
\begin{definition}[\bf $\varepsilon$-Coreset for Euclidean \ProblemName{$k$-Median}\xspace~\citep{harpeled2004on}]
\label{def:coreset}
Given a dataset $P\subset \mathbb{R}^d$ of $n$ points, an integer $k\geq 1$ and $\varepsilon\in (0,1)$, an $\varepsilon$-coreset for Euclidean \ProblemName{$k$-Median}\xspace is a subset $S \subseteq P$ with weight $w : S \to \mathbb{R}_{\geq 0}$, such that
\begin{equation*}
\forall C\in \mathcal{C},
\qquad
\sum_{p \in S}{w(p) \cdot d(p, C)}
\in (1 \pm \varepsilon) \cdot \ensuremath{\mathrm{cost}}(P, C).
\end{equation*}
\end{definition}
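Definition~\ref{def:coreset} quantifies over \emph{all} $k$-center sets, so it cannot be verified exhaustively; still, a Monte Carlo check over random center sets gives a useful sanity test. The sketch below is our illustration (it reuses \texttt{kmedian\_cost} from above) and provides only a necessary, not a sufficient, condition.
\begin{verbatim}
import numpy as np

def weighted_cost(S, w, C):
    dists = np.linalg.norm(S[:, None, :] - C[None, :, :], axis=2)
    return (w * dists.min(axis=1)).sum()

def passes_random_checks(P, S, w, k, eps, trials=1000, seed=0):
    """Necessary (not sufficient) test of the eps-coreset property."""
    rng = np.random.default_rng(seed)
    lo, hi = P.min(axis=0), P.max(axis=0)
    for _ in range(trials):
        C = rng.uniform(lo, hi, size=(k, P.shape[1]))  # random center set
        full = kmedian_cost(P, C)
        if abs(weighted_cost(S, w, C) - full) > eps * full:
            return False
    return True
\end{verbatim}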
\noindent
\sloppy
For Euclidean \ProblemName{$k$-Median}\xspace, the best known upper bound on the $\varepsilon$-coreset size is $\Tilde{O}(\min\left\{\frac{k^{4/3}}{\varepsilon^2}, \frac{k}{\varepsilon^3} \right\})$~\citep{Wu2022OptimalUpper,cohenaddad2022towards}, and $\Omega(\frac{k}{\varepsilon^2})$ is the best existing lower bound~\citep{cohenaddad2022towards}. The upper bound is dimension-independent, since using dimensionality-reduction techniques such as the Johnson--Lindenstrauss transform, the dimension can be reduced to $\Tilde{\Theta}(\frac{1}{\varepsilon^2})$. Thus, most previous work essentially focuses only on $d=\Tilde{\Theta}(\frac{1}{\varepsilon^2})$, whereas the case $d < \frac{1}{\varepsilon^2}$ is largely unexplored. The lower bound requires $d = \Omega(\frac{k}{\varepsilon^2})$, as the hard instance for the lower bound is an orthonormal basis of size $\Omega(\frac{k}{\varepsilon^2})$. For constant $k$ and large enough $d$, the upper and lower bounds match up to a polylog factor.
On the contrary, for $d \ll \Theta(\frac{1}{\varepsilon^2})$, tight coreset sizes for \ProblemName{$k$-Median}\xspace are far from well-understood, even when $k=1$.
Specifically, for constant $d$, the current best upper bound is $\Tilde{O}(\min\left\{\frac{k}{\varepsilon^3}, \frac{kd}{\varepsilon^2}\right\})$~\citep{feldman2011unified}, and the best lower bound is $\Omega(\frac{k}{\sqrt{\varepsilon}})$~\citep{baker2020coresets}.
Thus, there is still a large gap between the upper and lower bounds for small $d$.
Perhaps surprisingly, this is the case even for $d=1$: \citet{HarPeled2005SmallerCF} present a coreset of size $\Tilde{O}(\frac{k}{\varepsilon})$ in $\mathbb{R}$, while the best known lower bound is $\Omega(\frac{k}{\sqrt{\varepsilon}})$.
\subsection{Our Results}
\label{sec:contribution}
We provide a complete characterization of the coreset size (up to a logarithmic factor) for $d=1$ and partially answer \Cref{que:MainQuestion} for $1< d < \Theta(\frac{1}{\varepsilon^2})$. Our results are summarized in Table~\ref{tab:result}.
\begin{table}
\begin{center}
\caption{Comparison of coreset sizes for \ProblemName{$k$-Median}\xspace in $\mathbb{R}^d$. We use the following abbreviations: [1] for \citep{HarPeled2005SmallerCF}, [2] for \citep{feldman2011unified}, [3] for \citep{baker2020coresets}, [4] for \citep{CohenAddad2021ImprovedCA}, [5] for \citep{cohenaddad2022towards} and [6] for \citep{Wu2022OptimalUpper}. The symbol $\dagger$ indicates that the result generalizes to \ProblemName{$(k, z)$-Clustering}\xspace (\Cref{def:coreset_general}).}
\centering
\small
\begin{tabular}{|cc|c|c|c|}
\hline
\multicolumn{2}{|c|}{Parameters $d,k$} & Best Known Upper Bound & Best Known Lower Bound & Our Results \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$d = 1$}} & $k=1$ & $\Tilde{O}(\varepsilon^{-1})$ [1] & $\Omega(\varepsilon^{-1/2})$ [3] & \makecell{$\Tilde{O}(\varepsilon^{-1/2})$ \\(Thm. ~\ref{thm:1d1k})} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & $k > 1$ & $O(k\varepsilon^{-1})$ [1] & $\Omega(k\varepsilon^{-1/2})$ [3]& \makecell{$\Omega(k\varepsilon^{-1})$ \\ (Thm.~\ref{thm:Lowerbound1dkmedian})} \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$1 < d < \Theta(\varepsilon^{-2})$}} & $k=1$ & $\Tilde{O}(\varepsilon^{-2})$ [4] & $\Omega(\varepsilon^{-1/2})$ [3] & \makecell{$\Tilde{O}(\sqrt{d}\varepsilon^{-1})^\dagger$ \\ (Thm.~\ref{thm:upper})} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & $k>1$ & $ \Tilde{O}(\min\left\{ \frac{kd}{\varepsilon^2}, \frac{k}{\varepsilon^3}, \frac{k^{4/3}}{\varepsilon^2} \right\})$ [2,5, 6] & $\Omega(k\varepsilon^{-1/2})$ [3] & \makecell{$\Omega(kd + k\varepsilon^{-1})^\dagger$ \\ (Thm.~\ref{thm:lowerbound})} \\ \hline
\multicolumn{1}{|c|}{$ d = \Omega(\varepsilon^{-2})$ } & $k \ge 1$ & $ \Tilde{O}(\min\left\{\frac{k}{\varepsilon^3}, \frac{k^{4/3}}{\varepsilon^2} \right\})$ [5, 6] &$\Omega(k\varepsilon^{-2})$ [5] & $\Biggm/$ \\ \hline
\end{tabular}
\end{center}
\label{tab:result}
\end{table}
For $d=1$, we construct coresets with size $\Tilde{O}(\frac{1}{\sqrt{\varepsilon}})$ for \ProblemName{$1$-Median}\xspace (\Cref{thm:1d1k}) and prove that the coreset size lower bound is $\Omega(\frac{k}{\varepsilon})$ for $k \ge 2$ (\Cref{thm:Lowerbound1dkmedian}).
Previous work has shown coresets with size $\Tilde{O}(\frac{k}{\varepsilon})$ exist for \ProblemName{$k$-Median}\xspace~\citep{HarPeled2005SmallerCF} in $1$-d, and thus our lower bound nearly matches this upper bound. On the other hand, it was proved that the coreset size of \ProblemName{$1$-Median}\xspace in $1$-d is $\Omega(\frac{1}{\sqrt{\varepsilon}})$~\citep{baker2020coresets}, which shows our upper bound result for \ProblemName{$1$-Median}\xspace is nearly tight.
For $d>1$, we provide a discrepancy-based method that constructs deterministic coresets of size $\Tilde{O}(\frac{\sqrt{d}}{\varepsilon})$ for \ProblemName{$1$-Median}\xspace (\Cref{thm:upper}).
Our result improves over the existing $\Tilde{O}(\frac{1}{\varepsilon^2})$ upper bound~\citep{CohenAddad2021ImprovedCA} for $1< d < \Theta(\frac{1}{\varepsilon^2})$ and matches the $\Omega(\frac{1}{\varepsilon^2})$ lower bound~\citep{cohenaddad2022towards} for $d = \Theta(\frac{1}{\varepsilon^2})$. We further prove a lower bound of $\Omega(kd)$ for \ProblemName{$k$-Median}\xspace in $\mathbb{R}^d$ (\Cref{thm:lowerbound}).
Combining with our $1$-d lower bound $\Omega(\frac{k}{\varepsilon})$, this improves over the existing $\Omega(\frac{k}{\sqrt{\varepsilon}}+d)$ lower bound~\citep{baker2020coresets,cohenaddad2022towards}.
\subsection{Technical Overview}
\label{sec:technical}
We first discuss the 1-d \ProblemName{$k$-Median}\xspace problem and show that the framework of \citep{HarPeled2005SmallerCF} is nearly optimal for $k\ge 2$, while it can be significantly improved for $k=1$.
Then we briefly summarize our approaches for $2\leq d\leq \varepsilon^{-2}$.
\paragraph{The Bucket-Partitioning Framework for $1$-d \ProblemName{$k$-Median}\xspace in \citep{HarPeled2005SmallerCF}.}
Our main results in $1$-d are based on the classic bucket-partitioning framework, developed in \citep{HarPeled2005SmallerCF}, which we briefly review now.
They greedily partition a dataset $P\subset \mathbb{R}$ into $O(k \varepsilon^{-1})$ consecutive buckets $B$ and collect the mean point $\mu(B)$ of each bucket together with weight $|B|$ as their coreset $S$.
Their construction requires that the cumulative error $\delta(B) = \sum_{p\in B}|p-\mu(B)| \leq \varepsilon\cdot \ensuremath{\mathsf{OPT}}/k$ holds for every bucket $B$, where $\ensuremath{\mathsf{OPT}}$ is the optimal \ProblemName{$k$-Median}\xspace cost of $P$.
Their important geometric observation is that the induced error $|\ensuremath{\mathrm{cost}}(B,C) - |B|\cdot d(\mu(B),C) |$ of every bucket $B$ is at most $\delta(B)$, and is even $0$ when all points in $B$ are assigned to the same center.
Consequently, only $O(k)$ buckets induce a non-zero error for every center set $C$, and the total induced error is at most $\varepsilon \cdot \ensuremath{\mathsf{OPT}}$, which implies that $S$ is a coreset of size $O(k \varepsilon^{-1})$.
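A minimal Python sketch of this greedy construction (our rendering of the idea; the error budget $\varepsilon\cdot\ensuremath{\mathsf{OPT}}/k$ is passed in as \texttt{delta\_max}):
\begin{verbatim}
def bucket_coreset_1d(P, delta_max):
    """Greedy bucket partitioning for 1-d k-median.

    P: sorted list of reals; delta_max: per-bucket cumulative-error budget,
    i.e. eps * OPT / k. Returns a list of (mean point, weight) pairs.
    """
    coreset, bucket = [], []
    for p in P:
        bucket.append(p)
        mu = sum(bucket) / len(bucket)
        if sum(abs(q - mu) for q in bucket) > delta_max:
            bucket.pop()  # adding p would exceed the budget: close bucket
            mu = sum(bucket) / len(bucket)
            coreset.append((mu, len(bucket)))
            bucket = [p]
    if bucket:
        coreset.append((sum(bucket) / len(bucket), len(bucket)))
    return coreset
\end{verbatim}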
\paragraph{Reducing the Number of Buckets for $1$-d \ProblemName{$1$-Median}\xspace via Adaptive Cumulative Errors.}
In the case of $k=1$, where there is only one center $c\in \mathbb{R}$, we improve the result of~\citep{HarPeled2005SmallerCF} (\Cref{thm:1d1k}) through the following observation: $\ensuremath{\mathrm{cost}}(P,c)$ can be much larger than $\ensuremath{\mathsf{OPT}}$ when the center $c$ is close to either of the endpoints of $P$ and, consequently, can tolerate a larger induced coreset error than $\varepsilon \cdot \ensuremath{\mathsf{OPT}}$.
This observation motivates us to adaptively select cumulative errors for different buckets according to their locations.
Inspired by this motivation, our algorithm (\Cref{alg:k1d1}) first partitions the dataset $P$ into blocks $B_i$ according to the clustering cost, i.e., $\ensuremath{\mathrm{cost}}(P,c)\approx 2^i\cdot \ensuremath{\mathsf{OPT}}$ for all $c\in B_i$, and then further partitions each block $B_i$ into buckets $B_{i,j}$ with a carefully selected cumulative error bound $\delta(B_{i,j})\leq \varepsilon\cdot 2^i\cdot \ensuremath{\mathsf{OPT}}$.
Intuitively, our selection of cumulative errors is proportional to the minimum clustering cost of buckets, which results in a coreset.
For the coreset size, we first observe that there are only $O(\log \varepsilon^{-1})$ non-empty blocks $B_i$ (\Cref{lem:number_bucket}) since we can ``safely ignore'' the leftmost and the rightmost $\varepsilon n$ points and the remaining points $p\in P$ satisfy $\ensuremath{\mathrm{cost}}(P,p)\leq \varepsilon^{-1}\ensuremath{\mathsf{OPT}}$.
The most technical part is that we show the number $m$ of buckets in each $B_i$ is at most $O(\varepsilon^{-1/2})$ (\Cref{lem:number_subbucket}), which results in our improved coreset size $\tilde{O}(\varepsilon^{-1/2})$.
The basic idea is surprisingly simple: the clustering cost of a bucket is proportional to its distance to center $c$, and hence, the clustering cost of $m$ consecutive buckets is proportional to $m^2$ instead of $m$.
According to this idea, we find that $m^2\cdot \delta(B_{i,j})\leq 2^i\cdot \ensuremath{\mathsf{OPT}}$ for every $B_i$, which implies a desired bound $m = O(\varepsilon^{-1/2})$ by our selection of $\delta(B_{i,j}) \approx \varepsilon\cdot 2^i\cdot \ensuremath{\mathsf{OPT}}$.
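The following much-simplified sketch conveys the two-level idea (our illustration; it ignores the trimming of the extreme $\varepsilon n$ points and the other details of \Cref{alg:k1d1}, and it reuses \texttt{bucket\_coreset\_1d} from above).
\begin{verbatim}
import math

def adaptive_coreset_1d(P, opt, eps):
    """Two-level partition for 1-d 1-median (simplified illustration)."""
    blocks = {}
    for p in P:
        c = sum(abs(q - p) for q in P)        # cost(P, p) for the 1-median
        i = max(0, int(math.log2(c / opt)))   # block index: cost ~ 2^i * OPT
        blocks.setdefault(i, []).append(p)
    coreset = []
    for i, block in sorted(blocks.items()):
        # budget delta(B_{i,j}) <= eps * 2^i * OPT within block B_i
        coreset += bucket_coreset_1d(sorted(block), eps * 2 ** i * opt)
    return coreset
\end{verbatim}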
\paragraph{Hardness Result for $1$-d \ProblemName{$2$-Median}\xspace: Cumulative Error is Unavoidable.}
We take $k=2$ as an example here and show the tightness of the $O(\varepsilon^{-1})$ bound of~\citep{harpeled2004on}.
The extension to $k > 2$ is standard via an idea of \citep{baker2020coresets}.
We construct the following worst-case instance $P\subset \mathbb{R}$ of size $\varepsilon^{-1}$: it consists of $m = \varepsilon^{-1}$ consecutive buckets $B_1, B_2, \ldots, B_m$ such that the lengths of the buckets increase exponentially while the numbers of points in the buckets decrease exponentially.
We fix one center at the leftmost point of $P$ (assumed to be $0$ w.l.o.g.) and move the other center $c$ along the axis.
Such a dataset $P$ satisfies the following:
\begin{itemize}
\item the clustering cost is stable: for all $c$, $f_P(c) := \ensuremath{\mathrm{cost}}(P, \left\{0,c\right\}) \approx \varepsilon^{-1}$ up to a constant factor;
\item the cumulative error for every bucket $B_i$ is $\delta(B_i)\approx 1$;
\item for every $B_i$, $\ensuremath{\mathrm{cost}}(B_i,\left\{0,c\right\})$ is a quadratic function that first decreases and then increases as $c$ moves from left to right within $B_i$, and the gap between the maximum and the minimum values is $\Omega(\delta(B_i))$.
\end{itemize}
\noindent
Suppose $S\subseteq P$ is of size $o(\varepsilon^{-1})$.
Then there must exist a bucket $B_i$ such that $S\cap B_i = \emptyset$.
We find that the function $f_S(c) := \ensuremath{\mathrm{cost}}(S, \left\{0,c\right\})$ is affine linear when $c$ is located within $B_i$ (\Cref{lem:1d2m-cost-is-affine}).
Consequently, the maximum induced error $\max_{c\in B_i} |f_P(c) - f_S(c)|$ is at least $\Omega(\delta(B_i))$, since the error of approximating a quadratic function $f_P$ by an affine linear function $f_S$ is lower bounded by a certain ``cumulative curvature'' of $f_P$ (\Cref{lem:quadratic_approximation}), which is $\Omega(\delta(B_i))$ due to our construction.
Hence, $S$ is not a coreset since $f_P(c) \approx \varepsilon^{-1}$ always holds.
We remind the reader that the above cost function $f_P$ is actually a piecewise quadratic function with $O(\varepsilon^{-1})$ pieces rather than a single quadratic, which ensures the stability of $f_P$.
This is the main difference from $k=1$, which leads to a gap of $\varepsilon^{-1/2}$ on the coreset size between $k=1$ and $k=2$.
As far as we know, this is the first such separation in any dimension.
\paragraph{Our Approaches when $2\leq d\leq \varepsilon^{-2}$.}
For \ProblemName{$1$-Median}\xspace, our upper bound result (\Cref{thm:upper}) combines a recent hierarchical decomposition coreset framework~\citep{braverman2022power}, which reduces the instance to a hierarchical ring structure (\Cref{thm:reduction}), with the discrepancy approach (\Cref{thm:discrepancy_upper}) developed by~\citep{karnin2019discrepancy}.
The main idea is to extend the analysis of~\citep{karnin2019discrepancy} to handle multiplicative errors in a scalable way.
For \ProblemName{$k$-Median}\xspace, our lower bound result (\Cref{thm:lowerbound}) extends recently developed approaches in~\citep{cohenaddad2022towards}. Their hard instance is an orthonormal basis in $\mathbb{R}^d$, whose size is at most $d$, and hence it cannot yield a lower bound higher than $\Omega(d)$. We improve the result by embedding $\Theta(k)$ copies of their hard instance in $\mathbb{R}^d$, each of which lies in a different affine subspace. We argue that the errors from all subspaces add up. However, the error analysis from~\citep{cohenaddad2022towards} cannot be used directly; we need to overcome several technical challenges. For instance, points in the coreset are not necessarily contained in any of the affine subspaces, so the error in each subspace is not a corollary of their result. Moreover, errors from different subspaces may cancel each other out.
\subsection{Other Related Work}
\label{sec:related}
\paragraph{Coresets for Clustering in Metric Spaces}
Recent works \citep{cohenaddad2022towards,cohen-addad2022improved,huang2023coresets} show that Euclidean \ProblemName{$(k, z)$-Clustering}\xspace admits $\varepsilon$-coresets of size $\tilde{O}(k\varepsilon^{-2}\cdot \min \{\varepsilon^{-z},k^{\frac{z}{z+2}}\})$, and a nearly tight bound $\tilde{O}(\varepsilon^{-2})$ is known for $k=1$ \citep{CohenAddad2021ImprovedCA}.
Apart from the Euclidean metric, coresets for clustering in general metric spaces have also been studied extensively. For example,
\citet{feldman2011unified} construct coresets of size $\Tilde{O}(k\varepsilon^{-2} \log n)$ for general discrete metrics, and \citet{baker2020coresets} show that the $\log n$ factor is unavoidable. There are also works on other specific metric spaces: doubling metrics~\citep{huang2018epsilon} and graphs with shortest-path metrics~\citep{baker2020coresets,BJKW21,cohenaddad2021new}, to name a few.
\paragraph{Coresets for Variants of Clustering}
Coresets for variants of clustering problems are also of great interest. For example, \citet{braverman2022power} construct coresets of size $\Tilde{O}(k^3 \varepsilon^{-6})$ for capacitated \ProblemName{$k$-Median}\xspace, which is improved to $\Tilde{O}(k^3 \varepsilon^{-5})$ by~\citep{huang2023coresets}. Other important variants of clustering include ordered clustering~\citep{Braverman2019CoresetsFO}, robust clustering~\citep{Robust2022}, and time-series clustering~\citep{TimeSeries2022}.
\section{Introduction}
Collective decision-making in large-scale decentralized multi-robot systems is required to coordinate and organize the system~\cite{raoufi2021speed, ebert2020bayes, brambilla2013swarm, valentini2016collective}. For example, a robot swarm needs to collectively agree on a common direction in flocking or on a task allocation~\cite{raoufi2019self}.
While task allocation is an example of a discrete consensus problem, similar to best-of-$n$ problems (collectively choosing from a finite and countable set), the flocking example is a continuous consensus achievement problem~\cite{valentini2017best}.
Large portions of the collective decision-making literature in swarm robotics are focused on discrete problems, such as the popular collective perception benchmark scenario~\cite{valentini2016collective}.
Here we focus on a continuous consensus achievement problem~\cite{olfati2007consensus, ding2022consensus} in the form of a decentralized estimation scenario~\cite{leonard2022collective}. In our previous work, we studied the effect of diverse information on the accuracy of collective estimation, which gives rise to the exploration-exploitation trade-off~\cite{raoufi2021speed}. To obtain sufficiently diverse information, the swarm needs to expand and sample from a larger area, which leads to a dispersal collective behavior. Among the distributed dispersion methods proposed in the literature, some use information that is either costly or not available on all swarm platforms~\cite{ugur2007dispersion}. However, an approximate estimation of distance has proven sufficient to achieve such a goal. The performance of a greedy gradient-descent algorithm for dispersion was predicted to be challenging, especially for larger numbers of robots ($N>10$)~\cite{bayert2019robotic}. To overcome this, we propose a threshold-based random walk algorithm that proves to be efficient enough for larger swarms ($N=40$).
\par
In addition, we require a form of exploitation of the collective decision as the robots need to react to their collective decisions and aggregate at areas that are determined by their consensus.
This comes with a design challenge: should the robots separate a consensus-finding phase from an exploitation phase? Either they synchronize and determine an end to the collective decision-making process, or they asynchronously switch to exploitation and try to keep finding a consensus on the go.
Here we propose a solution choosing the asynchronous option.
Consequently, we face another challenge. As the robots initiate their exploitation process, they try to move towards the designated area while continuing to communicate with neighbors. They form a dynamic network topology while following the collective decision-making protocol. We know that the network topology influences the decision-making process~\cite{srivastava2014collective, lobel2016preferences, becker2017network, mateo2019optimal, kwa2023effect}, and hence the emerging process is self-referential (the network influences the consensus, and the consensus influences the spatial displacement). In that regard, there is a huge body of literature studying this effect from a network point of view. An example of such a phenomenon is homophily in social networks~\cite{khanam2022homophily, holme2006nonequilibrium}. However, studying the co-evolution of network and opinion dynamics in swarm robotics has been overlooked. In this paper, we show how a swarm of real robots disperses in an unbounded environment and then aggregates at the points it agreed on.
\par
\newcommand\figFourWidth{1.9}
\begin{figure*}
\centering
\subcaptionbox{Initial Distribution}{\includegraphics[height=\figFourWidth in]{Figures/snapshots/snapShot_at_39.png}}%
\hfill
\subcaptionbox{Dispersed}{\includegraphics[height=\figFourWidth in]{Figures/snapshots/snapShot_at_7400.png}}%
\hfill
\subcaptionbox{Final Consensus}{\includegraphics[height=\figFourWidth in]{Figures/snapshots/snapShot_at_23800.png}}%
\hfill
\subcaptionbox{Kilobot with light conductor}{\includegraphics[height= \figFourWidth in]{Figures/kilobot_conductor_new_cropped.jpeg}}%
\caption{a-c) Snapshots from the top-view camera with detected real Kilobots (green circles) in a radial (cone-shaped) light distribution; the red lines show the possible links between two robots within the communication range. d)~The light conductor (transparent plastic) added to the Kilobots to solve the issue of shadows cast by the robot bodies onto their light sensors.}
\label{fig:snapShots_pl_ATP_err_XprXpt}
\end{figure*}
\section{Method}
Following our previous work~\cite{raoufi2021speed}, we study the co-evolution of network structure and collective estimation for a swarm of $N$ real robots. The value to estimate is a continuous, spatially distributed scalar feature of the environment. In our experiments, this is realized by a spatially varying light intensity field. The swarm's goal is to estimate a global property of the distributed feature and approach it in the physical space. Our focus is on the estimation and localization of the environmental field's mean value (see Fig.~\ref{fig:snapShots_pl_ATP_err_XprXpt}).
\par
We define two phases: exploration and exploitation. Having separate phases for exploration and exploitation has been shown to be more efficient than mixed phases~\cite{reina2015design}. During the initial exploration (see Sec.~\ref{subsec:exploration}), we program the swarm to expand. The aim is for the individual robots to collect diverse estimates of the environmental feature. The robots are supposed to cover as much area as possible while keeping the network largely connected.
The communication range and the swarm size determine how much the swarm can expand without being disconnected. We define the end of exploration as the moment when the collective achieves maximal area coverage while still maintaining connectivity. During the subsequent exploitation phase (see Sec.~\ref{subsect:exploitation}), robots communicate to achieve a consensus on the mean value and, at the same time, try to move toward the spots in the environment where the measured intensity is closer to the consensus.
We showed previously that by combining these components a contour-capturing behavior emerges~\cite{raoufi2021speed}. A~possible application is to contain pollution or localize the position of a resource in the environment~\cite{zahugi2012design, kaviri2019coverage, amjadi2019cooperative, haghighat2022approach}.
\par
\newcommand\figTwoWidth{1.6}
\begin{figure}[!b]%
\centering
\includegraphics[width=0.7\linewidth]{Figures/ATP_errors/Exploration_Exploitation_ATP_err_formula_compact_task_2_new.png}
\caption{The relation between accuracy (trueness and precision) errors in collective contour capturing for the example of a radial distribution. The red and blue circles show the ground truth and collective mean contours, respectively, and the crosses are the positions of robots in physical space. The initial trueness error (top left circle) is reduced during the exploration phase, whereas the precision error increases (bottom right). During exploitation, the precision error decreases~\cite{raoufi2021speed}, and the robots capture the contour (bottom left).}
\label{fig:ATP_err__XprXpt}
\end{figure}
We minimized the requirements with respect to the robotic platform to enable the implementation of the algorithm even on minimal robots, here specifically the Kilobot platform~\cite{rubenstein2012kilobot}. Although some algorithmic details are specific to our implementation on Kilobots, our model is generally applicable regardless of the swarm robotic platform. The requirements are: \textbf{a)} fully distributed algorithm; no central control, \textbf{b)} only local environmental information available, \textbf{c)} communication only with local neighbors, within a limited communication range, \textbf{d)} no prior information about either the environment or the neighbors, and \textbf{e)} unbounded arena.
\par
\renewcommand\figTwoWidth{1.3}
\begin{figure*}[!t]
\centering
\subcaptionbox{}{\includegraphics[height=\figTwoWidth in]{Figures/dispersion/coverage_cm2_colBlind.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=\figTwoWidth in]{Figures/dispersion/meanDeg_colBlind.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=\figTwoWidth in]{Figures/dispersion/giantComp_withRandomWalk_colBlind.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=\figTwoWidth in]{Figures/dispersion/mDeg_vs_CovArea_colBlind.png}}%
%
\caption{Real-world experiments of the dispersion algorithm while preserving the network connectivity, for 5 repetitions and swarm sizes of \{10,20,30,40\}: a)~area covered by the swarm, b)~mean degree of the communication network, c)~number of nodes in the giant component of the network versus time. The orange line shows the result for a diffusion algorithm that does not preserve the connectivity of the network, for $N=40$ robots. d)~Trade-off between network connectivity and coverage area. The transparent lines show individual experiments, and the solid lines are the mean values for the corresponding swarm sizes. The mean value is truncated as soon as the shortest experiment finishes.}
\label{fig:Res_Dispersion_ArCov_mDeg_gntComp}
\end{figure*}
\subsection{Exploration}
\label{subsec:exploration}
With exploration, the variation and diversity of information available to the swarm increases.
During the exploration phase, no information is aggregated. As we demonstrated before~\cite{raoufi2021speed}, the exploration phase reduces the trueness error (systematic bias). In principle, any dispersion behavior may achieve this goal in an unbounded environment.
However, due to the limited connectivity of the distributed robots, a \emph{pure} random dispersion may disconnect robots from their neighbors and fragment the network. Blind random motion in an unbounded environment is dangerous, as robots might get lost and never find their way back to the swarm~\cite{hornischer2021cimax}.
\par
As an alternative, we suggest a random walk that preserves the connectivity of the network. For this, a robot needs to know the approximate distance to its neighbors. We will show that even with noisy distance estimates the method is able to keep the swarm largely connected. With Kilobots, the distance is estimated from the strength of the received infra-red (IR) signal~\cite{rubenstein2012kilobot}. Hence, the algorithm we implemented on the robots makes the random walk conditional on the distance to the nearest neighbor.
Once the distance to the nearest neighbor reaches the desired threshold, the robot stops and waits for its local neighbors to finish their random walk; then it switches to exploitation. Violations of the desired distance take the robot back to the dispersion phase.
By the end of this phase, the collective has the potential to make a less biased (or bias-free) estimation. Then, the swarm exploits the information distributed throughout the collective to increase the precision. See Fig.~\ref{fig:ATP_err__XprXpt} for an illustration on how exploration and exploitation can modulate the trueness and precision components of the total accuracy error.
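A compact sketch of the resulting per-robot control loop is given below (illustrative pseudo-Python; the Kilobots are programmed in C, and the method names and the threshold value are our placeholders, not the exact implementation).
\begin{verbatim}
DESIRED_DIST = 7.0  # cm; assumed value below the ~10 cm communication range

def exploration_step(robot):
    # Distance to the nearest neighbor, estimated from IR signal strength.
    d_min = robot.min_neighbor_distance()
    if d_min < DESIRED_DIST:
        robot.random_walk_step()        # too close: keep dispersing
    elif robot.neighbors_still_walking():
        robot.stop()                    # settled; wait for local neighbors
    else:
        robot.switch_to_exploitation()  # local neighborhood has settled
\end{verbatim}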
\subsection{Exploitation}
\label{subsect:exploitation}
Exploitation operates not only in the information domain, but also in the real physical space. By exploiting the information contained in the swarm, the collective estimation converges to the mean value in the information domain. The exploitation in the physical space results in individual robots converging towards the mean contours of the environmental field. Here, we introduce two mechanisms, one for each domain: local averaging and consensus-based phototaxis.
\subsubsection{Local averaging}
The first part of exploitation is used to reach consensus in the information domain, which is achieved by local communication between robots. The interactions in this phase facilitate the wisdom-of-crowds effect~\cite{simons2004many, surowiecki2005wisdom} by enabling the agents to average their imperfect estimates of environmental cues~\cite{hills2015exploration, becker2017network}. The updating rule comes from the local averaging of the DeGroot model~\cite{degroot1974reaching}, which we modified by adding a memory term~\cite{raoufi2021speed}. The resulting update rule is:
\begin{align}
& \hat{z}_{i}^{t+1} = \alpha \hat{z}_{i}^{t} + \frac{1-\alpha}{1 + N_i} s_{i}^{t} + \frac{1-\alpha}{1 + N_i} \sum\limits_{j \in \boldsymbol{N_i}} {\hat{z}_{j}^{t}}\ .
\label{Eq:consensus}
\end{align}
Here, each robot updates its estimate ($\hat{z}_{i}^{t+1}$) based on what it measures ($s_{i}^{t}$) and the average of its $N_i$ neighbors' estimates, with a weighting factor $\alpha$.
Robots repeat these updates for a fixed number of iterations $t_\text{comm.}=100$. The output of this phase is the consensus value (although not all robots might have exactly the same opinion about the consensus). Robots use this value as input for the next phase.
\par
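A one-line Python rendering of Eq.~\ref{Eq:consensus} for a single robot is given below (our illustration; the on-board implementation is in C).
\begin{verbatim}
def update_estimate(z_i, s_i, neighbor_estimates, alpha):
    """One update of Eq. (1): memory, own measurement, neighbor average."""
    n = len(neighbor_estimates)  # N_i
    return (alpha * z_i
            + (1 - alpha) / (1 + n) * s_i
            + (1 - alpha) / (1 + n) * sum(neighbor_estimates))
\end{verbatim}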
The updating equation (Eq.~\ref{Eq:consensus}) can be reformulated from a network point of view~\cite{olfati2007consensus, golub2010naive}. This converts the model into a linear system whose transition matrix is the normalized weighted adjacency matrix of the network, whose states are the agents' estimates, and whose inputs are the measurements. Assuming the general system without input, the result of such local averaging, given that the network of communication is fully connected, is the mean value of the information available within the collective~\cite{golub2010naive}. Later, we briefly discuss how the connectivity of the network (the mean node degree, in particular) changes the dynamics of this system.
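Concretely, collecting the estimates in a vector $\hat{\mathbf{z}}^t$ and the measurements in $\mathbf{s}^t$, and writing $W$ for the adjacency matrix and $D=\operatorname{diag}(N_1,\dots,N_\text{N})$ for the degree matrix, Eq.~\ref{Eq:consensus} is (a direct rewriting under our notation) equivalent to the linear system
\begin{align*}
\hat{\mathbf{z}}^{t+1} = \left(\alpha I + (1-\alpha)(I+D)^{-1}W\right)\hat{\mathbf{z}}^{t} + (1-\alpha)(I+D)^{-1}\mathbf{s}^{t},
\end{align*}
which makes the role of the normalized adjacency matrix as the transition matrix explicit.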
\subsubsection{Consensus-based Phototaxis (CBPT)}
We implement a sample-based pseudo gradient descent for the motion of the robots, which realizes homophily on the network. Homophily is the tendency to interact more with like-minded agents in a social group~\cite{khanam2022homophily}. We require a collective motion that moves robots sharing similar opinions closer to each other and thus establishes links~\cite{raoufi2021speed}. As the pseudo gradient descent, we choose a bio-inspired phototaxis method. With CBPT, the robots are guided to areas where the light measurements match the consensus value.
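One simple realization of such a sample-based step is sketched below (illustrative pseudo-Python with placeholder method names; essentially a run-and-tumble that keeps a heading only while the mismatch to the consensus shrinks).
\begin{verbatim}
def cbpt_step(robot, z_consensus):
    """Consensus-based phototaxis: descend |light reading - consensus|."""
    err_before = abs(robot.light_intensity() - z_consensus)
    robot.move_forward()
    err_after = abs(robot.light_intensity() - z_consensus)
    if err_after > err_before:
        robot.turn_random()  # mismatch grew: sample a new heading
\end{verbatim}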
\section{Metrics and Setup}
\subsection{Covered Area}
We measure the area that is covered by the swarm. We consider a disk centered at each robot's position with radius $r_\text{cover}$. For Kilobots, we choose $r_\text{cover} = 3r_\text{rob} = 5\ \text{cm}$, which is roughly half the communication range ($r_\text{rob}$~is the robot radius). We calculate the collective coverage as the area of the union of the disks~$A_{\text{cover},i}^{(x_i,y_i)}$ with $\|A_{\text{cover},i}^{(x_i,y_i)}\|=\pi r_\text{cover}^2$ covered by each robot~$i$ located at~$(x_i,y_i)$, counting overlapping regions only once:
\begin{equation}
A_\text{cover} = \bigcup\limits_{i=1}^{\text{N}}A_{\text{cover},i}^{(x_i,y_i)}
\;.
\end{equation}
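In the evaluation, this union of disks can be estimated numerically, for instance by Monte Carlo sampling (a sketch we add for illustration; the sample count is arbitrary).
\begin{verbatim}
import numpy as np

def covered_area(positions, r_cover=5.0, n_samples=100000, seed=0):
    """Monte Carlo estimate of the area of a union of disks (in cm^2)."""
    rng = np.random.default_rng(seed)
    pos = np.asarray(positions, dtype=float)         # (N, 2) robot positions
    lo, hi = pos.min(axis=0) - r_cover, pos.max(axis=0) + r_cover
    samples = rng.uniform(lo, hi, size=(n_samples, 2))
    d = np.linalg.norm(samples[:, None, :] - pos[None, :, :], axis=2)
    frac_inside = (d <= r_cover).any(axis=1).mean()  # hit fraction
    return frac_inside * np.prod(hi - lo)            # scale by bounding box
\end{verbatim}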
\subsection{Network Properties}
The inter-agent communication network plays a critical role for the whole scenario. It is challenging to determine the existence of actual robot-robot communication links forming the network, as these links are noisy and difficult to extract from the robot swarm during an experiment. For simplicity we assume that if the distance between two robots is less than the average communication range, then there is a link. The communication range is assumed to be $r_\text{comm}=10\ \text{cm}$~\cite{pinciroli2018simulating}. The links are estimated based on robot positions and distances obtained from tracking via a top-view camera. False positives and negatives for links between robots are possible as this is only an estimation.
\par
We quantify the connectivity of the network using two metrics: the mean node degree and the giant component size. Although the communication network of Kilobots is not necessarily undirected (signal strength is not always symmetric), we assume an undirected network for simplicity. The in- and out-degrees of all nodes are then equal, as are the \emph{mean} in- and out-degrees. As the second network metric we use the giant component size, that is, the number of nodes in the largest connected component of the network. This way we quantify how many robots have disconnected from the main cluster (implemented with the NetworkX library~\cite{hagberg2008exploring}).
\par
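Both metrics follow directly from the estimated link set; a short sketch (our illustration, using standard NetworkX calls):
\begin{verbatim}
import itertools
import networkx as nx
import numpy as np

def network_metrics(positions, r_comm=10.0):
    """Mean node degree and giant-component size of the estimated network."""
    G = nx.Graph()
    G.add_nodes_from(range(len(positions)))
    for i, j in itertools.combinations(G.nodes, 2):
        if np.linalg.norm(np.subtract(positions[i], positions[j])) <= r_comm:
            G.add_edge(i, j)  # estimated communication link
    mean_degree = 2 * G.number_of_edges() / G.number_of_nodes()
    giant_size = len(max(nx.connected_components(G), key=len))
    return mean_degree, giant_size
\end{verbatim}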
\subsection{Accuracy Metrics}
The collective estimation (accuracy) error is decomposed into trueness and precision errors, which relates to the bias-variance decomposition of the total error. We showed that the generality and case-independence of these metrics enable their usage in various conditions (see~\cite{raoufi2021speed} for details). We assume as ground truth for the estimation the mean value of the light intensity across the environment, ${z}_\text{gt}=\bar{z}_\text{env}$. By defining the individual estimate of robot~$i$ as $\hat{z}_i$ and the collective estimate as $\hat{z}_\text{col}=\sum_{i=1}^{N}\hat{z}_i / {N}$, we obtain for the trueness, precision, and accuracy errors:
\begin{align}
E_\text{T} =& (\hat{z}_\text{col} - {z}_\text{gt})^2 \ ,
E_\text{P} = \frac{1}{\text{N}} \sum\limits_{i=1}^\text{N}(\hat{z}_i - \hat{z}_\text{col})^2 \ , \\
E_\text{A} =& \frac{1}{\text{N}} \sum\limits_{i=1}^\text{N}(\hat{z}_i - {z}_\text{gt})^2 = E_\text{T} + E_\text{P} \ .
\end{align}
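These quantities are computed per snapshot from the individual estimates; a minimal sketch (our illustration):
\begin{verbatim}
import numpy as np

def accuracy_errors(z_hat, z_gt):
    """Trueness, precision, and accuracy errors of the estimates z_hat."""
    z_col = np.mean(z_hat)               # collective estimate
    E_T = (z_col - z_gt) ** 2            # squared bias
    E_P = np.mean((z_hat - z_col) ** 2)  # variance around collective mean
    return E_T, E_P, E_T + E_P           # E_A = E_T + E_P
\end{verbatim}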
As we have no direct access to a robot's current estimate, we use its position as an indicator of its estimate. For each environmental distribution, there is a mapping between the camera-detected Cartesian robot positions and the coordinate of interest. For example, for the radial distribution of Fig.~\ref{fig:snapShots_pl_ATP_err_XprXpt}, the mapping $m(x_i,y_i)$ is:
\begin{equation}
\hat{z}_i = r_i = m(x_i,y_i) = \sqrt{(x_i-x_c)^2 + (y_i-y_c)^2},
\end{equation}
where, $(x_c,y_c)$ is the distribution's center, and $(x_i,y_i)$ is the detected robot's position in the captured frame.
\subsection{Experimental Setup}
In our experiments we use Kilobot robot swarms~\cite{rubenstein2012kilobot} of up to 40 robots, on a $90\times90\ \text{cm}^2$ arena on a $1.5\times2.5\ \text{m}^2$ whiteboard. For tracking we use a downward-facing camera and the Hough circle transformation from the OpenCV library~\cite{opencv_library}. Unless otherwise mentioned, we used the same parameters as in~\cite{raoufi2021speed}.
\newcommand\figTwoHeight{0.8}
\begin{figure}[b]
\centering
\subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/consensus/E_p__vs__Time_50_blue_Large.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/consensus/log_E_p__vs__MDeg__25_50_100_Large.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/consensus/t2ss_vs_mDeg__and_2ndEigVal__25_50_100_Large.png}}%
\caption{Simulation of the consensus model on static networks with different connectivity. a) Time evolution of the precision error for different networks (darker lines indicate lower connectivity). b) Steady-state precision error (last time step) versus mean node degree, c)~time to reach a steady state (solid lines) and second largest eigenvalue of the adjacency matrix (dotted lines) versus mean node degree. The results are averages of 1000 independent Monte Carlo simulations.}
\label{fig:consensus}
\end{figure}
\section{Results and Discussion}
\newcommand{1.15}%{3.5}{1.15
\begin{figure*}[ht]
\centering
\subcaptionbox{}{\includegraphics[height=1.15}%{3.5 in]{Figures/contourCapturing/violins_radial_px_and_cm_16boxs.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=1.15}%{3.5 in]{Figures/contourCapturing/Error_vs_Time_wide_colBlind_rescaled.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=1.15}%{3.5 in]{Figures/contourCapturing/AreaCov_n_mDeg_vs_Time_wide_colBlind.png}}
\caption{Real-world experiments of the full scenario in an environment with a radial distribution for $N=40$ Kilobots. a) The distribution of robots of a single experiment in the polar coordinate system, b) the accuracy errors, c) the coverage area and mean node degree of the network over time. The transparent lines show the results of 8 independent real-robot experiments, and the solid lines are the averages over the experiments.}
\label{fig:Res_contCapturing}
\end{figure*}
We study each component of our scenario (dispersion, consensus, CBPT) as stand-alone swarm tasks. Later, we combine these components to form a complex scenario.
\subsection{Dispersion}
\label{sec:dispersion}
The aim of dispersion is to increase the covered area. We measure how much area is covered by the robots (Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp}-a). To characterize the dynamic network structure, we measure the mean degree of the network (Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp}-b). The results in Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp} indicate that the collective initially starts from a dense distribution with a low coverage area and high network connectivity. Due to dispersion, the collective expands and covers a larger area while the mean degree decreases. This increase in the covered area can lead to a lower trueness error in the collective estimation.
The network gets sparser (reduced node degrees) while the giant component size of the network does not change significantly, suggesting that the network connectivity is largely preserved.
Later we show how reduced connectivity results in a lower speed of convergence during the decision-making process. Both the covered area and the mean degree converge to steady-state values. Once the robots stop moving, we finish the experiment.
\par
In Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp}-c, we show the size of the giant component. The algorithm keeps the majority of the swarm connected, while a few robots disconnect from the swarm. In our analysis we found that often two (or more) robots stick to each other and, while measuring strong signals from each other, continue moving. They detach from the swarm, although they remain members of a small cluster. As a control experiment, we tested a random walk diffusion algorithm that does not try to preserve connectivity (solid orange line in Fig.~\ref{fig:Res_Dispersion_ArCov_mDeg_gntComp}-c). Almost half of the swarm disconnects within three minutes. In comparison, our algorithm preserves connectivity well.
\subsection{Consensus}
Consensus occurs only in the information domain, which makes it difficult to measure in a real-robot experiment. However, we simulated the consensus algorithm on static networks to show how the precision error changes over time (Fig.~\ref{fig:consensus}-a) and how its dynamics change with the network properties, namely the mean degree. We studied spatial networks with $N=\{25,50,100\}$ nodes and different connectivity to investigate the role of the mean degree. To do so, we distributed $N$ agents uniformly in an environment and drew a deterministic network with a specific communication range. Then, we varied the communication range (as a ratio of the environment size) to obtain networks with various mean degrees. As agents share and update their estimates of the mean value of the distribution, they converge to the consensus estimate, and thus the precision error decreases (Fig.~\ref{fig:consensus}-a); this is the well-known speed-vs-accuracy trade-off over the course of decision-making.
\par
In Fig.~\ref{fig:consensus}-b, we show how the mean node degree of the network influences the accuracy (precision) of the steady-state collective estimation. A higher mean degree leads to a lower precision error. Regarding the speed of consensus, we measured the time to reach a steady state using a threshold ($\delta=10^{-4}$) and recorded the first passage time of the precision error. The peaks in Fig.~\ref{fig:consensus}-c show the slowest convergence times at specific mean degrees; the speed is reduced significantly for lower and higher degrees.
A low or zero mean degree means there are few or no links in the network; without information flow, convergence is fast but inaccurate.
As known from graph theory, the network becomes fully connected almost immediately once the mean degree exceeds a critical value. This is where the second largest eigenvalue of the network adjacency matrix {\color{newChanges}becomes less than} one.
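The simulated protocol can be sketched in a few lines of Python; the sketch below is a minimal illustration with assumed parameter values, not our exact simulation code:
\begin{verbatim}
import numpy as np

def consensus_simulation(N=50, comm_range=0.2, steps=200, seed=0):
    # Local averaging on a static spatial network: agents placed
    # uniformly in the unit square are linked if their distance is
    # below comm_range; each agent repeatedly averages its noisy
    # estimate with those of its neighbours.
    rng = np.random.default_rng(seed)
    pos = rng.uniform(size=(N, 2))
    est = rng.normal(loc=1.0, scale=0.5, size=N)
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    adj = (dist < comm_range) & ~np.eye(N, dtype=bool)
    precision_err = []
    for _ in range(steps):
        new_est = est.copy()
        for i in range(N):
            nbrs = np.flatnonzero(adj[i])
            if nbrs.size:
                new_est[i] = (est[i] + est[nbrs].sum()) / (1 + nbrs.size)
        est = new_est
        precision_err.append(est.std())  # spread around consensus
    return np.array(precision_err)
\end{verbatim}
The mean degree of such a network is \texttt{adj.sum() / N}, which can be swept by varying \texttt{comm\_range} as described above.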
\newcommand{\figHeight}{0.95}
\begin{figure*}[!ht]
\centering
\subcaptionbox{}{\includegraphics[height=\figHeight in]{Figures/contourCapturing/control/Error_vs_Time_wide_sw20.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=\figHeight in]{Figures/contourCapturing/control/Error_vs_Time_wide_sw80.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=\figHeight in]{Figures/contourCapturing/control/Error_vs_Time_wide_sw170.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=\figHeight in]{Figures/contourCapturing/control/nRemained_E_A_Bbox_withCollective.png}}
\caption{Real-world control experiments with $N=40$ Kilobots. The accuracy errors over time for the control experiments show the performance of the individualistic method.
Each plot is the average of 5 real robot experiments with different switching times: a) $t_\text{sw} = 20$, b) $t_\text{sw} = 80$, c) $t_\text{sw} = 170$. d) Box plot of the number of robots remaining within the area of interest at the last snapshot (black boxes) and the final accuracy error {\color{newChanges}(red boxes)}, comparing the collective scenario and the control experiment (individual) with different switching times.}
\label{fig:Res_contCapturing_controlExp}
\end{figure*}
\subsection{Contour Capturing}
Next, we present our results for the scenario of contour capturing with a swarm of $N=40$ Kilobots. The objective is to gather the robots at the contour of the mean of the light distribution. First, we give the results of our fully distributed collective method. Second, we define a control experiment without robot-robot communication as a baseline for comparison.
\subsubsection{Collective Scenario--radial distribution}
Here we present our main result of the real-world experiments with Kilobots for the whole scenario, assembling the above components: dispersion while keeping the network connected, local averaging to achieve consensus, and homophily by CBPT to approach the consensus value.
For a radial light distribution,
Fig.~\ref{fig:Res_contCapturing}-a shows the radial distribution of robot positions during the experiment. Initially, the robots are distributed rather densely close to the center ($r_i \rightarrow 0$). During dispersion, the distribution becomes more uniform by spreading to larger radii.
Then the local consensus finding with minimal movement starts, while the spatial distribution of robots remains largely unchanged ($200<t<400$). In a third phase, robots approach the mean contour line by CBPT and the distribution contracts around 160 pixels {\color{newChanges}($\approx 25\,\text{cm}$)}.
\par
The temporal evolution of the trueness, precision, and accuracy errors is illustrated in Fig.~\ref{fig:Res_contCapturing}-b. The trueness error quickly drops to a small value by the end of the dispersion phase ($t\approx200$~s). However, the variation is still large, although the mean value of the radial distribution is close to the ground truth. Thus, in contrast to the accurate mean value of the collective, each robot's individual estimate is not yet accurate. This is because robots did not aggregate any information during dispersion.
Now that the collective is less biased and the network is connected, robots exploit the information available within the entire collective.
This is implemented via the local average from the consensus method (see Eq.~\ref{Eq:consensus}). At time $t\approx400$~s, the swarm arrives at a consensus in the information domain, but robot positions are still off the mean contour line. During the CBPT phase, robots approach the mean value in space and the precision error is reduced. We observe both a low precision error and a low accuracy error. These results confirm our previous work in simulations~\cite{raoufi2021speed}.
\par
The mean degree and area coverage of the swarm evolve in an anti-correlated manner. During dispersion, the swarm spreads out to cover more area and the spatial distribution gets sparser, hence reducing the mean node degree.
The process inverts during exploitation as robots get closer to each other and increase network connectivity. The covered area decreases because robots form a denser distribution around the contour line and the overlap area increases.
\subsubsection{Control experiment--no communication}
As a control experiment, the robots perform contour capturing without collaboration or exchange of any information. During exploration, each robot walks randomly while updating and aggregating its estimate of the mean value. Robots {\color{newChanges}iteratively} average over measured samples. The random walk is pure diffusion without influence from other robots (in contrast to Sec.~\ref{sec:dispersion}). It stops after a predetermined number of samples ($t_\text{sw}$). Then robots switch to exploitation and follow the CBPT algorithm to approach the estimated mean light value. We used three different switching times: $t_\text{sw} = \{20, 80, 170\}$.
\par
As seen in Fig.~\ref{fig:Res_contCapturing_controlExp}-a, a too short exploration ($t_\text{sw} = 20$) reduces the trueness error (red line) only insufficiently, and the precision error (blue line) remains as high as its initial value due to insufficient spatial dispersal of the robots.
In Fig.~\ref{fig:Res_contCapturing_controlExp}-b, a sufficiently long exploration ($t_\text{sw} = 80$) reduces the trueness error and keeps the temporarily high precision error ($t\approx100$~s) under control.
Fig.~\ref{fig:Res_contCapturing_controlExp}-c indicates that a too long exploration phase results in a larger precision error. In our previous work~\cite{raoufi2021speed}, we already showed that (in a bounded environment) too late switching can cause the precision error to remain high (for a limited time budget).
\\
The \emph{unbounded} environment is challenging as the swarm tends to lose more and more robots (lost connectivity) with increased exploration time (Fig.~\ref{fig:Res_contCapturing_controlExp}-d). In addition to the known speed-vs-accuracy trade-off, we find this new trade-off in unbounded environments: with uncontrolled diffusion, one does not only pay in speed for accuracy, but also in the number of robots that get lost.
\subsubsection{Collective Scenario--V-shape ramp distribution}
In the model simulations presented in~\cite{raoufi2021speed}, we showed that the algorithm is able to capture the mean contour line for different environmental distributions, including uni- and multi-modal ones. Here, we tested another distribution with an inverted V-shape, with a peak on the diagonal, as in Fig.~\ref{fig:Res_contCapturing_rotRamp}-a. The evolution of the distribution of robots over time (Fig.~\ref{fig:Res_contCapturing_rotRamp}-b) demonstrates how the swarm expands uniformly up until the exploitation phase. Then, the robots branch into two clusters: one on the top left and the other on the bottom right of the diagonal. The accuracy errors in Fig.~\ref{fig:Res_contCapturing_rotRamp}-c show the same qualitative trends as in Fig.~\ref{fig:Res_contCapturing} for the radial distribution. However, the precision error remaining at the end of the experiments indicates that this problem is more difficult to solve; here, the precision error represents the dominant contribution to the total error.
\newcommand{\figTwoHeight}{1.15}
\begin{figure}[hb]
\centering
\subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/contourCapturing/RotatedRamp/snapShot_at_39500.png}}%
\hfill
\subcaptionbox{}{\includegraphics[height=\figTwoHeight in]{Figures/contourCapturing/RotatedRamp/Error_vs_Time_wide_RotRamp.png}}
\caption{Real-world results for the scenario with the diagonal distribution: a)~representative example of the final robot distribution showing the position of the two major clusters on each side of the ramp. b)~Three error types over time (6~repetitions).}
\label{fig:Res_contCapturing_rotRamp}
\end{figure}
\section{Conclusion}
Starting from our previous work on the speed-accuracy trade-off in collective estimation~\cite{raoufi2021speed}, we have successfully implemented the full scenario on a real robot swarm (Kilobots) to capture a contour in a continuous environmental field in an unbounded arena.
Our dispersion method {\color{newChanges}largely} preserves connectivity of the swarm and minimizes losing robots {\color{newChanges} during exploration}.
As another component, we introduced a sample-based optimization method inspired by phototaxis that makes the Kilobots approach the desired contour.
We added a light conductor to the robot (minimizing shadows on the sensor) to improve light measurements.
This seems to be a novel implementation of a gradient ascent for Kilobots with various potential applications.
{\color{newChanges} The code we used in this paper is available on GitHub~\cite{MRaoufi_Github}.}
\par
Previously, we showed that besides the {\color{newChanges}speed-vs-accuracy} trade-off there are also exploration-vs-exploitation trade-offs~\cite{raoufi2021speed} that are generally non-trivial to resolve. With our new dispersion method, an optimal switching time to finish exploration is no longer required: the swarm automatically ends dispersion at the best coverage achievable under the connectivity constraint. Here, we discussed another trade-off induced by dynamic network topologies. During exploration, the temporarily low mean degree slows down collective decision-making, but the swarm expansion improves the accuracy of the estimation.
\par
In future work, we plan to study contour-capturing scenarios in dynamic environments. We also plan to analyze scalability and test different light distributions.
\section*{Acknowledgment}
We thank Marshall Lutz Mykietyshyn and Noran Abdelsalam for their contribution to real robot experiments.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13681",
"language": "en",
"timestamp": "2023-03-01T02:14:45",
"url": "https://arxiv.org/abs/2302.13681",
"yymm": "2302"
} | \section{Language Models for Code}
Large Language Models (LLMs) have gained significant attention
in the field of Natural Language Processing (NLP)
in recent years due to their ability
to perform a wide range of NLP tasks with impressive accuracy.
These models, trained on massive amounts of data, improve in accuracy as they grow from millions to billions of parameters.
LLMs for code, trained on large corpora of source code, learn the structure and syntax of programming languages,
making them well-suited for tasks such as
code summarization, generation, and completion~\cite{al2023extending,izadi2022codefill}.
LLMs are even making their way into commercial products like GitHub's Copilot, Replit's Ghostwriter, and Tabnine.
Meanwhile, researchers have identified that LLMs can memorize large swaths of training data~\cite{carlini2021extracting}.
Memorization enables the extraction of that data using data extraction attacks;
some attacks have even been able to extract addresses and other personal information from public models~\cite{carlini2021extracting}.
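To make the attack concrete, a minimal sketch of the sampling-and-ranking approach of Carlini et al.~\cite{carlini2021extracting} could look as follows; it uses GPT-2 via the HuggingFace \texttt{transformers} API purely as a stand-in, and the sample sizes are illustrative:
\begin{verbatim}
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token NLL
    return torch.exp(loss).item()

# Sample unconditionally from the model ...
start = torch.tensor([[tok.bos_token_id]])
samples = model.generate(start, do_sample=True, top_k=40,
                         max_length=64, num_return_sequences=32,
                         pad_token_id=tok.eos_token_id)
texts = [tok.decode(s, skip_special_tokens=True) for s in samples]
texts = [t for t in texts if t.strip()]

# ... and rank by perplexity: generations the model finds
# abnormally "easy" are prime candidates for memorized data.
for ppl, t in sorted((perplexity(t), t) for t in texts)[:5]:
    print(round(ppl, 2), t[:80])
\end{verbatim}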
Memorization also impacts LLMs for code, with all its associated consequences. We will discuss these consequences in three categories: security, privacy and licensing.
\section{Security Implications}
Text memorization has strong security implications.
Firstly, massively mined code datasets
are not sanitized or manually curated;
they could therefore contain many biases\footref{gpt2phone}
and instances of badly written, buggy, or insecure code.
A recent study found that around 40\% of GitHub Copilot's code generations
for MITRE's top 25 Common Weakness Enumerations, a list of the most dangerous software weaknesses, were vulnerable~\cite{hammond2022asleep}. If these models become more prevalent and trusted, they can introduce more vulnerable code into software.
\section{Privacy Implications}
Memorization enables adversaries to access training data, and everything contained within, simply by accessing the model. This has major privacy implications since code can contain private information.
Think of credentials, API keys, directory structures, logged info, or in-code discussions by developers. Code can also contain personal information like emails or contact information.
If personal data is published on the Internet, the data could be retracted and deleted from the source. But once it is mined and used to train an LLM, the information is forever embedded in a compact representation that is queryable at scale. With query access to these models, an adversary can potentially extract this data~\cite{carlini2021extracting} and
threaten Internet users' privacy.
There are many reasons why private information could end up being publicly shared: (1) simply by accident, or (2) a malicious actor could share it in a doxing campaign~\cite{carlini2021extracting}.
Even if the data is published willingly, the owner has a certain use and audience in mind and might not wish to share this information with the entire world. This is referred to as the re-purposed data problem.\footnote{Does GPT-2 Know Your Phone Number?: \url{http://archive.is/LxsyA} \label{gpt2phone}}
\section{Licensing}
Publicly available source code is also subject to licences, some of which heavily regulate the use of the material.
Initially, developers raised concerns about licensed code on social media. GitHub Copilot could be prompted to produce verbatim copies of copyrighted code, without providing the required attribution or licence terms.\footnote{Matrix Transpose: \url{http://archive.is/YU5Bl}} Similarly, Copilot was producing copyrighted code while attributing the wrong author and providing the wrong license.\footnote{Fast Inverse Square: \url{http://archive.is/HNiyg}}
Later, a lawsuit was filed against GitHub, Microsoft and OpenAI, claiming that Copilot is violating the licence of open-source code.\footnote{GitHub Complaint (p.26): \url{http://archive.is/3PFAs}\label{GHComplaint}}
Broadly, open-source code is licensed under two types of licences.
\textbf{Permissive licenses} allow users to use, modify, and distribute the software for any purpose, without requiring that the user share their work.
\textbf{Non-permissive licenses}, also known as ``copyleft'' licenses, require that users freely share their own software under the same licence if they distribute the software or any \textit{derivatives} of it.
Creating closed or commercial software based on non-permissively licensed code is unethical and possibly even illegal~\cite{sun2022coprotector}. But this does raise the following question:
\textbf{Does training LLMs on copyleft code infringe on their license?}
Firstly, we must determine how many LLMs for code are trained on copyleft code.
Looking at some of the most popular code models, we can observe that the vast majority are trained on open-source code. CodeBERT and CodeT5 are trained on CodeSearchNet, which contains copyleft code.
We also found that CodeBERT, CodeGen, and CodeClippy make use of The Pile, a collection of $22$ datasets, one of which is a GitHub dataset containing copyleft data.
We found that only InCoder makes an effort to avoid training on copyleft code. InCoder does, however, make use of a dataset of StackOverflow answers, which are licensed under varying CC-BY-SA licences, all of which require attribution.\footnote{StackOverflow license: \url{http://archive.is/obaoy}}
Despite the public attention, it is not completely clear whether Codex, the model behind Copilot, is trained on non-permissive code. Many imply that it is,\footnote{Comment on Copilot and OSS: \url{http://archive.is/6gEOU}\label{eeveeCopilot}} citing the copying of copyleft code and the fact that the system has encountered a copy of the GPL licence many times during training. The training data for Codex is not publicly available, and neither OpenAI nor GitHub have provided any clarification.\footnote{If Software is My Copilot, Who Programmed My Software?: \href{http://archive.is/pilW2}{Link}\label{sfcCopilot}}
LLMs for code can be seen as derivatives of their training data. So unless the model is published under the same licence as the training data and includes the copyright notice, this would be a clear violation. Moreover, many licences are not inter-compatible, i.e., the inclusion of code licensed under them automatically warrants an infringement as the combined licence agreements contain irreconcilable conditions.
Some opponents,\footnote{\label{RedaGithub}GitHub Copilot is not infringing your copyright: \url{http://archive.is/PYlm5}} including OpenAI, argue that the use of public code is an instance of transformative fair use, which is a defence that allows the use of copyrighted works in new and unexpected ways and exists in many jurisdictions including the US.\footref{RedaGithub} Yet, it is still unclear whether the fair use defence applies to ML-systems,\footref{GHComplaint} as it has not yet been tested in court. Furthermore, the fair use argument is sometimes based on the assumption that models do not memorize and emit training data, which is false. Even if the fair use argument protects the use of the data, the verbatim outputting might not be protected.
A moral argument can also be made on this issue. Training LLMs on copyleft code goes against the will of some open-source developers, who share their code for the betterment of society and who believe in the principle of free and open software so profoundly that they are willing to add a full legal clause to their work to perpetuate this ideal. The use of their work without attribution, especially by commercial parties, is not what they had in mind.
Finally, some researchers have also proposed a different approach to this issue by letting the authors of open-source code take matters into their own hands. Using data poisoning techniques, the authors can reduce the performance and embed watermarks into the models~\cite{sun2022coprotector}.
\section{Discussion and Recommendations}
To conclude, we recommend the following:
\begin{itemize}
\item The ML community should carefully consider the licence of their training material, from both a legal and an ethical point of view. The authors of published LLMs should be transparent about the licences of their training material.
\item More research should be conducted on the nature and proportionality of text memorization in LLMs for code and LLMs in general. Other topics include memorized text extraction and prevention.
\item Lawmakers should clarify whether the use of copyleft code (and copyrighted materials in general) and text to train LLMs constitutes fair use and under which conditions this clause applies.
\item Finally, the software engineering community should clarify their stance on this issue. Developers could make informed decisions and clearly denote if their source code can be used to train AI models. LLMs for code are likely to stay and bring new tools that change the way software is engineered. So the community needs to answer important questions on this matter. For instance, should open-source code be allowed for training these models? If so, should the developers be credited and compensated, and under which license should the models be released? Alternatively, do we need to revise current code licenses to clarify the community's stance?\footnote{Additional Reading Material: \href{https://github.com/AISE-TUDelft/nlbse23_reading_list}{Link to our GitHub Repository}}
\end{itemize}
\bibliographystyle{IEEEtranN}
|
{
"arxiv_id": "2302.13756",
"language": "en",
"timestamp": "2023-02-28T02:29:24",
"url": "https://arxiv.org/abs/2302.13756",
"yymm": "2302"
} | \section{Introduction}
Learning to rank is a crucial part of information retrieval systems \cite{qin2010letor}. In practice, the ranking model is often trained on the user's implicit feedback, e.g., user clicks. However, there are usually many complex biases, such as position bias \cite{joachims2017unbiased}, in the click-through data. Therefore, unbiased learning to rank (ULTR), which aims to train an unbiased ranking model from such biased click-through data, has gained a lot of attention.
Currently, most ULTR models \cite{10.1145/3534678.3539468,ai2018unbiased,zhang2022towards} using deep learning are based on the Position-Based Model \cite{2015Click} (PBM), which emphasizes the key role of position as a bias factor in calculating the examination bias. According to PBM, a document has a certain probability of being clicked based on the probability of it being examined and its relevance to the query, where the examination depends on the position and the relevance depends on the features encoding the query and document. However, in real click-through data, the examination bias often depends on more than the ranking position of the document. Therefore, some recent works have begun to consider how to add user context \cite{fang2019intervention}, clicks in the same query list \cite{chen2021adapting}, and search intent \cite{SearchIntent2020} to the bias factors so that the model can calculate a more accurate examination bias.
In this paper, we argue that perception bias, defined as the user's misperception of a document's relevance to the query caused by its presentation style on SERPs, is important for figuring out an accurate examination bias. Since a document has to be observed before users perceive its relevance, the examination of a document can be factorized into two steps: observing and then perceiving. Obviously, the rank of a document is important for it being observed by users. After the document is observed, its presentation style on SERPs (media type, SERP height, highlighting the hit words multiple times, etc.) is pivotal for users to perceive its relevance. In the perceiving step, users often mistakenly click on irrelevant documents due to differences in presentation style.
In order to accurately calculate the perception-dependent examination bias, we first propose a Multi-Feature Integration Model (MFIM) that integrates more key bias factors affecting user perception into the examination-bias estimator. We then mine a key factor, the slipoff count, that indirectly reflects the influence of all perception-bias factors. Finally, we validate the effectiveness of MFIM on the Baidu-ULTR dataset \cite{zou2022large}.
\begin{figure*}[t]
\centering
\includegraphics[width=17cm]{./imgs/method.pdf}
\caption{Comparison between MFIM-based model and PBM-based models. }
\label{frame}
\end{figure*}
\section{PRELIMINARIES}
With regard to a query $q \in Q$, there is a document list ${\pi}_q$ including $n$ documents that need to be ranked according to their relevance to $q$. Let $d_k$ be a document displayed at position $k$ with ranking features $x^r_k$ and bias factors $x^e_k$. The probabilities that $d_k$ is examined by the user, relevant to $q$, and clicked by the user are denoted as $e_k \in [0,1]$, $r_k \in [0,1]$, and $\hat{c_k} \in [0,1]$, respectively. The goal of an unbiased ranking model is to learn to estimate the accurate relevance $r_k$ from click signals $c_k \in \{0,1\}$.
According to PBM, whether $d_k$ is clicked depends on if it is examined and is related to the query, which can be formulated as:
\begin{equation}
\hat{c_k} = {e_k} \cdot {r_k}
\end{equation}
where ${e_k}$ and ${r_k}$ can be figured out by an examination-bias model $E(x^e_k, \theta_{e})$ with parameters $\theta_{e}$ and a relevance model $R(x^r_k, \theta_{r})$ with parameters $\theta_{r}$. Currently, most ULTR methods are based on \textbf{Equation (1)} to train the unbiased ranking model. Their general framework is illustrated in \textbf{Fig.1 (a)}. $E(x^e_k, \theta_{e})$ usually contains only one fully connected layer (fc layer) and activation function (ReLU), while $R(x^r_k, \theta_{r})$ generally applies BERT as the relevance encoder. When training the model, $\theta_{e}$ and $\theta_{r}$ are jointly trained by the loss function
\begin{equation}
L(c_k,\hat{c_k}) = - \sum_{q}^{Q} \sum_{k}^{n}(c_k \cdot log \hat{c_k} + (1-c_k) \cdot log(1-\hat{c_k}))
\end{equation}
where $\hat{c_k} = \mathrm{sigmoid}(E(x^e_k, \theta_{e}) \cdot R(x^r_k, \theta_{r}))$, while only the relevance model $R(x^r_k, \theta_{r})$ is put to use when testing. It is worth noting that since PBM assumes that ${e_k}$ is only related to the position $k$, the $x^e_k$ in the examination-bias model uses only the position as a bias factor for calculating ${e_k}$, as shown in \textbf{Fig.1 (a)}.
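To make this framework concrete, the following PyTorch sketch (a minimal illustration with an MLP standing in for the BERT relevance encoder; all layer sizes and names are our assumptions) jointly trains the two towers with the loss of \textbf{Equation (2)}:
\begin{verbatim}
import torch
import torch.nn as nn

class PBMRanker(nn.Module):
    # Sketch of the two-tower PBM framework of Fig.1 (a).
    def __init__(self, feat_dim=128, n_positions=10, emb_dim=8):
        super().__init__()
        self.pos_emb = nn.Embedding(n_positions, emb_dim)
        self.exam = nn.Sequential(nn.Linear(emb_dim, 32), nn.ReLU(),
                                  nn.Linear(32, 1))      # E(x_e)
        self.rel = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))       # R(x_r)

    def forward(self, x_r, pos):
        e = self.exam(self.pos_emb(pos)).squeeze(-1)
        r = self.rel(x_r).squeeze(-1)
        return torch.sigmoid(e * r)                      # c_hat

model = PBMRanker()
x_r = torch.randn(16, 128)                   # ranking features x_r
pos = torch.randint(0, 10, (16,))            # display positions
clicks = torch.randint(0, 2, (16,)).float()  # click signals c_k
loss = nn.functional.binary_cross_entropy(model(x_r, pos), clicks)
loss.backward()  # jointly updates theta_e and theta_r
\end{verbatim}
At test time, only the relevance tower would be kept, mirroring the deployment described above.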
\begin{table*}[tb]
\caption{The model performance on the expert annotation dataset with different bias factors.}
\begin{center}
\renewcommand\arraystretch{1.25}
\resizebox{\linewidth}{!}{
\begin{tabular}{c c c c c c c c c }
\toprule[1.20pt]
\textbf{Method} & Position & MType & Serph & Slipoff count &\textbf{DCG@1} & \textbf{DCG@3} & \textbf{DCG@5} & \textbf{DCG@10} \\
\hline
MFIM(PBM-based) &$\checkmark$ & & & &2.36 & 4.84 & 6.54 & 9.64 \\
MFIM &$\checkmark$ &$\checkmark$ & & $\checkmark$ &2.44 & 5.06 & 6.85 & 10.10\\
MFIM &$\checkmark$ & & $\checkmark$ & $\checkmark$ &2.48 & 5.13 & 6.95 & 10.25 \\
\toprule[1.20pt]
\end{tabular}
}
\end{center}
\end{table*}
\section{Method}
\subsection{The Multi-Feature Integration Model}
It takes two steps to examine a document: observing it first and then perceiving it. To all appearances, PBM-based methods only include the effect of the document's rank on the user observing the document, which is not enough to figure out an accurate examination bias. For the perceiving step, there are many complicated bias factors besides the document's rank. For example, the media type of a document significantly affects one's perception of its relevance to a query because different queries have different requirements for the media type of the target document.
Therefore, we argue that not only the position should be included in the bias factors for calculating the examination bias, but also the other bias factors that capture one's perception bias of the relevance. In this way, we propose an unbiased learning to rank method named Multi-Feature Integration Model (MFIM) that includes more feasible bias factors when calculating the perception-dependent examination bias. Clearly, finding suitable bias factors for calculating the perception-dependent examination bias is the most critical point.
\subsection{User Behaviour as Bias Factors}
One of the most naive ways to find bias factors for calculating the perception-dependent examination bias is enumeration: we can gradually integrate the bias factors we can come up with, such as media type (mType) and SERP height (serph), into $x^e_k$ and conduct ablation experiments to verify their effectiveness. However, the actions of users when perceiving document relevance in the real world are too complex to enumerate all bias factors. Therefore, we propose that the user's implicit feedback behavior after clicking the document, especially the slipoff count, can replace all factors affecting user perception of the document itself when calculating the perception-dependent examination bias. Whatever the factors behind one's perception bias are, their influence will eventually be reflected in the implicit behavior of the user after clicking on the document. For example, documents misperceived by users typically have fewer slipoff counts than truly relevant documents. Therefore, the model can easily judge whether the user has a perception bias based on the user behavior after the click.
It is worth mentioning that although, according to the analyses above, using implicit user feedback such as the slipoff count makes explicit document perception-bias factors unnecessary, integrating mType, serph, and slipoff count is slightly better than using the slipoff count alone in practice because the explicit factors reduce the difficulty of model training.
\subsection{Model Details}
The framework of MFIM is illustrated in \textbf{Fig.1 (b)}. There are three differences between MFIM and the general model in \textbf{Fig.1 (a)}:
1) MFIM integrates position, mType, serph, and slipoff count into $x^e_k$, while the examination bias only depends on position in the general model.
2) The examination-bias model is constructed more deeply to model a more complex non-linear mapping of the various bias factors affecting the perception-dependent examination bias. In addition, batch normalization (bn) is vitally important to the examination-bias model since it can greatly accelerate model convergence.
3) We construct a group selection layer before calculating the loss function. The role of the group selection layer is to randomly select a subset of $\pi_q$ so as to avoid the imbalance of positive and negative samples. The subset contains one clicked document and $g-1$ documents that are not clicked by users, where $g < n$. The $\hat{c_k}$ of these $g$ samples are then fed into a softmax layer. With the group selection layer, the loss function of MFIM can be formulated as
\begin{equation}
L(c_k,\hat{c_k}) = - \sum_{q}^{Q} \sum_{k}^{g} (c_k \cdot log \hat{c_k} + (1-c_k )\cdot log (1-\hat{c_k}))
\end{equation}
With the help of the softmax function, the training of MFIM falls between list-wise and pair-wise learning.
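A minimal PyTorch sketch of the group selection step and the loss above (variable names are illustrative, not our exact implementation) could read:
\begin{verbatim}
import torch
import torch.nn.functional as F

def group_selection_loss(scores, clicks, g=6):
    # scores: (n,) raw scores E(x_e) * R(x_r) for one query list
    # clicks: (n,) binary click labels; pick 1 clicked and g-1
    # unclicked documents, softmax over the group, then Eq. (3).
    pos = (clicks == 1).nonzero(as_tuple=True)[0]
    neg = (clicks == 0).nonzero(as_tuple=True)[0]
    idx = torch.cat([pos[torch.randint(len(pos), (1,))],
                     neg[torch.randperm(len(neg))[:g - 1]]])
    p = F.softmax(scores[idx], dim=0)   # c_hat over the group
    y = clicks[idx]
    return -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).sum()

scores = torch.randn(10, requires_grad=True)
clicks = torch.tensor([0., 0., 1., 0., 0., 0., 1., 0., 0., 0.])
group_selection_loss(scores, clicks).backward()
\end{verbatim}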
\section{Experiments}
In this section, we elaborate our experimental setting and evaluate the performance of MFIM through a real-world experiment on Baidu-ULTR dataset.
\subsection{Experimental Set}
\subsubsection{Dataset.}
Baidu-ULTR dataset consists of two parts: 1) large-scale web search sessions and 2) an expert annotation dataset. The former, which contains 383,429,526 queries and 1,287,710,306 documents, is randomly sampled from search sessions of the Baidu search engine in April 2022. Most sessions contain fewer than 10 candidate documents with page presentation features (mType, serph, etc.) and user behaviors (click, slipoff count, etc.) for the current query. The latter is also randomly sampled from the monthly collected query sessions of the Baidu search engine, and the relevance of each document to the query has been judged by expert annotators who assign one of 5 labels, \{bad, fair, good, excellent, perfect\}, to the document.
In our experimental setting, the large-scale web search sessions are used to train the ranking model, and the subset of the expert annotation dataset used in stage 1 is used to validate the performance of the ranking model.
\begin{table}[tb]
\caption{Comparison with different number of fc layer in Examination-bias Model}
\begin{center}
\renewcommand\arraystretch{1.5}
\begin{tabular}{c c }
\toprule[1.25pt]
\textbf{Method} & \textbf{DCG@10} \\
\hline
MFIM-3l & 10.05 \\
MFIM-5l & \textbf{10.16} \\
MFIM-7l & 10.14 \\
\hline
MFIM-5l-$g$4 & 10.16 \\
MFIM-5l-$g$6 & \textbf{10.25} \\
MFIM-5l-$g$8 & 10.14 \\
\toprule[1.25pt]
\end{tabular}
\end{center}
\end{table}
\subsubsection{Training Details.}
The entire model is implemented in PyTorch \cite{pytorch} and trained on 8 NVIDIA A100 GPUs with batch size $16\times8$. The optimizer is Adam \cite{kingma2014adam} and the learning rate is fixed at 5e-6. We set the maximum ranking position of candidate documents to 10, i.e., $n=10$, and the group size $g$ to 6. The embedding size of every bias factor is 8. In addition, the relevance model should be pre-trained using the method detailed at \href{https://github.com/lixsh6/Tencent_wsdm_cup2023/tree/main/pytorch_unbias}{https://github.com/lixsh6/Tencent\_wsdm\_cup2023}.
\subsubsection{Metrics}
The Discounted Cumulative Gain (DCG) is employed to assess the performance of the ranking model. For a ranked list of $N$ documents, we use the following implementation of DCG:
\begin{equation}
DCG@N = \sum_{k=1}^{N} \frac{G_k}{\log_{2}(k+1)}
\end{equation}
where $G_k$ denotes the relevance label assigned to the document at position $k$.
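For reference, this formula translates directly into Python; the sketch below is illustrative and assumes the expert grades are already ordered by the model's ranking:
\begin{verbatim}
import numpy as np

def dcg_at_n(labels, n=10):
    # labels: expert grades of the documents, ordered by the
    # model's ranking; G_k is the grade at position k (1-indexed).
    g = np.asarray(labels, dtype=float)[:n]
    discounts = np.log2(np.arange(2, g.size + 2))  # log2(k+1)
    return float((g / discounts).sum())

# e.g. a ranking whose top documents were graded 4, 2, 0, ...
print(dcg_at_n([4, 2, 0, 3, 1, 0, 0, 2, 0, 0]))
\end{verbatim}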
\subsection{Performance of Single Model}
The performance of the unbiased ranking model trained with different bias factors as input is shown in \textbf{Table 1}. Note that the model using only the position factor in the first row can be regarded as the model shown in \textbf{Fig.1 (a)}. It can be observed that when we integrate the bias factors affecting the perception bias into $x_k^e$ on top of the position, the ranking ability of the model increases accordingly, which shows that MFIM outperforms PBM-based methods.
In addition, we also conduct hyperparameter experiments on the number of fc layers of the examination-bias model and the group size $g$. All results can be found in \textbf{Table 2}.
\subsection{Model Ensemble}
In order to further improve the performance of the relevance model, we used the weighted sum of the output scores of 10 models trained under different settings during the experiments as the final relevance score. The weight of each relevance model is obtained by manual search. The DCG@10 of the model ensemble on the validation dataset is 10.54 (10.14 on the final leaderboard).
\section{CONCLUSION}
In this paper, we introduce our method for the WSDM Cup 2023 Unbiased Learning for Web Search task, which won first place with a DCG@10 score of 10.14 on the final leaderboard. We draw the following conclusions:
1) Including the bias factors affecting perception bias, besides the rank position, leads to a more accurate examination bias.
2) Mining three key perception-bias factors, namely slipoff count, mType, and serph, improves the debiasing ability of the model.
\begin{acks}
This paper is supported by Tencent Machine Learning Platform Search (Tencent-MLPS). We thank everyone that offers advice to us and everyone associated with organizing and sponsoring the WSDM Cup 2023.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.13748",
"language": "en",
"timestamp": "2023-02-28T02:29:12",
"url": "https://arxiv.org/abs/2302.13748",
"yymm": "2302"
} |
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we put forward a new research perspective on stereotypical behaviour detection in autism spectrum disorder, \emph{i.e.}\xspace unsupervised video anomaly detection. To better leverage the prior knowledge of ASD and improve robustness, we propose a dual stream deep model, DS-SBD\xspace, that detects abnormal behaviours based on the temporal trajectories of human poses and the repetition patterns of human actions.
Extensive experimental results demonstrate the effectiveness of our method, which may act as a benchmark for future research. In the future, we will investigate more simple yet effective proxy tasks to boost the model's discriminability.
\section{Experiments}
\label{sec:exp}
\subsection{Dataset}
In our experiments, we use the self-stimulatory behaviour dataset (SSBD)~\cite{rajagopalan2013self} to evaluate the models, which is a publicly available benchmark dataset for stereotypical behaviour detection.
The SSBD dataset contains 75 videos with three stereotypical behaviours, \emph{i.e.}\xspace arm flapping, head banging, and spinning.
Following the setting of unsupervised VAD, we split the dataset into a test set with 20 videos and a training set with the rest of the videos. All sub-clips containing stereotypical behaviours are excluded from the training videos.
\subsection{Implementation Details}
We choose the Adam optimizer for training and set the learning rate to 0.004. AlphaPose~\cite{fang2017rmpe} is used to generate the 2D human poses and VideoPose3D~\cite{videopose3d2019} is used to generate the 3D human poses. The batch size is set to 60 and the number of consecutive frames in one batch $T$ is set to 64. The repetition detection module applies the backbone of RepNet~\cite{dwibedi2020counting}.
During testing, we use non-overlapping sliding windows of $T$ frames to compute the final frame-level anomaly scores.
\vspace{-10pt}
\subsection{Results}
We use two evaluation metrics that are widely used in the video anomaly detection community, \emph{i.e.}\xspace the micro-averaged area under the receiver operating characteristic curve~(AUROC) and the macro-averaged AUROC, to evaluate the models. Specifically, the micro-averaged AUROC computes the overall frame-level AUC by concatenating all frames of the test videos, while the macro-averaged AUROC is the average of the frame-level AUCs computed per video.
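For clarity, the two metrics can be sketched with scikit-learn as follows (an illustrative sketch, not our exact evaluation code):
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def micro_macro_auroc(scores_per_video, labels_per_video):
    # micro: concatenate all frames of all test videos
    micro = roc_auc_score(np.concatenate(labels_per_video),
                          np.concatenate(scores_per_video))
    # macro: average the per-video frame-level AUCs
    per_video = [roc_auc_score(y, s)
                 for y, s in zip(labels_per_video, scores_per_video)
                 if len(set(y)) > 1]  # AUC needs both classes
    return micro, float(np.mean(per_video))
\end{verbatim}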
\begin{table}[h]
\centering
\vspace{-10pt}
\caption{The quantitative comparison results between the state-of-the-art model and our proposed model.}
\label{tab: result1}
\vspace{-5pt}
\begin{tabular}{lcc}
\shline
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c}{\textbf{AUROC}} \\ \cline{2-3}
& \textbf{micro} & \textbf{macro} \\ \hline
Frame-Pred.~\cite{liu2018future} & 52.52\% & 54.93\% \\
MNAD~\cite{park2020learning} & 53.70\% & 56.45\% \\
HF2VAD~\cite{liu2021hybrid} & 60.43\% & 54.35\% \\
DS-SBD\xspace-PR & 54.54\% & 51.88\% \\
DS-SBD\xspace-PP & 62.01\% & 55.54\% \\
DS-SBD\xspace-RD & 69.87\% & 72.81\% \\
DS-SBD\xspace & \textbf{71.04\%} & \textbf{73.39\%} \\
\shline
\end{tabular}
\vspace{-8pt}
\end{table}
We report the results of our DS-SBD\xspace with different proxy tasks and compare them with several state-of-the-art unsupervised VAD methods, including Frame-Pred.~\cite{liu2018future}, HF2VAD~\cite{liu2021hybrid}, and MNAD~\cite{park2020learning}, in Table~\ref{tab: result1}. The model performance is boosted by the three effective auxiliary tasks from 54.54\% to 71.04\% micro-AUROC, and the best macro-AUROC reaches 73.39\%, which significantly outperforms the baseline models. In addition, we observe that the repetition detection module plays a dominant role in unsupervised video anomaly detection for autism spectrum disorder because stereotypical behaviours are often characterized by repetition. The visualization results are shown in Fig.~\ref{fig:visualization}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth, trim=5 8 0 0, clip]{figs/vis.pdf}
\caption{The visualization results of our proposed method. }
\label{fig:visualization}
\end{figure}
\vspace{-25pt}
\subsection{Ablation Study}
\label{sec: ablation}
We conduct ablation studies to investigate the factors that may contribute to the anomaly detection performance.
\begin{table}[ht]
\centering
\caption{The ablation study of different pose modalities.}
\label{tab: result2}
\vspace{-5pt}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{ccccccc}
\shline
\multicolumn{2}{c}{\textbf{PR}} & \multicolumn{2}{c}{\textbf{PP}} & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{RD}}} & \multicolumn{2}{c}{\textbf{AUROC}} \\ \cline{6-7}
\textbf{2D} & \textbf{3D} & \textbf{2D} & \textbf{3D} & \multicolumn{1}{c}{} & \textbf{micro} & \textbf{macro} \\ \hline
\checkmark & & & & & 54.54\% & 51.88\% \\
& \checkmark & & & & 57.85\% & 61.75\% \\
& & \checkmark & & & 62.01\% & 55.54\% \\
& & & \checkmark & & 60.34\% & 61.40\% \\
& & & & \checkmark & 69.87\% & 72.81\% \\
\checkmark & & \checkmark & & & 61.99\% & 55.53\% \\
\checkmark & & \checkmark & & \checkmark & \textbf{71.04\%} & \textbf{73.39\%}\\
& \checkmark & & \checkmark & & 60.30\% & 61.42\% \\
& \checkmark & & \checkmark & \checkmark & 70.65\% & 73.32\% \\ \shline
\end{tabular}
}
\vspace{-15pt}
\end{table}
\noindent\textbf{2D pose vs. 3D pose.}
Although 3D skeleton trajectories can provide the depth information of human motion, they are usually less stable and robust than 2D pose predictions because inferring 3D information from 2D frames is more challenging.
As shown in Table~\ref{tab: result2}, our method achieves better performance when taking 2D poses as input.
\noindent\textbf{Number of frames.}
Considering the temporal consistency and periodicity of the stereotypical behaviours, we also investigate whether the number of input frames affects the performance. As shown in Table~\ref{tab: result3}, the model achieves the best performance when the input is a relatively long sequence of frames~(\emph{e.g.}\xspace $T=64$). This is because stereotypical behaviours with low frequency often require more information from history frames to accurately discover periodic repetition patterns.
\noindent\textbf{Weight estimation.} We estimate $\alpha$, $\beta$, and $\gamma$ by grid search from 0 to 3. In Table~\ref{tab: result4}, DS-SBD* achieves the best performance with $\alpha$=1.5, $\beta$=0.2, $\gamma$=1.3, a marginal improvement over the default setting~($\alpha$=$\beta$=$\gamma$=1), which shows that our model is relatively robust.
\begin{table}[h]
\vspace{-10pt}
\begin{minipage}[h]{0.23\textwidth}
\centering
\caption{The ablation study of different input frames.}
\label{tab: result3}
\vspace{-5pt}
\resizebox{\linewidth}{!}{
\begin{tabular}{ccc}
\shline
\multicolumn{1}{c}{\multirow{2}{*}{$T$ \textbf{frames}}} & \multicolumn{2}{c}{\textbf{AUROC}} \\ \cline{2-3}
\multicolumn{1}{c}{} & \textbf{micro} & \textbf{macro} \\ \hline
4 & 69.37\% & 72.74\% \\
8 & 70.07\% & 72.34\% \\
16 & 70.12\% & 72.93\% \\
64 & \textbf{71.04\%} & \textbf{73.39\%} \\ \shline
\end{tabular}
}
\end{minipage}
\begin{minipage}[h]{0.23\textwidth}
\centering
\caption{The ablation study of weight estimations.}
\label{tab: result4}
\vspace{-5pt}
\resizebox{\linewidth}{!}{
\begin{tabular}{lcc}
\shline
\multicolumn{1}{c}{\multirow{2}{*}{\textbf{Method}}} & \multicolumn{2}{c}{\textbf{AUROC}} \\ \cline{2-3}
\multicolumn{1}{c}{} & \textbf{micro} & \textbf{macro} \\ \hline
DS-SBD & {71.04\%} & {73.39\%} \\
DS-SBD* & \textbf{71.77\%} & \textbf{73.58\%} \\ \shline
\end{tabular}
}
\end{minipage}
\vspace{-15pt}
\end{table}
\section{Introduction}
\label{sec:intro}
Autism spectrum disorder~(ASD) is a neurological and developmental disorder~\cite{rapin1991autistic}
that begins early in childhood and even lasts throughout a person's life.
It causes problems with functioning in society and often affects how people interact, communicate, and socialize with others, resulting in stereotypical behaviours~\cite{rajagopalan2013self}.
Stereotypical behaviours refer to abnormal and non-functional repetitive behaviours that happen with no obvious stimulus, such as arm flapping, head banging, and spinning.
They negatively affect ASD children's performance in skill acquisition and social interaction, and, as a stress indicator, they can even lead to a meltdown event or cause self-damaging behaviours~\cite{rad2018deep}.
As a result, monitoring, evaluating, and analyzing stereotypical behaviours is essential for clinicians and caregivers to treat and take care of ASD patients,
and an automated stereotypical behaviour detection system holds great potential for ASD patient caring and treatment.
In this paper, we focus on automatically detecting stereotypical behaviours from video recording of the ASD patients.
Ryan~\emph{et al.}\xspace\cite{de2020computer} surveyed recent vision-based methods which mainly focus on how to correctly classify the stereotypical behaviours in autism with the help of action recognition~\cite{pandian2022detecting, pandey2020guided, wei2022vision} and video classification~\cite{lakkapragada2022classification, rajagopalan2014detecting, liang2021autism, sun2020spatial, anirudh2019bootstrapping} techniques.
Existing methods perform well on a limited set of pre-defined stereotypical behaviour types through supervised learning paradigms.
However, in practice, stereotypical behaviour detection is an open-set problem, where the types of ASD stereotypical behaviours are unbounded with a large variance across different patients.
Thus, there will always be novel behaviour types unseen in the training set, which previous methods are not able to detect.
Furthermore, the collection of clinical video datasets poses a great challenge due to privacy concerns and the high data annotation cost from medical professionals.
To solve the challenges of unknown behaviour types and data collection difficulty, we propose to study ASD stereotypical behaviour detection from a new perspective, \emph{i.e.}\xspace unsupervised video anomaly detection (VAD).
Unsupervised VAD learns the distribution of normal behaviours during training and distinguishes anomalous ASD behaviours as outliers of the learned distribution.
Since unsupervised VAD can detect any anomaly type outside the normal behaviour distribution, it is not limited by a finite set of pre-defined anomaly types.
Furthermore, unsupervised VAD does not require collecting any data containing abnormal behaviours for training. Hence, it eases the burden of collecting clinical videos containing ASD patients.
However, existing unsupervised VAD approaches~\cite{liu2021hybrid,chang2020clustering,liu2018future, gong2019memorizing,wang2022video} mainly focus on surveillance scenarios, and directly migrating them to stereotypical behaviour detection is non-trivial for two reasons:
1) Stereotypical behaviours of ASD patients contain specific repetitive patterns, which existing unsupervised VAD methods cannot incorporate as prior knowledge.
2) The videos of ASD patients are recorded in an unconstrained environment with various viewpoints and background noises, which brings challenges to conventional unsupervised VAD methods that focus on surveillance videos in a constrained environment.
As a result, we propose a novel \textbf{D}ual \textbf{S}tream network for \textbf{S}tereotypical \textbf{B}ehaviours \textbf{D}etection, \textbf{DS-SBD\xspace}, where each stream tackles one of the aforementioned two challenges respectively.
Specifically, to improve the robustness against domain variance and background noise, we propose a pose trajectory module that models the temporal consistency of human actions based on the temporal trajectories of human pose keypoints, filtering out the background noise and domain variance of the raw image frames.
Secondly, to incorporate the repetition patterns of ASD stereotypical behaviours, we propose a repetition detection module that detects abnormal behaviours based on frame-level repetitive patterns.
The proposed DS-SBD\xspace is trained in an unsupervised fashion over videos containing only normal human behaviours with three proxy tasks, \emph{i.e.}\xspace pose reconstruction, pose prediction, and repetition detection.
Our main contributions are summarized as follows:
1) To tackle unknown behaviour types and the data collection difficulty, we formulate ASD stereotypical behaviour detection as an unsupervised video anomaly detection task and reorganize the existing self-stimulatory behaviour dataset~(SSBD) for evaluation.
2) To leverage the prior knowledge of ASD stereotypical behaviours and improve robustness, we propose a dual stream anomaly detection network, DS-SBD\xspace, composed of novel pose trajectory and repetition detection modules.
3) Extensive experimental results and ablation studies verify the effectiveness of each module, suggesting that DS-SBD\xspace could serve as a benchmark for this new challenging task in the future.
\section{Methodology}
\label{sec:method}
Fig. \ref{fig:model} shows the overall network structure of our proposed DS-SBD\xspace. It is a dual stream structure containing two modules, namely a pose trajectory module and a repetition detection module. The pose trajectory module is responsible for detecting stereotypical behaviours based on human pose trajectories, while the repetition detection module detects abnormal behaviours based on action repetitions over a certain period. Following the unsupervised video anomaly detection setting, the training set only needs to contain videos with normal behaviours.
The model is expected to learn the distribution of normal behaviours from the training videos and outputs a frame-level anomaly score at inference time to judge whether a frame shows an out-of-distribution abnormal behaviour.
\subsection{Preliminaries}
Given a video with $N$ frames as $\mat{X}=[\mat{X}^1,\mat{X}^2, \ldots, \mat{X}^N]\in \mathbb{R}^{N \times C \times H \times W}$, the corresponding human poses with $K$ keypoints in $i$-th frame is denoted as $\mat{P}^i = [\mat{P}^i_1, \mat{P}^i_2, \ldots, \mat{P}^i_K] \in \mathbb{R}^{1 \times K \times d}$, where $d$ is the coordinate dimensions of human pose. The trajectory of $j$-th keypoints of one human pose is defined as $\mathsf{tr}(\mat{P}_j) = [\mat{P}^1_j, \mat{P}^2_j, \ldots, \mat{P}^N_j] \in \mathbb{R}^{N \times 1 \times d}$.
\subsection{Pose Trajectory Module}
The pose trajectory module is trained with two proxy tasks, \emph{i.e.}\xspace the pose reconstruction task and the pose prediction task.
\subsubsection{Pose Reconstruction}
The pose reconstruction (PR) proxy task is based on the assumption that the pose trajectories of normal behaviours can be well reconstructed by an autoencoder while those of anomalous behaviours cannot.
Specifically, an LSTM-based autoencoder $\mathcal{F}^{\mathrm{PR}}$ is proposed for the reconstruction proxy task.
$\mathcal{F}^{\mathrm{PR}}$ takes a human pose trajectory $\mathsf{tr}(\mat{P})$ as input and aims at reconstructing each human pose keypoint in the trajectory during training. An MSE training loss $\mathcal{L}^{\mathrm{PR}}$ is used to optimize $\mathcal{F}^{\mathrm{PR}}$:
\begin{equation}
\begin{split}
\mathcal{L}^{\mathrm{PR}} &= \| \mathcal{F}^{\mathrm{PR}}(
\mathsf{tr}(\mat{P})) - \mathsf{tr}(\mat{P}) \|^2_2 \\
&= \sum_{i=1}^N\sum_{j=1}^K\| \mathcal{F}^{\mathrm{PR}}(
\mat{P}^i_j) - \mat{P}^i_j \|^2_2
\end{split}
\end{equation}
During inference, the pose reconstruction errors of $\mathcal{F}^{\mathrm{PR}}$ on all keypoints in a frame are summed up to get a frame-level anomaly score:
\begin{equation}
s^{\mathrm{PR}}_i = \sum_{j=1}^K\| \mathcal{F}^{\mathrm{PR}}(
\mat{P}^i_j) - \mat{P}^i_j \|^2_2
\end{equation}
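A minimal PyTorch sketch of such an LSTM autoencoder (layer sizes and the linear readout are illustrative assumptions) is:
\begin{verbatim}
import torch
import torch.nn as nn

class PoseAE(nn.Module):
    # LSTM autoencoder over flattened keypoint trajectories
    def __init__(self, K=17, d=2, hidden=64):
        super().__init__()
        self.enc = nn.LSTM(K * d, hidden, batch_first=True)
        self.dec = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, K * d)

    def forward(self, traj):        # traj: (B, T, K*d)
        z, _ = self.enc(traj)       # latent sequence
        h, _ = self.dec(z)
        return self.head(h)         # reconstructed trajectory

model = PoseAE()
traj = torch.randn(4, 64, 17 * 2)   # 4 trajectories, T=64 frames
recon = model(traj)
loss = ((recon - traj) ** 2).sum(-1).mean()  # training loss L_PR
s_pr = ((recon - traj) ** 2).sum(-1)         # (B, T) scores s_PR
\end{verbatim}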
\subsubsection{Pose Prediction}
The pose prediction (PP) proxy task assumes that normal human behaviours are temporally consistent, while abnormal ones usually come with unexpected changes of action.
Specifically, given a trajectory of $T$ consecutive poses $\mathsf{tr}(\mat{P}^{1:T})$, the pose prediction task attempts to forecast the next human pose $\mat{P}^{T+1}$ with a deep model $\mathcal{F}^{\mathrm{PP}}$:
\begin{equation}
\hat{\mat{P}}^{T+1} = \mathcal{F}^{\mathrm{PP}}(
\mathsf{tr}(\mat{P}^{1:T}))
\end{equation}
Following~\cite{parsaeifard2021learning}, the pose prediction task is built upon local pose trajectory~(all keypoints) and global pose trajectory~(center point of all keypoints) forecasting. For local keypoints, $\mathcal{F}^{\mathrm{PP}}$ is an LSTM-based variational autoencoder. For a center point, a cascaded LSTM is used for prediction. Similar to the pose reconstruction task, we use MSE to optimize $\mathcal{F}^{\mathrm{PP}}$:
\begin{equation}
\begin{split}
\mathcal{L}^{\mathrm{PP}} &= \| \hat{\mat{P}}^{T+1}_c - \mat{P}^{T+1}_c \|^2_2 + \| \hat{\mat{P}}^{T+1} - \mat{P}^{T+1} \|^2_2 \\
&= \| \hat{\mat{P}}^{T+1}_c - \mat{P}^{T+1}_c \|^2_2 + \sum_{j=1}^K\| \hat{\mat{P}}^{T+1}_j - \mat{P}^{T+1}_j \|^2_2
\end{split}
\end{equation}
where $\hat{\mat{P}}^{T+1}$ and $\hat{\mat{P}}_c^{T+1}$ are the predicted local pose keypoints and the global pose keypoint of the $T+1$ frame, respectively.
Similar to the pose reconstruction task, the anomaly score of one frame is its forecasting error given past trajectories:
\begin{equation}
s^{\mathrm{PP}}_i = \| \hat{\mat{P}}^i_c - \mat{P}^i_c \|^2_2 + \sum_{j=1}^K\| \hat{\mat{P}}^i_j - \mat{P}^i_j \|^2_2
\end{equation}
\subsection{Repetition Detection Module}
We observe that one of the most distinct characteristics of stereotypical behaviours in autism spectrum disorder is their repetitive pattern. In other words, the anomalous behaviours are repeated periodically over short time intervals in the videos.
To leverage this essential prior knowledge, we propose a repetition detection module~(RD). Inspired by recent repetition counting methods~\cite{levy2015live, dwibedi2020counting, hu2022transrac}, we model the repetitive patterns as a temporal self-similarity matrix $\mat{M}$,
whose element $\mat{M}_{i,j}$ is the similarity score between the feature embeddings of the $i$-th and $j$-th frames, followed by a row-wise softmax operation~\cite{dwibedi2020counting}:
\begin{equation}
\mat{M}_{i,j} = \mathrm{softmax}( - \| x_i - x_j \|^2_2 )
\end{equation}
where $x_i$ and $x_j$ are the latent feature embeddings of the $i$-th and $j$-th frames.
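This construction translates directly into a few lines of PyTorch (embedding dimensions illustrative):
\begin{verbatim}
import torch

def self_similarity(x):
    # x: (T, D) per-frame embeddings; row-wise softmax of the
    # negative squared Euclidean distances, as in RepNet.
    d2 = torch.cdist(x, x, p=2) ** 2
    return torch.softmax(-d2, dim=1)

M = self_similarity(torch.randn(64, 128))  # T=64 frames
\end{verbatim}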
Based on the self-similarity matrix, the repetition detection module $\mathcal{F}^{\mathrm{RD}}$
outputs an anomaly score for each video frame, indicating the probability that the frame contains repetitive actions:
\begin{equation}
s^{\mathrm{RD}}_i = \mathcal{F}^{\mathrm{RD}}(\mat{X}^i)
\end{equation}
where $s^{\mathrm{RD}}_i$ is the $i$-th frame anomaly score of the repetition detection module, and $\mat{X}^i$ is the $i$-th input frame. The proposed repetition detection module can be trained on public repetition counting datasets or on videos synthesized from the VAD training set.
\subsection{Anomaly Score}
The ultimate anomaly score for each video frame $S_i$ is the weighted sum of two anomaly scores from the pose trajectory module and one anomaly score from the repetition detection module:
\begin{equation}
S_i = \alpha \cdot \frac{s^{\mathrm{PR}}_i - \mu_{\mathrm{PR}}}{\sigma_{\mathrm{PR}}} + \beta \cdot \frac{s^{\mathrm{PP}}_i-\mu_{\mathrm{PP}}}{\sigma_{\mathrm{PP}}} + \gamma \cdot s^{\mathrm{RD}}_i
\end{equation}
where $\alpha$, $\beta$, and $\gamma$ are the weights of the three anomaly scores, $\mu_{\mathrm{PR}}$, $\sigma_{\mathrm{PR}}$, $\mu_{\mathrm{PP}}$, and $\sigma_{\mathrm{PP}}$ are the means and standard deviations of training pose reconstruction and prediction errors.
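For illustration, the score fusion can be sketched as follows; here the normalization statistics are approximated from the given score arrays, whereas in our setup they come from the training errors:
\begin{verbatim}
import numpy as np

def fuse_scores(s_pr, s_pp, s_rd, alpha=1.0, beta=1.0, gamma=1.0):
    # z-normalize the two pose-stream scores, then weighted sum
    z = lambda s: (s - s.mean()) / (s.std() + 1e-8)
    return alpha * z(s_pr) + beta * z(s_pp) + gamma * s_rd
\end{verbatim}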
|